NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter. (2) The Pareto trade-offs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy for the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII-based sampling. This study provides a new sampling approach for improving multiple-metrics uncertainty analysis under the GLUE framework, and it could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
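As a rough illustration of the machinery the abstract names, the Python sketch below pairs an epsilon-dominance test (the sorting rule inside ɛ-NSGAII) with a GLUE-style behavioural filter. The objective values, epsilon box size and thresholds are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of the epsilon-dominance test that underlies
# epsilon-NSGAII, plus the GLUE behavioural-set filter it feeds.
# All names and thresholds are illustrative, not from the paper.
import numpy as np

def eps_dominates(f_a, f_b, eps=0.05):
    """True if objective vector f_a epsilon-dominates f_b (minimisation)."""
    box_a = np.floor(np.asarray(f_a) / eps)
    box_b = np.floor(np.asarray(f_b) / eps)
    return bool(np.all(box_a <= box_b) and np.any(box_a < box_b))

def behavioural(metrics, thresholds):
    """GLUE keeps a parameter set only if every metric passes its threshold."""
    return all(m <= t for m, t in zip(metrics, thresholds))

# Example: two candidate parameter sets scored on (1 - NSE, volume error)
print(eps_dominates([0.12, 0.30], [0.20, 0.33]))          # True
print(behavioural([0.12, 0.30], thresholds=[0.3, 0.5]))   # True
```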
Not Normal: the uncertainties of scientific measurements
NASA Astrophysics Data System (ADS)
Bailey, David C.
2017-01-01
Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student's t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.
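A minimal sketch of the paper's central diagnostic, with synthetic draws standing in for the 41 000 real measurements: fit a Student's t-distribution to uncertainty-normalized differences and compare its 5σ tail probability with the Gaussian value.

```python
# Fit a Student's t-distribution to uncertainty-normalised differences
# z = (x1 - x2) / sqrt(u1^2 + u2^2) and compare tail weight with a Normal.
# Synthetic heavy-tailed data stand in for the real measurement history.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z = stats.t.rvs(df=3, size=10_000, random_state=rng)   # heavy-tailed "history"

df_hat, loc_hat, scale_hat = stats.t.fit(z)
p5_t = 2 * stats.t.sf(5, df_hat, loc_hat, scale_hat)   # P(|z| > 5) under fit
p5_norm = 2 * stats.norm.sf(5)                         # same under a Normal

print(f"fitted df = {df_hat:.1f}")
print(f"P(|z|>5): t fit {p5_t:.2e} vs Normal {p5_norm:.2e}")
```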
Uncertainty analysis of trade-offs between multiple responses using hypervolume
Cao, Yongtao; Lu, Lu; Anderson-Cook, Christine M.
2017-08-04
When multiple responses are considered in process optimization, the degree to which they can be simultaneously optimized depends on the optimization objectives and the amount of trade-offs between the responses. The normalized hypervolume of the Pareto front is a useful summary to quantify the amount of trade-offs required to balance performance across the multiple responses. In order to quantify the impact of uncertainty in the estimated response surfaces and add realism about what future data to expect, two versions of the scaled normalized hypervolume of the Pareto front are presented. To demonstrate the variation of the hypervolume distributions, we explore a case study for a chemical process involving three responses, each with a different type of optimization goal. Our results show that the global normalized hypervolume characterizes the proximity to the ideal results possible, while the instance-specific summary considers the richness of the front and the severity of trade-offs between alternatives. Furthermore, the two scaling schemes complement each other and highlight different features of the Pareto front and hence are useful to quantify what solutions are possible for simultaneous optimization of multiple responses.
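The sketch below computes a normalized hypervolume for a two-response minimization problem. The ideal-to-reference box scaling is a generic choice for illustration, not either of the paper's two specific scaling schemes, and the response values are invented.

```python
# A minimal normalised hypervolume for a 2-D Pareto front
# (both responses to be minimised), using a generic ideal-to-reference box.
import numpy as np

def pareto_front(points):
    """Return the non-dominated points, sorted by the first objective."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for p in pts:
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(p)
    return np.array(sorted(keep, key=lambda q: q[0]))

def hypervolume_2d(front, ref):
    """Area dominated by the front, up to the reference point."""
    hv, prev_y = 0.0, ref[1]
    for x, y in front:                    # front sorted by x ascending
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

pts = [(1, 5), (2, 3), (3, 2.5), (4, 1), (5, 4)]
front = pareto_front(pts)
ref, ideal = np.array([6.0, 6.0]), front.min(axis=0)
hv = hypervolume_2d(front, ref)
print("normalised HV:", hv / np.prod(ref - ideal))
```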
Simulation's Ensemble is Better Than Ensemble Simulation
NASA Astrophysics Data System (ADS)
Yan, X.
2017-12-01
A dynamical system is simulated from an initial state; however, initial-state data carry great uncertainty, which leads to uncertainty in the simulation. Simulation from multiple possible initial states has therefore been used widely in atmospheric science and has indeed been shown to lower the uncertainty; this was named the simulation's ensemble because multiple simulation results are fused. In the ecological field, individual-based model simulation (forest gap models, for example) can be regarded as a simulation's ensemble compared with community-based simulation (most ecosystem models). In this talk, we address the advantages of individual-based simulation and its ensembles.
Control of thermal therapies with moving power deposition field.
Arora, Dhiraj; Minor, Mark A; Skliar, Mikhail; Roemer, Robert B
2006-03-07
A thermal therapy feedback control approach to control thermal dose using a moving power deposition field is developed and evaluated using simulations. A normal tissue safety objective is incorporated in the controller design by imposing constraints on temperature elevations at selected normal tissue locations. The proposed control technique consists of two stages. The first stage uses a model-based sliding mode controller that dynamically generates an 'ideal' power deposition profile which is generally unrealizable with available heating modalities. Subsequently, in order to approximately realize this spatially distributed idealized power deposition, a constrained quadratic optimizer is implemented to compute intensities and dwell times for a set of pre-selected power deposition fields created by a scanned focused transducer. The dwell times for various power deposition profiles are dynamically generated online as opposed to the commonly employed a priori-decided heating strategies. Dynamic intensity and trajectory generation safeguards the treatment outcome against modelling uncertainties and unknown disturbances. The controller is designed to enforce simultaneous activation of multiple normal tissue temperature constraints by rapidly switching between various power deposition profiles. The hypothesis behind the controller design is that the simultaneous activation of multiple constraints substantially reduces treatment time without compromising normal tissue safety. The controller performance and robustness with respect to parameter uncertainties are evaluated using simulations. The results demonstrate that the proposed controller can successfully deliver the desired thermal dose to the target while maintaining the temperatures at the user-specified normal tissue locations at or below the maximum allowable values. Although demonstrated for the case of a scanned focused ultrasound transducer, the developed approach can be extended to other heating modalities with moving deposition fields, such as external and interstitial ultrasound phased arrays, multiple radiofrequency needle applicators and microwave antennae.
Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration
NASA Technical Reports Server (NTRS)
Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas
1996-01-01
Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one-sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test if the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.
Hinton, Denise; Kirk, Susan
2017-06-01
Background There is growing recognition that multiple sclerosis is a possible, albeit uncommon, diagnosis in childhood. However, very little is known about the experiences of families living with childhood multiple sclerosis, and this is the first study to explore this in depth. Objective Our objective was to explore the experiences of parents of children with multiple sclerosis. Methods Qualitative in-depth interviews with 31 parents using a grounded theory approach were conducted. Parents were sampled and recruited via health service and voluntary sector organisations in the United Kingdom. Results Parents' accounts of life with childhood multiple sclerosis were dominated by feelings of uncertainty associated with four sources: diagnostic uncertainty, daily uncertainty, interaction uncertainty and future uncertainty. Parents attempted to manage these uncertainties using specific strategies, which could in turn create further uncertainties about their child's illness. However, over time, ongoing uncertainty appeared to give parents hope for their child's future with multiple sclerosis. Conclusion Illness-related uncertainties appear to play a role in generating hope among parents of a child with multiple sclerosis. However, this may lead parents to avoid sources of information and support that threaten their fragile optimism. Professionals need to be sensitive to the role hope plays in supporting parental coping with childhood multiple sclerosis.
Meija, Juris; Chartrand, Michelle M G
2018-01-01
Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming a standard practice, the existing uncertainty evaluation practices are either undocumented or incomplete. For multi-point normalization, we present errors-in-variables regression models for explicit accounting of the measurement uncertainty of the international standards along with the uncertainty that is attributed to their assigned values. This manuscript presents a framework to account for the uncertainty that arises due to a small number of replicate measurements, and discusses multi-laboratory data reduction while accounting for the inevitable correlations between laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
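As a hedged illustration of the errors-in-variables idea, the sketch below uses orthogonal distance regression (scipy.odr) as a generic stand-in for the paper's regression models, fitting a two-parameter normalization through three standards whose measured and assigned values, and their uncertainties, are invented numbers.

```python
# A minimal errors-in-variables sketch for multi-point delta-scale
# normalisation, using orthogonal distance regression as a generic
# stand-in for the paper's EIV models. All numbers are illustrative.
import numpy as np
from scipy import odr

measured = np.array([-45.2, -10.1, 2.3])   # raw instrument deltas
u_meas = np.array([0.15, 0.12, 0.10])      # measurement uncertainties
assigned = np.array([-44.6, -9.8, 2.5])    # certified values of standards
u_assn = np.array([0.10, 0.08, 0.05])      # their assigned uncertainties

model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(measured, assigned, sx=u_meas, sy=u_assn)
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()

slope, intercept = fit.beta
print("normalisation:", slope, intercept, "std errors:", fit.sd_beta)
print("sample delta -20.0 normalised to", slope * -20.0 + intercept)
```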
NASA Astrophysics Data System (ADS)
Szatmári, Gábor; Pásztor, László
2016-04-01
Uncertainty is a general term expressing our imperfect knowledge in describing an environmental process (Bárdossy and Fodor, 2004). Sampling, laboratory measurements, models and so on are all subject to uncertainty. Effective quantification and visualization of uncertainty is indispensable to stakeholders (e.g. policy makers, society). Soil-related features and their spatial models should be a particular target of uncertainty assessment because their inferences are further used in modelling and decision-making processes. The aim of our present study was to assess and effectively visualize the local uncertainty of the countrywide soil organic matter (SOM) spatial distribution model of Hungary using geostatistical tools and concepts. The Hungarian Soil Information and Monitoring System's SOM data (approximately 1,200 observations) and environment-related, spatially exhaustive secondary information (i.e. digital elevation model, climatic maps, MODIS satellite images and geological map) were used to model the countrywide SOM spatial distribution by regression kriging. It would be common to use the calculated estimation (or kriging) variance as a measure of uncertainty; however, the normality and homoscedasticity hypotheses had to be rejected according to our preliminary analysis of the data. Therefore, a normal score transformation and a sequential stochastic simulation approach were introduced to model and assess the local uncertainty. Five hundred equally probable realizations (i.e. stochastic images) were generated. This number of stochastic images is sufficient to provide a model of uncertainty at each location, which is a complete description of uncertainty in geostatistics (Deutsch and Journel, 1998). Furthermore, these models can be applied, for example, to contour the probability of any event; such maps can be regarded as goal-oriented digital soil maps and are of interest for agricultural management and decision making as well. A standardized measure of the local entropy was used to visualize uncertainty, where entropy values close to 1 correspond to high uncertainty, whilst values close to 0 correspond to low uncertainty. The advantage of using local entropy in this context is that it combines probabilities from multiple members into a single number for each location of the model. In conclusion, it is straightforward to use a sequential stochastic simulation approach for the assessment of uncertainty when normality and homoscedasticity are violated. The visualization of uncertainty using the local entropy is effective and communicative to stakeholders because it represents the uncertainty through a single number on a [0, 1] scale. References: Bárdossy, Gy. & Fodor, J., 2004. Evaluation of Uncertainties and Risks in Geology. Springer-Verlag, Berlin Heidelberg. Deutsch, C.V. & Journel, A.G., 1998. GSLIB: Geostatistical Software Library and User's Guide. Oxford University Press, New York. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
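A minimal sketch of the standardized local entropy described above, assuming the 500 realizations have been discretized into classes; the synthetic realization array and class count are placeholders.

```python
# Per grid cell, class probabilities are estimated from the ensemble of
# realisations and the entropy is scaled into [0, 1] by dividing by log(k).
import numpy as np

rng = np.random.default_rng(1)
n_real, n_cells, n_classes = 500, 1000, 4            # e.g. SOM classes
real = rng.integers(0, n_classes, size=(n_real, n_cells))

def local_entropy(realisations, k):
    probs = np.stack([(realisations == c).mean(axis=0) for c in range(k)])
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(probs > 0, probs * np.log(probs), 0.0)
    return -terms.sum(axis=0) / np.log(k)            # 1 = maximal uncertainty

H = local_entropy(real, n_classes)
print(H.min(), H.max())                              # values within [0, 1]
```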
Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin
NASA Astrophysics Data System (ADS)
Wei, Y.; Tang, D.; Gao, H.; Ding, Y.
2015-12-01
Population growth and climate change add additional pressures affecting water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailanhe River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with the uncertainties from multiple sources in the water allocation system (e.g., variations of available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed from this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. The optimal irrigation water allocation plans are made under different hydroclimate conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity (by incorporating uncertain factors associated with crop production planning).
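A common, simple way to exercise an interval linear program is to solve the allocation LP at the optimistic and pessimistic bounds of the interval coefficients, bracketing the optimal benefit. The sketch below does exactly that with invented crops, benefits and water budget, not the Tailan River data.

```python
# Solve the allocation LP at both bounds of the interval coefficients to
# bracket the optimal benefit. All numbers are illustrative placeholders.
from scipy.optimize import linprog

benefit = {"low": [-2.0, -3.5], "high": [-2.6, -4.1]}  # negated: linprog minimises
water_use = [[4.0, 6.5]]                               # m3 per unit area, 2 crops
supply = {"low": [800.0], "high": [1100.0]}            # available water interval
bounds = [(0, 300), (0, 150)]                          # per-crop area limits

for case in ("low", "high"):
    res = linprog(c=benefit[case], A_ub=water_use, b_ub=supply[case],
                  bounds=bounds, method="highs")
    print(case, "allocation:", res.x, "benefit:", -res.fun)
```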
Tao, Xiaofeng; Zhang, Bin; Shen, Guofu; Wensveen, Janice; Smith, Earl L.; Nishimoto, Shinji; Ohzawa, Izumi; Chino, Yuzo M.
2014-01-01
Experiencing different quality images in the two eyes soon after birth can cause amblyopia, a developmental vision disorder. Amblyopic humans show a reduced capacity for judging the relative position of a visual target in reference to nearby stimulus elements (position uncertainty) and often experience visual image distortion. Although abnormal pooling of local stimulus information by neurons beyond striate cortex (V1) is often suggested as a neural basis of these deficits, extrastriate neurons in the amblyopic brain have rarely been studied using microelectrode recording methods. The receptive field (RF) of neurons in visual area V2 in normal monkeys is made up of multiple subfields that are thought to reflect V1 inputs and are capable of encoding the spatial relationship between local stimulus features. We created primate models of anisometropic amblyopia and analyzed the RF subfield maps for multiple nearby V2 neurons of anesthetized monkeys by using dynamic two-dimensional noise stimuli and reverse correlation methods. Unlike in normal monkeys, the subfield maps of V2 neurons in amblyopic monkeys were severely disorganized: subfield maps showed higher heterogeneity within each neuron as well as across nearby neurons. Amblyopic V2 neurons exhibited robust binocular suppression, and the strength of the suppression was positively correlated with the degree of heterogeneity and the severity of amblyopia in individual monkeys. Our results suggest that the disorganized subfield maps and robust binocular suppression of amblyopic V2 neurons are likely to adversely affect the higher stages of cortical processing, resulting in position uncertainty and image distortion. PMID:25297110
Cronkite-Ratcliff, C.; Phelps, G.A.; Boucher, A.
2012-01-01
This report provides a proof-of-concept to demonstrate the potential application of multiple-point geostatistics for characterizing geologic heterogeneity and its effect on flow and transport simulation. The study presented in this report is the result of collaboration between the U.S. Geological Survey (USGS) and Stanford University. This collaboration focused on improving the characterization of alluvial deposits by incorporating prior knowledge of geologic structure and estimating the uncertainty of the modeled geologic units. In this study, geologic heterogeneity of alluvial units is characterized as a set of stochastic realizations, and uncertainty is indicated by variability in the results of flow and transport simulations for this set of realizations. This approach is tested on a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. Yucca Flat was chosen as a data source for this test case because it includes both complex geologic and hydrologic characteristics and also contains a substantial amount of both surface and subsurface geologic data. Multiple-point geostatistics is used to model geologic heterogeneity in the subsurface. A three-dimensional (3D) model of spatial variability is developed by integrating alluvial units mapped at the surface with vertical drill-hole data. The SNESIM (Single Normal Equation Simulation) algorithm is used to represent geologic heterogeneity stochastically by generating 20 realizations, each of which represents an equally probable geologic scenario. A 3D numerical model is used to simulate groundwater flow and contaminant transport for each realization, producing a distribution of flow and transport responses to the geologic heterogeneity. From this distribution of flow and transport responses, the frequency of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary.
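The last step described above reduces, in essence, to an exceedance-frequency map over the realization set. A minimal sketch with synthetic concentration fields standing in for the 20 transport simulations:

```python
# Fraction of realisations in which simulated concentration exceeds a
# threshold, per cell; fields and threshold are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_real, nx, ny = 20, 50, 50
conc = rng.lognormal(mean=-3.0, sigma=1.5, size=(n_real, nx, ny))

threshold = 0.05                              # illustrative concentration limit
p_exceed = (conc > threshold).mean(axis=0)    # per-cell exceedance frequency

# Cells where the plume boundary is uncertain: neither clearly in nor out
fuzzy = (p_exceed > 0.05) & (p_exceed < 0.95)
print("uncertain-boundary cells:", int(fuzzy.sum()))
```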
NASA Astrophysics Data System (ADS)
Stankovskiy, Alexey; Çelik, Yurdunaz; Eynde, Gert Van den
2017-09-01
Perturbation of the external neutron source can cause significant local power changes that translate into undesired safety-related events in an accelerator driven system. Therefore, for the accurate design of the MYRRHA sub-critical core, it is important to evaluate the uncertainty of power responses caused by the uncertainties in the nuclear reaction models describing particle transport from the primary proton energy down to the evaluated nuclear data table range. Calculations with a set of models resulted in quite low uncertainty in the local power caused by significant perturbation of the primary neutron yield from proton interactions with lead and bismuth isotopes. The considered accidental event of prescribed proton beam shape loss causes a drastic increase in local power but practically does not change the total core thermal power, making this effect difficult to detect. At the same time, the results demonstrate a correlation between perturbed local power responses in normal operation and misaligned beam conditions, indicating that generation of covariance data for proton- and neutron-induced neutron multiplicities for lead and bismuth isotopes is needed to obtain reliable uncertainties for local power responses.
Kim, Hea-Jung
2014-01-01
This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint, accounted for by using the HSGM, is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.
Crisis Management for Biobanks.
Parry-Jones, Alison; Hansen, Jarle; Simeon-Dubach, Daniel; Bjugn, Roger
2017-06-01
All organizations are subject to risk and uncertainty. Adverse events may disrupt normal organizational activity and may even cause complete failure of business operations. Biorepositories are also at risk and there have been instances where multiple samples or entire collections have been destroyed. Biobank guidelines accordingly recommend the establishment of contingency plans to reduce risk to an acceptable level. In this review article, we will use general theory on risk management and illustrate how such principles can be used to establish a practical crisis management plan for any biobank organization.
NASA Astrophysics Data System (ADS)
Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas
2018-02-01
Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for incorporation into any existing flow-based upscaling procedure, which helps in reducing the uncertainty of groundwater models.
NASA Astrophysics Data System (ADS)
Chang, Seung Jin; Lee, Chun Ku; Shin, Yong-June; Park, Jin Bae
2016-12-01
A multiple chirp reflectometry system with a fault estimation process is proposed to obtain multiple resolution and to measure the degree of fault in a target cable. The multiple resolution algorithm has the ability to localize faults regardless of fault location. The time delay information, which is derived from the normalized cross-correlation between the incident signal and bandpass-filtered reflected signals, is converted to a fault location and cable length. The in-phase and quadrature components are obtained by lowpass filtering of the mixed signal of the incident signal and the reflected signal. Based on the in-phase and quadrature components, the reflection coefficient is estimated by the proposed fault estimation process, including the mixing and filtering procedure. Also, the measurement uncertainty for this experiment is analyzed according to the Guide to the Expression of Uncertainty in Measurement. To verify the performance of the proposed method, we conduct comparative experiments to detect and measure faults under different conditions. Considering the installation environment of the high voltage cable used in an actual vehicle, the target cable length and fault position are designed accordingly. To simulate the degree of fault, a variety of termination impedances (10 Ω, 30 Ω, 50 Ω, and 1 kΩ) is used and estimated by the proposed method in this experiment. The proposed method demonstrates advantages in that it has multiple resolution to overcome the blind-spot problem, and it can assess the state of the fault.
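A minimal sketch of the time-delay step, estimating the lag of a reflected chirp by cross-correlation and converting it to a fault distance. The sample rate, chirp parameters, reflection coefficient and propagation velocity are invented, and the normalization is a rough global one rather than a per-window normalized cross-correlation.

```python
# Time-delay estimation by cross-correlating an incident chirp with a
# reflected signal, then converting the lag to a fault distance.
import numpy as np

fs, T = 1e6, 1e-3                        # 1 MHz sampling, 1 ms chirp
t = np.arange(0, T, 1 / fs)
incident = np.sin(2 * np.pi * (1e3 + 5e6 * t) * t)   # linear chirp

delay_samples = 240                      # true delay for the synthetic test
reflected = np.zeros(len(incident) + 400)
reflected[delay_samples:delay_samples + len(incident)] = 0.4 * incident
reflected += 0.05 * np.random.default_rng(3).standard_normal(len(reflected))

corr = np.correlate(reflected, incident, mode="valid")
corr /= np.linalg.norm(incident) * np.linalg.norm(reflected)  # rough scaling
lag = int(np.argmax(corr))

v = 1.5e8                                # assumed propagation velocity, m/s
print("estimated fault distance:", lag / fs * v / 2, "m")
```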
Optimization of Ocean Color Algorithms: Application to Satellite Data Merging
NASA Technical Reports Server (NTRS)
Maritorena, Stephane; Siegel, David A.; Morel, Andre
2004-01-01
The objective of the program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project. As part of the SIMBIOS Program, we have developed a merging method for ocean color data. Unlike other methods, our approach does not combine end-products such as the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances LwN(λ) from single or multiple sensors and uses them in the inversion of a semi-analytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands; 2) it exploits band redundancies and band differences; 3) it accounts for uncertainties in the LwN(λ) data; 4) it provides uncertainty estimates for the retrieved variables.
Extreme risk assessment based on normalized historic loss data
NASA Astrophysics Data System (ADS)
Eichner, Jan
2017-04-01
Natural hazard risk assessment and risk management focus on the expected loss magnitudes of rare and extreme events. Such large-scale loss events typically comprise all aspects of compound events and accumulate losses from multiple sectors (including knock-on effects). Utilizing Munich Re's NatCatSERVICE direct economic loss data, we briefly recap a novel methodology of peril-specific loss data normalization, which improves the stationarity properties of highly non-stationary historic loss data (non-stationary due to socio-economic growth of assets prone to destructive forces), and perform extreme value analysis (peaks-over-threshold method) to derive return-level estimates of, for example, 100-yr loss event scenarios for various types of perils, globally or per continent, and discuss uncertainty in the results.
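A minimal peaks-over-threshold sketch on synthetic normalized losses: fit a generalized Pareto distribution to the threshold excesses and read off a 100-year return level using the standard GPD return-level formula. The loss series, threshold choice and event frequency are all invented.

```python
# Peaks-over-threshold on a synthetic loss series: GPD fit to excesses,
# then the standard return-level formula z_m = u + (s/xi)((m*zeta_u)^xi - 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
losses = stats.lognorm.rvs(s=1.2, scale=50, size=2000, random_state=rng)

u = np.quantile(losses, 0.95)                 # threshold choice matters
excess = losses[losses > u] - u
shape, _, scale = stats.genpareto.fit(excess, floc=0)

events_per_year = 40                          # assumed event frequency
zeta_u = (losses > u).mean()                  # exceedance rate
m = 100 * events_per_year                     # 100-year horizon in events
if abs(shape) < 1e-6:                         # xi -> 0 (Gumbel) limit
    rl_100 = u + scale * np.log(m * zeta_u)
else:
    rl_100 = u + scale / shape * ((m * zeta_u) ** shape - 1)
print(f"GPD shape {shape:.2f}, 100-yr return level {rl_100:.0f}")
```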
Zhu, Tianqi; Dos Reis, Mario; Yang, Ziheng
2015-03-01
Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not directly about the absolute divergence times or the evolutionary rates. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time estimates will not approach point values even if an infinite amount of sequence data are used in the analysis. In a previous study we developed a finite-sites theory to characterize the uncertainty in Bayesian divergence time estimation in analysis of large but finite sequence data sets under a strict molecular clock. As most modern clock dating analyses use more than one locus and are conducted under relaxed clock models, here we extend the theory to the case of relaxed clock analysis of data from multiple loci (site partitions). Uncertainty in posterior time estimates is partitioned into three sources: sampling errors in the estimates of branch lengths in the tree for each locus due to limited sequence length, variation of substitution rates among lineages and among loci, and uncertainty in fossil calibrations. Using a simple but analogous estimation problem involving the multivariate normal distribution, we predict that as the number of loci (L) goes to infinity, the variance in posterior time estimates decreases and approaches the infinite-data limit at the rate of 1/L, and the limit is independent of the number of sites in the sequence alignment. We then confirmed the predictions by using computer simulation on phylogenies of two or three species, and by analyzing a real genomic data set for six primate species. Our results suggest that with the fossil calibrations fixed, analyzing multiple loci or site partitions is the most effective way of improving the precision of posterior time estimation. However, even if a huge amount of sequence data is analyzed, considerable uncertainty will persist in time estimates.
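A toy simulation of the limit argument, under strong simplifying assumptions: a shared calibration error sets a variance floor, while locus-specific rate noise averages out at a 1/L rate. The estimator and all numbers are illustrative only.

```python
# Variance of a toy time estimate behaves like c0 + c1/L: the calibration
# term c0 persists however many loci are averaged. Purely illustrative.
import numpy as np

rng = np.random.default_rng(5)
true_time, sigma_rate, sigma_cal = 10.0, 0.2, 0.5

def time_estimate(L):
    cal = true_time + sigma_cal * rng.standard_normal()          # calibration draw
    per_locus = cal * (1 + sigma_rate * rng.standard_normal(L))  # locus-rate noise
    return per_locus.mean()

for L in (1, 10, 100, 1000):
    est = np.array([time_estimate(L) for _ in range(5000)])
    print(f"L={L:5d}  var={est.var():.4f}")   # decreases towards the floor
```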
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
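For orientation, the sketch below generates a dyadic multiplicative log-normal cascade and applies a naive moment check on the intermittency parameter λ²; the paper's GMM estimator is considerably more careful. Parameters are invented.

```python
# Dyadic multiplicative log-normal cascade generator plus a naive
# moment-based check of the intermittency parameter lambda^2.
import numpy as np

rng = np.random.default_rng(6)

def lognormal_cascade(n_levels, lam2):
    """Build a dyadic cascade: each level multiplies halves by LN weights."""
    field = np.ones(1)
    for _ in range(n_levels):
        w = np.exp(np.sqrt(lam2) * rng.standard_normal(2 * len(field)))
        field = np.repeat(field, 2) * w
    return field

lam2_true, n = 0.05, 14
field = lognormal_cascade(n, lam2_true)

# log measure of a fine cell = sum of n independent N(0, lam2) log-weights,
# so Var(log field) ~ n * lam2 (cells are correlated; this is only rough)
print("naive lambda^2 estimate:", np.log(field).var() / n)
```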
Carcioppolo, Nick; Yang, Fan; Yang, Qinghua
2016-09-01
Uncertainty is a central characteristic of many aspects of cancer prevention, screening, diagnosis, and treatment. Brashers's (2001) uncertainty management theory details the multifaceted nature of uncertainty and describes situations in which uncertainty can both positively and negatively affect health outcomes. The current study extends theory on uncertainty management by developing four scale measures of uncertainty preferences in the context of cancer. Two national surveys were conducted to validate the scales and assess convergent and concurrent validity. Results support the factor structure of each measure and provide general support across multiple validity assessments. These scales can advance research on uncertainty and cancer communication by providing researchers with measures that address multiple aspects of uncertainty management.
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
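A minimal sketch of the core idea, imputations drawn from a distribution of imputation models: here a logistic imputation model whose intercept offset δ (a nonignorability parameter) is drawn per imputation from a subjective prior. The data, fixed coefficients and prior are illustrative, not the smoking-cessation trial.

```python
# Binary imputations generated from a *distribution* of imputation models:
# the offset delta shifts the logistic model and is drawn per imputation.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.standard_normal(n)
y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 + 1.0 * x)))).astype(float)
missing = rng.random(n) < 0.3
y[missing] = np.nan

def impute_once(delta):
    """Impute missing y under a shifted logistic model (MNAR offset delta)."""
    b0, b1 = 0.5, 1.0   # stand-ins for coefficients fitted on observed data
    p = 1 / (1 + np.exp(-(b0 + b1 * x[missing] + delta)))
    return (rng.random(missing.sum()) < p).astype(float)

imputations = [impute_once(delta=rng.normal(0.0, 0.5)) for _ in range(20)]
print("mean imputed prevalence:", np.mean([imp.mean() for imp in imputations]))
```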
Lopiano, Kenneth K; Young, Linda J; Gotway, Carol A
2014-09-01
Spatially referenced datasets arising from multiple sources are routinely combined to assess relationships among various outcomes and covariates. The geographical units associated with the data, such as the geographical coordinates or areal-level administrative units, are often spatially misaligned, that is, observed at different locations or aggregated over different geographical units. As a result, the covariate is often predicted at the locations where the response is observed. The method used to align disparate datasets must be accounted for when subsequently modeling the aligned data. Here we consider the case where kriging is used to align datasets in point-to-point and point-to-areal misalignment problems when the response variable is non-normally distributed. If the relationship is modeled using generalized linear models, the additional uncertainty induced from using the kriging mean as a covariate introduces a Berkson error structure. In this article, we develop a pseudo-penalized quasi-likelihood algorithm to account for the additional uncertainty when estimating regression parameters and associated measures of uncertainty. The method is applied to a point-to-point example assessing the relationship between low birth weights and PM2.5 levels after the onset of the largest wildfire in Florida history, the Bugaboo scrub fire. A point-to-areal misalignment problem is presented, where the relationship between asthma events in Florida's counties and PM2.5 levels after the onset of the fire is assessed. Finally, the method is evaluated using a simulation study. Our results indicate that the method performs well in terms of coverage for 95% confidence intervals, and that naive methods that ignore the additional uncertainty tend to underestimate the variability associated with parameter estimates. The underestimation is most profound in Poisson regression models.
Low order models for uncertainty quantification in acoustic propagation problems
NASA Astrophysics Data System (ADS)
Millet, Christophe
2016-11-01
Long-range sound propagation problems are characterized by both a large number of length scales and a large number of normal modes. In the atmosphere, these modes are confined within waveguides, causing the sound to propagate through multiple paths to the receiver. For uncertain atmospheres, the modes are described as random variables. Concise mathematical models and analysis reveal fundamental limitations in classical projection techniques due to different manifestations of the fact that modes that carry small variance can have important effects on the large-variance modes. In the present study, we propose a systematic strategy for obtaining statistically accurate low-order models. The normal modes are sorted in decreasing order of Sobol indices using asymptotic expansions, and the relevant modes are extracted using a modified iterative Krylov-based method. The statistics of acoustic signals are computed by decomposing the original pulse into a truncated sum of modal pulses that can be described by a stationary phase method. As the low-order acoustic model preserves the overall structure of waveforms under perturbations of the atmosphere, it can be applied to uncertainty quantification. The result of this study is a new algorithm which applies to the entire phase space of acoustic fields.
Uncertainty, culture and pathways to care in paediatric functional gastrointestinal disorders.
Fortin, Sylvie; Gauthier, Annie; Gomez, Liliana; Faure, Christophe; Bibeau, Gilles; Rasquin, Andrée
2013-01-01
This paper examines how children and families of diverse ethnic backgrounds perceive, understand and treat symptoms related to functional gastrointestinal disorders (FGIDs). It is questioned how different ways of dealing with medical uncertainty (symptoms, diagnosis) may influence treatment pathways. Semi-structured interviews were conducted with 43 children of 38 family groups of immigrant and non-immigrant backgrounds. The analysis takes into account (a) the perceived symptoms; (b) the meaning attributed to them; and (c) the actions taken to relieve them. The social and cultural contexts that permeate these symptoms, meanings and actions were also examined. It is found that, in light of diagnostic and therapeutic uncertainty, non-immigrant families are more likely to consult health professionals. Immigrant families more readily rely upon home remedies, family support and, for some, religious beliefs to temper the uncertainty linked to abdominal pain. Furthermore, non-immigrant children lead a greater quest for legitimacy of their pain at home while most immigrant families place stomach aches in the range of normality. Intracultural variations nuance these findings, as well as family dynamics. It is concluded that different courses of action and family dynamics reveal that uncertainty is dealt with in multiple ways. Family support, the network, and trust in a child's expression of distress are key elements in order to tolerate uncertainty. Lastly, the medical encounter is described as a space permeated with relational uncertainty given the different registers of expression inherent within a cosmopolitan milieu. Narrative practices being an essential dynamic of this encounter, it is questioned whether families' voices are equally heard in these clinical spaces.
Simulation Based Earthquake Forecasting with RSQSim
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.
2016-12-01
We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
Sources of uncertainty in hydrological climate impact assessment: a cross-scale study
NASA Astrophysics Data System (ADS)
Hattermann, F. F.; Vetter, T.; Breuer, L.; Su, Buda; Daggupati, P.; Donnelly, C.; Fekete, B.; Flörke, F.; Gosling, S. N.; Hoffmann, P.; Liersch, S.; Masaki, Y.; Motovilov, Y.; Müller, C.; Samaniego, L.; Stacke, T.; Wada, Y.; Yang, T.; Krysnaova, V.
2018-01-01
Climate change impacts on water availability and hydrological extremes are major concerns as regards the Sustainable Development Goals. Impacts on hydrology are normally investigated as part of a modelling chain, in which climate projections from multiple climate models are used as inputs to multiple impact models, under different greenhouse gas emissions scenarios, which result in different amounts of global temperature rise. While the goal is generally to investigate the relevance of changes in climate for the water cycle, water resources or hydrological extremes, it is often the case that variations in other components of the model chain obscure the effect of climate scenario variation. This is particularly important when assessing the impacts of relatively lower magnitudes of global warming, such as those associated with the aspirational goals of the Paris Agreement. In our study, we use ANOVA (analysis of variance) to allocate and quantify the main sources of uncertainty in the hydrological impact modelling chain, and in turn determine the statistical significance of the different sources of uncertainty. We achieve this by using a set of five climate models and up to 13 hydrological models, for nine large-scale river basins across the globe, under four emissions scenarios. The impact variable we consider in our analysis is daily river discharge. We analyze overall water availability and flow regime, including seasonality, high flows and low flows. Scaling effects are investigated by separately looking at discharge generated by global and regional hydrological models, respectively. Finally, we compare our results with other recently published studies. We find that small differences in global temperature rise associated with some emissions scenarios have mostly significant impacts on river discharge; however, climate-model-related uncertainty is so large that it obscures the sensitivity of the hydrological system.
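A minimal sketch of the variance allocation step: a two-way ANOVA over a grid of projected discharge changes, with synthetic numbers standing in for the 5-climate-model by up-to-13-hydrological-model ensemble.

```python
# Two-way ANOVA sums of squares allocate ensemble variance to climate
# models, hydrological models and their interaction/residual.
import numpy as np

rng = np.random.default_rng(8)
n_gcm, n_hyd = 5, 13
gcm_effect = rng.normal(0, 8, size=(n_gcm, 1))     # strong climate signal
hyd_effect = rng.normal(0, 3, size=(1, n_hyd))
change = 10 + gcm_effect + hyd_effect + rng.normal(0, 1, size=(n_gcm, n_hyd))

grand = change.mean()
ss_gcm = n_hyd * ((change.mean(axis=1) - grand) ** 2).sum()
ss_hyd = n_gcm * ((change.mean(axis=0) - grand) ** 2).sum()
ss_tot = ((change - grand) ** 2).sum()
ss_res = ss_tot - ss_gcm - ss_hyd

for name, ss in [("climate", ss_gcm), ("hydrology", ss_hyd), ("interaction", ss_res)]:
    print(f"{name:11s} {100 * ss / ss_tot:5.1f}% of variance")
```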
The Mn-53-Cr-53 System in CAIs: An Update
NASA Technical Reports Server (NTRS)
Papanastassiou, D. A.; Wasserburg, G. J.; Bogdanovski, O.
2005-01-01
High-precision techniques have been developed for the measurement of Cr isotopes on the Triton mass spectrometer at JPL. It is clear that multiple-Faraday-cup, simultaneous ion collection may reduce the uncertainty of isotope ratios relative to single-Faraday-cup ion collection, by eliminating uncertainties from ion beam instabilities (since, for single-cup collection, ion beam intensities are interpolated in time to calculate isotope ratios) and through a greatly increased data-collection duty cycle. Efforts to measure Cr by simultaneous ion collection have not been successful in the past. Determinations of Cr-50 through Cr-54 by simultaneous ion collection on the Finnigan/MAT 262 instrument at Caltech resulted in large variations in extrinsic precision for normal Cr, of up to 1% in Cr-53/Cr-52 (data corrected for mass fractionation using Cr-50/Cr-52).
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
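A minimal sketch contrasting the two routes for the Behrens-Fisher-type case in the abstract: the Welch-Satterthwaite effective degrees of freedom, versus a normal approximation in which each Type A variance is inflated by (n - 1)/(n - 3), as in Kacker's Bayesian proposal. The sample standard deviations and sizes are invented.

```python
# Welch-Satterthwaite effective dof versus the Bayesian-normal alternative
# for the difference of two means with unequal variances.
import numpy as np
from scipy import stats

s, n = np.array([0.8, 1.3]), np.array([5, 8])   # sample sd and size per mean
u = s / np.sqrt(n)                        # standard uncertainties of the means
u_c = np.sqrt((u ** 2).sum())             # combined standard uncertainty

# Welch-Satterthwaite: nu_eff = u_c^4 / sum(u_i^4 / nu_i), nu_i = n_i - 1
nu_eff = u_c ** 4 / ((u ** 4) / (n - 1)).sum()
k_t = stats.t.ppf(0.975, nu_eff)          # coverage factor from t

# Bayesian route: inflate each Type A variance by (n-1)/(n-3), then use a
# plain normal coverage factor -- no effective dof needed
u_b = np.sqrt(((u ** 2) * (n - 1) / (n - 3)).sum())
k_n = stats.norm.ppf(0.975)

print(f"W-S:             nu_eff={nu_eff:.1f}, U = {k_t * u_c:.3f}")
print(f"Bayesian normal:             U = {k_n * u_b:.3f}")
```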
Rempel, Gwen R; Harrison, Margaret J; Williamson, Deanna L
2009-04-01
Developing technology affords children with complex congenitally malformed hearts a chance for survival. Parents gratefully pursue life-saving options on behalf of their children, despite the risks to the life of their child and uncertainty about outcomes. Little is known about how mothers and fathers experience parenting a child whose new state as a survivor may include less-than-optimal developmental sequelae. Our study involved multiple interactive interviews with 9 mothers and 7 fathers of infants and preschool children with hypoplastic left heart syndrome who had survived the Norwood surgical approach. Qualitative methodology included grounded theory methods of simultaneous collection and analysis of data, and we used open and selective coding of transcribed interviews. Parents used normalization in the context of uncertainty regarding the ongoing survival of their child. Parents described their underweight children as being on their own growth curve, and viewed their developmental progress, however delayed, as reason for celebration, as they had been prepared for their child to die. There is growing evidence that children with congenitally malformed hearts who require surgical intervention during the first year of life may experience developmental delay. The use of normalization by their parents may be effective in decreasing their worry regarding the uncertain future faced by their child, but it may negatively affect the developmental progress of the child if they do not seek resources to assist development. Advice from paediatric specialists for parents to view their children as normal needs to be balanced with assistance for parents to access services to support optimal growth and development of their child.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and index data. Process and model uncertainty are considered through growth, natural mortality, fishing mortality, survey catchability and the stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
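A minimal sketch of the final integration step, assuming simple model averaging over draws from several candidate assessments; the model outputs and equal weights are invented.

```python
# Combine stock-status draws from candidate assessment models into one
# distribution by weighted resampling. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(10)
# posterior/bootstrap draws of, say, spawning stock biomass per model
models = {
    "M-low":  rng.normal(45_000, 4_000, 1000),
    "M-base": rng.normal(52_000, 5_000, 1000),
    "M-high": rng.normal(60_000, 7_000, 1000),
}
weights = {"M-low": 1 / 3, "M-base": 1 / 3, "M-high": 1 / 3}  # equal weights

draws = np.concatenate([
    rng.choice(v, size=round(3000 * weights[k])) for k, v in models.items()
])
print("averaged SSB:", draws.mean(), "90% interval:",
      np.percentile(draws, [5, 95]))
```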
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedures via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
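The paper's full GMM machinery is involved; as a minimal illustration of the underlying moment logic, the sketch below simulates a k-step log-normal cascade and recovers the intermittency parameter from the variance of log amplitudes, assuming the number of cascade steps is known. All settings are invented for the example.

```python
# Minimal moment-based sketch (not the paper's full GMM): simulate a discrete
# multiplicative log-normal cascade and estimate the intermittency parameter
# lambda^2 from the variance of log amplitudes.
import numpy as np

rng = np.random.default_rng(0)
k = 10            # number of cascade steps (assumed known here)
lam2 = 0.2        # true intermittency parameter (log-variance per step)
n = 20000         # number of cascade realizations

# Each amplitude is a product of k i.i.d. log-normal multipliers, so
# log(amplitude) is a sum of k normals, each with variance lam2. The drift
# -lam2/2 makes each multiplier have unit mean.
log_w = rng.normal(loc=-lam2 / 2, scale=np.sqrt(lam2), size=(n, k))
log_amp = log_w.sum(axis=1)

# Method-of-moments estimator: Var(log amplitude) = k * lambda^2.
lam2_hat = log_amp.var(ddof=1) / k
print(f"true lambda^2 = {lam2}, estimated = {lam2_hat:.4f}")
```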
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial extent of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
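A minimal sketch of the Monte Carlo sensitivity-ranking idea, using Spearman rank correlations in place of the stepwise regression, contingency table and classification tree suite; the plume-extent response function and parameter ranges below are placeholders, not the UGTA flow and transport models.

```python
# Hedged sketch: Monte Carlo propagation of transport-parameter uncertainty
# with a rank-correlation sensitivity screen. The response function is a
# stand-in, not the UGTA models.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 2000
# Illustrative parameter draws (log-uniform ranges are assumptions).
porosity = 10 ** rng.uniform(-2, -0.5, n)    # effective porosity
aperture = 10 ** rng.uniform(-4, -3, n)      # fracture aperture (m)
k_sorb   = 10 ** rng.uniform(-1, 1, n)       # sorption coefficient

# Placeholder plume-extent metric: faster flow (low porosity), wider
# apertures and weak sorption all enlarge the plume; small noise added.
plume = aperture ** 2 / (porosity * (1.0 + k_sorb)) * (1 + 0.1 * rng.normal(size=n))

for name, x in [("porosity", porosity), ("aperture", aperture), ("k_sorb", k_sorb)]:
    rho, _ = spearmanr(x, plume)
    print(f"{name:9s} Spearman rho = {rho:+.2f}")
```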
NASA Astrophysics Data System (ADS)
Zhang, Jun; Zhang, Yang; Yu, Chang-Shui
2015-06-01
The Heisenberg uncertainty principle shows that one cannot simultaneously specify the values of two non-commuting, canonically conjugate variables. However, the uncertainty relation is usually applied to two incompatible measurements. We present tighter bounds on both the entropic uncertainty relation and the information exclusion relation for multiple measurements in the presence of quantum memory. As applications, three incompatible measurements on the Werner state and Horodecki's bound entangled state are investigated in detail.
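For orientation, the two-measurement relation that bounds of this kind tighten and generalize is the quantum-memory-assisted entropic uncertainty relation of Berta et al. (2010); a standard statement is:

```latex
% Two-measurement entropic uncertainty relation with quantum memory B
% (Berta et al. 2010). H(X|B), H(Z|B) are conditional entropies of the
% measurement outcomes, S(A|B) the conditional von Neumann entropy, and c
% the maximal overlap of the two measurement bases.
\[
  H(X \mid B) + H(Z \mid B) \;\ge\; \log_2 \frac{1}{c} + S(A \mid B),
  \qquad c = \max_{x,z} \bigl| \langle \psi_x \mid \phi_z \rangle \bigr|^2 .
\]
```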
NASA Astrophysics Data System (ADS)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Knünz, V.; König, A.; Krammer, M.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rad, N.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Heracleous, N.; Keaveney, J.; Lowette, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Van Parijs, I.; Barria, P.; Brun, H.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Maerschalk, T.; Marinov, A.; Pernié, L.; Randle-conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Beernaert, K.; Benucci, L.; Cimmino, A.; Crucy, S.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; Mccartin, J.; Ocampo Rios, A. A.; Poyraz, D.; Ryckbosch, D.; Salva, S.; Sigamani, M.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; Ceard, L.; Delaere, C.; Favart, D.; Forthomme, L.; Giammanco, A.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Mertens, A.; Musich, M.; Nuttens, C.; Perrini, L.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Beliy, N.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Hamer, M.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; De Souza Santos, A.; Dogra, S.; Tomei, T. R. Fernandez Perez; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Leggat, D.; Plestina, R.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Micanovic, S.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Bodlak, M.; Finger, M.; Finger, M.; Abdelalim, A. 
A.; Awad, A.; Mahrous, A.; Radi, A.; Calpas, B.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Peltola, T.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Filipovic, N.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Mastrolorenzo, L.; Miné, P.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Merlin, J. A.; Skovpen, K.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Bouvier, E.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Feld, L.; Heister, A.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Schael, S.; Schulte, J. F.; Verlage, T.; Weber, H.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Olschewski, M.; Padeken, K.; Papacz, P.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Hoehle, F.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Nehrkorn, A.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behnke, O.; Behrens, U.; Borras, K.; Burgmeier, A.; Campbell, A.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Seitz, C.; Spannagel, S.; Trippkewitz, K. D.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Erfle, J.; Garutti, E.; Goebel, K.; Gonzalez, D.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. 
S.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Nowatschin, D.; Ott, J.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Rathjens, D.; Sander, C.; Scharf, C.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Schumann, S.; Schwandt, J.; Sola, V.; Stadie, H.; Steinbrück, G.; Stober, F. M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; Colombo, F.; De Boer, W.; Descroix, A.; Dierlamm, A.; Fink, S.; Frensch, F.; Friese, R.; Giffels, M.; Gilbert, A.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Kornmayer, A.; Lobelle Pardo, P.; Maier, B.; Mildner, H.; Mozer, M. U.; Müller, T.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Schröder, M.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Bencze, G.; Hajdu, C.; Hazi, A.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Szillasi, Z.; Bartók, M.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Choudhury, S.; Mal, P.; Mandal, K.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. B.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutta, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Jain, Sa.; Kole, G.; Kumar, S.; Mahakud, B.; Maity, M.; Majumder, G.; Mazumdar, K.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sarkar, T.; Sur, N.; Sutar, B.; Wickramage, N.; Chauhan, S.; Dube, S.; Kapoor, A.; Kothekar, K.; Sharma, S.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Silvestris, L.; Venditti, R.; Abbiendi, G.; Battilana, C.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Cappello, G.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Lo Vetere, M.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Malvezzi, S.; Manzoni, R. A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Bellato, M.; Benato, L.; Boletti, A.; Branca, A.; Dall'Osso, M.; Dorigo, T.; Fantinel, S.; Fanzago, F.; Gonella, F.; Gozzelino, A.; Kanishchev, K.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Montecassiano, F.; Passaseo, M.; Pazzini, J.; Pegoraro, M.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Ventura, S.; Zanetti, M.; Zotto, P.; Zucchetta, A.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; D'imperio, G.; Del Re, D.; Diemoz, M.; Gelli, S.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Zanetti, A.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Sakharov, A.; Son, D. C.; Brochero Cifuentes, J. A.; Kim, H.; Kim, T. J.; Song, S.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Hong, B.; Kim, H.; Kim, Y.; Lee, B.; Lee, K.; Lee, K. S.; Lee, S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Casimiro Linares, E.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. 
A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Chadeeva, M.; Danilov, M.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Baskakov, A.; Belyaev, A.; Boos, E.; Ershov, A.; Gribushin, A.; Kaminskiy, A.; Kodolova, O.; Korotkikh, V.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Vardanyan, I.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Santaolalla, J.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Palencia Cortezon, E.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras De Saa, J. R.; De Castro Manzano, P.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Berruti, G. M.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Breuker, H.; Camporesi, T.; Castello, R.; Cerminara, G.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Guio, F.; De Roeck, A.; De Visscher, S.; Di Marco, E.; Dobson, M.; Dordevic, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kirschenmann, H.; Kortelainen, M. 
J.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Magini, N.; Malgeri, L.; Mannelli, M.; Martelli, A.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Nemallapudi, M. V.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Piparo, D.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Ruan, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Simon, M.; Sphicas, P.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Triossi, A.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegá, M.; Eller, P.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lecomte, P.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Takahashi, M.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; Chiochia, V.; De Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Yang, Y.; Cardaci, M.; Chen, K. H.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Fiori, F.; Grundler, U.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Petrakou, E.; Tsai, J. f.; Tzeng, Y. M.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Cerci, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Gecit, F. H.; Girgis, S.; Gokbulut, G.; Guler, Y.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozcan, M.; Ozdemir, K.; Ozturk, S.; Tali, B.; Topakli, H.; Zorbilmez, C.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Vardarlí, F. I.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Senkin, S.; Smith, D.; Smith, V. J.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Worm, S. D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Dunne, P.; Elwood, A.; Futyan, D.; Hall, G.; Iles, G.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leslie, D.; Reid, I. 
D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Alimena, J.; Berry, E.; Cutts, D.; Ferapontov, A.; Garabedian, A.; Hakala, J.; Heintz, U.; Jesus, O.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Syarif, R.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Florent, A.; Hauser, J.; Ignatenko, M.; Saltzberg, D.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Paneva, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Malberti, M.; Olmedo Negrete, M.; Shrinivas, A.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. T.; Derdzinski, M.; Holzner, A.; Kelley, R.; Klein, D.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Incandela, J.; Mccoll, N.; Mullin, S. D.; Richman, J.; Stuart, D.; Suarez, I.; West, C.; Yoo, J.; Anderson, D.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Nauenberg, U.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Sun, W.; Tan, S. M.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Wittich, P.; Abdullin, S.; Albrow, M.; Apollinari, G.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Gleyzer, S. 
V.; Konigsberg, J.; Korytov, A.; Kotov, K.; Ma, P.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Rank, D.; Rossin, R.; Shchutska, L.; Snowball, M.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, J. R.; Adams, T.; Askew, A.; Bein, S.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Khatiwada, A.; Prosper, H.; Weinberg, M.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Kalakhety, H.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Turner, P.; Varelas, N.; Wu, Z.; Zakaria, M.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Barnett, B. A.; Blumenfeld, B.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Roskes, J.; Sady, A.; Sarica, U.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Kenny, R. P.; Majumder, D.; Malek, M.; Mcbrayer, W.; Murray, M.; Sanders, S.; Stringer, R.; Wang, Q.; Ivanov, A.; Kaadze, K.; Khalil, S.; Makouski, M.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Lange, D.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Baty, A.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Zhukova, V.; Dahmes, B.; Evans, A.; Finkel, A.; Gude, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bartek, R.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Knowlton, D.; Kravchenko, I.; Meier, F.; Monroy, J.; Ratnikov, F.; Siado, J. E.; Snow, G. R.; Alyari, M.; Dolen, J.; George, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Zhang, J.; Bhattacharya, S.; Hahn, K. A.; Kubik, A.; Low, J. F.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M.; Sung, K.; Trovato, M.; Velasco, M.; Brinkerhoff, A.; Dev, N.; Hildreth, M.; Jessop, C.; Karmgard, D. 
J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Ling, T. Y.; Liu, B.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Zuranski, A.; Malik, S.; Barker, A.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Jung, K.; Kumar, A.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Sun, J.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Redjimi, R.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Harel, A.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Petrillo, G.; Tan, P.; Verzetti, M.; Chou, J. P.; Contreras-Campana, E.; Ferencek, D.; Gershtein, Y.; Halkiadakis, E.; Heindl, M.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Lath, A.; Nash, K.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Foerster, M.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Castaneda Hernandez, A.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Kamon, T.; Krutelyov, V.; Mueller, R.; Osipenkov, I.; Pakhotin, Y.; Patel, R.; Perloff, A.; Rose, A.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Undleeb, S.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Ni, H.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Wood, J.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ruggles, T.; Sarangi, T.; Savin, A.; Sharma, A.; Smith, N.; Smith, W. H.; Taylor, D.; Verwilligen, P.; Woods, N.; CMS Collaboration
2017-07-01
Two-particle correlations in pPb collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV are studied as a function of the pseudorapidity separation (Δη) of the particle pair at small relative azimuthal angle (|Δφ| < π/3). The correlations are decomposed into a jet component that dominates the short-range correlations (|Δη| < 1), and a component that persists at large Δη and may originate from collective behavior of the produced system. The events are classified in terms of the multiplicity of the produced particles. Finite azimuthal anisotropies are observed in high-multiplicity events. The second and third Fourier components of the particle-pair azimuthal correlations, V2 and V3, are extracted after subtraction of the jet component. The single-particle anisotropy parameters v2 and v3 are normalized by their laboratory-frame midrapidity value and are studied as a function of ηc.m.. The normalized v2 distribution is found to be asymmetric about ηc.m. = 0, with smaller values observed at forward pseudorapidity, corresponding to the direction of the proton beam, while no significant pseudorapidity dependence is observed for the normalized v3 distribution within the statistical uncertainties.
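A minimal sketch of the Fourier-component extraction on synthetic data: single-particle azimuths are drawn with known v2 and v3, the pair harmonics Vn = <cos(n Δφ)> are computed, and vn is recovered under the factorization assumption vn = sqrt(Vn). No jet-component subtraction or multiplicity classification is attempted; all settings are illustrative.

```python
# Hedged sketch: extract pair Fourier components V_n and single-particle
# anisotropies v_n from synthetic azimuths with a fixed event plane at zero.
import numpy as np

rng = np.random.default_rng(2)
v2_true, v3_true = 0.06, 0.02

def sample_phis(n):
    # Rejection-sample dN/dphi ∝ 1 + 2 v2 cos(2phi) + 2 v3 cos(3phi).
    out = []
    envelope = 1 + 2 * v2_true + 2 * v3_true   # upper bound on the density
    while len(out) < n:
        phi = rng.uniform(-np.pi, np.pi, 4 * n)
        f = 1 + 2 * v2_true * np.cos(2 * phi) + 2 * v3_true * np.cos(3 * phi)
        keep = rng.uniform(0, envelope, phi.size) < f
        out.extend(phi[keep].tolist())
    return np.array(out[:n])

phi_a, phi_b = sample_phis(200000), sample_phis(200000)
dphi = phi_a - phi_b
for n in (2, 3):
    Vn = np.cos(n * dphi).mean()               # pair harmonic <cos(n*dphi)>
    print(f"V{n} = {Vn:.5f}, v{n} = {np.sqrt(max(Vn, 0.0)):.4f}")
```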
Khachatryan, Vardan
2016-04-18
Two-particle correlations in pPb collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV are studied as a function of the pseudorapidity separation (Delta eta) of the particle pair at small relative azimuthal angle (abs(Delta phi) < pi/3). The correlations are decomposed into a jet component that dominates the short-range correlations (abs(Delta eta) < 1), and a component that persists at large Delta eta and may originate from collective behavior of the produced system. The events are classified in terms of the multiplicity of the produced particles. Finite azimuthal anisotropies are observed in high-multiplicity events. The second and third Fourier components of the particle-pair azimuthal correlations, V[2] and V[3], are extracted after subtraction of the jet component. The single-particle anisotropy parameters v[2] and v[3] are normalized by their lab frame mid-rapidity value and are studied as a function of eta[cm]. The normalized v[2] distribution is found to be asymmetric about eta[cm] = 0, with smaller values observed at forward pseudorapidity, corresponding to the direction of the proton beam, while no significant pseudorapidity dependence is observed for the normalized v[3] distribution within the statistical uncertainties.
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.
2009-05-01
Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors, such as antecedent streamflows, El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model's flexibility and data-handling ability mean that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
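A minimal sketch of the BJP idea on synthetic data: each variable is Box-Cox transformed toward normality, a joint multivariate normal is fitted, and forecasts are obtained by conditioning on the predictor and back-transforming. The full BJP instead infers all parameters, including the transformation parameters, by MCMC; the data and the simple moment-based fit below are assumptions for illustration.

```python
# Hedged sketch: Box-Cox transform + multivariate normal fit + conditional
# forecast for streamflows at two sites given one predictor. Data are synthetic.
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(3)
n = 300
predictor = rng.gamma(3, 50, n)                 # e.g. antecedent streamflow
site1 = 0.5 * predictor + rng.gamma(2, 30, n)   # skewed streamflows
site2 = 0.3 * predictor + rng.gamma(2, 40, n)

cols, lams = [], []
for x in (predictor, site1, site2):
    z, lam = stats.boxcox(x)                    # per-variable Box-Cox transform
    cols.append(z); lams.append(lam)
Z = np.column_stack(cols)
mu, cov = Z.mean(axis=0), np.cov(Z, rowvar=False)

# Condition the fitted joint normal on a new (transformed) predictor value.
z0 = stats.boxcox(np.array([250.0]), lmbda=lams[0])[0]
cond_mu = mu[1:] + cov[1:, 0] / cov[0, 0] * (z0 - mu[0])
cond_cov = cov[1:, 1:] - np.outer(cov[1:, 0], cov[0, 1:]) / cov[0, 0]

draws = rng.multivariate_normal(cond_mu, cond_cov, 5000)
for i in (1, 2):
    # Back-transform; draws outside the transform domain become nan and are skipped.
    f = inv_boxcox(draws[:, i - 1], lams[i])
    print(f"site {i}: median {np.nanmedian(f):.0f}, 90% interval "
          f"[{np.nanpercentile(f, 5):.0f}, {np.nanpercentile(f, 95):.0f}]")
```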
Zhang, Jun; Zhang, Yang; Yu, Chang-shui
2015-01-01
The Heisenberg uncertainty principle shows that one cannot simultaneously specify the values of two non-commuting, canonically conjugate variables. However, the uncertainty relation is usually applied to two incompatible measurements. We present tighter bounds on both the entropic uncertainty relation and the information exclusion relation for multiple measurements in the presence of quantum memory. As applications, three incompatible measurements on the Werner state and Horodecki's bound entangled state are investigated in detail. PMID:26118488
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used for uncertainty assessment of hydrological models, incorporating different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that gives particular attention to extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian inference. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
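A minimal sketch of Metropolis-Hastings sampling under the simplest of the likelihoods above, AR(1) plus Normal errors, with the AR coefficient and error variance assumed known; the toy linear runoff model stands in for WASMOD.

```python
# Hedged sketch: random-walk Metropolis-Hastings for one model parameter
# under an AR(1)-plus-Normal error likelihood. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, rain):
    # Toy linear runoff model standing in for WASMOD.
    return theta * rain

rain = rng.gamma(2, 5, 200)
# Synthetic observations with AR(1) errors (rho, sigma assumed known here).
rho, sigma = 0.5, 1.0
e = np.zeros(rain.size)
for t in range(1, rain.size):
    e[t] = rho * e[t - 1] + rng.normal(0, sigma)
obs = simulate(0.6, rain) + e

def log_like(theta):
    r = obs - simulate(theta, rain)
    innov = r[1:] - rho * r[:-1]       # whiten residuals with the AR(1) structure
    return -0.5 * np.sum(innov ** 2) / sigma ** 2

theta, ll = 0.3, log_like(0.3)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02)          # symmetric random-walk proposal
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:    # flat prior assumed
        theta, ll = prop, ll_prop
    chain.append(theta)

post = np.array(chain[1000:])                   # discard burn-in
print(f"posterior mean {post.mean():.3f}, 95% interval "
      f"[{np.percentile(post, 2.5):.3f}, {np.percentile(post, 97.5):.3f}]")
```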
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC) curve, which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
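The tail-raising effect is easy to reproduce: the sketch below compares a ROC computed with a fixed, known noise scale against one marginalized over K = 64 random draws of an uncertain, log-normally distributed scale. The Gaussian detection statistic and the spread of the scale distribution are assumptions for illustration.

```python
# Hedged sketch: ROC degradation at low false-alarm rates when the noise
# power is uncertain. Gaussian statistic; all distributions are assumptions.
import numpy as np
from scipy.stats import norm

snr_db = 10.0
signal_mean = np.sqrt(10 ** (snr_db / 10))
thresholds = np.linspace(-5, 15, 400)

# Certain-parameter case: unit noise scale.
pfa_fixed = norm.sf(thresholds, loc=0, scale=1)
pd_fixed = norm.sf(thresholds, loc=signal_mean, scale=1)

# Uncertain case: marginalize over K draws of a log-normal power scale.
rng = np.random.default_rng(5)
K = 64                                  # cf. the ~64-sample figure above
scales = np.exp(rng.normal(0, 0.5, K))
pfa_unc = np.mean([norm.sf(thresholds, 0, s) for s in scales], axis=0)
pd_unc = np.mean([norm.sf(thresholds, signal_mean * s, s) for s in scales], axis=0)

# Compare Pd at a stringent Pfa = 1e-3 operating point.
for name, pfa, pd in [("fixed", pfa_fixed, pd_fixed), ("uncertain", pfa_unc, pd_unc)]:
    i = np.argmin(np.abs(pfa - 1e-3))
    print(f"{name:9s}: Pfa = {pfa[i]:.1e}, Pd = {pd[i]:.3f}")
```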
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and a neurofuzzy (NF) model to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when a single AI model is used.
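A minimal sketch of BIC-weighted model averaging, with polynomial regressions standing in for the TS-FL, ANN and NF models: weights follow w_k ∝ exp(-BIC_k/2), and the predictive variance splits into the within-model and between-model terms discussed above. All data are synthetic.

```python
# Hedged sketch: BIC-based Bayesian model averaging over three regression
# models. Polynomials are illustrative stand-ins for AI models.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 80)
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, 80)     # synthetic "log K" data
n = x.size

preds, sig2s, bics = [], [], []
for deg in (1, 2, 3):                          # three candidate models
    coef = np.polyfit(x, y, deg)
    yhat = np.polyval(coef, x)
    rss = np.sum((y - yhat) ** 2)
    k = deg + 1                                # number of fitted coefficients
    bics.append(n * np.log(rss / n) + k * np.log(n))
    preds.append(yhat)
    sig2s.append(rss / (n - k))                # within-model error variance

bics = np.array(bics)
w = np.exp(-(bics - bics.min()) / 2)           # BIC weights (parsimony principle)
w /= w.sum()

pred = np.stack(preds)
mean = w @ pred
within = w @ np.array(sig2s)                   # weighted within-model variance
between = w @ ((pred - mean) ** 2)             # model-disagreement variance
print("weights:", np.round(w, 3))
print(f"within-model var {within:.3f}, mean between-model var {between.mean():.3f}")
```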
Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints
Thompson, John R; Spata, Enti; Abrams, Keith R
2015-01-01
We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing–remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions. PMID:26271918
Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints.
Bujkiewicz, Sylwia; Thompson, John R; Spata, Enti; Abrams, Keith R
2017-10-01
We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing-remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions.
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as the LST input for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET, with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors of less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting the systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimates while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to the input variables land surface temperature (LST) and reference ET (ETo), and to the parameters differential temperature (dT) and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
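A minimal sketch of input and parameter uncertainty propagation through an SSEBop-style estimate. One commonly quoted form of the algorithm is ETa = ETf × Kmax × ETo with ETf = (Th - Ts)/dT clipped to [0, 1]; treat this, and the assumed 1-sigma error magnitudes below, as illustrative stand-ins for the operational model.

```python
# Hedged sketch: Monte Carlo propagation of LST, ETo, dT and Kmax errors
# through an SSEBop-style ET estimate. All nominal values and error sizes
# are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 10000

# Nominal inputs for one hypothetical 8-day composite:
Ts, Th, dT, Kmax, ETo = 300.0, 308.0, 10.0, 1.25, 6.0   # K, K, K, -, mm/day

# Perturb inputs/parameters with illustrative 1-sigma errors.
Ts_s   = Ts + rng.normal(0, 1.0, n)            # LST retrieval error ~1 K
ETo_s  = ETo * (1 + rng.normal(0, 0.10, n))    # ~10% reference-ET error
dT_s   = dT + rng.normal(0, 1.0, n)
Kmax_s = Kmax + rng.normal(0, 0.05, n)

etf = np.clip((Th - Ts_s) / dT_s, 0, 1)        # ET fraction, bounded
eta = etf * Kmax_s * ETo_s

nominal = np.clip((Th - Ts) / dT, 0, 1) * Kmax * ETo
print(f"nominal ETa = {nominal:.2f} mm/day")
print(f"MC mean {eta.mean():.2f}, sd {eta.std():.2f} "
      f"({100 * eta.std() / eta.mean():.0f}% relative)")
```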
Reducing uncertainties in decadal variability of the global carbon budget with multiple datasets
Li, Wei; Ciais, Philippe; Wang, Yilong; Peng, Shushi; Broquet, Grégoire; Ballantyne, Ashley P.; Canadell, Josep G.; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Peters, Glen P.; Piao, Shilong; Pongratz, Julia
2016-01-01
Conventional calculations of the global carbon budget infer the land sink as a residual between emissions, atmospheric accumulation, and the ocean sink. Thus, the land sink accumulates the errors from the other flux terms and bears the largest uncertainty. Here, we present a Bayesian fusion approach that combines multiple observations in different carbon reservoirs to optimize the land (B) and ocean (O) carbon sinks, land use change emissions (L), and indirectly fossil fuel emissions (F) from 1980 to 2014. Compared with the conventional approach, Bayesian optimization decreases the uncertainties in B by 41% and in O by 46%. The L uncertainty decreases by 47%, whereas F uncertainty is marginally improved through the knowledge of natural fluxes. Both ocean and net land uptake (B + L) rates have positive trends of 29 ± 8 and 37 ± 17 Tg C y⁻² since 1980, respectively. Our Bayesian fusion of multiple observations reduces uncertainties, thereby allowing us to isolate important variability in global carbon cycle processes. PMID:27799533
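A minimal sketch of the fusion idea as a linear-Gaussian update: independent priors on fossil emissions F, land-use emissions L, the ocean sink O and the land sink B are constrained by the well-observed atmospheric growth rate through the budget identity F + L - O - B = G. The numbers and the single-constraint Kalman-style update are illustrative simplifications of the paper's multi-observation Bayesian fusion.

```python
# Hedged sketch: one-constraint Gaussian fusion of carbon-budget terms.
# All prior means and uncertainties are illustrative (GtC/yr).
import numpy as np

mu = np.array([9.5, 1.0, 2.5, 3.0])          # prior means: F, L, O, B
sd = np.array([0.5, 0.7, 0.6, 1.0])          # prior 1-sigma uncertainties
P = np.diag(sd ** 2)

H = np.array([[1.0, 1.0, -1.0, -1.0]])       # budget operator: F + L - O - B
G_obs, G_sd = 5.1, 0.2                       # atmospheric growth and its error

# Standard Kalman-style update for a single linear observation.
S = H @ P @ H.T + G_sd ** 2                  # innovation variance
K = P @ H.T / S                              # gain
mu_post = mu + (K * (G_obs - H @ mu)).ravel()
P_post = P - K @ H @ P

for name, m0, s0, m1, s1 in zip("FLOB", mu, sd, mu_post, np.sqrt(np.diag(P_post))):
    print(f"{name}: {m0:.2f}±{s0:.2f} -> {m1:.2f}±{s1:.2f} GtC/yr")
```

As in the paper, the term with the largest prior uncertainty (the land sink B) gains the most from the fusion, while the well-constrained fossil term barely moves.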
Uncertainty in clinical data and stochastic model for in vitro fertilization.
Yenkie, Kirti M; Diwekar, Urmila
2015-02-21
In vitro fertilization (IVF) is the most widely used technique in assisted reproductive technologies (ART). It is divided into four stages: (i) superovulation, (ii) egg retrieval, (iii) insemination/fertilization and (iv) embryo transfer. The first stage, superovulation, is a drug-induced method to enable multiple ovulation, i.e., multiple follicles growing to oocytes or matured follicles in a single menstrual cycle. IVF, being a medical procedure that aims at manipulating the biological functions in the human body, is subject to inherent sources of uncertainty and variability. Also, the interplay of hormones with the natural functioning of the ovaries to stimulate multiple ovulation, as against single ovulation in a normal menstrual cycle, makes the procedure dependent on several factors such as the patient's condition in terms of cause of infertility, actual ovarian function, and responsiveness to the medication. The treatment requires continuous monitoring and testing, which can give rise to errors in observations and reports. These uncertainties are present in the form of measurement noise in the clinical data. Thus, it becomes essential to look at the process noise and account for it to build more representative models for follicle growth. The purpose of this work is to develop a robust model that better projects the superovulation cycle outcome from the hormonal doses and patient response in the presence of uncertainty. The stochastic model gives better projections of the cycle outcomes for patients where the deterministic model deviates from the clinical observations and the growth term value is not within the range 0.3-0.6. The prediction accuracy was enhanced by more than 70% for two patients when the stochastic model projections were used. Also, in patients where the prediction accuracy did not increase significantly, the stochastic model projections matched the trend of the clinical data better than their deterministic counterparts.
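A minimal sketch of a stochastic follicle-growth model: logistic drift with multiplicative noise, integrated by the Euler-Maruyama scheme. The drift form, noise level and parameter values are assumptions for illustration, not the authors' fitted model; g plays the role of the growth term mentioned above.

```python
# Hedged sketch: Euler-Maruyama integration of an Ito-type follicle-growth
# SDE, dx = g*x*(1 - x/xmax) dt + eta*x dW. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(8)
g, xmax, eta = 0.45, 20.0, 0.08    # growth rate (1/day), max size (mm), noise
dt, days, n_paths = 0.1, 12, 500
steps = int(days / dt)

x = np.full(n_paths, 4.0)          # initial mean follicle diameter (mm)
for _ in range(steps):
    dW = rng.normal(0, np.sqrt(dt), n_paths)          # Brownian increments
    x = x + g * x * (1 - x / xmax) * dt + eta * x * dW
    x = np.maximum(x, 0.0)                            # sizes stay non-negative

print(f"day {days}: mean {x.mean():.1f} mm, 90% band "
      f"[{np.percentile(x, 5):.1f}, {np.percentile(x, 95):.1f}] mm")
```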
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
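The Monte Carlo benchmark mentioned in the abstract above can be illustrated with a minimal two-cell sketch. All parameter values below (storage coefficients, exchange coefficient, recharge statistics, head bounds) are hypothetical placeholders, and the explicit mass-balance update with clipped heads only emulates the constrained-state idea rather than reproducing the authors' analytical formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-cell lumped aquifer: storativity S, inter-cell
# exchange coefficient C, abstraction Q, stochastic recharge R ~ Normal.
S = np.array([0.15, 0.20])        # storage coefficients [-]
C = 0.05                          # exchange coefficient [1/month]
Q = np.array([1.0, 0.7])          # abstractions, equal to mean recharge
mu_R, sd_R = np.array([1.0, 0.7]), np.array([0.3, 0.2])

n_real, n_steps = 5000, 120
h = np.full((n_real, 2), 50.0)    # initial heads [m]

for _ in range(n_steps):
    R = rng.normal(mu_R, sd_R, size=(n_real, 2))
    exch = C * (h[:, [1, 0]] - h)          # flow from the other cell
    h = h + (R - Q + exch) / S             # explicit mass-balance update
    h = np.clip(h, 20.0, 80.0)             # lower/upper head bounds

print("mean heads:", h.mean(axis=0))
print("variances :", h.var(axis=0))
print("covariance:", np.cov(h.T)[0, 1])
```

The clipping step stands in for the indicator-function bounds of the constrained-state formulation; the covariance printed at the end is the quantity the paper shows is needed for accurate head-variance estimates.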
Knoke, Thomas; Paul, Carola; Hildebrandt, Patrick; Calvas, Baltazar; Castro, Luz Maria; Härtl, Fabian; Döllerer, Martin; Hamer, Ute; Windhorst, David; Wiersma, Yolanda F.; Curatola Fernández, Giulia F.; Obermeier, Wolfgang A.; Adams, Julia; Breuer, Lutz; Mosandl, Reinhard; Beck, Erwin; Weber, Michael; Stimm, Bernd; Haber, Wolfgang; Fürst, Christine; Bendix, Jörg
2016-01-01
High landscape diversity is assumed to increase the number and level of ecosystem services. However, the interactions between ecosystem service provision, disturbance and landscape composition are poorly understood. Here we present a novel approach to include uncertainty in the optimization of land allocation for improving the provision of multiple ecosystem services. We refer to the rehabilitation of abandoned agricultural lands in Ecuador including two types of both afforestation and pasture rehabilitation, together with a succession option. Our results show that high compositional landscape diversity supports multiple ecosystem services (multifunction effect). This implicitly provides a buffer against uncertainty. Our work shows that active integration of uncertainty is only important when optimizing single or highly correlated ecosystem services and that the multifunction effect on landscape diversity is stronger than the uncertainty effect. This is an important insight to support a land-use planning based on ecosystem services. PMID:27292766
NASA Astrophysics Data System (ADS)
Istiningrum, Reni Banowati; Saepuloh, Azis; Jannah, Wirdatul; Aji, Didit Waskito
2017-03-01
Yogyakarta is one of the patchouli oil distillation centers in Indonesia. The quality of patchouli oil greatly affects its market price. Therefore, testing the quality parameters of patchouli oil is an important concern, in part through determination of the measurement uncertainty. This study determines the measurement uncertainty of the ester number, the acid number and the patchouli alcohol content through a bottom-up approach. Contributors to the measurement uncertainty of the ester number are the sample mass, the blank and sample titration volumes, the molar mass of KOH, the HCl normality, and replication. Contributors to the measurement uncertainty of the acid number are the sample mass, the sample titration volume, the relative mass and normality of KOH, and repetition. The determination of patchouli alcohol by gas chromatography considers repeatability as the only source of measurement uncertainty because reference materials are not available.
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
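The Monte Carlo variant described above can be sketched compactly. The numbers below (N input, EF geometric mean and geometric SD) are hypothetical, and the 0.4 correlation between N input and EF is imposed through a Gaussian copula as one plausible reading of the abstract, not as the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical inputs: N input (normal) and EF (lognormal), with a
# correlation of 0.4 imposed via correlated standard-normal drivers.
mu_N, cv_N = 1.0e9, 0.15          # kg N/yr and coefficient of variation
gm_EF, gsd_EF = 0.01, 1.8         # geometric mean/SD of emission factor

rho = 0.4
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
N_in = mu_N * (1 + cv_N * z[:, 0])
EF = gm_EF * np.exp(np.log(gsd_EF) * z[:, 1])

E = N_in * EF                     # simulated N2O-N emissions
lo, hi = np.percentile(E, [2.5, 97.5])
m = E.mean()
print(f"95% limits: {100*(lo/m-1):+.0f}% / {100*(hi/m-1):+.0f}%")
```

The asymmetric limits (a larger positive than negative excursion) reproduce the qualitative behavior reported above for the lognormal EF case.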
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdo, A. A.; Abeysekara, U.; Linnemann, J. T.
2012-07-10
The Cygnus region is a very bright and complex portion of the TeV sky, host to unidentified sources and a diffuse excess with respect to conventional cosmic-ray propagation models. Two of the brightest TeV sources, MGRO J2019+37 and MGRO J2031+41, are analyzed using Milagro data with a new technique, and their emission is tested under two different spectral assumptions: a power law and a power law with an exponential cutoff. The new analysis technique is based on an energy estimator that uses the fraction of photomultiplier tubes in the observatory that detect the extensive air shower. The photon spectrum is measured in the range 1-100 TeV using the last three years of Milagro data (2005-2008), with the detector in its final configuration. An F-test indicates that MGRO J2019+37 is better fit by a power law with an exponential cutoff than by a simple power law. The best-fitting parameters for the power law with exponential cutoff model are a normalization at 10 TeV of 7(+5/−2) × 10⁻¹⁰ s⁻¹ m⁻² TeV⁻¹, a spectral index of 2.0(+0.5/−1.0), and a cutoff energy of 29(+50/−16) TeV. MGRO J2031+41 shows no evidence of a cutoff. The best-fitting parameters for a power law are a normalization of (2.1 ± 0.6) × 10⁻¹⁰ s⁻¹ m⁻² TeV⁻¹ and a spectral index of 3.22(+0.23/−0.18). The overall flux is subject to a ~30% systematic uncertainty. The systematic uncertainty on the power-law indices is ~0.1. Both uncertainties have been verified with cosmic-ray data. A comparison with previous results from TeV J2032+4130, MGRO J2031+41, and MGRO J2019+37 is also presented.
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
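As a minimal illustration of a polynomial chaos expansion in normal random variables, the sketch below fits a second-order probabilists' Hermite expansion to a hypothetical one-dimensional model by point collocation. The SRSM itself is more general (multi-dimensional inputs, higher orders), so this is a toy under stated assumptions, not the method's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):                      # hypothetical expensive model
    return np.exp(0.3 * x) + 0.1 * x**2

xi = rng.standard_normal(200)      # collocation points in the normal variable
# Probabilists' Hermite basis: He0 = 1, He1 = x, He2 = x^2 - 1.
H = np.column_stack([np.ones_like(xi), xi, xi**2 - 1])
coef, *_ = np.linalg.lstsq(H, model(xi), rcond=None)

# Moments follow from orthogonality: E[He_n^2] = n!, so
# E[y] = a0 and Var[y] = a1^2 * 1! + a2^2 * 2!.
mean_pce = coef[0]
var_pce = coef[1]**2 + 2 * coef[2]**2
print("PCE mean:", mean_pce, " PCE variance:", var_pce)
```

Once the coefficients are fitted, output statistics come essentially for free, which is the efficiency gain over brute-force sampling of the original model.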
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakos, James Thomas
2004-04-01
It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass sheathed TC wire, mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV to temperature conversion, analog to digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
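A root-sum-square combination of independent standard uncertainties is the conventional way such channel budgets are totaled. The sketch below uses hypothetical component magnitudes; the names mirror sources listed in the abstract, but the values do not come from the report.

```python
import numpy as np

# Hypothetical component standard uncertainties, in kelvin, for a
# Type K channel at a 350 K reading; combined in quadrature
# (root-sum-square), as is conventional for independent terms.
components = {
    "TC mounting":         1.0,
    "ANSI wire tolerance": 1.1,
    "extension wire":      0.5,
    "DAS noise / CMRR":    0.3,
    "DVM accuracy":        0.4,
    "mV-to-T conversion":  0.2,
    "A/D quantization":    0.1,
}
u_c = np.sqrt(sum(u**2 for u in components.values()))
U95 = 2 * u_c                      # expanded uncertainty, coverage k=2
print(f"combined = {u_c:.2f} K, expanded (k=2) = {U95:.2f} K "
      f"({100*U95/350:.1f}% of a 350 K reading)")
```

With these made-up inputs the expanded uncertainty lands near 1% of the absolute reading, consistent in magnitude with the 'normal environment' result quoted above.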
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainty. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the prices of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving it was presented. An illustrative case was studied with the proposed model, and the results showed that the proposed model is feasible for designing a biofuel supply chain under uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.
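One common way to treat an interval linear program is to solve its optimistic and pessimistic sub-problems, which bracket the optimal cost. The toy below (hypothetical zones, costs, capacities and demand interval) illustrates that idea with scipy's linprog, without claiming to reproduce the paper's solution method.

```python
import numpy as np
from scipy.optimize import linprog

# Toy interval LP: ship grain from 2 agriculture zones to 2 plants at
# interval unit costs; demand is an interval [lo, hi]. Solving the
# optimistic and pessimistic sub-problems brackets the true optimum.
cost_lo = np.array([2.0, 3.5, 4.0, 2.5])   # $/t, lower bounds
cost_hi = np.array([2.6, 4.1, 4.8, 3.1])   # $/t, upper bounds
supply = [120.0, 100.0]                     # zone capacities [t]
demand_lo, demand_hi = 150.0, 180.0         # plant demand interval [t]

def solve(cost, demand):
    # x = [z1->p1, z1->p2, z2->p1, z2->p2]; minimize cost subject to
    # zone capacities and total demand satisfaction.
    A_ub = [[1, 1, 0, 0], [0, 0, 1, 1], [-1, -1, -1, -1]]
    b_ub = [supply[0], supply[1], -demand]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
    return res.fun

print("optimal cost interval:",
      [solve(cost_lo, demand_lo), solve(cost_hi, demand_hi)])
```

The two solves return the end points of the cost interval, which is how interval numbers propagate to the objective in this class of models.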
Accounting For Uncertainty in The Application Of High Throughput Datasets
The use of high throughput screening (HTS) datasets will need to adequately account for uncertainties in the data generation process and propagate these uncertainties through to ultimate use. Uncertainty arises at multiple levels in the construction of predictors using in vitro ...
Modeling Radioactive Decay Chains with Branching Fraction Uncertainties
2013-03-01
moments methods with transmutation matrices. Uncertainty from both half-lives and branching fractions is carried through these calculations by Monte... moment methods, method for sampling from normal distributions for half-life uncertainty, and use of transmutation matrices were leveraged. This... distributions for half-life and branching fraction uncertainties, building decay chains and generating the transmutation matrix (T-matrix
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions and ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R
2016-03-30
A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed, first, using an unstructured between-study covariance matrix by assuming the treatment effects on all outcomes are correlated and second, using a structured between-study covariance matrix by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for the summary data on a study level, the individual-level association is taken into account by the use of the Prentice's criteria (obtained from individual patient data) to inform the within study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis where the disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates to the disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Stochastic Analysis and Probabilistic Downscaling of Soil Moisture
NASA Astrophysics Data System (ADS)
Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.
2017-12-01
Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.
NASA Astrophysics Data System (ADS)
Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn
2013-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates as measured by the Nash-Sutcliffe efficiency and the best uncertainty estimates for low, medium and entire flows compared to standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
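A minimal sketch of the standard-Bayesian side of such a comparison: Metropolis-Hastings over one parameter with an AR(1)-plus-Normal likelihood whose variance is constant in time (Model 1 style). The 'simulate' function, the fixed AR coefficient and error standard deviation, and the implicit flat prior are all hypothetical stand-ins, not WASMOD or the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.arange(200)
def simulate(theta):               # stand-in for a rainfall-runoff model
    return theta * (1 + 0.5 * np.sin(t / 10))

q_obs = simulate(2.0) + 0.3 * rng.standard_normal(t.size)

def log_lik(theta, rho=0.4, sigma=0.3):
    e = q_obs - simulate(theta)
    innov = e[1:] - rho * e[:-1]   # AR(1)-whitened residuals
    # Initial residual treated as N(0, sigma^2) for simplicity.
    return (-0.5 * e[0]**2 / sigma**2
            - 0.5 * np.sum(innov**2) / sigma**2
            - t.size * np.log(sigma))

theta, chain = 1.0, []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    # Symmetric proposal and flat prior: accept on likelihood ratio.
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(theta):
        theta = prop
    chain.append(theta)

burn = np.array(chain[1000:])
print("posterior mean ± sd:", burn.mean(), burn.std())
```

The AR(1) whitening is what distinguishes this likelihood from an independent-error Normal model; the modularization step of the paper would additionally down-weight the highest flows before this inference.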
Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study
NASA Astrophysics Data System (ADS)
O'Neill, B. C.
2015-12-01
Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies difference in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.
Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties
NASA Astrophysics Data System (ADS)
Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui
2017-10-01
In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by actuators in control. On the other hand, for unmatched uncertainties with their projection in unmatched space being not zero, based on a (vector) relative degree condition, additive functions are designed to compensate for the uncertainties from output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.
Uncertainties of Mayak urine data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir
2008-01-01
For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of the uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24 h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.
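The Poisson lognormal measurement model lends itself to a short simulation. The expected count below is hypothetical, while ln(GSD) = 0.33 is taken from the 0.31-0.35 range reported above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson-lognormal measurement model: an observed count is Poisson
# about the true activity, scaled by a lognormal normalization factor
# with ln(GSD) = 0.33 (within the reported 0.31-0.35 range).
true_activity = 50.0               # expected counts, hypothetical
ln_gsd = 0.33
n = 100_000

norm_factor = np.exp(ln_gsd * rng.standard_normal(n))
counts = rng.poisson(true_activity * norm_factor)

# Relative uncertainty of a single measurement: counting statistics
# (~1/sqrt(N)) combined with the normalization component.
rel_sd = counts.std() / counts.mean()
poisson_only = 1 / np.sqrt(true_activity)
print(f"total {rel_sd:.2f} vs counting-only {poisson_only:.2f}")
```

The gap between the two printed numbers shows why the normalization term dominates single-measurement uncertainty once counts are moderately large.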
Neutron multiplicity counting: Confidence intervals for reconstruction parameters
Verbeke, Jerome M.
2016-03-09
From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.
Bayesian networks improve causal environmental ...
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, sources of uncertainty in conventional weight of evidence approaches are ignored that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improve the ability of models to incorporate strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
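A rough numpy-only sketch of the cross-area idea: fit per-area AR(1) models, take a principal component decomposition of the joint innovations, and simulate new innovations in PC space so the inter-area correlation survives. The synthetic loads and the specific AR(1)/SVD recipe are illustrative assumptions, not the authors' ARIMA/sequential-Gaussian pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

T, n_area = 500, 3
base = rng.multivariate_normal(
    np.zeros(n_area),
    [[1.0, 0.8, 0.6], [0.8, 1.0, 0.7], [0.6, 0.7, 1.0]], size=T)
load = np.cumsum(0.1 * base, axis=0) + 100       # synthetic area loads

# Fit an AR(1) per area and collect the innovation series.
innov = []
for k in range(n_area):
    y, x = load[1:, k], load[:-1, k]
    phi = (x @ y) / (x @ x)                      # AR(1) coefficient
    innov.append(y - phi * x)
E = np.column_stack(innov)

# PCA of innovations: simulate in uncorrelated PC space, rotate back.
E0 = E - E.mean(axis=0)
U, s, Vt = np.linalg.svd(E0, full_matrices=False)
pc_sd = s / np.sqrt(T - 1)
sim = rng.standard_normal((1000, n_area)) * pc_sd @ Vt
print("simulated innovation correlation:\n", np.corrcoef(sim.T).round(2))
```

Because the rotation back through Vt restores the empirical covariance, realizations drawn this way keep the geographic correlation structure that independent per-area simulation would destroy.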
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting or multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant over all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help mitigate, and perhaps resolve, the problem of bias export to sparse data streams.
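Under a Gaussian error assumption, making model uncertainty a constant multiple c of each stream's observation uncertainty gives a per-stream cost with a simple pooled maximum-likelihood update for c. The sketch below shows that update on synthetic residuals; the MCMC-with-annealing machinery of the study is omitted, so this is only the weighting idea in isolation.

```python
import numpy as np

# Per-stream Gaussian cost with inflated sigma: sum(r^2 / (c*sigma)^2)
# plus the 2n*log(c) normalization term that penalizes large c.
def stream_cost(resid, sigma, c):
    return np.sum((resid / (c * sigma))**2) + 2 * resid.size * np.log(c)

def update_c(streams):
    # streams: list of (residuals, sigma) pairs. Setting the derivative
    # of the total cost to zero gives c^2 = mean of squared normalized
    # residuals, pooled over all streams.
    z2 = np.concatenate([(r / s)**2 for r, s in streams])
    return np.sqrt(z2.mean())

rng = np.random.default_rng(6)
streams = [(rng.normal(0, 2.0, 300), 1.0),   # dense stream, sigma too small
           (rng.normal(0, 0.5, 10), 0.5)]    # sparse, well-specified stream
c = update_c(streams)
print("variance factor c =", round(c, 2))
print("costs:", [round(stream_cost(r, s, c), 1) for r, s in streams])
```

Because c is shared, the sparse stream is no longer forced to absorb the misfit of the dense one, which is the bias-export problem the abstract describes.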
Application of Dynamic naïve Bayesian classifier to comprehensive drought assessment
NASA Astrophysics Data System (ADS)
Park, D. H.; Lee, J. Y.; Lee, J. H.; Kim, T. W.
2017-12-01
Drought monitoring has already been extensively studied due to the widespread impacts and complex causes of drought. The most important component of drought monitoring is to estimate the characteristics and extent of drought by quantitatively measuring its characteristics. Drought assessment that considers different aspects of the complicated drought condition and the uncertainty of drought indices is of great significance for accurate drought monitoring. This study used the dynamic naïve Bayesian classifier (DNBC), an extension of the hidden Markov model (HMM), to model and classify drought using various drought indices for integrated drought assessment. To provide a stable model for the combined use of multiple drought indices, this study employed the DNBC to perform multi-index drought assessment by aggregating the effects of different types of drought and considering the inherent uncertainty. Drought classification was performed by the DNBC using several drought indices that reflect meteorological, hydrological, and agricultural drought characteristics: the Standardized Precipitation Index (SPI), the Streamflow Drought Index (SDI), and the Normalized Vegetation Supply Water Index (NVSWI). Overall results showed that, in comparison with unidirectional (SPI, SDI, and NVSWI) or multivariate (Composite Drought Index, CDI) drought assessment, the proposed DNBC was able to classify drought synthetically while considering uncertainty. The model provides a method for comprehensive drought assessment with combined use of different drought indices.
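A dynamic naive Bayes classifier is an HMM whose emission probability factorizes across the observed indices. The forward-filtering sketch below uses two hypothetical states and Gaussian emissions per index, with made-up transition and emission parameters; it illustrates the model class rather than the study's fitted model.

```python
import numpy as np
from scipy.stats import norm

A = np.array([[0.9, 0.1],             # state transition matrix
              [0.2, 0.8]])
pi = np.array([0.7, 0.3])             # initial state probabilities
means = np.array([[0.5, 0.4, 0.3],    # state 0: normal conditions
                  [-1.2, -1.0, -0.8]])# state 1: drought
sd = 0.8                              # shared emission spread (toy value)

def filter_probs(obs):                # obs: (T, 3) values of SPI, SDI, NVSWI
    # Naive Bayes emission: product of per-index Gaussian densities.
    alpha = pi * np.prod(norm.pdf(obs[0], means, sd), axis=1)
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * np.prod(norm.pdf(o, means, sd), axis=1)
        alpha /= alpha.sum()          # normalized forward recursion
        out.append(alpha)
    return np.array(out)

obs = np.array([[0.6, 0.2, 0.4], [-0.5, -0.9, -0.3], [-1.4, -1.1, -0.9]])
print(filter_probs(obs)[:, 1].round(2))   # P(drought) at each time step
```

The filtered probability of the drought state is what turns several noisy indices into a single probabilistic classification with an explicit uncertainty.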
NASA Astrophysics Data System (ADS)
Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli
2018-04-01
Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handling the unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (first considering parameter uncertainty only, then considering parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.
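The flavor of such samplers can be conveyed by a plain tempered SMC on a bimodal toy target: reweight toward the next intermediate distribution, resample, then apply a Metropolis-Hastings move. The PEM-SMC move step additionally uses genetic and differential-evolution proposals, which this sketch replaces with a random walk for brevity; all tuning constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(10)

def log_lik(th):                         # bimodal toy log-likelihood
    return np.logaddexp(-0.5 * ((th - 2) / 0.4)**2,
                        -0.5 * ((th + 2) / 0.4)**2)

N = 2000
th = rng.normal(0, 3, N)                 # particles drawn from N(0, 3^2) prior
logw = np.zeros(N)
betas = np.linspace(0, 1, 21)            # tempering schedule

for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * log_lik(th)      # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    th = th[rng.choice(N, N, p=w)]       # multinomial resampling
    logw = np.zeros(N)
    prop = th + 0.3 * rng.standard_normal(N)   # random-walk MH move
    logr = b1 * (log_lik(prop) - log_lik(th)) + \
           0.5 * (th**2 - prop**2) / 9.0       # prior ratio for N(0, 3^2)
    acc = np.log(rng.uniform(size=N)) < logr
    th[acc] = prop[acc]

print("mode weights:", np.mean(th > 0).round(2), np.mean(th < 0).round(2))
```

Both modes retain roughly half the particles at the end, which is exactly the behavior a single random-walk MCMC chain tends to lose on multimodal posteriors.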
A Practical Approach to Address Uncertainty in Stakeholder Deliberations.
Gregory, Robin; Keeney, Ralph L
2017-03-01
This article addresses the difficulties of incorporating uncertainty about consequence estimates as part of stakeholder deliberations involving multiple alternatives. Although every prediction of future consequences necessarily involves uncertainty, a large gap exists between common practices for addressing uncertainty in stakeholder deliberations and the procedures of prescriptive decision-aiding models advanced by risk and decision analysts. We review the treatment of uncertainty at four main phases of the deliberative process: with experts asked to describe possible consequences of competing alternatives, with stakeholders who function both as individuals and as members of coalitions, with the stakeholder committee composed of all stakeholders, and with decisionmakers. We develop and recommend a model that uses certainty equivalents as a theoretically robust and practical approach for helping diverse stakeholders to incorporate uncertainties when evaluating multiple-objective alternatives as part of public policy decisions. © 2017 Society for Risk Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anh Bui; Nam Dinh; Brian Williams
In addition to the validation data plan, the development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-laws based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were simultaneously used in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.
Optimized production planning model for a multi-plant cultivation system under uncertainty
NASA Astrophysics Data System (ADS)
Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng
2015-02-01
An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.
Women Confronting the Reality of Multiple Sclerosis: A Qualitative Model of Self-Healing
ERIC Educational Resources Information Center
Romagosa, Carol J.
2010-01-01
Multiple sclerosis (MS) is a chronic debilitating disease that has an uncertain course. Although uncertainty is a universal experience in chronic illness, uncertainty in MS is especially threatening to psychological well-being. Chronic illness, including conditions of disability, is one of our greatest health care problems as society ages. Never…
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
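The contrast between the two error models is easy to demonstrate on synthetic data: the multiplicative model is linear in log space, so its systematic part separates cleanly, while the additive residual's spread grows with rain rate. All constants below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "truth" and measurement following a multiplicative model:
# X = a * T^b * exp(e), with e ~ N(0, sigma^2).
T = rng.gamma(2.0, 5.0, 2000)                # true daily rain rates
X = 0.9 * T**1.1 * np.exp(0.4 * rng.standard_normal(T.size))

# Fit the multiplicative model by ordinary least squares in log space.
G = np.column_stack([np.ones_like(T), np.log(T)])
(b0, b1), *_ = np.linalg.lstsq(G, np.log(X), rcond=None)
resid = np.log(X) - (b0 + b1 * np.log(T))
print(f"a={np.exp(b0):.2f}, b={b1:.2f}, sigma={resid.std():.2f}")

# The additive residual, by contrast, has variance growing with T,
# i.e., systematic error leaking into the "random" term.
add_resid = X - T
for lo, hi in [(0, 10), (10, 30)]:
    m = (T >= lo) & (T < hi)
    print(f"additive residual sd, T in [{lo},{hi}): {add_resid[m].std():.1f}")
```

The log-space fit recovers the planted (a, b, sigma) while the binned additive residual standard deviations differ sharply, mirroring the non-constant-variance weakness noted above.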
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Lippa, Katrice A.
2016-10-01
Chemical purity assessment using quantitative 1H-nuclear magnetic resonance spectroscopy is a method based on ratio references of mass and signal intensity of the analyte species to that of chemical standards of known purity. As such, it is an example of a calculation using a known measurement equation with multiple inputs. Though multiple samples are often analyzed during purity evaluations in order to assess measurement repeatability, the uncertainty evaluation must also account for contributions from inputs to the measurement equation. Furthermore, there may be other uncertainty components inherent in the experimental design, such as independent implementation of multiple calibration standards. As such, the uncertainty evaluation is not purely bottom up (based on the measurement equation) or top down (based on the experimental design), but inherently contains elements of both. This hybrid form of uncertainty analysis is readily implemented with Bayesian statistical analysis. In this article we describe this type of analysis in detail and illustrate it using data from an evaluation of chemical purity and its uncertainty for a folic acid material.
Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong
2012-01-01
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique’s sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced angle beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans’ sensitivity to uncertainties. Results: With no uncertainties considered, the DET is less robust to uncertainties than is the 3D method but offers better normal tissue protection. With robust optimization to account for range and setup uncertainties, robust optimization can improve the robustness of IMPT plans to uncertainties; however, our findings show the extent of improvement varies. Conclusions: IMPT’s sensitivity to uncertainties can be improved by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) perturbed dose distribution, which follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans’ ability to retain robustness despite the presence of uncertainties. PMID:22755694
Langenderfer, Joseph E; Rullkoetter, Paul J; Mell, Amy G; Laz, Peter J
2009-04-01
An accurate assessment of shoulder kinematics is useful for understanding healthy normal and pathological mechanics. Small variability in identifying and locating anatomical landmarks (ALs) has the potential to affect reported shoulder kinematics. The objectives of this study were to quantify the effect of landmark location variability on scapular and humeral kinematic descriptions for multiple subjects using probabilistic analysis methods, and to evaluate the consistency in results across multiple subjects. Data from 11 healthy subjects performing humeral elevation in the scapular plane were used to calculate Euler angles describing humeral and scapular kinematics. Probabilistic analyses were performed for each subject to simulate uncertainty in the locations of 13 upper-extremity ALs. For standard deviations of 4 mm in landmark location, the analysis predicted Euler angle envelopes between the 1 and 99 percentile bounds of up to 16.6 degrees. While absolute kinematics varied with the subject, the average 1-99% kinematic ranges for the motion were consistent across subjects and sensitivity factors showed no statistically significant differences between subjects. The description of humeral kinematics was most sensitive to the location of landmarks on the thorax, while landmarks on the scapula had the greatest effect on the description of scapular elevation. The findings of this study can provide a better understanding of kinematic variability, which can aid in making accurate clinical diagnoses and refining kinematic measurement techniques.
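A stripped-down version of the probabilistic analysis: perturb landmark coordinates with isotropic 4 mm normal noise and read off the 1-99% envelope of a derived angle. Two hypothetical landmarks and a planar elevation angle stand in for the full 13-landmark Euler-angle pipeline.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical landmark positions [mm]; the 4 mm standard deviation
# matches the perturbation level used in the study above.
prox = np.array([0.0, 0.0, 0.0])       # e.g., a proximal landmark
dist = np.array([150.0, 0.0, 260.0])   # e.g., a distal landmark

n = 20_000
p = prox + rng.normal(0, 4, (n, 3))
d = dist + rng.normal(0, 4, (n, 3))
v = d - p
elev = np.degrees(np.arctan2(v[:, 0], v[:, 2]))   # angle from vertical axis

lo, hi = np.percentile(elev, [1, 99])
print(f"1-99% envelope: {hi - lo:.1f} deg about {elev.mean():.1f} deg")
```

Even this two-landmark toy produces envelopes of a few degrees, showing how quickly millimeter-level landmark noise translates into angular uncertainty.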
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
NASA Astrophysics Data System (ADS)
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. Then we propose the risk assessment model, the risk of decision-making errors and rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
Torres, Carmen; Glueck, Amanda C; Conrad, Shannon E; Morón, Ignacio; Papini, Mauricio R
2016-09-22
The dorsomedial striatum (DMS) has been implicated in the acquisition of reward representations, a proposal leading to the hypothesis that it should play a role in situations involving reward loss. We report the results of an experiment in which the effects of DMS excitotoxic lesions were tested in consummatory successive negative contrast (reward devaluation), autoshaping training with partial vs. continuous reinforcement (reward uncertainty), and appetitive extinction (reward omission). Animals with DMS lesions exhibited reduced lever pressing responding, but enhanced goal entries, during partial reinforcement training in autoshaping. However, they showed normal negative contrast, acquisition under continuous reinforcement (CR), appetitive extinction, and response facilitation in early extinction trials. Open-field testing also indicated normal motor behavior. Thus, DMS lesions selectively affected the behavioral adjustment to a situation involving reward uncertainty, producing a behavioral reorganization according to which goal tracking (goal entries) became predominant at the expense of sign tracking (lever pressing). This pattern of results shows that the function of the DMS in situations involving reward loss is not general, but restricted to reward uncertainty. We suggest that a nonassociative, drive-related process induced by reward uncertainty requires normal output from DMS neurons. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We relocate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
NASA Astrophysics Data System (ADS)
Ngamroo, Issarachai
2010-12-01
It is well known that superconducting magnetic energy storage (SMES) is able to quickly exchange active and reactive power with the power system. The SMES is expected to be a smart storage device for power system stabilization. Although the stabilizing effect of SMES is significant, the SMES is quite costly. In particular, the superconducting magnetic coil size, which is the essence of the SMES, must be carefully selected. On the other hand, various generation and load changes, unpredictable network structure, etc., cause system uncertainties. A SMES power controller designed without considering such uncertainties may not tolerate them and may lose its stabilizing effect. To overcome these problems, this paper proposes a new robust SMES controller design that takes coil size and system uncertainties into account. The structure of the active and reactive power controllers is a 1st-order lead-lag compensator. Without the need for an exact mathematical representation, system uncertainties are modeled by an inverse input multiplicative perturbation. The optimization problem of the control parameters is formulated without the difficulty of trading off damping performance against robustness. Particle swarm optimization is used to solve for the optimal parameters at each coil size automatically. Based on the normalized integral square error index and the consideration of the coil current constraint, the robust SMES with the smallest coil size that still provides a satisfactory stabilizing effect can be achieved. Simulation studies in the two-area four-machine interconnected power system show the superior robustness of the proposed robust SMES with the smallest coil size under various operating conditions over the non-robust SMES with a large coil size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, C; Nanjing University of Aeronautics and Astronautics, Nanjing; Daartz, J
Purpose: To evaluate the accuracy of dose calculations by analytical dose calculation methods (ADC) for small-field proton therapy in a gantry-based passive scattering facility. Methods: 50 patients with intra-cranial disease were evaluated in the study. Treatment plans followed standard prescription and optimization procedures of proton stereotactic radiosurgery. Dose distributions calculated with the Monte Carlo (MC) toolkit TOPAS were used to represent delivered treatments. The MC dose was first adjusted using the output factor (OF) applied clinically, which is determined from the field size and the prescribed range. We then introduced a normalization factor to measure the difference in mean dose between the delivered dose (MC dose with OF) and the dose calculated by ADC for each beam. The normalization was determined by the mean dose of the center voxels of the target area. We compared delivered dose distributions and those calculated by ADC in terms of dose volume histogram parameters and beam range distributions. Results: The mean target dose for a whole treatment generally agrees to within 5% between the delivered dose (MC dose with OF) and the ADC dose. However, the differences can be as great as 11% for shallow and small targets treated with a thick range compensator. Applying the normalization factor to the MC dose with OF can reduce the mean dose difference to less than 3%. Considering range uncertainties, the generally applied margins (3.5% of the prescribed range + 1 mm) to cover uncertainties in range might not be sufficient to guarantee tumor coverage. The range difference for R90 (90% distal dose falloff) is affected by multiple factors, such as the heterogeneity index. Conclusion: This study indicates insufficient accuracy when calculating proton doses using ADC. Our results suggest that uncertainties of target doses are reduced using MC techniques, improving the dosimetric accuracy for proton stereotactic radiosurgery. The work was supported by NIH/NCI under CA U19 021239. CG was partially supported by the Chinese Scholarship Council (CSC) and the National Natural Science Foundation of China (Grant No. 11475087).
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in the value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously; the conventional scoring approach treats the value-set coefficients as if they were known exactly. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty in MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error based on multiple imputation, and with the standard error from the conventional approach of applying the MAUI scoring algorithm directly (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring the uncertainty of predicted health utilities derived from MAUIs can lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
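Rubin's rules make the multiple-imputation step concrete. The sketch below uses a toy cohort and an assumed perturbation scale in place of the paper's Bayesian value-set posterior, but the combination of within- and between-imputation variance is the standard calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: n respondents' utilities rescored under M draws of the
# value-set parameters (stand-in for the Bayesian posterior of the paper).
n, M = 3958, 20
base_utility = rng.beta(8, 2, size=n)           # toy scored health states

q, u = [], []                                    # per-imputation mean and variance
for m in range(M):
    shift = rng.normal(0.0, 0.01)                # assumed value-set perturbation
    imputed = np.clip(base_utility + shift, -0.594, 1.0)
    q.append(imputed.mean())
    u.append(imputed.var(ddof=1) / n)            # within-imputation var of the mean

q, u = np.array(q), np.array(u)
W = u.mean()                                     # within-imputation variance
B = q.var(ddof=1)                                # between-imputation variance
T = W + (1 + 1 / M) * B                          # Rubin's rules total variance
print(f"mean utility = {q.mean():.3f}, SE = {np.sqrt(T):.3f}")
```

With M = 20 imputations, the between-imputation term B carries exactly the value-set parameter uncertainty that the conventional approach discards.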
NASA Astrophysics Data System (ADS)
Singh, A.; Serbin, S. P.; Kingdon, C.; Townsend, P. A.
2013-12-01
A major goal of remote sensing, and imaging spectroscopy in particular, is the development of generalizable algorithms to repeatedly and accurately map ecosystem properties such as canopy chemistry across space and time. Existing methods must therefore be tested across a range of measurement approaches to identify and overcome limits to the consistent retrieval of such properties from spectroscopic imagery. Here we illustrate a general approach for the estimation of key foliar biochemical and morphological traits from spectroscopic imagery derived from the AVIRIS instrument and the propagation of errors from the leaf to the image scale using partial least squares regression (PLSR) techniques. Our method involves the integration of three types of data representing different scales of observation. At the image scale, the images were normalized for atmospheric, illumination and BRDF effects. Spectra from field plot locations were extracted from the 51 AVIRIS images and were averaged when the field plot was larger than a single pixel. At the plot level, the scaling was conducted using multiple replicates (1000) derived from the leaf-level uncertainty estimates to generate plot-level estimates with their associated uncertainties. Leaf-level estimates of foliar traits (%N, %C, %Fiber, %Cellulose, %Lignin, LMA) were scaled to the canopy based on the relative species composition of each plot. Image spectra were iteratively split into 50/50 randomized calibration-validation datasets and multiple (500) trait-predictive PLSR models were generated, this time sampling from within the plot-level uncertainty distribution. This allowed the propagation of uncertainty from the leaf-level dependent variables to the plot level, and finally to models built using AVIRIS image spectra. Moreover, this method allows us to generate spatially explicit maps of uncertainty in our sampled traits. Both the LMA and %N PLSR models had an R2 greater than 0.8, and the root mean square errors (RMSEs) for both variables were less than 6% of the range of the data. Fiber and lignin were predicted with R2 > 0.65, and carbon and cellulose with R2 > 0.5. Although the R2 values of these variables were lower than those for LMA and %N, their RMSE values were below 9% of the range of the data. The comparatively lower R2 values for %C and cellulose in particular were related to the low amount of natural variability in these constituents. Further, coefficients from the randomized set of PLSR models were applied to imagery and aggregated to obtain pixel-wise predicted means and uncertainty estimates for each foliar trait. The resulting maps of nutritional and morphological properties together with their overall uncertainties represent a first-of-its-kind data product for examining the spatio-temporal patterns of forest functioning and nutrient cycling. These data are now being used to relate foliar traits with ecosystem processes such as streamwater nutrient export and insect herbivory. In addition, the ability to assign a retrieval uncertainty enables more efficient assimilation of these data products into ecosystem models to help constrain carbon and nutrient cycling projections.
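The ensemble-of-models idea can be sketched with scikit-learn's PLSRegression. Everything below is synthetic (pseudo-spectra, invented trait values and uncertainties); it only illustrates resampling the calibration data within plot-level uncertainty and reading the ensemble spread as the per-pixel uncertainty:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 120 plot-level spectra (200 bands) and plot-mean
# %N values with per-plot uncertainties carried up from the leaf level.
n_plots, n_bands = 120, 200
X = rng.normal(size=(n_plots, n_bands)).cumsum(axis=1)   # smooth pseudo-spectra
w_true = np.zeros(n_bands); w_true[60:80] = 0.05
y = X @ w_true + rng.normal(0.0, 0.2, n_plots)           # plot-mean %N (toy)
y_sigma = np.full(n_plots, 0.15)                         # plot-level sd (assumed)

models = []
for _ in range(500):                                     # PLSR model ensemble
    y_draw = rng.normal(y, y_sigma)                      # sample plot uncertainty
    X_cal, _, y_cal, _ = train_test_split(X, y_draw, test_size=0.5)
    models.append(PLSRegression(n_components=8).fit(X_cal, y_cal))

# Pixel-wise mapping: apply every model to one image spectrum, then report
# the ensemble mean as the mapped value and the spread as its uncertainty.
pixel = X[0:1]
preds = np.array([m.predict(pixel)[0, 0] for m in models])
print(f"%N = {preds.mean():.2f} +/- {preds.std(ddof=1):.2f}")
```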
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebl, Jakob, E-mail: jakob.liebl@medaustron.at; Francis H. Burr Proton Therapy Center, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114; Department of Therapeutic Radiology and Oncology, Medical University of Graz, 8036 Graz
2014-09-15
Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated to heterogeneity indices (HIs) calculated for each treatment field (38% < R2 < 50%). An NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes TCP decreases by more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning). Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain as well as target and OAR dosimetry were studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe.
Liebl, Jakob; Paganetti, Harald; Zhu, Mingyao; Winey, Brian A.
2014-01-01
Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated to heterogeneity indices (HIs) calculated for each treatment field (38% < R2 < 50%). An NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes TCP decreases by more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning). Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain as well as target and OAR dosimetry were studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe. PMID:25186386
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.
Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-08-16
Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
NASA Astrophysics Data System (ADS)
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-01
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
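The stochastic-sampling half of the comparison reduces to a simple recipe: draw perturbed cross-section sets from their assumed uncertainties, run the model for each, and read off the output statistics. The sketch below does this for a toy two-group infinite-medium k-infinity model, standing in for XSUSA's perturbation of a full lattice calculation; all numbers are illustrative placeholders, not benchmark data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-group data: [abs1, abs2, nu_fis1, nu_fis2] in 1/cm, with assumed
# relative standard deviations; group 1 -> 2 scattering held fixed.
mu = np.array([0.012, 0.12, 0.006, 0.18])
rel_sd = np.array([0.02, 0.03, 0.01, 0.015])
s12 = 0.018

def k_inf(xs):
    a1, a2, nf1, nf2 = xs
    phi_ratio = s12 / a2                  # phi2/phi1 from the group-2 balance
    return (nf1 + nf2 * phi_ratio) / (a1 + s12)

# Sample perturbed cross-section sets and propagate each through the model.
samples = mu * (1.0 + rel_sd * rng.standard_normal((1000, 4)))
k = np.array([k_inf(s) for s in samples])
print(f"k_inf = {k.mean():.4f} +/- {k.std(ddof=1):.4f}")
```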
Holzgrefe, Henry; Ferber, Georg; Champeroux, Pascal; Gill, Michael; Honda, Masaki; Greiter-Wilke, Andrea; Baird, Theodore; Meyer, Olivier; Saulnier, Muriel
2014-01-01
In vivo models have been required to demonstrate relative cardiac safety, but model sensitivity has not been systematically investigated. Cross-species and human translation of repolarization delay, assessed as QT/QTc prolongation, has not been compared employing common methodologies across multiple species and sites. Therefore, the accurate translation of repolarization results within and between preclinical species, and to man, remains problematic. Six pharmaceutical companies entered into an informal consortium designed to collect high-resolution telemetered data in multiple species (dog, n=34; cynomolgus, n=37; minipig, n=12; marmoset, n=14; guinea pig, n=5; and man, n=57). All animals received vehicle and varying doses of moxifloxacin (3-100 mg/kg, p.o.) with telemetered ECGs (≥500 Hz) obtained for 20-24 h post-dose. Individual probabilistic QT-RR relationships were derived for each subject. The rate-correction efficacies of the individual (QTca) and generic correction formulae (Bazett, Fridericia, and Van de Water) were objectively assessed as the mean squared slopes of the QTc-RR relationships. Normalized moxifloxacin QTca responses (Veh Δ%/μM) were derived for 1 h centered on the moxifloxacin Tmax. All QT-RR ranges demonstrated probabilistic uncertainty; slopes varied distinctly by species, where dog and human exhibited the lowest QT rate-dependence, which was much steeper in the cynomolgus and guinea pig. Incorporating probabilistic uncertainty, the normalized QTca-moxifloxacin responses were similarly conserved across all species, including man. The current results provide the first unambiguous evidence that all preclinical in vivo repolarization assays, when accurately modeled and evaluated, yield results that are consistent with the conservation of moxifloxacin-induced QT prolongation across all common preclinical species. Furthermore, these outcomes are directly transferable across all species including man. The consortium results indicate that the implementation of standardized QTc data presentation, QTc reference cycle lengths, and rate-correction coefficients can markedly improve the concordance of preclinical and clinical outcomes in most preclinical species. Copyright © 2013 Elsevier Inc. All rights reserved.
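An individual rate correction of this kind can be written in a few lines: fit each subject's own QT-RR exponent and correct QT to a reference RR of 1 s. The data below are synthetic, with an assumed true exponent of 0.35, and stand in for a single telemetered animal:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic beats: RR intervals and QT following a power law QT = a * RR^b.
rr = rng.uniform(0.5, 1.2, 2000)                          # RR interval (s)
qt = 0.30 * rr**0.35 * np.exp(rng.normal(0, 0.02, rr.size))  # QT (s)

# Individual correction: linear fit of log(QT) on log(RR) gives exponent b.
b, log_a = np.polyfit(np.log(rr), np.log(qt), 1)
qtca = qt / rr**b                                         # QTca = QT / RR^b
print(f"fitted b = {b:.3f}, QTca sd = {qtca.std():.4f} s "
      f"vs Bazett sd = {(qt / np.sqrt(rr)).std():.4f} s")
```

Because the exponent is fitted to the subject rather than fixed generically (0.5 for Bazett), the corrected QTca shows less residual rate dependence, which is what the "mean squared slope" criterion in the abstract quantifies.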
The interplanetary shock of September 24, 1998: Arrival at Earth
NASA Astrophysics Data System (ADS)
Russell, C. T.; Wang, Y. L.; Raeder, J.; Tokar, R. L.; Smith, C. W.; Ogilvie, K. W.; Lazarus, A. J.; Lepping, R. P.; Szabo, A.; Kawano, H.; Mukai, T.; Savin, S.; Yermolaev, Y. I.; Zhou, X.-Y.; Tsurutani, B. T.
2000-11-01
At close to 2345 UT on September 24, 1998, the magnetosphere was suddenly compressed by the passage of an interplanetary shock. In order to properly interpret the magnetospheric events triggered by the arrival of this shock, we calculate the orientation of the shock, its velocity, and its estimated time of arrival at the nose of the magnetosphere. Our best fit shock normal has an orientation of (-0.981, -0.157, -0.112) in solar ecliptic coordinates, a speed of 769 km/s, and an arrival time of 2344:19 UT at the magnetopause at 10 RE. Since measurements of the solar wind and interplanetary magnetic field are available from multiple spacecraft, we can compare several different techniques of shock-normal determination. Of the single-spacecraft techniques, the magnetic coplanarity solution is the most accurate, and the mixed-mode solution is of lesser accuracy. Uncertainty in the timing and location of the IMP 8 spacecraft limits the accuracy of solutions using the time of arrival at the position of IMP 8.
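The magnetic coplanarity estimate has a closed form: the normal lies along (B_u x B_d) x (B_d - B_u). A minimal numpy version, with made-up upstream and downstream field vectors rather than this event's data:

```python
import numpy as np

# Magnetic coplanarity shock normal. Field values are invented for
# illustration; a real analysis would use averaged upstream/downstream data.
B_u = np.array([-2.1, 4.5, 1.0])      # upstream IMF (nT), assumed
B_d = np.array([-6.0, 11.0, 2.2])     # downstream field (nT), assumed

dB = B_d - B_u
n = np.cross(np.cross(B_u, B_d), dB)  # coplanarity direction
n /= np.linalg.norm(n)
# Resolve the sign ambiguity so the normal points along the propagation
# direction (anti-sunward, negative X in solar ecliptic coordinates).
if n[0] > 0:
    n = -n
print("coplanarity normal:", np.round(n, 3))
```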
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig
2004-12-01
Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WL/SL configurations: (i) one WL, one SL, (ii) multiple WLs, multiple SLs with failure of any SL before any WL constituting failure of the safety system, (iii) multiple WLs, multiple SLs with failure of all SLs before any WL constituting failure of the safety system, and (iv) multiple WLs, multiple SLs and multiple sublinks in each SL with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and also on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory are presented.
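For the simplest configuration (one WL, one SL) the PLOAS integral is easy to evaluate by Monte Carlo. The sketch below assumes a monotone fire heat-up curve and normal failure-temperature distributions; all values are placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared fire temperature history T(t); each link fails when T(t) first
# exceeds its own random failure temperature.
t = np.linspace(0.0, 60.0, 601)                  # minutes
T = 20.0 + 15.0 * t                              # toy linear heat-up (deg C)

n = 200_000
T_wl = rng.normal(350.0, 30.0, n)                # WL failure temperature (assumed)
T_sl = rng.normal(600.0, 60.0, n)                # SL failure temperature (assumed)

# With a monotone T(t), "fails first" is equivalent to "lower failure
# temperature"; the time formulation is kept explicit to mirror the integral.
t_wl = np.interp(T_wl, T, t)                     # first time T(t) >= T_wl
t_sl = np.interp(T_sl, T, t)
ploas = np.mean(t_sl < t_wl)                     # SL fails before WL
print(f"PLOAS ~ {ploas:.2e}")
```

The quadrature alternatives in the abstract evaluate the same quantity as a nested integral over the two failure-temperature densities rather than by sampling.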
Horsky, Monika; Irrgeher, Johanna; Prohaska, Thomas
2016-01-01
This paper critically reviews the state of the art of isotope amount ratio measurements by solution-based multi-collector inductively coupled plasma mass spectrometry (MC ICP-MS) and presents guidelines for corresponding data reduction strategies and uncertainty assessments based on the example of n(87Sr)/n(86Sr) isotope ratios. This ratio shows variation attributable to natural radiogenic processes and mass-dependent fractionation, and the applied calibration strategies can display these differences. In addition, a proper statement of measurement uncertainty, including all relevant influence quantities, is a metrological prerequisite. A detailed instructive procedure for the calculation of combined uncertainties is presented for Sr isotope amount ratios using three different strategies of correction for instrumental isotopic fractionation (IIF): traditional internal correction, standard-sample bracketing, and a combination of both using Zr as internal standard. Uncertainties are quantified by means of a Kragten spreadsheet approach, including the consideration of correlations between individual input parameters to the model equation. The resulting uncertainties are compared with uncertainties obtained from the partial derivatives approach and from Monte Carlo propagation of distributions. We obtain relative expanded uncertainties (Urel; k = 2) of n(87Sr)/n(86Sr) of < 0.03% when normalization values are not propagated. A comprehensive propagation, including certified values and the internal normalization ratio in nature, increases the relative expanded uncertainties by about a factor of two, and the correction for IIF becomes the major contributor.
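The Monte Carlo propagation variant is the easiest to sketch. The code below applies the exponential-law internal correction for IIF to sampled raw ratios; the measured values and their uncertainties are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)

masses = {"86": 85.9092607, "87": 86.9088775, "88": 87.9056123}
R8688_true = 0.1194                  # canonical 86Sr/88Sr used for normalization

# Raw measured ratios with assumed standard uncertainties, sampled 100k times.
r8786 = rng.normal(0.71520, 0.00005, 100_000)    # raw 87/86 (illustrative)
r8688 = rng.normal(0.11780, 0.00002, 100_000)    # raw 86/88 (illustrative)

# Exponential-law correction for instrumental isotopic fractionation (IIF):
# the fractionation exponent f is fixed by the normalization pair 86/88.
f = np.log(R8688_true / r8688) / np.log(masses["86"] / masses["88"])
corrected = r8786 * (masses["87"] / masses["86"]) ** f

print(f"n(87Sr)/n(86Sr) = {corrected.mean():.5f} "
      f"+/- {2 * corrected.std(ddof=1):.5f} (k = 2)")
```

A Kragten spreadsheet reaches the same combined uncertainty by perturbing one input at a time instead of sampling them jointly.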
Assessing the impact of a change of samplers on the uncertainty related to geothermal water sampling
NASA Astrophysics Data System (ADS)
Wątor, Katarzyna; Mika, Anna; Sekuła, Klaudia; Kmiecik, Ewa
2018-02-01
The aim of this study is to assess the impact of a change of samplers on the uncertainty associated with the process of geothermal water sampling. The study was carried out on geothermal water exploited in the Podhale region, southern Poland (Małopolska province). To estimate the uncertainty associated with sampling, the results of determinations of metasilicic acid (H2SiO3) in normal and duplicate samples collected in two series were used (in each series the samples were collected by a qualified sampler). Chemical analyses were performed using the ICP-OES method in the certified Hydrogeochemical Laboratory of the Hydrogeology and Engineering Geology Department at the AGH University of Science and Technology in Krakow (Certificate of Polish Centre for Accreditation No. AB 1050). To evaluate the uncertainty arising from sampling, an empirical approach was implemented, based on the double analysis of normal and duplicate samples taken from the same well in each series of testing. The results were analyzed using ROBAN software, based on robust analysis of variance (rANOVA). The research showed that with qualified and experienced samplers, the uncertainty connected with sampling can be reduced, resulting in a small overall measurement uncertainty.
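A crude stand-in for the rANOVA calculation is the classical paired-duplicate estimate, in which half the mean squared normal-duplicate difference estimates the combined sampling-plus-analysis variance. The concentrations below are invented for illustration:

```python
import numpy as np

# H2SiO3 determinations (mg/L) in paired normal/duplicate samples; values
# are invented, and this simple estimator ignores the robustness (outlier
# down-weighting) that rANOVA adds.
normal = np.array([55.2, 54.8, 56.1, 55.5, 54.9, 55.8])
duplicate = np.array([55.0, 55.1, 55.8, 55.9, 54.6, 55.6])

d = normal - duplicate
s_meas = np.sqrt(np.mean(d**2) / 2)              # sd from paired differences
mean_c = np.mean(np.concatenate([normal, duplicate]))
print(f"s_meas = {s_meas:.2f} mg/L ({100 * s_meas / mean_c:.1f}% relative)")
```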
Living With Ambiguity: A Metasynthesis of Qualitative Research on Mild Cognitive Impairment
Gomersall, Tim; Astell, Arlene; Nygård, Louise; Sixsmith, Andrew; Mihailidis, Alex; Hwang, Amy
2015-01-01
Purpose of the study: Mild Cognitive Impairment (MCI) is a diagnosis proposed to describe an intermediate state between normal cognitive aging and dementia. MCI has been criticised for its conceptual fuzziness, its ambiguous relationship to dementia, and the tension it creates between medical and sociological understandings of “normal aging”. Design and Methods: We examined the published qualitative literature on experiences of being diagnosed and living with MCI using metasynthesis as the methodological framework. Results: Two overarching conceptual themes were developed. The first, MCI and myself-in-time, showed that a diagnosis of MCI could profoundly affect a person’s understanding of their place in the world. This impact appears to be mediated by multiple factors including a person’s social support networks, which daily activities are affected, and subjective interpretations of the meaning of MCI. The second theme, Living with Ambiguity, describes the difficulties people experienced in making sense of their diagnosis. Uncertainty arose, in part, from lack of clarity and consistency in the information received by people with MCI, including whether they are even told MCI is the diagnosis. Implications: We conclude by suggesting an ethical tension is always at play when a MCI diagnosis is made. Specifically, earlier support and services afforded by a diagnosis may come at the expense of a person’s anxiety about the future, with continued uncertainty about how his or her concerns and needs can be addressed. PMID:26315317
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraints within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only the landfill but also the CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; secondly, it is capable of addressing the temporal variations of the functional intervals; thirdly, it can facilitate dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.
Bayesian Networks Improve Causal Environmental Assessments for Evidence-Based Policy.
Carriger, John F; Barron, Mace G; Newman, Michael C
2016-12-20
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, conventional weight of evidence approaches ignore sources of uncertainty that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improves the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources. These aspects are demonstrated through hypothetical problem scenarios that explore some major benefits of using Bayesian networks for reasoning and making inferences in evidence-based policy.
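The probabilistic machinery is small enough to show directly. The sketch below builds a three-node network (two candidate causes, one impairment observation) with invented conditional probability tables and diagnoses the chemical stressor by enumeration:

```python
from itertools import product

# Toy causal network: chemical stressor C and habitat degradation H are
# parents of observed biological impairment I. All probabilities invented.
P_C = {True: 0.3, False: 0.7}
P_H = {True: 0.4, False: 0.6}
P_I = {(True, True): 0.95, (True, False): 0.7,     # P(I=1 | C, H)
       (False, True): 0.5, (False, False): 0.05}

def joint(c, h, i):
    p_i = P_I[(c, h)]
    return P_C[c] * P_H[h] * (p_i if i else 1 - p_i)

# Diagnosis: P(C=1 | I=1) by enumerating over the other hidden cause H.
evidence = sum(joint(c, h, True) for c, h in product([True, False], repeat=2))
posterior = sum(joint(True, h, True) for h in [True, False]) / evidence
print(f"P(chemical stressor | impairment) = {posterior:.3f}")
```

Each line of evidence enters as a CPT entry, and the posterior update quantifies the change in uncertainty that rule-based weight of evidence schemes leave implicit.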
Douglas, Alexander D.; Edwards, Nick J.; Duncan, Christopher J. A.; Thompson, Fiona M.; Sheehy, Susanne H.; O'Hara, Geraldine A.; Anagnostou, Nicholas; Walther, Michael; Webster, Daniel P.; Dunachie, Susanna J.; Porter, David W.; Andrews, Laura; Gilbert, Sarah C.; Draper, Simon J.; Hill, Adrian V. S.; Bejon, Philip
2013-01-01
Controlled human malaria infection is used to measure efficacy of candidate malaria vaccines before field studies are undertaken. Mathematical modeling using data from quantitative polymerase chain reaction (qPCR) parasitemia monitoring can discriminate between vaccine effects on the parasite's liver and blood stages. Uncertainty regarding the most appropriate modeling method hinders interpretation of such trials. We used qPCR data from 267 Plasmodium falciparum infections to compare linear, sine-wave, and normal-cumulative-density-function models. We find that the parameters estimated by these models are closely correlated, and their predictive accuracy for omitted data points was similar. We propose that future studies include the linear model. PMID:23570846
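The linear model the authors recommend amounts to a straight-line fit to log parasitemia. A sketch on synthetic data, with an assumed 12-fold multiplication per 48 h cycle and an assumed liver-release day of 6.5 (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic qPCR time series for one volunteer (log10 parasites/mL).
t = np.arange(7.0, 12.5, 0.5)                        # days post-challenge
log10_true = 1.0 + (t - 6.5) * (np.log10(12.0) / 2.0)   # 12-fold per 48 h
obs = log10_true + rng.normal(0, 0.15, t.size)       # qPCR measurement noise

slope, intercept = np.polyfit(t, obs, 1)             # linear model fit
fold_48h = 10 ** (2.0 * slope)                       # blood-stage growth rate
log10_inoculum = intercept + 6.5 * slope             # back-extrapolated release
print(f"multiplication rate ~ {fold_48h:.1f}x per 48 h, "
      f"log10 inoculum ~ {log10_inoculum:.2f}")
```

The back-extrapolated intercept carries the liver-stage vaccine effect, while the slope carries the blood-stage effect, which is the discrimination the abstract describes.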
Planar Laser Imaging of Sprays for Liquid Rocket Studies
NASA Technical Reports Server (NTRS)
Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.
1990-01-01
A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure-atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.
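For a log-normal droplet distribution, the Hatch-Choate relations convert the geometric (count median) diameter and geometric standard deviation into any moment-based mean, and they also show why an uncertain standard deviation hits the Sauter mean much harder than the geometric mean. A quick numerical illustration with assumed values:

```python
import numpy as np

# Hatch-Choate conversions for a log-normal size distribution.
cmd, gsd = 45.0, 1.8                    # count median diameter (um), GSD; assumed

ln2 = np.log(gsd) ** 2
d10 = cmd * np.exp(0.5 * ln2)           # arithmetic mean diameter
d32 = cmd * np.exp(2.5 * ln2)           # Sauter mean diameter (D32)
print(f"D10 = {d10:.1f} um, D32 = {d32:.1f} um")

# A modest error in GSD shifts D32 strongly (exp(2.5 ln^2 GSD) factor),
# consistent with the larger Sauter-mean discrepancies reported above.
print("D32 with GSD = 2.0:", round(cmd * np.exp(2.5 * np.log(2.0) ** 2), 1))
```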
NASA Astrophysics Data System (ADS)
Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens
2015-04-01
The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often require new research in which knowledge from diverse disciplines is combined in a unified methodological and operational framework. Such integrative research to link different knowledge domains faces several practical challenges, and the complexities are further compounded by multiple actors, frequently with conflicting interests, and by multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystem management. It integrates physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. The proposed approach is then applied to a water-scarce coastal arid region water management problem in northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected aquifer sustainability, endangering the associated socio-economic conditions as well as the traditional social structure. Results from the developed method have provided key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, the approach has made it possible to systematically quantify both probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk-averse or risk-taking attitude may yield different rankings of decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.
2011-12-01
In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
[Transformer winding temperature rise and an analysis of its uncertainty].
Wang, Pei-Lian; Chen, Yu-En; Zhong, Sheng-Kui
2007-09-01
This paper introduces the temperature-rise experimental process and some matters needing attention when the transformer is under normal load. An analysis of the uncertainty of the transformer's temperature rise is also made, based on data from practical examples.
Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics
ERIC Educational Resources Information Center
Olsen, Robert J.; Sattar, Simeen
2013-01-01
Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
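The propagation-of-uncertainty half of that comparison is a one-formula calculation for R = PV/(nT): since R is a pure product/quotient, the relative variances of the inputs add at first order. A sketch with illustrative values (not the article's data):

```python
import numpy as np

# One hypothetical buret run: measured values with standard uncertainties.
P, u_P = 0.980e5, 0.002e5      # pressure (Pa)
V, u_V = 41.00e-6, 0.05e-6     # gas volume (m^3)
n, u_n = 1.635e-3, 0.010e-3    # amount of H2 (mol)
T, u_T = 295.6, 0.3            # temperature (K)

R = P * V / (n * T)
# First-order propagation for a product/quotient: relative variances add.
u_R = R * np.sqrt((u_P/P)**2 + (u_V/V)**2 + (u_n/n)**2 + (u_T/T)**2)
print(f"R = {R:.3f} +/- {u_R:.3f} J/(mol K)")
```

Repeating the experiment and taking the standard deviation of the resulting R values gives the statistical estimate that the exercise compares against this propagated one.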
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as over our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) that are constructed for multiple piecewise linear (PWL) models that are constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigase, Yves
2007-07-01
The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, more particularly in those decision processes where the uncertainty on the amount of activity is considered to be important, such as in probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
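The combination step for a collection of packages is natural to do by Monte Carlo, since a sum of log-normal activities is not itself log-normal. A sketch with invented package parameters:

```python
import numpy as np

rng = np.random.default_rng(11)

# Each package's activity modeled as log-normal (scaling-factor style);
# geometric means and GSD below are invented placeholders.
n_packages, n_draws = 50, 100_000
gm = rng.uniform(0.5, 2.0, n_packages)        # geometric means (GBq)
gsd = np.full(n_packages, 2.5)                # geometric standard deviations

draws = rng.lognormal(np.log(gm), np.log(gsd), size=(n_draws, n_packages))
total = draws.sum(axis=1)                     # collection total per draw

lo, hi = np.percentile(total, [2.5, 97.5])
print(f"total activity: median {np.median(total):.1f} GBq, "
      f"95% interval [{lo:.1f}, {hi:.1f}] GBq")
```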
CASMO5/TSUNAMI-3D spent nuclear fuel reactivity uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrer, R.; Rhodes, J.; Smith, K.
2012-07-01
The CASMO5 lattice physics code is used in conjunction with the TSUNAMI-3D sequence in ORNL's SCALE 6 code system to estimate the uncertainties in hot-to-cold reactivity changes due to cross-section uncertainty for PWR assemblies at various burnup points. The goal of the analysis is to establish the multiplication factor uncertainty similarity between various fuel assemblies at different conditions in a quantifiable manner and to obtain a bound on the hot-to-cold reactivity uncertainty over the various assembly types and burnup attributed to fundamental cross-section data uncertainty. (authors)
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
Li, Haibin; He, Yun; Nie, Xiaobo
2018-01-01
Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but the calculation of the multiple integrals involved remains mathematically difficult. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate reliability method for structural reliability problems.
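For the classical linear limit state g = R - S with normal variables, the direct-integration formulation can be checked against both Monte Carlo and the exact reliability index. This is only the benchmark-style comparison the paper uses, not its dual-network integrator; all parameters are illustrative:

```python
import numpy as np
from scipy import integrate, stats

# Resistance R and load S, independent normals (illustrative parameters).
muR, sdR, muS, sdS = 200.0, 20.0, 150.0, 15.0

# Direct integration: Pf = integral of f_R(r) * P(S > r) dr.
integrand = lambda r: stats.norm.pdf(r, muR, sdR) * stats.norm.sf(r, muS, sdS)
pf_int, _ = integrate.quad(integrand, muR - 10 * sdR, muR + 10 * sdR)

# Monte Carlo check.
rng = np.random.default_rng(2)
n = 2_000_000
pf_mc = np.mean(rng.normal(muR, sdR, n) < rng.normal(muS, sdS, n))

# Exact result for the linear normal case via the reliability index beta.
beta = (muR - muS) / np.hypot(sdR, sdS)
print(f"integration {pf_int:.5f}, MC {pf_mc:.5f}, exact {stats.norm.cdf(-beta):.5f}")
```

In higher dimensions the one-dimensional quadrature above becomes the multiple integral whose evaluation motivates the dual-network construction.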
Uncertainty evaluation with increasing borehole drilling in subsurface hydrogeological explorations
NASA Astrophysics Data System (ADS)
Amano, K.; Ohyama, T.; Kumamoto, S.; Shimo, M.
2016-12-01
Deciding how many boreholes to drill has been a difficult subject for field investigators in subsurface hydrogeological explorations. The problem becomes bigger in heterogeneous formations or rock masses, so quantitative criteria are needed for evaluating uncertainties during borehole investigations. To test the uncertainty reduction with increasing numbers of boreholes, we prepared a simple hydrogeological model and carried out virtual hydraulic tests using this model. The model consists of 125,000 elements, whose hydraulic conductivities are generated randomly from a log-normal distribution, in a 2-kilometer cube. Uncertainties were calculated from the difference in head distributions between the original model and the partially conditioned models built up as the virtual hydraulic tests were added one by one. The results show that the level and the variance of uncertainty are strongly correlated with the average and variance of the hydraulic conductivities. Similar trends could also be seen in the actual field data obtained from the deep borehole investigations in Horonobe Town, northern Hokkaido, Japan. Here, a new approach using fractional bias (FB) and normalized mean square error (NMSE) for evaluating uncertainty characteristics will be introduced, and its possible use as an indicator for decision making (i.e., to stop or continue borehole drilling) in field investigations will be discussed.
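FB and NMSE have standard definitions in model evaluation, which the sketch below assumes (the abstract does not spell out its exact formulas): FB compares means, so it captures bias, while NMSE is a scaled mean squared error, so it captures scatter. The head values are invented:

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred)."""
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def nmse(obs, pred):
    """NMSE = mean((obs - pred)^2) / (mean_obs * mean_pred)."""
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

# Toy example: heads (m) from the reference model vs a model conditioned
# on fewer boreholes (values invented for illustration).
obs = np.array([310.2, 305.7, 298.4, 301.1, 295.0])
pred = np.array([312.0, 303.2, 300.9, 298.5, 296.8])
print(f"FB = {fractional_bias(obs, pred):+.4f}, NMSE = {nmse(obs, pred):.5f}")
```

Tracking both metrics as boreholes are added gives a quantitative stopping rule: drilling stops once neither improves appreciably.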
Markov logic network based complex event detection under uncertainty
NASA Astrophysics Data System (ADS)
Lu, Jingyang; Jia, Bin; Chen, Genshe; Chen, Hua-mei; Sullivan, Nichole; Pham, Khanh; Blasch, Erik
2018-05-01
In a cognitive reasoning system, the four-stage Observe-Orient-Decide-Act (OODA) reasoning loop is of interest. The OODA loop is essential for situational awareness, especially in heterogeneous data fusion. Cognitive reasoning for making decisions can take advantage of different formats of information, such as symbolic observations, various real-world sensor readings, or the relationships between intelligent modalities. Markov logic networks (MLNs) provide a mathematically sound technique for representing and fusing data at multiple levels of abstraction, and across multiple intelligent sensors, to conduct complex decision-making tasks. In this paper, a scenario about vehicle interaction is investigated, in which uncertainty is taken into consideration since no systematic approach can perfectly characterize the complex event scenario. MLNs are applied to the terrestrial domain, where the dynamic features and relationships among vehicles are captured through multiple sensors and information sources regarding the data uncertainty.
Uncertainty relation for non-Hamiltonian quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarasov, Vasily E.
2013-01-15
General forms of uncertainty relations for quantum observables of non-Hamiltonian quantum systems are considered, and special cases of these uncertainty relations are discussed. The uncertainty relations for non-Hamiltonian quantum systems are considered in the Schroedinger-Robertson form, since it allows us to take into account the Lie-Jordan algebra of quantum observables. The time dependence of the quantum observables entering the uncertainty relations, and the properties of this dependence, are discussed. We take into account that the time evolution of observables of a non-Hamiltonian quantum system is not an endomorphism with respect to the Lie, Jordan, and associative multiplications.
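For reference, the Schroedinger-Robertson form bounds the product of variances by both a symmetrized (Jordan-product) term and a commutator (Lie-product) term, which is why it exposes the Lie-Jordan structure the abstract mentions:

```latex
% Schroedinger-Robertson relation for observables A, B in a given state:
\sigma_A^2 \, \sigma_B^2 \;\ge\;
  \Bigl( \tfrac{1}{2}\langle \{A,B\} \rangle - \langle A\rangle\langle B\rangle \Bigr)^{2}
  + \Bigl( \tfrac{1}{2i}\langle [A,B] \rangle \Bigr)^{2}
% Dropping the first (Jordan) term recovers the weaker Robertson relation.
```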
Uncertainty Analysis of Consequence Management (CM) Data Products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole
The goal of this project is to develop and execute methods for characterizing uncertainty in data products that are developed and distributed by the DOE Consequence Management (CM) Program. A global approach to this problem is necessary because multiple sources of error and uncertainty from across the CM skill sets contribute to the ultimate production of CM data products. This report presents the methods used to develop a probabilistic framework to characterize this uncertainty and provides results for an uncertainty analysis for a study scenario analyzed using this framework.
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality
Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-01-01
Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the `source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6^{+0.7}_{-1.3}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
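As a minimal sketch of the maximum-likelihood step described above: for a cumulative distribution N(>F) ∝ F^α truncated at a completeness fluence F_min, the standard MLE of the slope is α = -n / Σ ln(F_i/F_min). The toy data below are synthetic, not the Parkes sample.

```python
import numpy as np

def slope_mle(fluences, f_min):
    """MLE of the cumulative source-count slope alpha, where
    N(>F) propto F**alpha for F >= f_min (alpha < 0)."""
    f = np.asarray(fluences, dtype=float)
    f = f[f >= f_min]                        # completeness-limited sample
    n = f.size
    alpha = -n / np.log(f / f_min).sum()     # standard power-law MLE
    return alpha, abs(alpha) / np.sqrt(n)    # estimate and 1-sigma error

# synthetic Pareto fluences above a 2 Jy ms completeness cut
rng = np.random.default_rng(0)
sample = 2.0 * (1.0 + rng.pareto(1.6, size=25))
print(slope_mle(sample, f_min=2.0))          # recovers alpha near -1.6
```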
Robinson, Mike J F; Anselme, Patrick; Fischer, Adam M; Berridge, Kent C
2014-06-01
Uncertainty is a component of many gambling games and may play a role in incentive motivation and cue attraction. Uncertainty can increase the attractiveness of predictors of reward in the Pavlovian procedure of autoshaping, visible as enhanced sign-tracking (or approach and nibbles) by rats of a metal lever whose sudden appearance acts as a conditioned stimulus (CS+) to predict sucrose pellets as an unconditioned stimulus (UCS). Here we examined how reward uncertainty might enhance incentive salience as sign-tracking both in intensity and by broadening the range of attractive CS+s. We also examined whether initially induced uncertainty enhancements of CS+ attraction can endure beyond uncertainty itself, and persist even when Pavlovian prediction becomes 100% certain. Our results show that uncertainty can broaden incentive salience attribution to make CS cues attractive that would otherwise not be (either because they are too distal from reward or too risky to normally attract sign-tracking). In addition, uncertainty enhancement of CS+ incentive salience, once induced by initial exposure, persisted even when Pavlovian CS-UCS correlations later rose toward 100% certainty in prediction. Persistence suggests an enduring incentive motivation enhancement potentially relevant to gambling, which in some ways resembles incentive-sensitization. Higher motivation to uncertain CS+s leads to more potent attraction to these cues when they predict the delivery of uncertain rewards. In humans, those cues might possibly include the sights and sounds associated with gambling, which contribute a major component of the play immersion experienced by problematic gamblers. Copyright © 2014 Elsevier B.V. All rights reserved.
Neural network L1 adaptive control of MIMO systems with nonlinear uncertainty.
Zhen, Hong-tao; Qi, Xiao-hui; Li, Jie; Tian, Qing-min
2014-01-01
An indirect adaptive controller is developed for a class of multiple-input multiple-output (MIMO) nonlinear systems with unknown uncertainties. The control system comprises an L1 adaptive controller and an auxiliary neural network (NN) compensation controller. The L1 adaptive controller has guaranteed transient response in addition to stable tracking. In this architecture, a low-pass filter is adopted to guarantee a fast adaptation rate without generating high-frequency oscillations in the control signals. The auxiliary compensation controller is designed to approximate the unknown nonlinear functions by MIMO RBF neural networks to suppress the influence of uncertainties. NN weights are tuned on-line with no prior training, and a projection operator keeps the weights bounded. The global stability of the closed-loop system is derived based on a Lyapunov function. Numerical simulations of a MIMO system coupled with nonlinear uncertainties are used to illustrate the practical potential of our theoretical results.
Resonance region measurements of dysprosium and rhenium
NASA Astrophysics Data System (ADS)
Leinweber, Gregory; Block, Robert C.; Epping, Brian E.; Barry, Devin P.; Rapp, Michael J.; Danon, Yaron; Donovan, Timothy J.; Landsberger, Sheldon; Burke, John A.; Bishop, Mary C.; Youmans, Amanda; Kim, Guinyun N.; Kang, Yeong-Rok; Lee, Man Woo; Drindak, Noel J.
2017-09-01
Neutron capture and transmission measurements have been performed, and resonance parameter analysis has been completed for dysprosium, Dy, and rhenium, Re. The 60 MeV electron accelerator at the RPI Gaerttner LINAC Center produced neutrons in the thermal and epithermal energy regions for these measurements. Transmission measurements were made using 6Li glass scintillation detectors. The neutron capture measurements were made with a 16-segment NaI multiplicity detector. The detectors for all experiments were located at ≈25 m except for thermal transmission, which was done at ≈15 m. The dysprosium samples included one highly enriched 164Dy metal, six liquid solutions of enriched 164Dy, and two natural Dy metals. The Re samples were natural metals. Their capture yield normalizations were corrected for their high gamma attenuation. The multi-level R-matrix Bayesian computer code SAMMY was used to extract the resonance parameters from the data. 164Dy resonance data were analyzed up to 550 eV, other Dy isotopes up to 17 eV, and Re resonance data up to 1 keV. Uncertainties due to resolution function, flight path, burst width, sample thickness, normalization, background, and zero time were estimated and propagated using SAMMY. An additional check of sample-to-sample consistency is presented as an estimate of uncertainty. The thermal total cross sections and neutron capture resonance integrals of 164Dy and Re were determined from the resonance parameters. The NJOY and INTER codes were used to process and integrate the cross sections. Plots of the data, fits, and calculations using ENDF/B-VII.1 resonance parameters are presented.
Neudecker, Denise; Taddeucci, Terry Nicholas; Haight, Robert Cameron; ...
2016-01-06
The spectrum of neutrons emitted promptly after 239Pu(n,f)—a so-called prompt fission neutron spectrum (PFNS)—is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties as well as the neutron multiplicity of selected critical assemblies. A summary of data and documentation needs to improve the quality of the experimental database is provided based on the results of simulations and test-evaluations. Furthermore, given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the 239Pu PFNS as a ratio to either the 235U or 252Cf PFNS.
Merging information from multi-model flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
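A minimal sketch of this calculation, assuming independent log-normal factors each summarized by the mean and SD of its log10 value (the numbers below are hypothetical): the log10 of the product is Normal, with the means summed and the SDs added in quadrature.

```python
import numpy as np
from scipy.stats import norm

def combined_uncertainty_factor(log10_means, log10_sds, percentile=0.95):
    """Upper percentile of a product of independent log-normal
    uncertainty factors, each given as mean and SD of log10(factor)."""
    mu = np.sum(log10_means)                    # log10 of the product's median
    sd = np.sqrt(np.sum(np.square(log10_sds)))  # SDs combine in quadrature
    return 10.0 ** (mu + norm.ppf(percentile) * sd)

# four factors, each with median 10**0.5 (~3.2) and log10-SD of 0.3:
print(combined_uncertainty_factor([0.5] * 4, [0.3] * 4))
```

For these illustrative inputs the 95th percentile of the product is on the order of 10^3, well below the conventional default product of 10^4, which is the kind of reduction the paper quantifies.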
Parental Deployment and Youth in Military Families: Exploring Uncertainty and Ambiguous Loss
ERIC Educational Resources Information Center
Huebner, Angela J.; Mancini, Jay A.; Wilcox, Ryan M.; Grass, Saralyn R.; Grass, Gabriel A.
2007-01-01
Parental deployment has substantial effects on the family system, among them ambiguity and uncertainty. Youth in military families are especially affected by parental deployment because their coping repertoire is only just developing; the requirements of deployment become additive to normal adolescent developmental demands. Focus groups were used…
USDA-ARS?s Scientific Manuscript database
Management zones for various crops have been delineated using NDVI (Normalized Difference Vegetation Index), apparent bulk soil electrical conductivity (ECa - Veris), and yield data; however, estimations of uncertainty for these data layers are equally important considerations. The objective of this...
NASA Astrophysics Data System (ADS)
Bobovnik, G.; Kutin, J.; Bajsić, I.
2016-08-01
This paper deals with an uncertainty analysis of gas flow measurements using a compact, high-speed, clearance-sealed realization of a piston prover. A detailed methodology for the uncertainty analysis, covering the components due to the gas density, dimensional and time measurements, the leakage flow, the density correction factor and the repeatability, is presented. The paper also deals with the selection of the isothermal and adiabatic measurement models, the treatment of the leakage flow and discusses the need for averaging multiple consecutive readings of the piston prover. The analysis is prepared for the flow range (50 000:1) covered by the three interchangeable flow cells. The results show that using the adiabatic measurement model and averaging the multiple readings, the estimated expanded measurement uncertainty of the gas mass flow rate is less than 0.15% in the flow range above 0.012 g min⁻¹, whereas it increases for lower mass flow rates due to the leakage flow related effects. At the upper end of the measuring range, using the adiabatic instead of the isothermal measurement model, as well as averaging multiple readings, proves important.
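A sketch of how such a bottom-up budget might combine, assuming uncorrelated relative standard uncertainties added in quadrature with a coverage factor k = 2; the component magnitudes are placeholders, not the paper's values.

```python
import numpy as np

def expanded_rel_uncertainty(u_density, u_dimension, u_time, u_leak,
                             u_repeat, n_readings=1, k=2):
    """Expanded relative uncertainty of the mass flow rate; averaging
    n consecutive readings shrinks only the repeatability component."""
    u_rep = u_repeat / np.sqrt(n_readings)
    u_c = np.sqrt(u_density**2 + u_dimension**2 +
                  u_time**2 + u_leak**2 + u_rep**2)
    return k * u_c

# single reading vs. the average of 10 consecutive readings:
print(expanded_rel_uncertainty(4e-4, 3e-4, 1e-4, 2e-4, 5e-4))
print(expanded_rel_uncertainty(4e-4, 3e-4, 1e-4, 2e-4, 5e-4, n_readings=10))
```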
Spatial curvilinear path following control of underactuated AUV with multiple uncertainties.
Miao, Jianming; Wang, Shaoping; Zhao, Zhiping; Li, Yuan; Tomovic, Mileta M
2017-03-01
This paper investigates the problem of spatial curvilinear path following control of underactuated autonomous underwater vehicles (AUVs) with multiple uncertainties. Firstly, in order to design the appropriate controller, a path following error dynamics model is constructed in a moving Serret-Frenet frame, and the five degrees of freedom (DOFs) dynamic model with multiple uncertainties is established. Secondly, the proposed control law is separated into a kinematic controller and a dynamic controller via the back-stepping technique. In the case of the kinematic controller, to overcome the drawback of dependence on an accurate vehicle model that is present in a number of path following control strategies described in the literature, the unknown side-slip angular velocity and attack angular velocity are treated as uncertainties. In the case of the dynamic controller, the model parameter perturbations, unknown external environmental disturbances, and the nonlinear hydrodynamic damping terms are treated as lumped uncertainties. Both kinematic and dynamic uncertainties are estimated and compensated by designed reduced-order linear extended state observers (LESOs). Thirdly, a feedback linearization (FL) based control law is implemented for the control model using the estimates generated by the reduced-order LESOs. For handling the problem of computational complexity inherent in the conventional back-stepping method, nonlinear tracking differentiators (NTDs) are applied to construct derivatives of the virtual control commands. Finally, the closed-loop stability of the overall system is established. Simulation and comparative analysis demonstrate that the proposed controller exhibits enhanced performance in the presence of internal parameter variations, external unknown disturbances, unmodeled nonlinear damping terms, and measurement noises. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
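A minimal sketch of the estimate-and-compensate idea on a scalar toy plant: a second-order linear ESO tracks the lumped uncertainty f in x_dot = f + u, and the control law cancels the estimate. Gains follow the usual bandwidth parameterization; nothing here reproduces the paper's five-DOF design.

```python
import numpy as np

def leso_step(x_hat, f_hat, y, u, b0, beta1, beta2, dt):
    """One Euler update of a linear extended state observer for
    x_dot = f + b0*u, with the lumped uncertainty f as extended state."""
    e = y - x_hat
    x_hat = x_hat + dt * (f_hat + b0 * u + beta1 * e)
    f_hat = f_hat + dt * (beta2 * e)
    return x_hat, f_hat

dt, wo = 1e-3, 20.0                    # step size and observer bandwidth
x, x_hat, f_hat = 0.0, 0.0, 0.0
for k in range(5000):
    t = k * dt
    f_true = 0.5 * np.sin(2.0 * t)     # unknown lumped disturbance
    u = -2.0 * x - f_hat               # feedback law using the ESO estimate
    x += dt * (f_true + u)             # plant update
    x_hat, f_hat = leso_step(x_hat, f_hat, x, u, 1.0, 2 * wo, wo**2, dt)
print(f"final disturbance estimation error: {abs(f_true - f_hat):.4f}")
```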
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has gathered more and more attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic, and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations: first, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process; second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions; third, to employ several numerical solvers for the integration of the governing ordinary differential equations and study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and require less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
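A small sketch of the solver-comparison idea on a toy nonlinear storage ODE, assuming SciPy's solve_ivp as the integration front end; the model, tolerances, and solver list are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, S, P=2.0, k=0.1):
    """Toy storage dynamics: inflow P minus nonlinear outflow k*S**1.5."""
    return [P - k * S[0] ** 1.5]

for method in ("RK45", "LSODA", "BDF"):   # explicit, switching, implicit
    sol = solve_ivp(rhs, (0.0, 100.0), [1.0], method=method, rtol=1e-6)
    print(method, sol.y[0, -1], sol.nfev)  # final state and RHS evaluations
```

Comparing the final states and the number of right-hand-side evaluations gives a rough analogue of the accuracy-versus-cost trade-off between explicit and implicit methods that the abstract reports.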
Analyzing Uncertainty and Risk in the Management of Water Resources in the State Of Texas
NASA Astrophysics Data System (ADS)
Singh, A.; Hauffpauir, R.; Mishra, S.; Lavenue, M.
2010-12-01
The State of Texas updates its state water plan every five years to determine the water demand required to meet its growing population. The plan compiles forecasts of water deficits from state-wide regional water planning groups as well as the water supply strategies to address these deficits. To date, the plan has adopted a deterministic framework, where reference values (e.g., best estimates, worst-case scenario) are used for key factors such as population growth, demand for water, severity of drought, water availability, etc. These key factors can, however, be affected by multiple sources of uncertainty, such as the impact of climate on surface water and groundwater availability, uncertainty in population projections, changes in the sectoral composition of the economy, variability in water usage, feasibility of the permitting process, cost of implementation, etc. The objective of this study was to develop a generalized and scalable methodology for addressing uncertainty and risk in water resources management both at the regional and the local water planning level. The study proposes a framework defining the elements of an end-to-end system model that captures the key components of demand, supply and planning modules along with their associated uncertainties. The framework preserves the fundamental elements of the well-established planning process in the State of Texas, promoting an incremental and stakeholder-driven approach to adding different levels of uncertainty (and risk) into the decision-making environment. The uncertainty in the water planning process is broken down into two primary categories: demand uncertainty and supply uncertainty. Uncertainty in demand is related to the uncertainty in population projections and the per-capita usage rates. Uncertainty in supply, in turn, is dominated by the uncertainty in future climate conditions. Climate is represented in terms of time series of precipitation, temperature and/or surface evaporation flux for some future time period of interest, which can be obtained as outputs of global climate models (GCMs). These are then linked with hydrologic and water-availability models (WAMs) to estimate water availability for the worst drought conditions under each future climate scenario. Combining the demand scenarios with the water availability scenarios yields multiple scenarios for water shortage (or surplus). Given multiple shortage/surplus scenarios, various water management strategies can be assessed to evaluate the reliability of meeting projected deficits. These reliabilities are then used within a multi-criteria decision framework to assess trade-offs between various water management objectives, thus helping to make more robust decisions while planning for the water needs of the future.
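A minimal sketch of the scenario-combination step, with hypothetical demand and drought-supply distributions: reliability is simply the fraction of joint scenarios in which supply meets demand, with and without a candidate management strategy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                                     # joint Monte Carlo scenarios

population = rng.normal(3.0e6, 0.3e6, n)       # people (hypothetical)
per_capita = rng.triangular(120, 150, 200, n)  # gal/person/day
demand = population * per_capita               # gal/day
supply = rng.lognormal(np.log(4.6e8), 0.25, n) # gal/day under drought

strategy_yield = 0.5e8                         # added by a candidate strategy
for label, s in (("no action", supply),
                 ("with strategy", supply + strategy_yield)):
    print(f"{label}: reliability = {np.mean(s >= demand):.2%}")
```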
NASA Astrophysics Data System (ADS)
Saleh, Ziad; Thor, Maria; Apte, Aditya P.; Sharp, Gregory; Tang, Xiaoli; Veeraraghavan, Harini; Muren, Ludvig; Deasy, Joseph
2016-08-01
Deformable image registration (DIR) is essential for adaptive radiotherapy (RT) for tumor sites subject to motion, changes in tumor volume, as well as changes in patient normal anatomy due to weight loss. Several methods have been published to evaluate DIR-related uncertainties but they are not widely adopted. The aim of this study was, therefore, to evaluate intra-patient DIR for two highly deformable organs—the bladder and the rectum—in prostate cancer RT using a quantitative metric based on multiple image registration, the distance discordance metric (DDM). Voxel-by-voxel DIR uncertainties of the bladder and rectum were evaluated using DDM on weekly CT scans of 38 subjects previously treated with RT for prostate cancer (six scans/subject). The DDM was obtained from group-wise B-spline registration of each patient's collection of repeat CT scans. For each structure, registration uncertainties were derived from DDM-related metrics. In addition, five other quantitative measures, including inverse consistency error (ICE), transitivity error (TE), Dice similarity coefficient (DSC) and volume ratios between corresponding structures from pre- and post-registered images, were computed and compared with the DDM. The DDM varied across subjects and structures; DDMmean of the bladder ranged from 2 to 13 mm and from 1 to 11 mm for the rectum. There was a high correlation between DDMmean of the bladder and the rectum (Pearson's correlation coefficient R_p = 0.62). The correlation between DDMmean and the volume ratios post-DIR was stronger (R_p = 0.51–0.68) than the correlation with the TE (bladder: R_p = 0.46; rectum: R_p = 0.47) or the ICE (bladder: R_p = 0.34; rectum: R_p = 0.37). There was a negative correlation between DSC and DDMmean of both the bladder (R_p = -0.23) and the rectum (R_p = -0.63). The DDM uncertainty metric indicated considerable DIR variability across subjects and structures. Our results show a stronger correlation of the DDM with the volume ratios and the DSC than with the ICE and TE. The DDM has the potential to quantitatively identify regions of large DIR uncertainties and consequently identify anatomical/scan outliers. The DDM can, thus, be applied to improve the adaptive RT process for tumor sites subject to motion.
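One plausible reduction of the DDM for a single voxel, assuming its coordinates have already been mapped into a common frame through several different intermediate scans, is the mean pairwise distance between the mapped positions; this reading is a sketch, not necessarily the authors' exact definition.

```python
import numpy as np
from itertools import combinations

def ddm(mapped_points):
    """Spread of one voxel's positions (in mm) after group-wise DIR
    maps it into a common frame via different intermediate images."""
    pts = np.asarray(mapped_points, dtype=float)
    dists = [np.linalg.norm(a - b) for a, b in combinations(pts, 2)]
    return float(np.mean(dists))

# toy voxel mapped through four intermediate scans:
print(ddm([[0, 0, 0], [1, 0, 0], [0, 2, 0], [1, 1, 1]]))
```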
Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier
2016-03-01
Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical specificity.
Song, Wei; Cao, Yang; Wang, Dandan; Hou, Guojun; Shen, Zaihua; Zhang, Shuangbao
2015-01-01
As a large producer and consumer of wood building materials, China suffers product formaldehyde emissions (PFE) but lacks systematic investigations and basic data on Chinese standard emission tests (CST), so this paper presented a first effort on this issue. The PFE of fiberboards, particleboards, blockboards, floorings, and parquets manufactured in Beijing region were characterized by the perforator extraction method (PE), 9–11 L and 40 L desiccator methods (D9, D40), and environmental chamber method (EC) of the Chinese national standard GB 18580; based on statistics of PFE data, measurement uncertainties in CST were evaluated by the Monte Carlo method; moreover, PFE data correlations between tests were established. Results showed: (1) Different tests may give slightly different evaluations on product quality. In PE and D9 tests, blockboards and parquets reached E1 grade for PFE, which can be directly used in indoor environment; but in D40 and EC tests, floorings and parquets achieved E1. (2) In multiple tests, PFE data characterized by PE, D9, and D40 complied with Gaussian distributions, while those characterized by EC followed log-normal distributions. Uncertainties in CST were overall low, with uncertainties for 20 material-method combinations all below 7.5%, and the average uncertainty for each method under 3.5%, thus being acceptable in engineering application. A more complicated material structure and a larger test scale caused higher uncertainties. (3) Conventional linear models applied to correlating PFE values between PE, D9, and EC, with R2 all over 0.840, while novel logarithmic (exponential) models can work better for correlations involving D40, with R2 all beyond 0.901. This research preliminarily demonstrated the effectiveness of CST, where results for D40 presented greater similarities to EC—the currently most reliable test for PFE, thus highlighting the potential of Chinese D40 as a more practical approach in production control and risk assessment. PMID:26656316
Orbital Debris Shape and Orientation Effects on Ballistic Limits
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Williamsen, Joel E.
2005-01-01
The SPHC hydrodynamic code was used to evaluate the effects of orbital debris particle shape and orientation on penetration of a typical spacecraft dual-wall shield. Impacts were simulated at near-normal obliquity at 12 km/sec. Debris cloud characteristics and damage potential are compared with those from impacts by spherical projectiles. Results of these simulations indicate the uncertainties in the predicted ballistic limits due to modeling uncertainty and to uncertainty in the impactor orientation.
Hisano, Mizue; Connolly, Sean R; Robbins, William D
2011-01-01
Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.
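A minimal sketch of the bootstrap step on hypothetical mark-resight survival data; the one-line demographic model is a deliberately crude toy, not the paper's indirect mortality estimators.

```python
import numpy as np

rng = np.random.default_rng(42)
survived = rng.binomial(1, 0.81, size=120)   # 1 = individual survived the year

def pop_growth(surv, fecundity=0.25):
    """Toy annual growth rate: adult survival plus recruitment."""
    return surv.mean() + fecundity

boot = np.array([pop_growth(rng.choice(survived, survived.size, replace=True))
                 for _ in range(10_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"lambda = {boot.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print("declining" if hi < 1.0 else "CI includes stability or growth")
```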
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model – GR4J – by coherently assimilating the uncertainties from the...
NASA Technical Reports Server (NTRS)
Nagpal, Vinod K.
1988-01-01
The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.
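A sketch of the combined-uncertainty idea: Monte Carlo sampling of correlated Normal inputs through a toy blade-response model, sweeping the input correlation from uncorrelated to strongly correlated. All numbers and the response model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
mean = np.array([2.0, 110.0])   # thickness (mm), modulus (GPa) - hypothetical
sd = np.array([0.05, 4.0])

def response_stats(rho):
    cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    t, E = rng.multivariate_normal(mean, cov, size=n).T
    freq = 55.0 * np.sqrt(E / 110.0) * (t / 2.0)  # toy natural frequency (Hz)
    return freq.mean(), freq.std()

for rho in (0.0, 0.5, 0.9):      # random, partially, strongly correlated
    print(rho, response_stats(rho))
```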
BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)
We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Flach, G. P.
2012-12-01
The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning probability distributions to the input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Baier-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
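A minimal sketch of the p-box aggregation step: pooling Monte Carlo output from several models into lower and upper bounds on the CDF. The per-model samples below are synthetic placeholders, not output of the PET models named above.

```python
import numpy as np

def pbox_envelope(samples_by_model, x):
    """Lower/upper CDF envelope (a p-box) across multi-model MC output."""
    cdfs = np.array([[np.mean(s <= xi) for xi in x] for s in samples_by_model])
    return cdfs.min(axis=0), cdfs.max(axis=0)

rng = np.random.default_rng(3)
models = [rng.normal(1400, 80, 5000),    # hypothetical annual PET (mm)
          rng.normal(1480, 60, 5000),
          rng.normal(1350, 100, 5000)]
x = np.linspace(1100, 1700, 61)
lower, upper = pbox_envelope(models, x)
i = np.searchsorted(x, 1450)
print(f"P(PET <= 1450 mm) lies in [{lower[i]:.2f}, {upper[i]:.2f}]")
```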
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
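A sketch of the shortest-interval construction on a skewed bootstrap distribution; the lognormal replicates below stand in for a real bootstrap run over efficiency-gain estimates.

```python
import numpy as np

def shortest_interval(samples, level=0.95):
    """Shortest interval containing `level` of the sampled distribution;
    preferable to equal-tail percentiles when the distribution is skewed."""
    s = np.sort(np.asarray(samples))
    k = int(np.ceil(level * s.size))
    widths = s[k - 1:] - s[:s.size - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

rng = np.random.default_rng(5)
gain_boot = rng.lognormal(np.log(8.0), 0.35, size=20_000)  # toy replicates
print(shortest_interval(gain_boot))   # shortest 95% CI for the gain
```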
Evaluation of the multi-model CORDEX-Africa hindcast using RCMES
NASA Astrophysics Data System (ADS)
Kim, J.; Waliser, D. E.; Lean, P.; Mattmann, C. A.; Goodale, C. E.; Hart, A.; Zimdars, P.; Hewitson, B.; Jones, C.
2011-12-01
Recent global climate change studies have concluded with a high confidence level that the observed increasing trend in the global-mean surface air temperatures since the mid-20th century is triggered by the emission of anthropogenic greenhouse gases (GHGs). The increase in the global-mean temperature due to anthropogenic emissions is nearly monotonic and may alter the climatological norms, resulting in a new climate normal. In the presence of anthropogenic climate change, assessing regional impacts of the altered climate state and developing plans for mitigating any adverse impacts are important concerns. Assessing the future climate state and its impact remains a difficult task, largely because of the uncertainties in future emissions and model errors. Uncertainties in climate projections propagate into impact assessment models and result in uncertainties in the impact assessments. In order to facilitate the evaluation of model data, a fundamental step for assessing model errors, the JPL Regional Climate Model Evaluation System (RCMES: Lean et al. 2010; Hart et al. 2011) has been developed through a joint effort of the investigators from UCLA and JPL. RCMES is also a regional climate component of the larger worldwide ExArch project. We will present the evaluation of the surface temperatures and precipitation from multiple RCMs participating in the African component of the Coordinated Regional Climate Downscaling Experiment (CORDEX), which has organized a suite of regional climate projection experiments in which multiple RCMs and GCMs are incorporated. As a part of the project, CORDEX organized a 20-year regional climate hindcast study in order to quantify and understand the uncertainties originating from model errors. Investigators from JPL, UCLA, and the CORDEX-Africa team collaborate to analyze the RCM hindcast data using RCMES. The analysis is focused on measuring the closeness between individual regional climate model outputs as well as their ensembles and observed data. The model evaluation is quantified in terms of widely used metrics. Details on the conceptual outline and architecture of RCMES are presented in two companion papers, "The Regional climate model Evaluation System (RCMES) based on contemporary satellite and other observations for assessing regional climate model fidelity" and "A Reusable Framework for Regional Climate Model Evaluation", in GC07 and IN30, respectively.
Unique geologic insights from "non-unique" gravity and magnetic interpretation
Saltus, R.W.; Blakely, R.J.
2011-01-01
Interpretation of gravity and magnetic anomalies is mathematically non-unique because multiple theoretical solutions are always possible. The rigorous mathematical label of "nonuniqueness" can lead to the erroneous impression that no single interpretation is better in a geologic sense than any other. The purpose of this article is to present a practical perspective on the theoretical non-uniqueness of potential-field interpretation in geology. There are multiple ways to approach and constrain potential-field studies to produce significant, robust, and definitive results. The "non-uniqueness" of potential-field studies is closely related to the more general topic of scientific uncertainty in the Earth sciences and beyond. Nearly all results in the Earth sciences are subject to significant uncertainty because problems are generally addressed with incomplete and imprecise data. The increasing need to combine results from multiple disciplines into integrated solutions in order to address complex global issues requires special attention to the appreciation and communication of uncertainty in geologic interpretation.
Uncertainties in climate data sets
NASA Technical Reports Server (NTRS)
Mcguirk, James P.
1992-01-01
Climate diagnostics are constructed from either analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored due to the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational data sets: OLR, TOVS, and SMMR. It is suggested that these errors can be large, systematic, and not negligible in climate analysis.
2016-04-30
determining the optimal design requirements of a new system, which will operate along with other existing systems to provide a set of overarching...passenger airline transportation (Mane et al., 2007; Govindaraju et al., 2015).
Uncertainty in Fleet Operations
The uncertainty associated with the...demand can provide the basis for a commercial passenger airline problem. The operations of the commercial air travel industry differ from military
Zamengo, Luca; Frison, Giampietro; Tedeschi, Gianpaola; Frasson, Samuela; Zancanaro, Flavio; Sciarrone, Rocco
2014-10-01
The measurement of blood-alcohol content (BAC) is a crucial analytical determination required to assess if an offence (e.g. driving under the influence of alcohol) has been committed. For various reasons, results of forensic alcohol analysis are often challenged by the defence. As a consequence, measurement uncertainty becomes a critical topic when assessing compliance with specification limits for forensic purposes. The aims of this study were: (1) to investigate major sources of variability for BAC determinations; (2) to estimate measurement uncertainty for routine BAC determinations; (3) to discuss the role of measurement uncertainty in compliance assessment; (4) to set decision rules for a multiple BAC threshold law, as provided in the Italian Highway Code; (5) to address the topic of the zero-alcohol limit from the forensic toxicology point of view; and (6) to discuss the role of significant figures and rounding errors on measurement uncertainty and compliance assessment. Measurement variability was investigated by the analysis of data collected from real cases and internal quality control. The contribution of both pre-analytical and analytical processes to measurement variability was considered. The resulting expanded measurement uncertainty was 8.0%. Decision rules for the multiple BAC threshold Italian law were set by adopting a guard-banding approach. 0.1 g/L was chosen as cut-off level to assess compliance with the zero-alcohol limit. The role of significant figures and rounding errors in compliance assessment was discussed by providing examples which stressed the importance of these topics for forensic purposes. Copyright © 2014 John Wiley & Sons, Ltd.
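A minimal sketch of a guard-banded decision rule for the multi-threshold law, using the study's 8% expanded uncertainty; the 0.5/0.8/1.5 g/L bands follow the Italian Highway Code, but the rule itself is illustrative. A band counts as exceeded only when the lower limit of the uncertainty interval lies above it.

```python
def bac_decision(measured_g_per_L, rel_expanded_u=0.08,
                 thresholds=(0.5, 0.8, 1.5)):
    """Return the highest threshold exceeded beyond the guard band,
    or None if compliance cannot be ruled out."""
    lower_bound = measured_g_per_L * (1.0 - rel_expanded_u)
    exceeded = [t for t in thresholds if lower_bound > t]
    return max(exceeded) if exceeded else None

for bac in (0.52, 0.86, 1.60):
    print(bac, "->", bac_decision(bac))
# 0.52 -> None (within uncertainty of 0.5); 0.86 -> 0.5; 1.60 -> 0.8
```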
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate over interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters, and structural errors, and providing insights for improving the model physics parameterizations.
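A sketch of the variance-decomposition idea behind the multiple-way ANOVA: for a fully crossed scheme ensemble, the first-order share of ensemble variance attributable to each physics group is the variance of that group's means. The ensemble scores below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)
levels = {"microphysics": 6, "convection": 3, "pbl_sfc": 6, "land": 3}
scores = rng.normal(size=tuple(levels.values()))  # one skill score per member

total_var = scores.var()
for axis, name in enumerate(levels):
    other = tuple(a for a in range(scores.ndim) if a != axis)
    group_means = scores.mean(axis=other)   # mean score per scheme choice
    print(f"{name}: {group_means.var() / total_var:.1%} of ensemble variance")
```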
Comparison of model estimated and measured direct-normal solar irradiance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halthore, R.N.; Schwartz, S.E.; Michalsky, J.J.
1997-12-01
Direct-normal solar irradiance (DNSI), the energy in the solar spectrum incident in unit time at the Earth's surface on a unit area perpendicular to the direction to the Sun, depends only on atmospheric extinction of solar energy without regard to the details of the extinction, whether absorption or scattering. Here we report a set of closure experiments performed in north central Oklahoma in April 1996 under cloud-free conditions, wherein measured atmospheric composition and aerosol optical thickness are input to a radiative transfer model, MODTRAN 3, to estimate DNSI, which is then compared with measured values obtained with normal incidence pyrheliometers and absolute cavity radiometers. Uncertainty in aerosol optical thickness (AOT) dominates the uncertainty in DNSI calculation. AOT measured by an independently calibrated Sun photometer and a rotating shadow-band radiometer agree to within the uncertainties of each measurement. For 36 independent comparisons the agreement between measured and model-estimated values of DNSI falls within the combined uncertainties in the measurement (0.3–0.7%) and model calculation (1.8%), albeit with a slight average model underestimate (-0.18 ± 0.94)%; for a DNSI of 839 W m⁻² this corresponds to -1.5 ± 7.9 W m⁻². The agreement is nearly independent of air mass and water-vapor path abundance. These results thus establish the accuracy of the current knowledge of the solar spectrum, its integrated power, and the atmospheric extinction as a function of wavelength as represented in MODTRAN 3. An important consequence is that atmospheric absorption of short-wave energy is accurately parametrized in the model to within the above uncertainties. © 1997 American Geophysical Union
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log-normal distributions for all random variables. The S-curves themselves are log-normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log-normal assumption of ARES, and an unsatisfactory workaround is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this workaround. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculation methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
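Under the log-normal assumption described above, the log of the stress-to-strength margin is normal, so an S-curve is a normal CDF in log field strength, and confidence levels shift it by a multiple of the systematic sigma. A minimal sketch follows, with all sigmas, the median strength, and confidence shifts chosen purely for illustration (none are FAAT or ARES values).

```python
# Sketch: S-curve generation under the log-normal assumption. The log
# stress-to-strength margin is normal, so P(effect) is a normal CDF in
# log(field); confidence levels shift the curve via the systematic sigma.
# All parameter values are illustrative, not FAAT/ARES values.
import numpy as np
from scipy.stats import norm

sigma_stress, sigma_strength = 0.3, 0.4      # random components (log units)
sigma_sys = 0.25                             # systematic component (log units)
median_strength = 100.0                      # arbitrary field-strength units

field = np.logspace(1, 3, 200)               # incident field-strength sweep
sigma_rand = np.hypot(sigma_stress, sigma_strength)

for conf, z in [("50% confidence", 0.0), ("90% confidence", 1.28)]:
    s_curve = norm.cdf(
        (np.log(field) - np.log(median_strength) + z * sigma_sys) / sigma_rand)
    p_at_median = float(np.interp(median_strength, field, s_curve))
    print(f"{conf}: P(effect) at the median field = {p_at_median:.2f}")
```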
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology for solving large-scale two-phase linear programming, with a case study of a multiple-time-period animal diet problem under uncertainty in both raw-material nutrient content and finished-product demand. The formulation allows multiple product formulas to be manufactured in the same time period and allows raw-material and finished-product inventories to be held. Dantzig-Wolfe decomposition, Benders decomposition, and column generation techniques are combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using the EM algorithm in the case without uncertainty and the E2M algorithm in the case of uncertain measurements.
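The mixture-of-latent-signals idea can be illustrated with a generic Gaussian mixture fitted by EM; this is only an analogy for the acoustic likelihood (the paper ties mixture components to propagation physics and handles uncertainty with belief functions, which plain EM does not). Source locations and scatter below are synthetic.

```python
# Sketch: the "measurement as a mixture of latent source signals" idea, using
# a generic Gaussian mixture fitted by EM as a stand-in for the acoustic
# likelihood. Two hypothetical planar sources; data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
true_locs = np.array([[0.0, 0.0], [3.0, 1.0]])
obs = np.vstack([rng.normal(loc, 0.3, size=(200, 2)) for loc in true_locs])

gm = GaussianMixture(n_components=2, random_state=0).fit(obs)
print("estimated source locations:\n", gm.means_)
print("estimated mixing weights (relative strengths):", gm.weights_)
```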
Kobashi, Keiji; Prayongrat, Anussara; Kimoto, Takuya; Toramatsu, Chie; Dekura, Yasuhiro; Katoh, Norio; Shimizu, Shinichi; Ito, Yoichi M; Shirato, Hiroki
2018-03-01
Modern radiotherapy technologies such as proton beam therapy (PBT) permit dose escalation to the tumour and minimize unnecessary doses to normal tissues. To achieve appropriate patient selection for PBT, a normal tissue complication probability (NTCP) model can be applied to estimate the risk of treatment-related toxicity relative to X-ray therapy (XRT). A methodology for estimating the difference in NTCP (∆NTCP), including its uncertainty as a function of dose to normal tissue, is described in this study using the Delta method, a statistical method for evaluating the variance of functions, considering the variance-covariance matrix. We used a virtual individual patient dataset of radiation-induced liver disease (RILD) in liver tumour patients who were treated with XRT as a study model. As an alternative to individual patient data, dose-bin data, which consist of the number of patients who developed toxicity in each dose level/bin and the total number of patients in that dose level/bin, are useful for multi-institutional data sharing. They provide accuracy comparable to individual patient data when using the Delta method. With reliable NTCP models, the ∆NTCP with uncertainty might potentially guide the use of PBT; however, clinical validation and a cost-effectiveness study are needed to determine the appropriate ∆NTCP threshold.
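A minimal sketch of the Delta method step, assuming a simple logistic NTCP model NTCP(D) = 1/(1 + exp(-(a + bD))); the fitted coefficients and variance-covariance matrix are placeholders, not values from the RILD dataset.

```python
# Sketch: Delta-method variance of dNTCP, assuming a logistic NTCP model
# NTCP(D) = 1 / (1 + exp(-(a + b*D))). Coefficients and the variance-
# covariance matrix are placeholders, not fits to the RILD dataset.
import numpy as np

a, b = -6.0, 0.12                       # hypothetical fitted coefficients
cov = np.array([[0.50, -0.010],         # hypothetical variance-covariance
                [-0.010, 0.0003]])      # matrix of (a, b)

def ntcp(D):
    return 1.0 / (1.0 + np.exp(-(a + b * D)))

def grad(D):                            # gradient of NTCP w.r.t. (a, b)
    p = ntcp(D)
    return np.array([p * (1 - p), D * p * (1 - p)])

D_xrt, D_pbt = 45.0, 30.0               # mean normal-tissue doses (Gy)
g = grad(D_xrt) - grad(D_pbt)           # gradient of dNTCP
var = g @ cov @ g                       # Delta-method variance
print(f"dNTCP = {ntcp(D_xrt) - ntcp(D_pbt):.3f} +/- {np.sqrt(var):.3f}")
```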
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
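The central issue, estimating the variance of the sample variance without assuming normality, can be sketched with the general moment expression; the paper refines this with unbiased h-statistic estimators for limited sample sizes, which this plug-in version omits.

```python
# Sketch: variance of the sample variance without the normality assumption,
# via the general moment expression Var(s^2) ~ (m4 - (n-3)/(n-1) m2^2) / n.
# (The paper's refinement uses unbiased h-statistics for small samples.)
import numpy as np

def var_of_variance(x):
    n = len(x)
    d = x - x.mean()
    m2, m4 = np.mean(d**2), np.mean(d**4)
    return (m4 - (n - 3) / (n - 1) * m2**2) / n

rng = np.random.default_rng(2)
# Skewed, non-normal amplitudes, as at a synapse with few release sites:
amps = rng.binomial(n=5, p=0.3, size=200) * rng.normal(1.0, 0.2, 200)
print("normal-assumption estimate:", 2 * np.var(amps, ddof=1)**2 / (len(amps) - 1))
print("general-moment estimate:   ", var_of_variance(amps))
```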
Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data
NASA Astrophysics Data System (ADS)
Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.
2017-12-01
The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality), which limits the achievable constraint. However, using all eight observable quantities together we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as we reduce a large sample of model variants (over 1 million) that cover the full parametric uncertainty to the roughly 1% that are observationally plausible. Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties, including measurement uncertainties, structural model uncertainties and the model discrepancy from reality. Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive set of real aerosol and radiation observations taken from ground stations, flight campaigns and satellites. This research has been supported by the UK-China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund, and by the NERC funded GASSP project.
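The constraint mechanism, discarding ensemble variants whose simulated observables are implausible given the observation and its uncertainty, can be sketched as follows; the forcing-observable relationship, observation value, and tolerances are synthetic stand-ins for the HadGEM-UKCA ensemble.

```python
# Sketch: observational constraint by rejecting implausible variants. Keep
# members whose simulated observable lies within the combined uncertainty of
# a (synthetic) observation, then compare the forcing spread before/after.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000                               # emulated model-variant sample
forcing = rng.normal(-1.5, 0.8, n)          # hypothetical forcing (W m-2)
# A correlated observable (loosely, an aerosol optical property):
aod = 0.15 - 0.05 * forcing + rng.normal(0, 0.03, n)

obs, obs_sigma = 0.22, 0.02                 # synthetic "perfect" observation
plausible = np.abs(aod - obs) < 2 * obs_sigma
print("fraction retained:", plausible.mean())
print("forcing std before/after:", forcing.std(), forcing[plausible].std())
```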
The neural representation of unexpected uncertainty during value-based decision making.
Payzan-LeNestour, Elise; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P
2013-07-10
Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled us to separately examine each form of uncertainty. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty, and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
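A Monte Carlo sketch of the same idea, propagating primitive-variable distributions through a response function and reading off the response CDF and variable rankings; a one-line cantilever deflection formula stands in for the truss model, and all distribution parameters are invented.

```python
# Sketch: Monte Carlo propagation of primitive-variable distributions through
# a response function. A cantilever tip deflection delta = P L^3 / (3 E I)
# stands in for the truss response; all distributions are invented.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
P = rng.lognormal(np.log(10e3), 0.10, n)      # load (N), log-normal
E = rng.normal(200e9, 10e9, n)                # modulus (Pa), normal
I = rng.weibull(8.0, n) * 4e-6                # second moment of area (m^4)
L = 3.0                                       # length (m)

delta = P * L**3 / (3 * E * I)
print("50/95/99.9% response quantiles:", np.quantile(delta, [0.5, 0.95, 0.999]))

# Crude sensitivity ranking: Spearman rank correlation with the response.
ranks = lambda a: np.argsort(np.argsort(a))
for name, x in [("P", P), ("E", E), ("I", I)]:
    print(name, "rank corr:", round(np.corrcoef(ranks(x), ranks(delta))[0, 1], 2))
```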
Song, Yunpeng; Wu, Sen; Xu, Linyan; Fu, Xing
2015-03-10
Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM) is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke's law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%.
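The final calculation is simply Hooke's law plus uncertainty combination; a sketch with illustrative force, deflection, and uncertainty values (not the paper's calibration data) follows.

```python
# Sketch: spring constant from balance force and optical-lever deflection via
# Hooke's law, with a simple relative-uncertainty combination. Numbers are
# illustrative, not the paper's calibration data.
import numpy as np

force = 52.0e-9       # N, bending force recorded by the balance
defl = 1.30e-6        # m, cantilever deflection from the optical lever
u_force = 0.5e-9      # assumed standard uncertainty of the force
u_defl = 0.01e-6      # assumed standard uncertainty of the deflection

k = force / defl                                  # Hooke's law: k = F / delta
u_rel = np.hypot(u_force / force, u_defl / defl)  # combined relative std. unc.
print(f"k = {k:.4f} N/m, expanded relative uncertainty (coverage 2): "
      f"{2 * 100 * u_rel:.1f}%")
```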
Systematic Studies using the UCN τ Magneto-Gravitational Trap
NASA Astrophysics Data System (ADS)
Seestrom, Susan; UCNτ Collaboration
2016-09-01
The UCN τ Experiment measures the neutron lifetime using Ultracold Neutrons (UCN) stored in a magneto-gravitational trap. The trap employs various techniques to remove neutrons whose energies are too high to be trapped. It has recently been instrumented with a novel in situ detector that can be lowered into the trap to measure the neutron population as a function of height within the trap. This has allowed us to perform a series of systematic studies aimed at understanding and quantifying potential systematic effects associated with quasi-bound neutrons and phase space evolution. We have obtained multiple sets of data each having a statistical uncertainty of about 3 sec. We will discuss the results of our studies of cleaning and phase space evolution as well as results from studies of backgrounds and normalization of the initial neutron loading.
Prediction and Validation of Mars Pathfinder Hypersonic Aerodynamic Data Base
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Braun, Robert D.; Weilmuenster, K. James; Mitcheltree, Robert A.; Engelund, Walter C.; Powell, Richard W.
1998-01-01
Postflight analysis of the Mars Pathfinder hypersonic, continuum aerodynamic data base is presented. Measured data include accelerations along the body-axis and axis-normal directions. Comparisons of preflight simulation and measurements show good agreement. The prediction of two static instabilities associated with movement of the sonic line from the shoulder to the nose and back was confirmed by measured normal accelerations. Reconstruction of atmospheric density during entry has an uncertainty directly proportional to the uncertainty in the predicted axial coefficient. The sensitivity of the moment coefficient to freestream density, kinetic models, and center-of-gravity location is examined to provide additional consistency checks of the simulation with flight data. The atmospheric density as derived from the axial coefficient and measured axial accelerations falls within the range required for sonic line shift and static stability transition as independently determined from normal accelerations.
A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules
NASA Astrophysics Data System (ADS)
Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.
2012-08-01
Integrating economic and groundwater models for groundwater management can help improve understanding of the trade-offs involved between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater-extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used, including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling, and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.
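Of the methods listed, block-bootstrap time-series sampling is easy to sketch: resample contiguous blocks so that short-range serial dependence is preserved within each block. The data and block length below are illustrative.

```python
# Sketch of block-bootstrap resampling of a time series; block length and
# data are illustrative placeholders.
import numpy as np

def block_bootstrap(x, block_len, rng):
    """Resample a series by concatenating randomly chosen contiguous blocks."""
    n = len(x)
    starts = rng.integers(0, n - block_len + 1, size=int(np.ceil(n / block_len)))
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(5)
rainfall = rng.gamma(2.0, 10.0, size=240)   # synthetic monthly series
replicates = [block_bootstrap(rainfall, 12, rng).mean() for _ in range(1000)]
print("bootstrap 90% interval of the mean:", np.quantile(replicates, [0.05, 0.95]))
```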
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
NASA Astrophysics Data System (ADS)
Terando, A. J.; Reich, B. J.; Pacifici, K.
2013-12-01
Fire is an important disturbance process in many coupled natural-human systems. Changes in the frequency and severity of fires due to anthropogenic climate change could have significant costs to society and to the plant and animal communities that are adapted to a particular fire regime. Planning for these changes requires a robust model of the relationship between climate and fire that accounts for the multiple sources of uncertainty present when simulating ecological and climatological processes. Here we model how anthropogenic climate change could affect the wildfire regime for a region in the Southeast US whose natural ecosystems are dependent on frequent, low-intensity fires while humans are at risk from large catastrophic fires. We develop a modeling framework that incorporates three major sources of uncertainty: (1) uncertainty in the ecological drivers of expected monthly area burned, (2) uncertainty in the environmental drivers influencing the probability of an extreme fire event, and (3) structural uncertainty in different downscaled climate models. In addition we use two policy-relevant emission scenarios (climate stabilization and 'business-as-usual') to characterize the uncertainty in future greenhouse gas forcings. We use a Bayesian framework to incorporate different sources of uncertainty, including simulation of predictive errors and stochastic search variable selection. Our results suggest that although the mean process remains stationary, the probability of extreme fires declines through time, owing to the persistence of high atmospheric moisture content during the peak fire season, which dampens the effect of increasing temperatures. Including multiple sources of uncertainty leads to wide prediction intervals, but is potentially more useful for decision-makers who will require adaptation strategies that are robust to rapid but uncertain climate and ecological change.
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, which probably resulted because we incorporated parameter uncertainty in estimated source contributions that has been ignored in the previous studies into the estimation of health effects parameters. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
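The decomposition at the heart of multivariate receptor modeling can be sketched with a plain non-negative matrix factorization, X ≈ GF, splitting ambient concentrations into source contributions and profiles. This stand-in fixes the number of sources and ignores what the paper actually addresses (model uncertainty and identifiability), and uses synthetic data.

```python
# Sketch: receptor modeling as non-negative factorization X ~ G F of the
# concentration matrix (days x species) into contributions G and profiles F.
# Plain NMF is a stand-in: it fixes the number of sources and ignores the
# model uncertainty the paper treats. Data are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
n_days, n_species, n_sources = 300, 15, 3
G_true = rng.gamma(2.0, 1.0, (n_days, n_sources))       # daily contributions
F_true = rng.dirichlet(np.ones(n_species), n_sources)   # source profiles
X = np.clip(G_true @ F_true + rng.normal(0, 0.01, (n_days, n_species)), 0, None)

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # estimated source contributions
F = model.components_        # estimated source profiles
print("reconstruction error:", model.reconstruction_err_)
```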
Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges
NASA Technical Reports Server (NTRS)
Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam
2014-01-01
As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. This approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model composed of multiple sources, as well as an uncertainty-bounds database for each data source such that a full vehicle uncertainty analysis is possible even when approaching or beyond loss-of-control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss-of-control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents the status of testing in the BR&T Water Tunnel, analysis of the resulting data, and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10⁻³ in scientific notation or for brevity 10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
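A sketch of the core extrapolation, fitting a log-Pearson type III by the method of moments and evaluating quantiles at very low AEPs; the peak-flow series is synthetic, and production analyses (e.g., EMA in USGS-PeakFQ) additionally handle censored data, historical information, and regional skew.

```python
# Sketch: log-Pearson type III fit by the method of moments, extrapolated to
# very low AEPs. The peak series is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
peaks = rng.lognormal(6.0, 0.6, 80)     # synthetic annual peak flows

logq = np.log10(peaks)
g, mu, sd = stats.skew(logq), logq.mean(), logq.std(ddof=1)

for aep in (1e-2, 1e-3, 1e-4, 1e-6):
    q = 10 ** stats.pearson3.ppf(1.0 - aep, g, loc=mu, scale=sd)
    print(f"AEP {aep:.0e}: {q:,.0f} (flow units)")
```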
Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.
Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan
2016-07-01
This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.
The contents of visual working memory reduce uncertainty during visual search.
Cosman, Joshua D; Vecera, Shaun P
2011-05-01
Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.
Hydrologic and geochemical data assimilation at the Hanford 300 Area
NASA Astrophysics Data System (ADS)
Chen, X.; Hammond, G. E.; Murray, C. J.; Zachara, J. M.
2012-12-01
In modeling the uranium migration within the Integrated Field Research Challenge (IFRC) site at the Hanford 300 Area, uncertainties arise from both hydrologic and geochemical sources. The hydrologic uncertainty includes the transient flow boundary conditions induced by dynamic variations in Columbia River stage and the underlying heterogeneous hydraulic conductivity field, while the geochemical uncertainty is a result of limited knowledge of the geochemical reaction processes and parameters, as well as heterogeneity in uranium source terms. In this work, multiple types of data, including the results from constant-injection tests, borehole flowmeter profiling, and conservative tracer tests, are sequentially assimilated across scales within a Bayesian framework to reduce the hydrologic uncertainty. The hydrologic data assimilation is then followed by geochemical data assimilation, where the goal is to infer the heterogeneous distribution of uranium sources using uranium breakthrough curves from a desorption test that took place at a high spring water table. We demonstrate in our study that ensemble-based data assimilation techniques (the ensemble Kalman filter and smoother) are efficient in integrating multiple types of data sequentially for uncertainty reduction. The computational demand is managed by using the multi-realization capability within the parallel PFLOTRAN simulator.
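A minimal sketch of the ensemble Kalman filter analysis step that underlies such sequential assimilation; the state dimension, observation operator, and error levels are toy placeholders for the conductivity-field problem.

```python
# Sketch: one ensemble Kalman filter analysis step with perturbed
# observations. State size, observation operator, and errors are toy
# placeholders for the log-conductivity estimation problem.
import numpy as np

rng = np.random.default_rng(8)
n_ens, n_state = 100, 50
X = rng.normal(0.0, 1.0, (n_state, n_ens))   # prior ensemble (e.g., log-K)
H = np.zeros((1, n_state)); H[0, 10] = 1.0   # observe one state element
obs, obs_var = 1.2, 0.05

A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
P = A @ A.T / (n_ens - 1)                    # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_var * np.eye(1))

d = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)   # perturbed observations
X_post = X + K @ (d[None, :] - H @ X)                # member-wise update
print("spread at observed cell, prior vs posterior:",
      X[10].std(), X_post[10].std())
```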
NASA Astrophysics Data System (ADS)
Hortos, William S.
2010-04-01
Determining methods to secure the process of data fusion against attacks by compromised nodes in wireless sensor networks (WSNs) and to quantify the uncertainty that may exist in the aggregation results is a critical issue in mitigating the effects of intrusion attacks. Published research has introduced the concept of the trustworthiness (reputation) of a single sensor node. Reputation is evaluated using an information-theoretic concept, the Kullback-Leibler (KL) distance. Reputation is added to the set of security features. In data aggregation, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As aggregate information is disseminated along routes to the sink node(s), its corresponding opinion is propagated and regulated by Josang's belief model. By applying subjective logic to the opinion to manage trust propagation, the uncertainty inherent in aggregation results can be quantified for use in decision making. The concepts of reputation and opinion are modified to allow their application to a class of dynamic WSNs. Using reputation as a factor in determining interim aggregate information is equivalent to implementing a reputation-based security filter at each processing stage of data fusion, thereby improving the intrusion detection and identification results based on unsupervised techniques. In particular, the reputation-based version of the probabilistic neural network (PNN) learns the signature of normal network traffic, with the random probability weights normally used in the PNN replaced by the trust-based quantified reputations of sensor data or subsequent aggregation results generated by the sequential implementation of a version of Josang's belief model. A two-stage intrusion detection and identification algorithm is implemented to overcome the problems of large sensor data loads and resource restrictions in WSNs. Performance of the two-stage algorithm is assessed in simulations of WSN scenarios with multiple sensors at edge nodes for known intrusion attacks. Simulation results show improved robustness of the two-stage design based on reputation-based PNNs to intrusion anomalies from compromised nodes and external intrusion attacks.
Verification of forecast ensembles in complex terrain including observation uncertainty
NASA Astrophysics Data System (ADS)
Dorninger, Manfred; Kloiber, Simon
2017-04-01
Traditionally, verification means to verify a forecast (ensemble) against the truth represented by observations. The observation errors are quite often neglected, the argument being that they are small compared to the forecast error. In this study, part of the MesoVICT (Mesoscale Verification Inter-comparison over Complex Terrain) project, it will be shown that observation errors have to be taken into account for verification purposes. The observation uncertainty is estimated from the VERA (Vienna Enhanced Resolution Analysis) and represented via two analysis ensembles which are compared to the forecast ensemble. For the whole study, results from COSMO-LEPS provided by Arpae-SIMC Emilia-Romagna are used as the forecast ensemble. The time period covers the MesoVICT core case from 20-22 June 2007. In a first step, all ensembles are investigated concerning their distribution. Several tests have been executed (Kolmogorov-Smirnov test, Finkelstein-Schafer test, chi-square test, etc.), showing that the ensembles follow no exact mathematical distribution. So the main focus is on non-parametric statistics (e.g. kernel density estimation, boxplots, etc.) and also on the deviation between "forced" normally distributed data and the kernel density estimates. In a next step, the observational deviations represented by the analysis ensembles are analysed. In a first approach, scores are calculated multiple times, with each ensemble member from the analysis ensemble in turn regarded as the "true" observation. The results are presented as boxplots for the different scores and parameters. Additionally, the bootstrapping method is also applied to the ensembles. These possible approaches to incorporating observational uncertainty into the computation of statistics will be discussed in the talk.
ERIC Educational Resources Information Center
Jameson, Jill
2012-01-01
The complex leadership attribute of "negative capability" in managing uncertainty and engendering trust may be amongst the qualities enabling institutions to cope with multiple recent government policy challenges affecting English higher education, including significant increases in student fees. Research findings are reported on changes…
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
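The two-dimensional Monte Carlo idea, shared errors drawn once per realization and unshared errors drawn per subject, can be sketched as below; cohort size aside, all magnitudes are illustrative rather than taken from the Kazakhstan dosimetry.

```python
# Sketch: two-dimensional Monte Carlo dose simulation. Shared (systematic)
# errors are drawn once per realization; unshared (random) errors once per
# subject. Magnitudes are illustrative, not the cohort's dosimetry.
import numpy as np

rng = np.random.default_rng(9)
n_subjects, n_realizations = 2376, 500
dose_best = rng.lognormal(np.log(0.2), 0.8, n_subjects)  # point estimates (Gy)

dose_vectors = np.empty((n_realizations, n_subjects))
for r in range(n_realizations):
    shared = rng.lognormal(0.0, 0.3)                 # one draw per realization
    unshared = rng.lognormal(0.0, 0.2, n_subjects)   # one draw per subject
    dose_vectors[r] = dose_best * shared * unshared

# Each row is one internally consistent dose vector; risk models are then
# fit across rows rather than to the single best-estimate doses.
print("per-subject dose spread (median CV):",
      np.median(dose_vectors.std(axis=0) / dose_vectors.mean(axis=0)))
```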
Exploring uncertainty in the Earth Sciences - the potential field perspective
NASA Astrophysics Data System (ADS)
Saltus, R. W.; Blakely, R. J.
2013-12-01
Interpretation of gravity and magnetic anomalies is mathematically non-unique because multiple theoretical solutions are possible. The mathematical label of 'non-uniqueness' can lead to the erroneous impression that no single interpretation is better in a geologic sense than any other. The purpose of this talk is to present a practical perspective on the theoretical non-uniqueness of potential field interpretation in geology. There are multiple ways to approach and constrain potential field studies to produce significant, robust, and definitive results. For example, a smooth, bell-shaped gravity profile, in theory, could be caused by an infinite set of physical density bodies, ranging from a deep, compact, circular source to a shallow, smoothly varying, inverted bell-shaped source. In practice, however, we can use independent geologic or geophysical information to limit the range of possible source densities and rule out many of the theoretical solutions. We can further reduce the theoretical uncertainty by careful attention to subtle anomaly details. For example, short-wavelength anomalies are a well-known and theoretically established characteristic of shallow geologic sources. The 'non-uniqueness' of potential field studies is closely related to the more general topic of scientific uncertainty in the Earth sciences and beyond. Nearly all results in the Earth sciences are subject to significant uncertainty because problems are generally addressed with incomplete and imprecise data. The increasing need to combine results from multiple disciplines into integrated solutions in order to address complex global issues requires special attention to the appreciation and communication of uncertainty in geologic interpretation.
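The depth ambiguity for a compact (spherical) gravity source is easy to make concrete: the anomaly g(x) = GMz/(x² + z²)^(3/2) has peak GM/z², so any (M, z) pair with the same M/z² matches the peak amplitude, while the anomaly width grows with depth. That width dependence is exactly the kind of subtle anomaly detail that constrains depth. A short sketch with illustrative values:

```python
# Sketch: the classic depth ambiguity for a spherical gravity source,
# g(x) = G M z / (x^2 + z^2)^(3/2). Equal M/z^2 gives equal peak amplitude,
# but the anomaly half-width scales with depth, so wavelength content is the
# detail that constrains depth. Values are illustrative.
import numpy as np

G = 6.674e-11
x = np.linspace(-5000.0, 5000.0, 201)        # profile distance (m)

def sphere_anomaly(mass, depth):
    return G * mass * depth / (x**2 + depth**2) ** 1.5

deep = sphere_anomaly(mass=4e12, depth=2000.0)
shallow = sphere_anomaly(mass=1e12, depth=1000.0)   # same M/z^2
print("peaks match:", np.isclose(deep.max(), shallow.max()))
print("width (deep, shallow, m):",
      np.ptp(x[deep > deep.max() / 2]), np.ptp(x[shallow > shallow.max() / 2]))
```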
Can reduction of uncertainties in cervix cancer brachytherapy potentially improve clinical outcome?
Nesvacil, Nicole; Tanderup, Kari; Lindegaard, Jacob C; Pötter, Richard; Kirisits, Christian
2016-09-01
The aim of this study was to quantify the impact of different types and magnitudes of dosimetric uncertainties in cervix cancer brachytherapy (BT) on tumour control probability (TCP) and normal tissue complication probability (NTCP) curves. A dose-response simulation study was based on systematic and random dose uncertainties and TCP/NTCP models for the CTV and rectum. Large patient cohorts were simulated assuming different levels of dosimetric uncertainties. TCP and NTCP were computed based on the planned doses, the simulated dose uncertainty, and an underlying TCP/NTCP model. Systematic uncertainties of 3-20% and random uncertainties with a 5-30% standard deviation per BT fraction were analysed. Systematic dose uncertainties of 5% lead to a 1% decrease/increase of TCP/NTCP, while random uncertainties of 10% had negligible impact on the dose-response curve at clinically relevant dose levels for target and OAR. Random OAR dose uncertainties of 30% resulted in an NTCP increase of 3-4% for planned doses of 70-80 Gy EQD2. TCP is robust to dosimetric uncertainties when the dose prescription is in the flatter region of the dose-response curve at doses >75 Gy. For OARs, improved clinical outcome is expected from the reduction of uncertainties via sophisticated dose delivery and treatment verification. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
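The simulation logic can be sketched with a logistic dose-response and a systematic dose perturbation; D50, the slope, and the uncertainty magnitude below are illustrative, not the study's fitted CTV/rectum models.

```python
# Sketch: a logistic dose-response with a systematic dose perturbation.
# D50 and gamma50 below are illustrative, not the study's fitted models.
import numpy as np

rng = np.random.default_rng(10)

def tcp(dose, d50=70.0, gamma50=2.0):
    """Logit dose-response; gamma50 is the normalized slope at D50."""
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma50))

planned = 85.0                                 # planned dose (Gy EQD2)
sys_error = rng.normal(1.0, 0.05, 100_000)     # 5% systematic dose error
print("TCP at planned dose:        ", round(tcp(planned), 3))
print("mean TCP with 5% sys. error:", round(tcp(planned * sys_error).mean(), 3))
```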
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
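A toy version of the index's variance decomposition follows, simplified so that the conditional means average over each model's random parameters and the main effect is taken over a process's model choices with equal model weights; the two recharge and two geology models are synthetic stand-ins.

```python
# Sketch: a simplified process sensitivity index. Conditional means average
# over each model's random parameters; the main effect is then taken over a
# process's model choices with equal model weights. Toy models throughout.
import numpy as np

rng = np.random.default_rng(11)
n = 20_000

def output(recharge_model, geology_model):
    """Toy output; each process model carries its own random parameters."""
    r = rng.normal(1.0, 0.1, n) if recharge_model == 0 else rng.normal(1.3, 0.3, n)
    k = rng.lognormal(0.0, 0.2, n) if geology_model == 0 else rng.lognormal(0.2, 0.4, n)
    return r * k

samples = {(i, j): output(i, j) for i in (0, 1) for j in (0, 1)}
total_var = np.var(np.concatenate(list(samples.values())))

# E[Y | recharge model i], averaged over geology models and all parameters:
cond_means = [np.mean([samples[(i, j)].mean() for j in (0, 1)]) for i in (0, 1)]
print("recharge process sensitivity (toy):",
      round(float(np.var(cond_means) / total_var), 3))
```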
NASA Astrophysics Data System (ADS)
Charrier, Jessica G.; McFall, Alexander S.; Vu, Kennedy K.-T.; Baroi, James; Olea, Catalina; Hasson, Alam; Anastasio, Cort
2016-11-01
The dithiothreitol (DTT) assay is widely used to measure the oxidative potential of particulate matter. Results are typically presented in mass-normalized units (e.g., pmols DTT lost per minute per microgram PM) to allow for comparison among samples. Use of this unit assumes that the mass-normalized DTT response is constant and independent of the mass concentration of PM added to the DTT assay. However, based on previous work that identified non-linear DTT responses for copper and manganese, this basic assumption (that the mass-normalized DTT response is independent of the concentration of PM added to the assay) should not be true for samples where Cu and Mn contribute significantly to the DTT signal. To test this we measured the DTT response at multiple PM concentrations for eight ambient particulate samples collected at two locations in California. The results confirm that for samples with significant contributions from Cu and Mn, the mass-normalized DTT response can strongly depend on the concentration of PM added to the assay, varying by up to an order of magnitude for PM concentrations between 2 and 34 μg mL⁻¹. This mass dependence confounds useful interpretation of DTT assay data in samples with significant contributions from Cu and Mn, requiring additional quality control steps to check for this bias. To minimize this problem, we discuss two methods to correct the mass-normalized DTT result and we apply those methods to our samples. We find that it is possible to correct the mass-normalized DTT result, although the correction methods have some drawbacks and add uncertainty to DTT analyses. More broadly, other DTT-active species might also have non-linear concentration-responses in the assay and cause a bias. In addition, the same problem of Cu- and Mn-mediated bias in mass-normalized DTT results might affect other measures of acellular redox activity in PM and needs to be addressed.
Solving Navigational Uncertainty Using Grid Cells on Robots
Milford, Michael J.; Wiles, Janet; Wyeth, Gordon F.
2010-01-01
To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments. PMID:21085643
Optimization of Ocean Color Algorithms: Application to Satellite Data Merging
NASA Technical Reports Server (NTRS)
Maritorena, Stephane; Siegel, David A.; Morel, Andre
2003-01-01
The objective of our program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project. The need for a merging capability is dictated by the fact that, since the launch of MODIS on the Terra platform and over the next decade, several global ocean color missions from various space agencies are or will be operational simultaneously. The apparent redundancy in simultaneous ocean color missions can actually be exploited for various benefits. The most obvious benefit is improved coverage. The patchy and uneven daily coverage from any single sensor can be improved by using a combination of sensors. Besides improved coverage of the global ocean, the merging of ocean color data should also result in new, improved, more diverse and better data products with lower uncertainties. Ultimately, ocean color data merging should result in the development of a unified, scientific-quality ocean color time series, from SeaWiFS to NPOESS and beyond. Various approaches can be used for ocean color data merging and several have been tested within the frame of the SIMBIOS program. As part of the SIMBIOS Program, we have developed a merging method for ocean color data. In contrast to other methods, our approach does not combine end-products like the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances L_WN(λ) from single or multiple sensors and uses them in the inversion of a semi-analytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands, 2) it exploits band redundancies and band differences, 3) it accounts for uncertainties in the L_WN(λ) data and, 4) it provides uncertainty estimates for the retrieved variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage-optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J.
The role of correlations in uncertainty quantification of transportation relevant fuel models
Fridlyand, Aleksandr; Johnson, Matthew S.; Goldsborough, S. Scott; ...
2017-02-03
Large reaction mechanisms are often used to describe the combustion behavior of transportation-relevant fuels like gasoline, which are typically represented by surrogate blends, e.g., n-heptane/iso-octane/toluene. We describe efforts to quantify the uncertainty in the predictions of such mechanisms at realistic engine conditions, seeking to better understand the robustness of the model as well as the important reaction pathways and their impacts on combustion behavior. In this work, we examine the importance of accounting for correlations among reactions that utilize the same rate rules, and among those with multiple product channels, in the forward propagation of uncertainty by Monte Carlo simulation. Automated means are developed to generate the uncertainty factor assignment for a detailed chemical kinetic mechanism, by first uniquely identifying each reacting species and then sorting the reactions by the rate rule utilized. Simulation results reveal that in the low-temperature combustion regime for iso-octane, the majority of the uncertainty in the model predictions can be attributed to low-temperature reactions of the fuel sub-mechanism. The foundational, or small-molecule, chemistry (C0-C4) only contributes significantly to uncertainties in the predictions at the highest temperatures (Tc = 900 K). Accounting for correlations between important reactions is shown to produce non-negligible differences in the estimates of uncertainty. Including correlations among reactions that use the same rate rules increases uncertainty in the model predictions, while accounting for correlations among reactions with multiple branches decreases uncertainty in some cases. Significant non-linear response is observed in the model predictions depending on how the probability distributions of the uncertain rate constants are defined. Finally, we conclude that care must be exercised in defining these probability distributions in order to reduce bias and physically unrealistic estimates in the forward propagation of uncertainty for a range of UQ activities.
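To make the correlation treatment concrete, here is a minimal sketch of forward Monte Carlo propagation with rate-rule correlations. The rule assignments, uncertainty factors, and the scalar "ignition delay" proxy are all hypothetical stand-ins for a real kinetic mechanism and simulator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical assignment: each reaction belongs to a rate-rule family, and has
# an uncertainty factor f (k assumed log-normally distributed within [k/f, k*f]).
rate_rule = np.array([0, 0, 0, 1, 1, 2])        # reactions sharing rules 0, 1, 2
unc_factor = np.array([3.0, 3.0, 3.0, 2.0, 2.0, 5.0])
sigma = np.log(unc_factor) / 2.0                # treat factor f as a ~2-sigma bound

def sample_multipliers(n, correlated=True):
    """Log-normal rate multipliers: correlated draws share one normal deviate
    per rate-rule family; independent draws get one deviate per reaction."""
    if correlated:
        z_rule = rng.normal(size=(n, rate_rule.max() + 1))
        z = z_rule[:, rate_rule]
    else:
        z = rng.normal(size=(n, len(rate_rule)))
    return np.exp(sigma * z)

def ignition_delay_proxy(mult):
    # Stand-in response: delay inversely proportional to a weighted rate sum;
    # a real study would run ignition-delay simulations here.
    w = np.array([0.5, 0.3, 0.2, 0.7, 0.4, 0.1])
    return 1.0 / (mult @ w)

for corr in (True, False):
    tau = ignition_delay_proxy(sample_multipliers(20_000, correlated=corr))
    print(f"correlated={corr}: relative std of predicted delay = "
          f"{tau.std() / tau.mean():.3f}")
```

With positive weights, tying reactions of a family to one shared deviate widens the spread of the prediction, consistent with the abstract's finding that same-rule correlations increase the estimated uncertainty.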
Uncertainty and inference in the world of paleoecological data
NASA Astrophysics Data System (ADS)
McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.
2017-12-01
Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen in the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches offer an alternative way of accounting for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy-based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a coarseness in statistical models of taphonomic processes. Each of these features provides useful opportunities for statisticians and data-generating researchers to assess what we know about the signal and the noise in paleo data and to improve inference about past changes in ecosystem state.
Intercomparison of Models Representing Direct Shortwave Radiative Forcing by Sulfate Aerosols
NASA Technical Reports Server (NTRS)
Boucher, O.; Schwartz, S. E.; Ackerman, T. P.; Anderson, T. L.; Bergstrom, B.; Bonnel, B.; Dahlback, A.; Fouquart, Y.; Chylek, P.; Fu, Q.;
2000-01-01
The importance of aerosols as agents of climate change has recently been highlighted. However, the magnitude of aerosol forcing by scattering of shortwave radiation (direct forcing) is still very uncertain even for the relatively well characterized sulfate aerosol. A potential source of uncertainty is in the model representation of aerosol optical properties and aerosol influences on radiative transfer in the atmosphere. Although radiative transfer methods and codes have been compared in the past, these comparisons have not focused on aerosol forcing (change in net radiative flux at the top of the atmosphere). Here we report results of a project involving 12 groups using 15 models to examine radiative forcing by sulfate aerosol for a wide range of values of particle radius, aerosol optical depth, surface albedo, and solar zenith angle. Among the models that were employed were high and low spectral resolution models incorporating a variety of radiative transfer approximations, as well as a line-by-line model. The normalized forcings (forcing per sulfate column burden) obtained with the several radiative transfer models were examined, and the discrepancies were characterized. All models simulate forcings of comparable amplitude and exhibit a similar dependence on input parameters. As expected for a non-light-absorbing aerosol, forcings were negative (cooling influence) except at high surface albedo combined with small solar zenith angle. The relative standard deviation of the zenith-angle-averaged normalized broadband forcing for the 15 models was 8% for particle radius near the maximum in this forcing (approx. 0.2 microns) and at low surface albedo. Somewhat greater model-to-model discrepancies were exhibited at specific solar zenith angles. Still greater discrepancies were exhibited at small particle radii, and much greater discrepancies were exhibited at high surface albedos, at which the forcing changes sign; in these situations, however, the normalized forcing is quite small. Discrepancies among the models arise from inaccuracies in Mie calculations, differing treatment of the angular scattering phase function, differing wavelength and angular resolution, and differing treatment of multiple scattering. These results imply the need for standardized radiative transfer methods tailored to the direct aerosol forcing problem. However, the relatively small spread in these results suggests that the uncertainty in forcing arising from the treatment of radiative transfer of a well-characterized aerosol at well-specified surface albedo is smaller than some of the other sources of uncertainty in estimates of direct forcing by anthropogenic sulfate aerosols and anthropogenic aerosols generally.
Huang, Guowen; Lee, Duncan; Scott, E Marian
2018-03-30
The long-term health effects of air pollution are often estimated using a spatio-temporal ecological areal unit study, but this design leads to the following statistical challenges: (1) how to estimate spatially representative pollution concentrations for each areal unit; (2) how to allow for the uncertainty in these estimated concentrations when estimating their health effects; and (3) how to simultaneously estimate the joint effects of multiple correlated pollutants. This article proposes a novel 2-stage Bayesian hierarchical model for addressing these 3 challenges, with inference based on Markov chain Monte Carlo simulation. The first stage is a multivariate spatio-temporal fusion model for predicting areal level average concentrations of multiple pollutants from both monitored and modelled pollution data. The second stage is a spatio-temporal model for estimating the health impact of multiple correlated pollutants simultaneously, which accounts for the uncertainty in the estimated pollution concentrations. The novel methodology is motivated by a new study of the impact of both particulate matter and nitrogen dioxide concentrations on respiratory hospital admissions in Scotland between 2007 and 2011, and the results suggest that both pollutants exhibit substantial and independent health effects. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, an input-output pairing analysis is performed to select the most suitable input to control each output, aiming to attenuate loop coupling. The plant uncertainty limits are then selected and expressed in interval form using the parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board was developed for experimental tests. The experimental results show that the proposed methodology can maintain the desired performance even in the presence of parametric uncertainties. Furthermore, performance indexes calculated from the experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Performance analysis of multiple PRF technique for ambiguity resolution
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
For short-wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar at different PRFs in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of the PRFs. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.
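A hedged sketch of the multiple-PRF idea: wrapped (ambiguous) centroid estimates from several PRFs are combined by a grid search for the absolute Doppler most consistent with all of them, and the success rate is estimated by Monte Carlo. The PRF values, noise level, and attitude bound below are illustrative, not the SIR-C/X-SAR parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

prfs = np.array([1300.0, 1410.0, 1550.0])   # hypothetical PRFs [Hz]
bound = 6000.0                              # attitude-derived Doppler bound [Hz]
sigma_f = 40.0                              # centroid estimation noise [Hz]

def correct(true_fd):
    # Wrapped centroid estimate from each PRF, with estimation noise.
    meas = np.mod(true_fd + rng.normal(0.0, sigma_f, prfs.size), prfs)
    # Grid search over candidate absolute Doppler within the attitude bound.
    cand = np.arange(-bound, bound, 5.0)
    # Circular residual of each candidate against each wrapped measurement.
    r = np.abs((cand[:, None] - meas[None, :] + prfs / 2) % prfs - prfs / 2)
    best = cand[np.argmin(r.sum(axis=1))]
    # Success if the resolved Doppler lands in the true ambiguity interval.
    return abs(best - true_fd) < prfs.min() / 2

trials = rng.uniform(-bound / 2, bound / 2, 2000)
hits = sum(correct(fd) for fd in trials)
print(f"P(correct ambiguity resolution) ~ {hits / len(trials):.3f}")
```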
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
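For readers who want the mechanics, below is a compact sketch of the HKSJ interval for k = 2 studies, using a DerSimonian-Laird heterogeneity estimate; the two effect estimates and standard errors are hypothetical. With one degree of freedom the t-quantile is 12.7, which is exactly why HKSJ intervals in this setting come out very long.

```python
import numpy as np
from scipy import stats

def hksj(y, se):
    """Random-effects meta-analysis with the Hartung-Knapp-Sidik-Jonkman CI.
    y: study effect estimates; se: their standard errors (k studies)."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    k = y.size
    w0 = 1.0 / se**2
    q = np.sum(w0 * (y - np.average(y, weights=w0))**2)
    # DerSimonian-Laird estimate of between-trial heterogeneity tau^2.
    tau2 = max(0.0, (q - (k - 1)) / (w0.sum() - (w0**2).sum() / w0.sum()))
    w = 1.0 / (se**2 + tau2)
    mu = np.average(y, weights=w)
    # HKSJ variance estimator, paired with t_{k-1} quantiles.
    var = np.sum(w * (y - mu)**2) / ((k - 1) * w.sum())
    half = stats.t.ppf(0.975, k - 1) * np.sqrt(var)
    return mu, (mu - half, mu + half), tau2

# Two hypothetical log odds ratios with standard errors:
mu, ci, tau2 = hksj([-0.55, -0.10], [0.22, 0.18])
print(f"pooled = {mu:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), tau^2 = {tau2:.3f}")
```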
NASA Astrophysics Data System (ADS)
Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.
2016-12-01
The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The study area is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
Thriving in a changing environment.
Hanna, E
2001-01-01
Undeniably, working in an environment that challenges the status quo is not without its problems. Members of the service redesign teams who participated in the West Ottawa Valley Network experience confess that many times during the process they felt a threat to their own job security along with uncertainty about their future roles and responsibilities. June Merkley points out that "working outside one's 'normal comfort level' in a non-traditional leadership role has its own set of difficulties, and dealing with multiple network members adds a dimension of complexity not encountered at the single site level." But she believes that "the challenges encountered within the network are helping to broaden the scope of job knowledge and communication skills. This can only have a direct and positive impact on the day-to-day tasks, while providing value to both the employee and employer." As a career management strategy, healthcare managers should seek out opportunities beyond their normal range of accountability and participate in activities that encourage the growth of new skills or hone skills that are a bit rusty. Often these projects will cause some anxiety or discomfort, but the rewards, both short and long term, far outweigh the short-term pain.
Optimal control, investment and utilization schemes for energy storage under uncertainty
NASA Astrophysics Data System (ADS)
Mirhosseini, Niloufar Sadat
Energy storage has the potential to offer new means of flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value toward asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the questions of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies are adopted to develop the aforementioned models. Model predictive control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to interact optimally with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework that extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment in storage capacity over time to maximize savings during normal and emergency operations; (ii) optimal market strategy of buying and selling over 24-hour periods; (iii) optimal storage charge and discharge over much shorter time intervals.
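As a concrete illustration of one building block, the discretized dynamic program below computes an optimal 24-hour charge/discharge schedule by backward induction. Prices, capacity, step size, and efficiency are invented; the thesis couples such control schemes with model predictive control and stochastic inputs, which this sketch omits.

```python
import numpy as np

prices = np.array([20, 18, 15, 14, 16, 22, 30, 38, 35, 30, 28, 26,
                   25, 24, 26, 30, 36, 44, 40, 34, 30, 26, 23, 21], float)
levels = np.arange(0, 11)      # state-of-charge grid, 0..10 MWh
step = 2                       # max charge/discharge per hour [MWh]
eff = 0.9                      # one-way efficiency (losses on each conversion)

V = np.zeros(levels.size)      # value-to-go at end of horizon
policy = []
for p in prices[::-1]:         # backward induction, last hour first
    Vn = np.full(levels.size, -np.inf)
    act = np.zeros(levels.size, int)
    for i, s in enumerate(levels):
        for a in range(-step, step + 1):    # a > 0 charge, a < 0 discharge
            j = s + a
            if 0 <= j <= levels[-1]:
                # Charging buys a/eff MWh; discharging sells |a|*eff MWh.
                cash = -p * a * (1.0 / eff if a > 0 else eff)
                if cash + V[j] > Vn[i]:
                    Vn[i], act[i] = cash + V[j], a
    V, policy = Vn, [act] + policy

print(f"max 24-h arbitrage value from empty storage: {V[0]:.1f}")
print(f"optimal hour-1 action from empty storage: {policy[0][0]:+d} MWh")
```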
Earth's Outer Core Properties Estimated Using Bayesian Inversion of Normal Mode Eigenfrequencies
NASA Astrophysics Data System (ADS)
Irving, J. C. E.; Cottaar, S.; Lekic, V.
2016-12-01
The outer core is arguably Earth's most dynamic region, and consists of an iron-nickel liquid with an unknown combination of lighter alloying elements. Frequencies of Earth's normal modes provide the strongest constraints on the radial profiles of compressional wavespeed, VΦ, and density, ρ, in the outer core. Recent great earthquakes have yielded new normal mode measurements; however, mineral physics experiments and calculations are often compared to the Preliminary reference Earth model (PREM), which is 35 years old and does not provide uncertainties. Here we investigate the thermo-elastic properties of the outer core using Earth's free oscillations and a Bayesian framework. To estimate the radial structure of the outer core and its uncertainties, we exploit recent datasets of normal mode centre frequencies. Under the self-coupling approximation, centre frequencies are unaffected by lateral heterogeneities in the Earth, for example in the mantle. Normal modes are sensitive to both VΦ and ρ in the outer core, with each mode's specific sensitivity depending on its eigenfunctions. We include a priori bounds on outer core models that ensure compatibility with measurements of mass and moment of inertia. We use Bayesian Markov chain Monte Carlo techniques to explore different choices in parameterizing the outer core, each of which represents different a priori constraints. We test how results vary (1) assuming a smooth polynomial parametrization, (2) allowing for structure close to the outer core's boundaries, and (3) assuming an equation of state and adiabaticity and inverting directly for thermo-elastic parameters. In the second approach we recognize that the outer core may have distinct regions close to the core-mantle and inner core boundaries and investigate models which parameterize the well-mixed outer core separately from these two layers. In the last approach we seek to map the uncertainties directly into thermo-elastic parameters including the bulk modulus, its pressure derivative, and molar mass and volume, with particular attention paid to the (inherent) trade-offs between the different coefficients. We discuss our results in terms of added uncertainty in the light element composition of the outer core and the potential existence of anomalous structure near the outer core's boundaries.
Forest Management Under Uncertainty for Multiple Bird Population Objectives
Clinton T. Moore; W. Todd Plummer; Michael J. Conroy
2005-01-01
We advocate adaptive programs of decision making and monitoring for the management of forest birds when responses by populations to management, and particularly management trade-offs among populations, are uncertain. Models are necessary components of adaptive management. Under this approach, uncertainty about the behavior of a managed system is explicitly captured in...
Practitioner Representations of Environmental Uncertainty: An Application of Discriminant Analysis.
ERIC Educational Resources Information Center
Acharya, Lalit
Multiple discriminant analysis was used to analyze the structure of a perceived environmental uncertainty variable employed previously in research on public relations roles. Data came from a subset (N=229) of a national sample of public relations practitioners belonging to the Public Relations Society of America, who completed a set of scaled…
Thomas, Kevin V; Amador, Arturo; Baz-Lomba, Jose Antonio; Reid, Malcolm
2017-10-03
Wastewater-based epidemiology is an established approach for quantifying community drug use and has recently been applied to estimate population exposure to contaminants such as pesticides and phthalate plasticizers. A major source of uncertainty in the population-weighted biomarker loads generated is related to estimating the number of people present in a sewer catchment at the time of sample collection. Here, the population quantified from mobile device-based population activity patterns was used to provide dynamic population normalized loads of illicit drugs and pharmaceuticals during a known period of high net fluctuation in the catchment population. Mobile device-based population activity patterns have for the first time quantified the high degree of intra-day, intra-week, and intra-month variability within a specific sewer catchment. Dynamic population normalization showed that per capita pharmaceutical use remained unchanged during the period when static normalization would have indicated an average reduction of up to 31%. Per capita illicit drug use increased significantly during the monitoring period, an observation that was only possible to measure using dynamic population normalization. The study quantitatively confirms previous assessments that population estimates can account for uncertainties of up to 55% in statically normalized data. Mobile device-based population activity patterns allow for dynamic normalization that yields much improved temporal and spatial trend analysis.
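The normalization step itself is simple arithmetic; a minimal sketch with invented loads and population counts shows how a static census figure and a mobile-device-derived dynamic population can tell different per-capita stories.

```python
import numpy as np

# Hypothetical daily biomarker loads [mg/day] from 24-h composite samples:
loads = np.array([520.0, 540.0, 610.0, 680.0, 650.0, 880.0, 910.0])
static_pop = 50_000                                    # census-based estimate
mobile_pop = np.array([48_000, 49_500, 52_000, 58_000,
                       55_000, 73_000, 76_000])        # device-derived, daily

static_norm = loads / static_pop * 1000    # mg/day per 1000 inhabitants
dynamic_norm = loads / mobile_pop * 1000

for day, (s, d) in enumerate(zip(static_norm, dynamic_norm), 1):
    print(f"day {day}: static {s:.1f} vs dynamic {d:.1f} mg/day/1000 people")
```

In this toy series the static normalization shows a rising per-capita trend at the end of the week, while the dynamic normalization attributes most of the rise to the influx of people, which is the effect the study measures.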
Uncertainty Evaluation of Measurements with Pyranometers and Pyrheliometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konings, Jorgen; Habte, Aron
2016-01-03
Evaluating the performance of photovoltaic (PV) cells, modules, arrays, and systems relies on accurate measurement of the available solar radiation resource. Solar radiation resources are measured using radiometers such as pyranometers (global horizontal irradiance) and pyrheliometers (direct normal irradiance). The accuracy of solar radiation data measured by radiometers depends not only on the specification of the instrument but also on (a) the calibration procedure, (b) the measurement conditions and maintenance, and (c) the environmental conditions. Therefore, statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This paper provides guidelines and recommended procedures for estimating the uncertainty in measurements by radiometers using the Guide to the Expression of Uncertainty in Measurement (GUM) method. Special attention is paid to the concept of data availability and its link to uncertainty evaluation.
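A minimal sketch of the GUM-style roll-up for a radiometer: uncorrelated standard uncertainty components combined in quadrature, then expanded with a coverage factor. The component names and magnitudes are hypothetical, and unit sensitivity coefficients are assumed.

```python
import numpy as np

# Hypothetical type-B standard uncertainty components for a pyranometer
# measurement of global horizontal irradiance (relative, in %):
components = {
    "calibration":       1.0,
    "zenith response":   0.8,
    "spectral response": 0.5,
    "temperature":       0.3,
    "datalogger":        0.1,
}

# GUM: combine uncorrelated components in quadrature (sensitivity
# coefficients taken as 1), then expand with k = 1.96 (~95% coverage).
u_c = np.sqrt(sum(u**2 for u in components.values()))
U = 1.96 * u_c
print(f"combined standard uncertainty u_c = {u_c:.2f} %, expanded U95 = {U:.2f} %")
```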
Traceable measurements of small forces and local mechanical properties
NASA Astrophysics Data System (ADS)
Campbellová, Anna; Valtr, Miroslav; Zůda, Jaroslav; Klapetek, Petr
2011-09-01
Measurement of local mechanical properties is an important topic in the fields of nanoscale device fabrication, thin film deposition and composite material development. Nanoindentation instruments are commonly used to study hardness and related mechanical properties at the nanoscale. However, traceability and uncertainty aspects of the measurement process are often left aside. In this contribution, the use of a commercial nanoindentation instrument for metrology purposes is discussed. Full instrument traceability, provided using atomic force microscope cantilevers and a mass comparator (normal force), an interferometer (depth), and an atomic force microscope (area function), is described. The uncertainty of the loading/unloading curve measurements is analyzed, and the resulting uncertainties for quantities computed from loading curves, such as hardness or elastic modulus, are studied. For this calculation, a combination of the law of propagation of uncertainty and Monte Carlo uncertainty evaluation is used.
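The Monte Carlo side of such an evaluation can be sketched in a few lines: propagate traceable input uncertainties through the measurement equation, here hardness H = F/A with invented values for the load and the projected contact area.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical traceable inputs: maximum load and projected contact area.
F = rng.normal(10.0e-3, 0.05e-3, n)    # load [N], u(F) from the mass comparator
A = rng.normal(1.2e-12, 0.04e-12, n)   # area [m^2], u(A) from the AFM area function

H = F / A                              # hardness [Pa], propagated sample by sample
print(f"H = {H.mean()/1e9:.2f} GPa, u(H) = {H.std()/1e9:.2f} GPa "
      f"({100*H.std()/H.mean():.1f} % relative)")
```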
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to end users. This information can match the end users' needs (i.e. prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of the past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based, non-parametric approach resting on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, an attempt is made at combining the operational strength of the empirical statistical analysis with simple error modelling. Since the heteroscedasticity of forecast errors can considerably weaken predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during recent floods in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
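For reference, the log-sinh transformation of Wang et al. (2012) and its inverse are easy to state; the parameter values below are purely illustrative, whereas in practice a and b are fitted to the error series.

```python
import numpy as np

def log_sinh(y, a, b):
    # Wang et al. (2012): z = (1/b) * ln(sinh(a + b*y)). Near-logarithmic for
    # large y and near-linear for small y, damping heteroscedastic peak errors.
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    # Inverse: y = (arcsinh(exp(b*z)) - a) / b.
    return (np.arcsinh(np.exp(b * z)) - a) / b

a, b = 0.01, 0.002                               # illustrative parameters
q = np.array([50.0, 200.0, 800.0, 2500.0])       # discharge [m3/s]
z = log_sinh(q, a, b)
print(np.allclose(inv_log_sinh(z, a, b), q))     # the transform round-trips
```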
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
Song, Yunpeng; Wu, Sen; Xu, Linyan; Fu, Xing
2015-01-01
Measurement of force on a micro- or nano-Newton scale is important when exploring the mechanical properties of materials in the biophysics and nanomechanical fields. The atomic force microscope (AFM) is widely used in microforce measurement. The cantilever probe works as an AFM force sensor, and the spring constant of the cantilever is of great significance to the accuracy of the measurement results. This paper presents a normal spring constant calibration method with the combined use of an electromagnetic balance and a homemade AFM head. When the cantilever presses the balance, its deflection is detected through an optical lever integrated in the AFM head. Meanwhile, the corresponding bending force is recorded by the balance. Then the spring constant can be simply calculated using Hooke’s law. During the calibration, a feedback loop is applied to control the deflection of the cantilever. Errors that may affect the stability of the cantilever could be compensated rapidly. Five types of commercial cantilevers with different shapes, stiffness, and operating modes were chosen to evaluate the performance of our system. Based on the uncertainty analysis, the expanded relative standard uncertainties of the normal spring constant of most measured cantilevers are believed to be better than 2%. PMID:25763650
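A minimal sketch of the calibration arithmetic: Hooke's law F = k·δ fitted as a slope through the origin of paired balance-force and optical-lever-deflection readings, with a simple standard uncertainty for the slope. All readings are invented.

```python
import numpy as np

# Hypothetical paired readings from one loading sequence:
force = np.array([0.0, 20.5, 41.2, 61.8, 82.1]) * 1e-9           # balance [N]
deflection = np.array([0.0, 102.0, 207.0, 310.0, 411.0]) * 1e-9  # optical lever [m]

# Hooke's law F = k * delta: least-squares slope through the origin.
k = np.sum(force * deflection) / np.sum(deflection**2)
resid = force - k * deflection
u_k = np.sqrt(np.sum(resid**2) / (len(force) - 1) / np.sum(deflection**2))
print(f"k = {k:.4f} N/m, u(k) = {u_k:.4f} N/m ({100*u_k/k:.2f} % relative)")
```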
Communicating Geographical Risks in Crisis Management: The Need for Research.
French, Simon; Argyris, Nikolaos; Haywood, Stephanie M; Hort, Matthew C; Smith, Jim Q
2017-10-23
In any crisis, there is a great deal of uncertainty, often geographical uncertainty or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We note that in the early stages of handling a crisis, the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstance, we suggest the idea of presenting multiple scenarios. © 2017 Society for Risk Analysis.
Why Quantify Uncertainty in Ecosystem Studies: Obligation versus Discovery Tool?
NASA Astrophysics Data System (ADS)
Harmon, M. E.
2016-12-01
There are multiple motivations for quantifying uncertainty in ecosystem studies. One is as an obligation; the other is as a tool useful in moving ecosystem science toward discovery. While reporting uncertainty should become a routine expectation, a more convincing motivation involves discovery. By clarifying what is known and to what degree it is known, uncertainty analyses can point the way toward improvements in measurements, sampling designs, and models. While some of these improvements (e.g., better sampling designs) may lead to incremental gains, those involving models (particularly model selection) may require large gains in knowledge. To be fully harnessed as a discovery tool, attitudes toward uncertainty may have to change: rather than viewing uncertainty as a negative assessment of what was done, it should be viewed as a positive, helpful assessment of what remains to be done.
Multiple-Sensor Discrimination of Closely-Spaced Objects on a Ballistic Trajectory
2015-05-18
[Front-matter residue; recoverable figure titles: "Nominal system architecture" and "Simulation environment developed", the latter including an uncertainty band for one or multiple sensors within the observation architecture.] Resolving targets from one sensor image to another can prove difficult.
NASA Astrophysics Data System (ADS)
Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.
2004-06-01
The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be represented in the model, leading to the conclusion that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and double-point calibration gave results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
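A sketch of the two calibration routes compared in the study, with invented peak areas and a simple per-point uncertainty model supplying the weights for weighted least squares; the double-point line uses only the baseline and one standard.

```python
import numpy as np

# Hypothetical peak-area calibration data for one element:
conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])             # ug/L
area = np.array([40.0, 310.0, 1390.0, 2750.0, 13600.0, 27100.0])
u_area = 0.03 * area + 10.0          # per-point uncertainty from an error model

# Weighted least squares with weights 1/u^2 (weighted linear regression).
w = 1.0 / u_area**2
W, Wx, Wy = w.sum(), (w * conc).sum(), (w * area).sum()
Wxx, Wxy = (w * conc**2).sum(), (w * conc * area).sum()
slope = (W * Wxy - Wx * Wy) / (W * Wxx - Wx**2)
intercept = (Wy - slope * Wx) / W

# Double-point calibration: baseline plus the top standard.
slope2 = (area[-1] - area[0]) / (conc[-1] - conc[0])

sample_area = 5000.0
c_multi = (sample_area - intercept) / slope
c_double = (sample_area - area[0]) / slope2
print(f"multi-point: {c_multi:.2f} ug/L, double-point: {c_double:.2f} ug/L")
```

With a well-behaved linear response the two routes agree closely, which is the abstract's point about double-point calibration matching multiple-point quality at lower cost.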
Disseminating the unit of mass from multiple primary realisations
NASA Astrophysics Data System (ADS)
Nielsen, Lars
2016-12-01
When a new definition of the kilogram is adopted in 2018, as expected, the unit of mass will be realised by the watt balance method, the x-ray crystal density method, or perhaps other primary methods still to be developed. So far, the standard uncertainties associated with the available primary methods are at least one order of magnitude larger than the standard uncertainty associated with mass comparisons using mass comparators, so differences in primary realisations of the kilogram are easily detected, whereas many National Metrology Institutes would have to increase their calibration and measurement capabilities (CMCs) if they were traceable to a single primary realisation. This paper presents a scheme for obtaining traceability to multiple primary realisations of the kilogram using a small group of stainless steel 1 kg weights, which are allowed to change their masses over time in a way known to be realistic, and which are calibrated and stored in air. An analysis of the scheme shows that if the relative standard uncertainties of future primary realisations are equal to the relative standard uncertainties of the present methods used to measure the Planck constant, the unit of mass can be disseminated with a standard uncertainty of less than 0.015 mg, which matches the smallest CMCs currently claimed for the calibration of 1 kg weights.
NASA Astrophysics Data System (ADS)
Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.
2018-02-01
We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of $\sqrt{s_{NN}} = 7.7$ GeV. Using the ultrarelativistic quantum molecular dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency of the centrality determination introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data, and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in most central collisions.
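The cumulant ratios in question are straightforward to compute from an event sample; the sketch below uses a Skellam (difference-of-Poissons) toy sample as the uncorrelated baseline, not UrQMD output.

```python
import numpy as np

rng = np.random.default_rng(4)

def cumulant_ratios(n):
    """C2/C1, C3/C2, C4/C2 from a sample of net-proton numbers per event."""
    m = n.mean()                          # C1
    c2 = np.mean((n - m)**2)              # C2 (variance)
    c3 = np.mean((n - m)**3)              # C3
    c4 = np.mean((n - m)**4) - 3.0 * c2**2  # C4 (fourth cumulant)
    return c2 / m, c3 / c2, c4 / c2

# Toy event sample: net-proton number as a difference of Poisson counts.
protons = rng.poisson(12.0, 1_000_000)
antiprotons = rng.poisson(0.5, 1_000_000)
print(cumulant_ratios(protons - antiprotons))
```

For a Skellam baseline the odd cumulants equal the difference and the even cumulants the sum of the two Poisson means, so C4/C2 is exactly 1; centrality definition and pile-up distort the higher-order ratios away from such baselines.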
Precise measurements of droplet-droplet contact forces in quasi-2D emulsions
NASA Astrophysics Data System (ADS)
Lowensohn, Janna; Orellana, Carlos; Weeks, Eric
2015-03-01
We use microscopy to visualize a quasi-2D oil-in-water emulsion confined between two parallel slides. We then use the droplet shapes to infer the forces they exert on each other. To calibrate our force law, we set up an emulsion in a tilted sample chamber so that the droplets feel a known buoyant force. By correlating droplet radius and contact length with the buoyant forces, we validate our empirical force law. We improve upon prior work in our lab by using a high-resolution camera to image each droplet multiple times, thus providing sub-pixel resolution and reducing noise. Our new technique identifies contact forces with only 1% uncertainty, five times better than prior work. We demonstrate the utility of our technique by examining the normal modes of the droplet contact network in our samples.
UV missile-plume signature model
NASA Astrophysics Data System (ADS)
Roblin, Antoine; Baudoux, Pierre E.; Chervet, Patrick
2002-08-01
A new 3D radiative code is used to solve the radiative transfer equation in the UV spectral domain for a nonequilibrium, axisymmetric medium such as a rocket plume composed of hot reactive gases and metallic oxide particles like alumina. Calculations take into account the dominant chemiluminescence radiation mechanism and the multiple scattering effects produced by alumina particles. Plume radiative properties are studied using a simple cylindrical medium of finite length, deduced from the afterburning zones of different real rocket plume aerothermochemical fields. Assuming a log-normal size distribution of alumina particles, optical properties are calculated using Mie theory. Due to large uncertainties in particle properties, systematic tests have been performed in order to evaluate the influence of the different input data (refractive index, particle mean geometric radius) on the radiance field. These computations will help us define the set of parameters which need to be known accurately in order to compare computations with radiance measurements obtained during field experiments.
[Influence of Uncertainty and Uncertainty Appraisal on Self-management in Hemodialysis Patients].
Jang, Hyung Suk; Lee, Chang Suk; Yang, Young Hee
2015-04-01
This study was done to examine the relation of uncertainty, uncertainty appraisal, and self-management in patients undergoing hemodialysis, and to identify factors influencing self-management. A convenience sample of 92 patients receiving hemodialysis was selected. Data were collected using a structured questionnaire and medical records. The collected data were analyzed using descriptive statistics, t-test, ANOVA, Pearson correlations and multiple regression analysis with the SPSS/WIN 20.0 program. The participants showed a moderate level of uncertainty with the highest score being for ambiguity among the four uncertainty subdomains. Scores for uncertainty danger or opportunity appraisals were under the mid points. The participants were found to perform a high level of self-management such as diet control, management of arteriovenous fistula, exercise, medication, physical management, measurements of body weight and blood pressure, and social activity. The self-management of participants undergoing hemodialysis showed a significant relationship with uncertainty and uncertainty appraisal. The significant factors influencing self-management were uncertainty, uncertainty opportunity appraisal, hemodialysis duration, and having a spouse. These variables explained 32.8% of the variance in self-management. The results suggest that intervention programs to reduce the level of uncertainty and to increase the level of uncertainty opportunity appraisal among patients would improve the self-management of hemodialysis patients.
Positive phase space distributions and uncertainty relations
NASA Technical Reports Server (NTRS)
Kruger, Jan
1993-01-01
In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schrödinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schrödinger form be satisfied.
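For reference, the Schrödinger form of the uncertainty relation for position and momentum, with the correlation term that the plain Heisenberg inequality omits, is the standard textbook statement (written out here, not quoted from the paper):

```latex
% Schrodinger (Robertson-Schrodinger) uncertainty relation for x and p,
% including the covariance term absent from the plain Heisenberg form:
\sigma_x^2 \, \sigma_p^2 - \sigma_{xp}^2 \;\ge\; \frac{\hbar^2}{4},
\qquad
\sigma_{xp} \equiv \tfrac{1}{2}\,\langle \hat{x}\hat{p} + \hat{p}\hat{x} \rangle
               - \langle \hat{x} \rangle \langle \hat{p} \rangle .
```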
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
2017-04-01
Here, we implement a variance-based distance metric (Dn) to objectively assess the skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observation and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ² statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable, and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward establishing a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE), encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
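A plausible reading of such a variance-based metric, not the paper's exact definition: squared model-observation differences normalized by the sum of both variances and averaged over the common grid, so that either side's uncertainty can down-weight a mismatch.

```python
import numpy as np

def dn_metric(model, obs, var_model, var_obs):
    """Variance-normalized squared distance over common grid cells.
    Unlike a chi^2 built on observational error alone, both model and
    observational variances enter the weight (a sketch of the D_n idea)."""
    return np.nanmean((model - obs)**2 / (var_model + var_obs))

# Toy ice-concentration fields on a common 2-D grid:
rng = np.random.default_rng(5)
obs = np.clip(rng.normal(0.7, 0.15, (90, 180)), 0.0, 1.0)
model = np.clip(obs + rng.normal(0.05, 0.1, obs.shape), 0.0, 1.0)  # biased model
print(f"D_n-like score: {dn_metric(model, obs, 0.01, 0.02):.2f}")
```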
Gao, Xueping; Liu, Yinzhu; Sun, Bowen
2018-06-05
The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and different water demands, brings new challenges to water transfer projects. Uncertainties exist in both the transferred water and the local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the degree of shortage under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferred water and local surface water, and samples from this multivariate probability distribution are used as inputs to the optimization model. The approach reveals the distribution of water shortage and is able to emphasize the importance of improving and updating the management of transferred water and local surface water, and to examine their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated by applying the UWSRAM, along with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for understanding the overall multi-source water availability and degree of water shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers, and achieving sustainable development.
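A minimal sketch of the copula-based sampling step: a Gaussian copula ties together two invented gamma-distributed marginals for transferred and local water, and the joint samples feed a simple shortage calculation (the optimization stage is omitted).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Gaussian copula linking transferred water and local surface water.
rho = 0.6                                   # hypothetical dependence parameter
n = 100_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)
u = stats.norm.cdf(z)                       # correlated uniform margins

# Hypothetical marginal distributions (gamma) for the two supplies [10^6 m^3]:
transfer = stats.gamma.ppf(u[:, 0], a=8.0, scale=12.0)
local = stats.gamma.ppf(u[:, 1], a=5.0, scale=20.0)

demand = 220.0
shortage = np.maximum(demand - (transfer + local), 0.0)
print(f"P(shortage) = {(shortage > 0).mean():.3f}, "
      f"mean shortage given shortage = {shortage[shortage > 0].mean():.1f}")
```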
Using cost-benefit concepts in design floods improves communication of uncertainty
NASA Astrophysics Data System (ADS)
Ganora, Daniele; Botto, Anna; Laio, Francesco; Claps, Pierluigi
2017-04-01
Flood frequency analysis, i.e. the study of the relationships between the magnitude and the rarity of high flows in a river, is the usual procedure adopted to assess flood hazard, preliminary to the planning/design of flood protection measures. It is grounded in fitting a probability distribution to the peak discharge values recorded at gauging stations, and the final estimates over a region are thus affected by uncertainty, due to the limited sample availability and the possible alternatives in terms of the probabilistic model and the parameter estimation methods used. In the last decade, the scientific community has dealt with this issue by developing a number of methods to quantify such uncertainty components. Usually, uncertainty is visually represented through confidence bands, which are easy to understand but have not yet been demonstrated to be useful for design purposes: they tend to disorient decision makers, as the design flood is no longer uniquely defined, leaving the decision process undetermined. These considerations motivated the development of the uncertainty-compliant design flood estimator (UNCODE) procedure (Botto et al., 2014), which allows one to select meaningful design flood values accounting for the associated uncertainty by considering additional constraints based on cost-benefit criteria. This method suggests an explicit multiplication factor that corrects the traditional (without-uncertainty) design flood estimates to incorporate the effects of uncertainty at the same safety level. Even though the UNCODE method was developed for design purposes, it can be a powerful and robust tool to help clarify the effects of uncertainty in statistical estimation. As the procedure produces increased design flood estimates, this outcome demonstrates how uncertainty leads to more expensive flood protection measures, or to the insufficiency of current defenses. Moreover, the UNCODE approach can be used to assess the "value" of data, as the costs of flood prevention can be lowered by reducing uncertainty through longer observed flood records. As the multiplication factor is dimensionless, the application examples provided show how this approach allows simple comparisons of the effects of uncertainty in different catchments, helping to build ranking procedures for planning purposes. REFERENCES Botto, A., Ganora, D., Laio, F., and Claps, P.: Uncertainty compliant design flood estimation, Water Resources Research, 50, doi:10.1002/2013WR014981, 2014.
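The cost-benefit logic can be illustrated with a stylized stand-in (not the UNCODE cost functions): given sampling uncertainty in the 100-year flood, choose the design discharge that minimizes expected construction-plus-failure cost, and report it as a multiplication factor on the point estimate.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sampling uncertainty in the 100-year flood, e.g. from a bootstrap;
# the lognormal spread and all cost coefficients are invented.
q100 = rng.lognormal(np.log(1500.0), 0.20, 50_000)      # [m3/s]

def expected_cost(design, samples, c_build=10.0, c_fail=40.0):
    # Stylized trade-off: construction cost grows with the design discharge;
    # a penalty applies when the true quantile exceeds the design value.
    return c_build * design / 1000.0 + c_fail * np.mean(samples > design)

grid = np.linspace(1000.0, 3000.0, 401)
best = grid[np.argmin([expected_cost(d, q100) for d in grid])]
print(f"point estimate: {np.median(q100):.0f} m3/s, "
      f"cost-optimal design: {best:.0f} m3/s, "
      f"multiplication factor = {best / np.median(q100):.2f}")
```

The optimum lands above the point estimate, reproducing in miniature the paper's observation that accounting for uncertainty inflates the design flood.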
TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage-optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
TU-AB-BRB-00: New Methods to Ensure Target Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage-optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
NASA Astrophysics Data System (ADS)
Kauffman, Chad Matthew
The temperature and precipitation that describe the norm of daily, monthly, and seasonal climate conditions are ``climate normals.'' They are usually calculated from climate data covering a 30-year period and updated every 10 years; the next update will take place in 2001. Because of the advent of the Automated Surface Observing System (ASOS) in the early 1990s and the recognized temperature bias between ASOS and conventional temperature sensors, there is uncertainty about how the ASOS data should be used to calculate the 1971-2000 temperature normal. This study examined that uncertainty and offered a method to minimize it. It showed that the ASOS bias has a measurable impact on the new 30-year temperature normal. The impact varies among stations and climate regions: some stations with a cooling trend in ASOS temperature have a cooler temperature normal, while others with a warming trend have a warmer one. These quantitative evaluations of the ASOS effect for stations and regions can be used to reduce ASOS bias in temperature normals. This study also evaluated temperature normals for periods of different lengths and compared them to the 30-year normal. It showed that the difference between the normals is smaller in maritime climates than in continental temperate climates. In the former, the six-year normal describes a temperature variation similar to that of the 30-year normal; in the latter, the 18-year normal starts to resemble the temperature variation that the 30-year normal describes. These results provide a theoretical basis for applying different normals in different regions. The study further compared temperature normals for different periods and identified a seasonal shift in climate change in the southwestern U.S., where over the past 30 years the summer maximum temperature has shifted to a late summer month and the winter minimum temperature to an early winter month.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Boiselet, Aurelien; Lyon-Caen, Hélène
2016-04-01
Including faults in probabilistic seismic hazard assessment tends to increase the degree of uncertainty in the results due to the intrinsically uncertain nature of the fault data. This is especially the case in the low-to-moderate seismicity regions of Europe, where slow-slipping faults are difficult to characterize. In order to better understand the key parameters that control the uncertainty in fault-related hazard computations, we propose to build an analytic tool that provides a clear link between the different components of the fault-related hazard computations and their impact on the results. This will allow us to identify the important parameters that need to be better constrained in order to reduce the resulting uncertainty in hazard, and also to provide a more hazard-oriented strategy for collecting relevant fault parameters in the field. The tool will be illustrated through the example of the West Corinth rift fault models. Recent work performed in the gulf has shown the complexity of the normal faulting system that is accommodating the extensional deformation of the rift. A logic-tree approach is proposed to account for this complexity and the multiplicity of scientifically defendable interpretations. The nodes of the logic tree represent the different options that can be considered at each step of the fault-related seismic hazard computation. The first nodes represent the uncertainty in the geometries of the faults and their slip rates, which can derive from different data and methodologies. The subsequent node explores, for a given geometry/slip rate of faults, different earthquake rupture scenarios that may occur in the complex network of faults. The idea is to allow the possibility of several fault segments breaking together in a single rupture scenario. To build these multiple-fault-segment scenarios, two approaches are considered: one based on simple rules (i.e., minimum distance between faults) and a second that relies on physically based simulations. The following nodes represent, for each rupture scenario, different rupture forecast models (i.e., characteristic or Gutenberg-Richter) and, for a given rupture forecast, two probability models commonly used in seismic hazard assessment: Poissonian or time-dependent. The final node represents an exhaustive set of ground motion prediction equations chosen to be compatible with the region. Finally, the expected probability of exceeding a given ground motion level is computed at each site. Results will be discussed for a few specific localities of the West Corinth Gulf.
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach to multi-object recognition for homeland security and defense based intelligent sensor networks. Unlike the conventional way of information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve the mission goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the regions of interest in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects will come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
Development of an Imaging Fourier Transform Spectrometer
1986-05-01
during multiple tests or concurrently applying many identical instrument systems to a single test. These difficult, expensive, and time-consuming... processes would introduce uncertainties due to nonstationary sources and instrument instability associated with multiple firings or... multiple instruments. For even moderate spatial, spectral, and temporal resolution, none of the previously mentioned approaches is reasonable. The
Cognitive-Behavioral Intervention for Worry, Uncertainty, and Insomnia for Cancer Survivors
2017-04-04
Anxiety Disorder; Worry; Uncertainty; Sleep Disorders; Insomnia; Fatigue; Pain; Depression; Cognitive-behavioral Therapy; Psychological Intervention; Esophageal Cancer; Pancreatic Cancer; Leukemia; Lung Cancer; Multiple Myeloma; Ovarian Neoplasm; Stage III or IV Cervical or Uterine Cancer; Stage IIIB, IIIC, or IV Breast Cancer; Glioblastoma Multiforme; Relapsed Lymphoma; Stage III or IV Colorectal Cancer; Stage IIIC or IV Melanoma
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of radionuclide releases from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. This paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters that enter the corresponding models. Soft computing in this domain essentially means using fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
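The triangular membership representation in the closing sentence can be made concrete with a short sketch. This is a minimal illustration, not the author's code: the toy dilution model C = Q/(sigma * U) and all numbers are hypothetical placeholders.

```python
import numpy as np

def triangular_membership(x, a, m, b):
    """Membership grade of x for a triangular fuzzy number with support
    [a, b] and peak (grade 1) at the most plausible value m."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (m - a), (b - x) / (b - m)), 0.0, 1.0)

def alpha_cut(a, m, b, alpha):
    """Interval of parameter values whose membership grade is >= alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

# Propagate a fuzzy dispersion parameter through a toy dilution model
# C = Q / (sigma * U) by interval arithmetic on each alpha-cut.
Q, U = 1.0e3, 2.0                                     # hypothetical source term and wind speed
for alpha in (0.0, 0.5, 1.0):
    s_lo, s_hi = alpha_cut(50.0, 80.0, 120.0, alpha)  # fuzzy sigma (m)
    print(alpha, (Q / (s_hi * U), Q / (s_lo * U)))    # concentration interval
```

At alpha = 1 the interval collapses to the most plausible value; at alpha = 0 it spans the full support, which is how the epistemic spread of an input appears in the output.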
Flood resilience and uncertainty in flood risk assessment
NASA Astrophysics Data System (ADS)
Beven, K.; Leedal, D.; Neal, J.; Bates, P.; Hunter, N.; Lamb, R.; Keef, C.
2012-04-01
Flood risk assessments do not normally take account of the uncertainty in assessing flood risk; there is no requirement in the EU Floods Directive to do so. But given the generally short series (and potential non-stationarity) of flood discharges, the extrapolation to smaller exceedance probabilities may be highly uncertain. This means that flood risk mapping may also be highly uncertain, with additional uncertainties introduced by the representation of flood plain and channel geometry, conveyance and infrastructure. This suggests that decisions about flood plain management should be based on the exceedance probability of risk rather than the deterministic hazard maps that are common in most EU countries. Some examples are given from two case studies in the UK where a framework for good practice in assessing uncertainty in flood risk mapping has been produced as part of the Flood Risk Management Research Consortium and Catchment Change Network projects. This framework provides a structure for the communication and audit of assumptions about uncertainties.
Unified analytic representation of physical sputtering yield
NASA Astrophysics Data System (ADS)
Janev, R. K.; Ralchenko, Yu. V.; Kenmotsu, T.; Hosaka, K.
2001-03-01
A generalized energy parameter η = η(ɛ, δ) and a normalized sputtering yield Ỹ(η), where ɛ = E/E_TF and δ = E_th/E_TF, are introduced to achieve a unified representation of all available experimental sputtering data at normal ion incidence. The sputtering data in the new Ỹ(η) representation retain their original uncertainties. The Ỹ(η) data can be fitted to a simple three-parameter analytic expression with an rms deviation of 32%, well within the uncertainties of the original data. Both η and Ỹ(η) have the correct physical behavior in the threshold and high-energy regions. The available theoretical data produced by the TRIM.SP code can also be represented by the same single analytic function Ỹ(η) with similar accuracy.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
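The first- and second-order moment approximations can be written out compactly. The sketch below assumes independent normal inputs and swaps the Euler code for a hypothetical closed-form surrogate (0.5 * rho * V^2 * alpha), since the actual CFD response cannot be reproduced here; the Monte Carlo comparison mirrors the paper's validation step.

```python
import numpy as np

def moment_propagation(f, grad, hess_diag, mu, sigma):
    """Approximate output moments of f(x) for independent, normally
    distributed inputs with means mu and standard deviations sigma."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    g = np.asarray(grad(mu), float)       # first-order sensitivity derivatives
    h = np.asarray(hess_diag(mu), float)  # diagonal second derivatives
    mean_1st = f(mu)
    mean_2nd = f(mu) + 0.5 * np.sum(h * sigma**2)   # second-order mean
    sd_1st = np.sqrt(np.sum(g**2 * sigma**2))       # first-order std. deviation
    return mean_1st, mean_2nd, sd_1st

# Hypothetical surrogate output: L = 0.5 * rho * V^2 * alpha
f = lambda x: 0.5 * x[0] * x[1]**2 * x[2]
grad = lambda x: [0.5 * x[1]**2 * x[2], x[0] * x[1] * x[2], 0.5 * x[0] * x[1]**2]
hess_diag = lambda x: [0.0, x[0] * x[2], 0.0]
mu, sigma = [1.2, 50.0, 0.05], [0.01, 1.0, 0.002]
print(moment_propagation(f, grad, hess_diag, mu, sigma))

# Monte Carlo check of the approximate moments
rng = np.random.default_rng(0)
xs = rng.normal(mu, sigma, size=(100_000, 3))
ys = 0.5 * xs[:, 0] * xs[:, 1]**2 * xs[:, 2]
print(ys.mean(), ys.std())
```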
Personal growth, symptoms, and uncertainty in community-residing adults with heart failure.
Overbaugh, Kristen J; Parshall, Mark B
Personal growth has not been studied extensively in heart failure (HF). The aim of this study was to characterize personal growth in HF and its relationships with symptom burden, uncertainty, and demographic and clinical factors. Associations among personal growth, uncertainty, symptom burden, and clinical and demographic variables were examined in adult outpatients with HF using bivariate correlations and multiple regression. Participants (N = 103; 76% male, mean age = 74 years, 97% New York Heart Association classes II and III) reported moderate levels of personal growth, uncertainty, and symptom burden. Personal growth was weakly correlated with age and symptom burden but not with other study variables. In a regression model, age, sex, ethnicity, disease severity, time since diagnosis, symptom burden, and uncertainty were not significant independent correlates of personal growth. Community-residing patients with HF report moderate personal growth that is not explained by uncertainty, symptom burden, or demographic and clinical variables. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, L; Soldner, A; Kirk, M
Purpose: The beam range uncertainty presents a special challenge for proton therapy. Novel technologies currently under development offer strategies to reduce the range uncertainty [1,2]. This work quantifies the potential advantages that could be realized by such a reduction for dosimetrically challenging chordomas at the base of skull. Therapeutic improvement was assessed by evaluating tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP). Methods: Treatment plans were made for a modulated-scanned proton delivery technique using the Eclipse treatment planning system. The prescription dose was 7920 cGy to the CTV. Three different range uncertainty scenarios were considered: 5 mm (3.5% of the beam range + 1 mm, representing current clinical practice, "Curr"), 2 mm (1.3%), and 1 mm (0.7%). For each of 4 patients, 3 different PTVs were defined via uniform expansion of the CTV by the value of the range uncertainty. Tumor control probability (TCP) and normal tissue complication probabilities (NTCPs) for organs-at-risk (OARs) were calculated using the Lyman-Kutcher-Burman [3] formalism and published model parameters [Terahara [4]; QUANTEC S10; Burman, Red Journal v21 p123]. Our plan optimization strategy was to achieve PTV coverage close to prescription while maintaining OAR NTCP values at or better than the Curr plan. Results: The average TCP values for the 5, 2, and 1 mm range uncertainty scenarios are 51%, 55% and 65%. The improvement in TCP for patients was between 4 and 30%, depending primarily on the proximity of the GTV to OARs. The average NTCPs for the brainstem and cord were about 4% and 1%, respectively, for all target margins. Conclusion: For base of skull chordomas, reduced target margins can substantially increase the TCP without increasing the NTCP. This work demonstrates the potential significance of a reduction in the range uncertainty for proton beams.
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every parameter value, a measurement uncertainty should be given by all institutions that have been accredited; this value shows the reliability of the measurement. The GUM, published by NIST, contains directions for evaluating uncertainty. Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both of them offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3) to a certificate; (2) triangular distribution, for values near to the same point (u(x) = a/√6); (3) normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) confidence interval.
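The divisor rules quoted above translate directly into a few lines of code. A minimal sketch; the function name and example values are illustrative only.

```python
import math

def standard_uncertainty(a, distribution):
    """Standard uncertainty u(x) from a certificate half-width a, using the
    divisors quoted in the abstract (GUM / Eurachem CG4 conventions)."""
    if distribution == "rectangular":   # limits +/- a, no confidence level given
        return a / math.sqrt(3)
    if distribution == "triangular":    # values clustered near the central point
        return a / math.sqrt(6)
    raise ValueError("normal case: use the quoted s, s/sqrt(n), or CV% directly")

# A reference-material certificate stating +/- 0.05 units with no coverage info:
print(standard_uncertainty(0.05, "rectangular"))   # ~0.029
print(standard_uncertainty(0.05, "triangular"))    # ~0.020
```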
NASA Astrophysics Data System (ADS)
Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie
2018-05-01
One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
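A minimal sketch of the sampling step described above, with hypothetical nominal rate coefficients and uncertainty factors; each of the 2000 sampled sets would drive one plasma-chemistry model run, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder nominal rate coefficients and assumed uncertainty factors f:
# a sampled coefficient is k_nom * exp(eps) with eps ~ Normal(0, ln f), so the
# ~70% confidence interval spans roughly [k_nom/f, k_nom*f].
k_nom = np.array([1.0e-15, 3.2e-11, 7.5e-13])   # m^3/s, illustrative reactions
f = np.array([2.0, 1.3, 1.5])
sigma = np.log(f)

samples = k_nom * np.exp(rng.normal(0.0, sigma, size=(2000, k_nom.size)))

# The spread of model outputs (conversions, selectivities) across the 2000
# runs then quantifies the uncertainty due to the rate-coefficient data.
print(samples.shape, np.median(samples, axis=0))
```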
Potential exposure to human prescription pharmaceutical residues from wastewater
Pharmaceuticals in the environment (PiE) pose a complicated problem, involving multiple dissimilar compounds, multiple routes of potential exposure, and a range of potentially affected organisms that span the tree of life. Key uncertainties include not knowing which of the thous...
NASA Astrophysics Data System (ADS)
Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk
2010-06-01
In this paper a method is presented to accurately measure the radius of curvature of different types of curved surfaces with radii of curvature of 38 000, 18 000 and 8000 mm using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by a flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in measurement were calculated by means of ray-tracing simulations and the uncertainty budget was estimated to within λ/40.
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
The North American Regional Climate Change Assessment Program (NARCCAP): Status and results
NASA Astrophysics Data System (ADS)
Arritt, R.
2009-04-01
NARCCAP is an international program that is generating projections of climate change for the U.S., Canada, and northern Mexico at decision-relevant regional scales. NARCCAP uses multiple limited-area regional climate models (RCMs) nested within multiple atmosphere-ocean general circulation models (AOGCMs). The use of multiple regional and global models allows us to investigate the uncertainty in model responses to future emissions (here, the A2 SRES scenario). The project also includes global time-slice experiments at the same discretization (50 km) using the GFDL atmospheric model (AM2.1) and the NCAR atmospheric model (CAM3). Phase I of the experiment uses the regional models nested within reanalysis in order to establish uncertainty attributable to the RCMs themselves. Phase II of the project then nests the RCMs within results from the current and future runs of the AOGCMs to explore the cascade of uncertainty from the global to the regional models. Phase I has been completed and the results to be shown include findings that spectral nudging is beneficial in some regions but not in others. Phase II is nearing completion and some preliminary results will be shown.
Robust partial integrated guidance and control for missiles via extended state observer.
Wang, Qing; Ran, Maopeng; Dong, Chaoyang
2016-11-01
A novel extended state observer (ESO) based control is proposed for a class of nonlinear systems subject to multiple uncertainties, and then applied to partial integrated guidance and control (PIGC) design for a missile. The proposed control strategy incorporates both an ESO and an adaptive sliding mode control law. The multiple uncertainties are treated as an extended state of the plant, which is estimated using the ESO and compensated for in the control action in real time. Based on the output of the ESO, the resulting adaptive sliding mode control law is inherently continuous and differentiable. A strict proof is given to show that the estimation error of the ESO can be made arbitrarily small in a finite time. In addition, the adaptive sliding mode control law achieves finite-time convergence to a neighborhood of the origin, and an exact expression for the convergence region is given. Finally, simulations are conducted on the planar missile-target engagement geometry. The effectiveness of the proposed control strategy in enhancing interception performance and improving robustness against multiple uncertainties is demonstrated. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
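The ESO idea can be illustrated with a linear third-order observer for a generic double-integrator plant. This is a sketch of the general technique only, not the paper's PIGC design; the bandwidth parameterization and all numbers are assumptions.

```python
import numpy as np

def eso_step(z, y, u, dt, b0, beta):
    """One Euler step of a linear ESO for a plant y'' = f + b0*u, where the
    lumped uncertainty f is treated as the extended state; z estimates
    [y, y', f]."""
    e = y - z[0]
    return np.array([z[0] + dt * (z[1] + beta[0] * e),
                     z[1] + dt * (z[2] + beta[1] * e + b0 * u),
                     z[2] + dt * (beta[2] * e)])

# Bandwidth parameterization: all observer poles placed at -w0
w0 = 20.0
beta = [3 * w0, 3 * w0**2, w0**3]

dt, b0 = 1e-3, 1.0
x = np.array([0.0, 0.0])   # true plant state [y, y']
z = np.zeros(3)            # observer state
for k in range(5000):
    t = k * dt
    d = 2.0 * np.sin(3.0 * t)                  # "unknown" disturbance f
    u = 0.0                                    # observer test only, open loop
    x = x + dt * np.array([x[1], d + b0 * u])  # integrate the true plant
    z = eso_step(z, x[0], u, dt, b0, beta)
print("f_hat:", z[2], " f_true:", 2.0 * np.sin(3.0 * t))
```

With the observer poles fast relative to the disturbance, the third state converges to the lumped uncertainty, which a controller can then cancel in real time.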
Hostetter, Nathan; Gardner, Beth; Evans, Allen F.; Cramer, Bradley M.; Payton, Quinn; Collis, Ken; Roby, Daniel D.
2017-01-01
We developed a state-space mark-recapture-recovery model that incorporates multiple recovery types and state uncertainty to estimate survival of an anadromous fish species. We apply the model to a dataset of out-migrating juvenile steelhead trout (Oncorhynchus mykiss) tagged with passive integrated transponders, recaptured during outmigration, and recovered on bird colonies in the Columbia River basin (2008-2014). Recoveries on bird colonies are often ignored in survival studies because the river reach of mortality is often unknown, which we model as a form of state uncertainty. Median outmigration survival from release to the lower river (river kilometer 729 to 75) ranged from 0.27 to 0.35, depending on year. Recovery probabilities were frequently >0.20 in the first river reach following tagging, indicating that one out of five fish that died in that reach was recovered on a bird colony. Integrating dead recovery data provided increased parameter precision, estimation of where birds consumed fish, and survival estimates across larger spatial scales. More generally, these modeling approaches provide a flexible framework to integrate multiple sources of tag recovery data into mark-recapture studies.
On decentralized adaptive full-order sliding mode control of multiple UAVs.
Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin
2017-11-01
In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface is designed in a decentralized manner to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a certain desired geometric pattern in three-dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering-free property of the sliding mode controller. In addition, a robust adaptive technique is integrated in the decentralized chattering-free sliding mode control design in order to handle unknown bounded uncertainties, without requiring a priori knowledge of bounds on the system uncertainties as assumed in conventional chattering-free control methods. Subsequently, the system robustness as well as the stability of the decentralized full-order sliding mode control of multiple UAVs is synthesized. Numerical simulation results illustrate the effectiveness of the proposed control framework in achieving robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Collision judgment when using an augmented-vision head-mounted display device
Luo, Gang; Woods, Russell L; Peli, Eli
2016-01-01
Purpose We have developed a device to provide an expanded visual field to patients with tunnel vision by superimposing minified edge images of the wide scene, in which objects appear closer to the heading direction than they really are. We conducted experiments in a virtual environment to determine if users would overestimate collision risks. Methods Given simulated scenes of walking or standing with intention to walk towards a given direction (intended walking) in a shopping mall corridor, participants (12 normally sighted and 7 with tunnel vision) reported whether they would collide with obstacles appearing at different offsets from variable walking paths (or intended directions), with and without the device. The collision envelope (CE), a personal space based on perceived collision judgments, and judgment uncertainty (variability of response) were measured. When the device was used, combinations of two image scales (5× minified and 1:1) and two image types (grayscale or edge images) were tested. Results Image type did not significantly alter collision judgment (p>0.7). Compared to the without-device baseline, minification did not significantly change the CE of normally sighted subjects for simulated walking (p=0.12), but increased CE by 30% for intended walking (p<0.001). Their uncertainty was not affected by minification (p>0.25). For the patients, neither CE nor uncertainty was affected by minification (p>0.13) in both walking conditions. Baseline CE and uncertainty were greater for patients than normally-sighted subjects in simulated walking (p=0.03), but the two groups were not significantly different in all other conditions. Conclusion Users did not substantially overestimate collision risk, as the 5× minified images had only limited impact on collision judgments either during walking or before starting to walk. PMID:19458339
Collision judgment when using an augmented-vision head-mounted display device.
Luo, Gang; Woods, Russell L; Peli, Eli
2009-09-01
A device was developed to provide an expanded visual field to patients with tunnel vision by superimposing minified edge images of the wide scene, in which objects appear closer to the heading direction than they really are. Experiments were conducted in a virtual environment to determine whether users would overestimate collision risks. Given simulated scenes of walking or standing with intention to walk toward a given direction (intended walking) in a shopping mall corridor, participants (12 normally sighted and 7 with tunnel vision) reported whether they would collide with obstacles appearing at different offsets from variable walking paths (or intended directions), with and without the device. The collision envelope (CE), a personal space based on perceived collision judgments, and judgment uncertainty (variability of response) were measured. When the device was used, combinations of two image scales (5× minified and 1:1) and two image types (grayscale or edge images) were tested. Image type did not significantly alter collision judgment (P > 0.7). Compared to the without-device baseline, minification did not significantly change the CE of normally sighted subjects for simulated walking (P = 0.12), but increased CE by 30% for intended walking (P < 0.001). Their uncertainty was not affected by minification (P > 0.25). For the patients, neither CE nor uncertainty was affected by minification (P > 0.13) in both walking conditions. Baseline CE and uncertainty were greater for patients than normally sighted subjects in simulated walking (P = 0.03), but the two groups were not significantly different in all other conditions. Users did not substantially overestimate collision risk, as the 5× minified images had only limited impact on collision judgments either during walking or before starting to walk.
Category-based predictions: influence of uncertainty and feature associations.
Ross, B H; Murphy, G L
1996-05-01
Four experiments examined how people make inductive inferences using categories. Subjects read stories in which 2 categories were mentioned as possible identities of an object. The less likely category was varied to determine if people were using it, as well as the most likely category, in making predictions about the object. Experiment 1 showed that even when categorization uncertainty was emphasized, subjects used only 1 category as the basis for their prediction. Experiments 2-4 examined whether people would use multiple categories for making predictions when the feature to be predicted was associated to the less likely category. Multiple categories were used in this case, but only in limited circumstances; furthermore, using multiple categories in 1 prediction did not cause subjects to use them for subsequent predictions. The results increase the understanding of how categories are used in inductive inference.
NASA Astrophysics Data System (ADS)
Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua
2018-01-01
Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of the uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology by using multiple GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under the RCP4.5 and RCP8.5 emission scenarios were adopted to adequately represent this uncertainty envelope, and fifty-one 30-year future periods moving from 2021 to 2100 at a 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including means and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while the trend was analyzed using the Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and the hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change of mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value of the multi-model ensemble, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under a high emission scenario (RCP8.5) was much larger than that under a relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty did not show significant periodicity at the P = .05 significance level, but their temporal variation could be well modeled using a fourth-order polynomial. Overall, this study further emphasized the importance of using multiple GCMs for studying climate change impacts on hydrology. Furthermore, the temporal variation of the uncertainty sourced from GCMs should be given more attention.
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
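As an illustration of the propagation idea, the sketch below pushes assumed pressure uncertainties through the standard isentropic Mach relation by Monte Carlo; the NTF's actual data-reduction equations and uncertainty magnitudes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 1.4

def mach(p0, p):
    """Mach number from total and static pressure (isentropic relation)."""
    return np.sqrt(2.0 / (gamma - 1.0) * ((p0 / p)**((gamma - 1.0) / gamma) - 1.0))

# Hypothetical measured values and standard uncertainties (Pa)
p0, u_p0 = 120_000.0, 60.0
p,  u_p  = 85_000.0, 80.0

n = 200_000
m_all = mach(rng.normal(p0, u_p0, n), rng.normal(p, u_p, n))
m_static_only = mach(p0, rng.normal(p, u_p, n))   # vary static pressure alone
print(f"M = {m_all.mean():.4f} +/- {m_all.std():.4f}")
print(f"static-pressure contribution alone: {m_static_only.std():.4f}")
```

Comparing the full spread with the static-pressure-only spread is the kind of decomposition behind the observation that Mach number uncertainty is dominated by static pressure variation.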
Benchmarking observational uncertainties for hydrology (Invited)
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.
2013-12-01
There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
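A minimal sketch of the Latin hypercube sampling step, assuming uncorrelated relative 1-sigma uncertainties on a few cross sections; real covariance libraries include energy and cross-reaction correlations, which this illustration omits.

```python
import numpy as np
from scipy.stats import qmc, norm

rel_sd = np.array([0.02, 0.05, 0.01])   # assumed relative standard deviations
n_libs = 100                            # number of sampled data libraries

sampler = qmc.LatinHypercube(d=rel_sd.size, seed=7)
u = sampler.random(n=n_libs)            # stratified uniform samples in [0, 1)
factors = 1.0 + norm.ppf(u) * rel_sd    # multiplicative normal perturbations

# Each row scales the evaluated cross sections to build one application
# library; running the criticality code with every library then gives the
# nuclear-data-induced spread in k-effective.
print(factors.mean(axis=0), factors.std(axis=0))
```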
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
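The principal-component summary of sampled calibration files can be sketched as follows; the ensemble here is synthetic, standing in for samples of plausible effective-area curves.

```python
import numpy as np

def summarize_calibration(samples, n_comp=3):
    """Compress an ensemble of plausible calibration curves (n_files x n_bins)
    into a mean curve plus a few scaled principal components."""
    mean = samples.mean(axis=0)
    _, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
    scale = s[:n_comp] / np.sqrt(samples.shape[0] - 1)   # component std devs
    return mean, scale[:, None] * vt[:n_comp]

def draw_calibration(mean, comps, rng):
    """Draw a new plausible curve: mean plus N(0,1)-weighted components."""
    return mean + rng.standard_normal(comps.shape[0]) @ comps

# Synthetic ensemble of 500 'effective area' curves over 100 energy bins
rng = np.random.default_rng(3)
base = 1.0 + 0.1 * np.sin(np.linspace(0.0, 3.0, 100))
ensemble = base * (1.0 + 0.03 * rng.standard_normal((500, 1)))
mean, comps = summarize_calibration(ensemble)
print(draw_calibration(mean, comps, rng).shape)   # (100,)
```

Drawing curves this way inside each fitting iteration is far cheaper than carrying the full file ensemble, which is the computational point of the PCA summary.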
ERIC Educational Resources Information Center
Cohen, Aaron
2006-01-01
This study examined the relation between multiple commitments (organizational commitment, occupational commitment, job involvement, and group commitment), ethnicity, and cultural values (individualism/collectivism, power distance, uncertainty avoidance, and masculinity/femininity) with organizational citizenship behaviors (OCB) and in-role…
This presentation, Linking Regional Aerosol Emission Changes with Multiple Impact Measures through Direct and Cloud-Related Forcing Estimates, was given at the STAR Black Carbon 2016 Webinar Series: Accounting for Impact, Emissions, and Uncertainty.
NASA Astrophysics Data System (ADS)
Molla, Aslam Ali; Debnath, Dipak; Chakrabarti, Sandip Kumar; Mondal, Santanu; Jana, Arghajit; Chatterjee, Debjit
2016-07-01
The black hole X-ray binary H1743-322 has been observed during almost every X-ray mission since the inception of X-ray astronomy. Like other black hole candidates, H1743-322 is highly variable. Using a self-consistent accretion flow model (TCAF), we study its spectral evolution during the 2010 and 2011 outbursts while keeping the model normalization fixed at a single value (14.5). Since the model normalization depends only on the mass, distance and inclination angle of the black hole, it should be a constant, and keeping it frozen allows us to calculate the mass of the black hole. The only uncertainty in the mass and normalization measurements comes from the uncertainty in the distance and inclination angle. Here we present spectral analysis of H1743-322 during the 2010 and 2011 outbursts and conclude that the mass of the black hole is within the range 9-13 M_Sun.
Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection
Susanna L. Melson; Mark E. Harmon; Jeremy S. Fried; James B. Domingo
2011-01-01
Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH)....
Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop
NASA Technical Reports Server (NTRS)
Messina, E. R.; Meystel, A. M.
2002-01-01
Contents include the following: Performance Metrics; Performance of Multiple Agents; Performance of Mobility Systems; Performance of Planning Systems; General Discussion Panel 1; Uncertainty of Representation I; Performance of Robots in Hazardous Domains; Modeling Intelligence; Modeling of Mind; Measuring Intelligence; Grouping: A Core Procedure of Intelligence; Uncertainty in Representation II; Towards Universal Planning/Control Systems.
Modeling uncertainty in requirements engineering decision support
NASA Technical Reports Server (NTRS)
Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.
2005-01-01
One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements and to deal with the combination of such information from multiple stakeholders.
Reges, José E. O.; Salazar, A. O.; Maitelli, Carla W. S. P.; Carvalho, Lucas G.; Britto, Ursula J. B.
2016-01-01
This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved. PMID:27420068
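A sketch of the Ramey (exponential) temperature model that underlies the method, with an assumed geothermal gradient, surface temperature, and measurement noise; converting the fitted relaxation distance A into an injection rate requires the wellbore heat-transfer coefficients, which are omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def ramey_profile(z, A, T_in, g_T=0.03, T_surf=25.0):
    """Injection-fluid temperature vs. depth z for the Ramey model: the fluid
    relaxes toward the geothermal line T_surf + g_T*z with relaxation
    distance A, which increases with the mass injection rate."""
    return (T_surf + g_T * z) - g_T * A + (T_in - T_surf + g_T * A) * np.exp(-z / A)

# Synthetic 'measured' profile for a well with an equivalent A of 800 m
z = np.linspace(0.0, 2000.0, 50)             # depth, m
rng = np.random.default_rng(5)
T_obs = ramey_profile(z, A=800.0, T_in=30.0) + rng.normal(0.0, 0.2, z.size)

# Fit A and the inlet temperature; the parameter covariance gives the
# 1-sigma uncertainties, the analogue of the paper's uncertainty analysis.
popt, pcov = curve_fit(ramey_profile, z, T_obs, p0=(500.0, 28.0))
print("A =", popt[0], "T_in =", popt[1], "1-sigma:", np.sqrt(np.diag(pcov)))
```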
Wao, Hesborn; Mhaskar, Rahul; Kumar, Ambuj; Miladinovic, Branko; Guterbock, Thomas; Hozo, Iztok; Djulbegovic, Benjamin
2014-10-01
To investigate factors, which influence institutional review boards' (IRBs') decision to approve or not approve clinical studies, a nationwide vignette-based online survey of IRB members was conducted. A factorial design was used, whereby seven aspects of each hypothetical study were randomly varied in 15 phrases in each vignette to produce unique vignettes. Participants indicated the degree of study approval and described factors influencing approval decision. Qualitative responses were thematically content analyzed. Sixteen themes were obtained from 208 participants from 42 institutions. Uncertainty, adherence, study design, and harms were frequently and intensely cited to influence study approval. Analysis of two extreme subgroups (approvers vs. nonapprovers) showed that uncertainty influenced approval decisions, odds ratios (OR) = 3.5 (95% confidence interval [CI], 1.3-9.8) and OR = 3.2 (95% CI, 1.1-8.9), respectively, based on theme frequency and theme intensity, ignoring multiple observations per person. Taking into consideration multiple observations per person, similar results were obtained for uncertainty: OR = 8.9 (95% CI, 0.93-85.4). Perceived uncertainty about benefits and harms of a proposed intervention is a key driver in IRB members' approval of clinical trials. This, in turn, calls for improved standardization in the communications of information on benefits and harms in the research protocols considered by the IRBs. Published by Elsevier Inc.
Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources
NASA Technical Reports Server (NTRS)
Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.
2014-01-01
Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
Uncertainty in hydrological signatures for gauged and ungauged catchments
NASA Astrophysics Data System (ADS)
Westerberg, Ida K.; Wagener, Thorsten; Coxon, Gemma; McMillan, Hilary K.; Castellarin, Attilio; Montanari, Alberto; Freer, Jim
2016-03-01
Reliable information about hydrological behavior is needed for water-resource management and scientific investigations. Hydrological signatures quantify catchment behavior as index values, and can be predicted for ungauged catchments using a regionalization procedure. The prediction reliability is affected by data uncertainties for the gauged catchments used in prediction and by uncertainties in the regionalization procedure. We quantified signature uncertainty stemming from discharge data uncertainty for 43 UK catchments and propagated these uncertainties in signature regionalization, while accounting for regionalization uncertainty with a weighted-pooling-group approach. Discharge uncertainty was estimated using Monte Carlo sampling of multiple feasible rating curves. For each sampled rating curve, a discharge time series was calculated and used in deriving the gauged signature uncertainty distribution. We found that the gauged uncertainty varied with signature type, local measurement conditions and catchment behavior, with the highest uncertainties (median relative uncertainty ±30-40% across all catchments) for signatures measuring high- and low-flow magnitude and dynamics. Our regionalization method allowed assessing the role and relative magnitudes of the gauged and regionalized uncertainty sources in shaping the signature uncertainty distributions predicted for catchments treated as ungauged. We found that (1) if the gauged uncertainties were neglected there was a clear risk of overconditioning the regionalization inference, e.g., by attributing catchment differences resulting from gauged uncertainty to differences in catchment behavior, and (2) uncertainty in the regionalization results was lower for signatures measuring flow distribution (e.g., mean flow) than flow dynamics (e.g., autocorrelation), and for average flows (and then high flows) compared to low flows.
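The gauged-uncertainty step can be sketched as follows, with illustrative power-law rating-curve parameters sampled independently (in practice the feasible curves are constrained jointly by the stage-discharge gaugings).

```python
import numpy as np

rng = np.random.default_rng(11)

# One year of daily stage values (m) for a hypothetical catchment
stage = 0.5 + 0.4 * np.abs(rng.standard_normal(365))

# Feasible rating curves Q = a * (h - h0)^b, sampled around assumed values
a = rng.normal(5.0, 0.5, 1000)
b = rng.normal(1.6, 0.1, 1000)
h0 = rng.normal(0.1, 0.02, 1000)

# Each sampled curve yields a discharge series and hence one signature value
signatures = [np.mean(ai * np.clip(stage - h0i, 1e-6, None) ** bi)
              for ai, bi, h0i in zip(a, b, h0)]
lo, med, hi = np.percentile(signatures, [2.5, 50, 97.5])
print(f"mean flow: {med:.2f} m3/s, 95% interval [{lo:.2f}, {hi:.2f}]")
```

The width of the resulting interval is the gauged signature uncertainty that the regionalization then has to carry alongside its own uncertainty.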
A taxonomy of medical uncertainties in clinical genome sequencing.
Han, Paul K J; Umstead, Kendall L; Bernhardt, Barbara A; Green, Robert C; Joffe, Steven; Koenig, Barbara; Krantz, Ian; Waterston, Leo B; Biesecker, Leslie G; Biesecker, Barbara B
2017-08-01
Clinical next-generation sequencing (CNGS) is introducing new opportunities and challenges into the practice of medicine. Simultaneously, these technologies are generating uncertainties of an unprecedented scale that laboratories, clinicians, and patients are required to address and manage. We describe in this report the conceptual design of a new taxonomy of uncertainties around the use of CNGS in health care. Interviews to delineate the dimensions of uncertainty in CNGS were conducted with genomics experts and themes were extracted in order to expand on a previously published three-dimensional taxonomy of medical uncertainty. In parallel, we developed an interactive website to disseminate the CNGS taxonomy to researchers and engage them in its continued refinement. The proposed taxonomy divides uncertainty along three axes-source, issue, and locus-and further discriminates the uncertainties into five layers with multiple domains. Using a hypothetical clinical example, we illustrate how the taxonomy can be applied to findings from CNGS and used to guide stakeholders through interpretation and implementation of variant results. The utility of the proposed taxonomy lies in promoting consistency in describing dimensions of uncertainty in publications and presentations, to facilitate research design and management of the uncertainties inherent in the implementation of CNGS.Genet Med advance online publication 19 January 2017.
A Taxonomy of Medical Uncertainties in Clinical Genome Sequencing
Han, Paul K. J.; Umstead, Kendall L.; Bernhardt, Barbara A.; Green, Robert C.; Joffe, Steven; Koenig, Barbara; Krantz, Ian; Waterston, Leo B.; Biesecker, Leslie G.; Biesecker, Barbara B.
2017-01-01
Purpose Clinical next generation sequencing (CNGS) is introducing new opportunities and challenges into the practice of medicine. Simultaneously, these technologies are generating uncertainties of unprecedented scale that laboratories, clinicians, and patients are required to address and manage. We describe in this report the conceptual design of a new taxonomy of uncertainties around the use of CNGS in health care. Methods Interviews to delineate the dimensions of uncertainty in CNGS were conducted with genomics experts, and themes were extracted in order to expand upon a previously published three-dimensional taxonomy of medical uncertainty. In parallel we developed an interactive website to disseminate the CNGS taxonomy to researchers and engage them in its continued refinement. Results The proposed taxonomy divides uncertainty along three axes: source, issue, and locus, and further discriminates the uncertainties into five layers with multiple domains. Using a hypothetical clinical example, we illustrate how the taxonomy can be applied to findings from CNGS and used to guide stakeholders through interpretation and implementation of variant results. Conclusion The utility of the proposed taxonomy lies in promoting consistency in describing dimensions of uncertainty in publications and presentations, to facilitate research design and management of the uncertainties inherent in the implementation of CNGS. PMID:28102863
Mønster, Jacob G; Samuelsson, Jerker; Kjeldsen, Peter; Rella, Chris W; Scheutz, Charlotte
2014-08-01
Using a dual species methane/acetylene instrument based on cavity ring down spectroscopy (CRDS), the dynamic plume tracer dispersion method for quantifying the emission rate of methane was successfully tested in four measurement campaigns: (1) controlled methane and trace gas release with different trace gas configurations, (2) landfill with unknown emission source locations, (3) landfill with closely located emission sources, and (4) comparing with a Fourier transform infrared spectroscopy (FTIR) instrument using multiple trace gases for source separation. The new real-time, high precision instrument can measure methane plumes more than 1.2 km away from small sources (about 5 kg h⁻¹) in urban areas with a measurement frequency allowing plume crossing at normal driving speed. The method can be used for quantification of total methane emissions from diffuse area sources down to 1 kg per hour and can be used to quantify individual sources with the right choice of wind direction and road distance. The placement of the trace gas is important for obtaining correct quantification, and an uncertainty of up to 36% can be incurred when the trace gas is not co-located with the methane source. Measurements made at greater distances are less sensitive to errors in trace gas placement, and model calculations showed an uncertainty of less than 5% in both urban and open-country settings for placing the trace gas 100 m from the source, when measurements were done more than 3 km away. Using the ratio of the integrated plume concentrations of tracer gas and methane gives the most reliable results for measurements at various distances to the source, compared to the ratio of the highest concentration in the plume, the direct concentration ratio and using a Gaussian plume model. Under suitable weather and road conditions, the CRDS system can quantify the emission from different sources located close to each other using only one kind of trace gas due to the high time resolution, while the FTIR system can measure multiple trace gases but with a lower time resolution. Copyright © 2014 Elsevier Ltd. All rights reserved.
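The tracer ratio calculation ('ratio of the integrated plume concentrations') reduces to a short computation; the release rate, molar masses and the synthetic transect below are illustrative, with acetylene assumed as the tracer.

```python
import numpy as np

def tracer_dilution_rate(c_ch4, c_tracer, q_tracer, m_ch4=16.04, m_tracer=26.04):
    """Methane emission rate (kg/h) from one plume transect: tracer release
    rate times the ratio of integrated, background-corrected concentrations,
    converted from molar to mass units. On a common sampling grid the grid
    spacing cancels, so plain sums stand in for the integrals."""
    return q_tracer * (c_ch4.sum() / c_tracer.sum()) * (m_ch4 / m_tracer)

# Synthetic Gaussian transect (ppb above background), same shape for both gases
x = np.linspace(-300.0, 300.0, 601)
plume = np.exp(-x**2 / (2.0 * 80.0**2))
print(tracer_dilution_rate(40.0 * plume, 8.0 * plume, q_tracer=2.0))  # ~6.2 kg/h
```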
Martin, Olwenn V; Scholze, Martin; Kortenkamp, Andreas
2013-07-01
Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can account both for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better-founded approach to mixture risk assessment. PMID:23816180
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify the most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and to quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find the optimal locations and the least amount of data by maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
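The entropy bookkeeping at the heart of this design criterion can be sketched in a few lines. The following is a minimal illustration (not the authors' code) of scoring one candidate observation location by the expected decrease in Shannon entropy of the posterior model probabilities; the three models, their Gaussian predictions, and the prior weights are invented.

```python
# Minimal sketch of a Box-Hill-style design score via BMA;
# all numbers are hypothetical.
import numpy as np
from scipy import stats

def expected_entropy_decrease(prior, means, sds, n_draws=20000, seed=0):
    """prior[k]: posterior probability of model k given existing data;
    means[k], sds[k]: model k's Gaussian prediction of the new datum."""
    rng = np.random.default_rng(seed)
    h0 = -np.sum(prior * np.log(prior))        # entropy before the datum
    ks = rng.choice(len(prior), size=n_draws, p=prior)  # BMA predictive:
    y = rng.normal(means[ks], sds[ks])                  # model, then datum
    like = stats.norm.pdf(y[:, None], loc=means, scale=sds)
    post = prior * like
    post /= post.sum(axis=1, keepdims=True)    # updated model weights
    h1 = -np.sum(post * np.log(post), axis=1)  # entropy after the datum
    return h0 - h1.mean()

print(expected_entropy_decrease(np.array([0.5, 0.3, 0.2]),
                                np.array([1.0, 2.0, 4.0]),
                                np.array([0.8, 0.8, 0.8])))
```

Candidate locations whose predicted data separate the models more cleanly yield a larger expected entropy decrease and would be ranked higher by the design.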
Incorporating climate change into ecosystem service assessments and decisions: a review.
Runting, Rebecca K; Bryan, Brett A; Dee, Laura E; Maseyk, Fleur J F; Mandle, Lisa; Hamel, Perrine; Wilson, Kerrie A; Yetka, Kathleen; Possingham, Hugh P; Rhodes, Jonathan R
2017-01-01
Climate change is having a significant impact on ecosystem services and is likely to become increasingly important as this phenomenon intensifies. Future impacts can be difficult to assess as they often involve long timescales, dynamic systems with high uncertainties, and are typically confounded by other drivers of change. Despite a growing literature on climate change impacts on ecosystem services, no quantitative syntheses exist. Hence, we lack an overarching understanding of the impacts of climate change, how they are being assessed, and the extent to which other drivers, uncertainties, and decision making are incorporated. To address this, we systematically reviewed the peer-reviewed literature that assesses climate change impacts on ecosystem services at subglobal scales. We found that the impact of climate change on most types of services was predominantly negative (59% negative, 24% mixed, 4% neutral, 13% positive), but varied across services, drivers, and assessment methods. Although uncertainty was usually incorporated, there were substantial gaps in the sources of uncertainty included, along with the methods used to incorporate them. We found that relatively few studies integrated decision making, and even fewer studies aimed to identify solutions that were robust to uncertainty. For management or policy to ensure the delivery of ecosystem services, integrated approaches that incorporate multiple drivers of change and account for multiple sources of uncertainty are needed. This is undoubtedly a challenging task, but ignoring these complexities can result in misleading assessments of the impacts of climate change, suboptimal management outcomes, and the inefficient allocation of resources for climate adaptation. © 2016 John Wiley & Sons Ltd.
Howell, J.E.; Moore, C.T.; Conroy, M.J.; Hamrick, R.G.; Cooper, R.J.; Thackston, R.E.; Carroll, J.P.
2009-01-01
Large-scale habitat enhancement programs for birds are becoming more widespread; however, most lack monitoring to resolve uncertainties and enhance program impact over time. Georgia's Bobwhite Quail Initiative (BQI) is a competitive, proposal-based system that provides incentives to landowners to establish habitat for northern bobwhites (Colinus virginianus). Using data from monitoring conducted in the program's first years (1999-2001), we developed alternative hierarchical models to predict bobwhite abundance in response to program habitat modifications on local and regional scales. Effects of habitat and habitat management on bobwhite population response varied among geographical scales, but high measurement variability rendered the specific nature of these scaled effects equivocal. Under some models, BQI had positive impact at both local and farm scales (1 and 9 km2), particularly when practice acres were clustered, whereas other credible models indicated that bird response did not depend on spatial arrangement of practices. Thus, uncertainty about landscape-level effects of management presents a challenge to program managers who must decide which proposals to accept. We demonstrate that optimal selection decisions can be made despite this uncertainty and that uncertainty can be reduced over time, with consequent improvement in management efficacy. However, such an adaptive approach to BQI program implementation would require the reestablishment of monitoring of bobwhite abundance, an effort for which funding was discontinued in 2002. For landscape-level conservation programs generally, our approach demonstrates the value in assessing multiple scales of impact of habitat modification programs, and it reveals the utility of addressing management uncertainty through multiple decision models and system monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kranz, L.; VanKuiken, J.C.; Gillette, J.L.
1989-12-01
The STATS model, now modified to run on microcomputers, uses user-defined component uncertainties to calculate composite uncertainty distributions for systems or technologies. The program can be used to investigate uncertainties for a single technology or to compare two technologies. Although the term "technology" is used throughout the program screens, the program can accommodate very broad problem definitions. For example, electrical demand uncertainties, health risks associated with toxic material exposures, or traffic queuing delay times can be estimated. The terminology adopted in this version of STATS reflects the purpose of the earlier version, which was to aid in comparing advanced electrical generating technologies. A comparison of two clean coal technologies in two power plants is given as a case study illustration. 7 refs., 35 figs., 7 tabs.
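The component-to-composite calculation that STATS performs can be emulated with a small Monte Carlo script. The sketch below is illustrative only; the component distributions, their parameters, and the cost model are invented, not taken from STATS.

```python
# Illustrative Monte Carlo composition of user-defined component
# uncertainties into a system-level cost distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
capital = rng.triangular(900.0, 1000.0, 1300.0, n)   # $/kW, asymmetric
heat_rate = rng.normal(9500.0, 300.0, n)             # Btu/kWh
fuel = rng.lognormal(np.log(1.6), 0.2, n)            # $/MMBtu

crf, cf = 0.10, 0.80        # capital recovery factor, capacity factor
cap_charge = capital * 1000 * crf / (cf * 8760)      # $/MWh
fuel_cost = heat_rate * fuel / 1000                  # $/MWh
cost = cap_charge + fuel_cost                        # composite distribution

for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(cost, q):.1f} $/MWh")
```

The composite distribution, rather than a single point estimate, is what supports the kind of technology comparison the case study describes.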
Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R
2018-01-01
In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.
Methods for exploring uncertainty in groundwater management predictions
Guillaume, Joseph H. A.; Hunt, Randall J.; Comunian, Alessandro; Fu, Baihua; Blakers, Rachel S; Jakeman, Anthony J.; Barreteau, Olivier; Hunt, Randall J.; Rinaudo, Jean-Daniel; Ross, Andrew
2016-01-01
Models of groundwater systems help to integrate knowledge about the natural and human system covering different spatial and temporal scales, often from multiple disciplines, in order to address a range of issues of concern to various stakeholders. A model is simply a tool to express what we think we know. Uncertainty, due to lack of knowledge or natural variability, means that there are always alternative models that may need to be considered. This chapter provides an overview of uncertainty in models and in the definition of a problem to model, highlights approaches to communicating and using predictions of uncertain outcomes and summarises commonly used methods to explore uncertainty in groundwater management predictions. It is intended to raise awareness of how alternative models and hence uncertainty can be explored in order to facilitate the integration of these techniques with groundwater management.
Visualizing uncertainty about the future.
Spiegelhalter, David; Pearson, Mike; Short, Ian
2011-09-09
We are all faced with uncertainty about the future, but we can get the measure of some uncertainties in terms of probabilities. Probabilities are notoriously difficult to communicate effectively to lay audiences, and in this review we examine current practice for communicating uncertainties visually, using examples drawn from sport, weather, climate, health, economics, and politics. Despite the burgeoning interest in infographics, there is limited experimental evidence on how different types of visualizations are processed and understood, although the effectiveness of some graphics clearly depends on the relative numeracy of an audience. Fortunately, it is increasingly easy to present data in the form of interactive visualizations and in multiple types of representation that can be adjusted to user needs and capabilities. Nonetheless, communicating deeper uncertainties resulting from incomplete or disputed knowledge--or from essential indeterminacy about the future--remains a challenge.
We introduce a hierarchical optimization framework for spatially targeting green infrastructure (GI) incentive policies in order to meet objectives related to cost and environmental effectiveness. The framework explicitly simulates the interaction between multiple levels of polic...
Bayesian methods for uncertainty factor application for derivation of reference values.
Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B
2016-10-01
In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process that considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw Bayesian methods would be potentially useful for combining toxicity data from disparate sources-high throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
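One way to read the refined suggestion of treating each individual UF as a distribution is sketched below. This is a hedged illustration, not EPA or NRC methodology: each UF is given a lognormal prior with median √10 and 95th percentile 10, the composite is their product, and the RfD distribution follows by dividing an uncertain POD by the composite; every number is invented.

```python
# Illustrative product-of-lognormals composite UF and resulting RfD;
# parameter choices are assumptions for the sketch, not agency values.
import numpy as np

rng = np.random.default_rng(0)
n, n_ufs = 200_000, 3           # e.g. interspecies, intraspecies, database
mu = np.log(10.0) / 2           # median sqrt(10) for each UF
sigma = (np.log(10.0) - mu) / 1.645   # so each UF's 95th pct equals 10

composite = np.prod(rng.lognormal(mu, sigma, size=(n, n_ufs)), axis=1)
pod = rng.lognormal(np.log(5.0), 0.3, n)   # POD ~ 5 mg/kg-day, uncertain

rfd = pod / composite
print("deterministic RfD:", 5.0 / 10.0**n_ufs)
print("5th percentile of probabilistic RfD:", np.percentile(rfd, 5))
```

Running the sketch shows the point made in the abstract: the probabilistic RfD can differ several-fold from the deterministic one, with the size of the shift depending on how many UFs are used and how their distributions are parameterized.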
Estimation of plant sampling uncertainty: an example based on chemical analysis of moss samples.
Dołęgowska, Sabina
2016-11-01
In order to estimate the level of uncertainty arising from sampling, 54 samples (primary and duplicate) of the moss species Pleurozium schreberi (Brid.) Mitt. were collected within three forested areas (Wierna Rzeka, Piaski, Posłowice Range) in the Holy Cross Mountains (south-central Poland). During the fieldwork, each primary sample composed of 8 to 10 increments (subsamples) was taken over an area of 10 m2, whereas duplicate samples were collected in the same way at a distance of 1-2 m. Subsequently, all samples were triple-rinsed with deionized water, dried, milled, and digested (8 mL HNO3 (1:1) + 1 mL 30% H2O2) in a closed microwave system Multiwave 3000. The prepared solutions were analyzed twice for Cu, Fe, Mn, and Zn using FAAS and GFAAS techniques. All datasets were checked for normality. For normally distributed elements (Cu from Piaski; Zn from Posłowice; Fe and Zn from Wierna Rzeka), the sampling uncertainty was computed with (i) classical ANOVA, (ii) classical RANOVA, (iii) modified RANOVA, and (iv) range statistics. For the remaining elements, the sampling uncertainty was calculated with traditional and/or modified RANOVA (if the proportion of outliers did not exceed 10%) or classical ANOVA after Box-Cox transformation (if the proportion of outliers exceeded 10%). The highest concentrations of all elements were found in moss samples from Piaski, whereas the sampling uncertainty calculated with the different statistical methods ranged from 4.1 to 22%.
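For the duplicate-sample design described here, the classical ANOVA estimate of sampling uncertainty reduces to a simple within-pair mean square. The sketch below uses invented concentrations; it also lumps analytical variance into the estimate, whereas the RANOVA variants in the study additionally handle outliers and separate the analytical component via duplicate analyses.

```python
# Sketch: sampling(+analytical) uncertainty from primary/duplicate pairs
# via the within-pair mean square; concentrations (mg/kg) are invented.
import numpy as np

pairs = np.array([[6.1, 5.8], [7.4, 8.0], [5.9, 6.3],
                  [8.8, 8.1], [7.0, 7.2], [6.6, 6.0]])

grand_mean = pairs.mean()
ms_within = np.mean((pairs[:, 0] - pairs[:, 1])**2 / 2.0)  # pooled pair variance
s_sampling = np.sqrt(ms_within)
print(f"relative sampling uncertainty: {100 * s_sampling / grand_mean:.1f} %")
```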
Facing uncertainty in ecosystem services-based resource management.
Grêt-Regamey, Adrienne; Brunner, Sibyl H; Altwegg, Jürg; Bebi, Peter
2013-09-01
The concept of ecosystem services is increasingly used as a support for natural resource management decisions. While the science for assessing ecosystem services is improving, appropriate methods to address uncertainties in a quantitative manner are missing. Ignoring parameter uncertainties, modeling uncertainties and uncertainties related to human-environment interactions can modify decisions and lead to overlooking important management possibilities. In this contribution, we present a new approach for mapping the uncertainties in the assessment of multiple ecosystem services. The spatially explicit risk approach links Bayesian networks to a Geographic Information System for forecasting the value of a bundle of ecosystem services and quantifies the uncertainties related to the outcomes in a spatially explicit manner. We demonstrate that mapping uncertainties in ecosystem services assessments provides key information for decision-makers seeking critical areas in the delivery of ecosystem services in a case study in the Swiss Alps. The results suggest that not only the total value of the bundle of ecosystem services is highly dependent on uncertainties, but the spatial pattern of the ecosystem services values changes substantially when considering uncertainties. This is particularly important for the long-term management of mountain forest ecosystems, which have long rotation stands and are highly sensitive to pressing climate and socio-economic changes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends heavily on the type of uncertainty, besides the spatial data feature type. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively depict the centers of gravity and the major orientation of the point arrays, as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how applicable an adapted design of the error ellipse is to displaying the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
McCaffrey, Nikki; Agar, Meera; Harlum, Janeane; Karnon, Jonathon; Currow, David; Eckermann, Simon
2015-01-01
Introduction: Comparing multiple, diverse outcomes with cost-effectiveness analysis (CEA) is important, yet challenging in areas like palliative care where domains are unamenable to integration with survival. Generic multi-attribute utility values exclude important domains and non-health outcomes, while partial analyses—where outcomes are considered separately, with their joint relationship under uncertainty ignored—lead to incorrect inference regarding preferred strategies. Objective: The objective of this paper is to consider whether such decision making can be better informed with alternative presentation and summary measures, extending methods previously shown to have advantages in multiple strategy comparison. Methods: Multiple outcomes CEA of a home-based palliative care model (PEACH) relative to usual care is undertaken in cost disutility (CDU) space and compared with analysis on the cost-effectiveness plane. Summary measures developed for comparing strategies across potential threshold values for multiple outcomes include: expected net loss (ENL) planes quantifying differences in expected net benefit; the ENL contour identifying preferred strategies minimising ENL and their expected value of perfect information; and cost-effectiveness acceptability planes showing probability of strategies minimising ENL. Results: Conventional analysis suggests PEACH is cost-effective when the threshold value per additional day at home (λ1) exceeds $1,068, or dominated by usual care when only the proportion of home deaths is considered. In contrast, neither alternative dominates in CDU space, where cost and outcomes are jointly considered, with the optimal strategy depending on threshold values. For example, PEACH minimises ENL when λ1 = $2,000 and λ2 = $2,000 (threshold value for dying at home), with a 51.6% chance of PEACH being cost-effective. Conclusion: Comparison in CDU space and associated summary measures have distinct advantages for multiple domain comparisons, aiding transparent and robust joint comparison of costs and multiple effects under uncertainty across potential threshold values for effect, better informing net benefit assessment and related reimbursement and research decisions. PMID:25751629
Using real options analysis to support strategic management decisions
NASA Astrophysics Data System (ADS)
Kabaivanov, Stanimir; Markovska, Veneta; Milev, Mariyan
2013-12-01
Decision making is a complex process that requires taking into consideration multiple heterogeneous sources of uncertainty. Standard valuation and financial analysis techniques often fail to properly account for all these sources of risk, as well as for all sources of additional flexibility. In this paper we explore applications of a modified binomial tree method for real options analysis (ROA) in an effort to improve the decision making process. Typical use cases of real options are analyzed, with an elaborate study of the applications and advantages that company management can derive from them. Numeric results based on extending the simple binomial tree approach to multiple sources of uncertainty are provided to demonstrate the improvement effects on management decisions.
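A minimal version of the binomial-tree calculation the paper builds on, here for a single source of uncertainty, is sketched below; the project parameters and the "option to expand" framing are invented for illustration, and the paper's modification for multiple uncertainty sources would replace this single lattice with a multi-dimensional one.

```python
# CRR binomial tree valuing an American-style option to expand a project
# by 30% of its value at a fixed cost; all parameters are hypothetical.
import numpy as np

def expand_option_value(v0, sigma, years, steps, r, capex, scale=0.3):
    dt = years / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    s = v0 * u**j * d**(steps - j)            # project values at maturity
    opt = np.maximum(scale * s - capex, 0.0)  # exercise value at maturity
    for n in range(steps - 1, -1, -1):        # backward induction
        j = np.arange(n + 1)
        s = v0 * u**j * d**(n - j)
        cont = disc * (p * opt[1:] + (1 - p) * opt[:-1])
        opt = np.maximum(cont, scale * s - capex)   # exercise early?
    return opt[0]

print(expand_option_value(v0=100.0, sigma=0.35, years=3.0, steps=300,
                          r=0.05, capex=25.0))
```

The returned option value is the extra worth that flexibility adds to the project, which is exactly what a deterministic NPV calculation would miss.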
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2009-08-01
In this paper, the state least-squares linear estimation problem from correlated uncertain observations coming from multiple sensors is addressed. It is assumed that, at each sensor, the state is measured in the presence of additive white noise and that the uncertainty in the observations is characterized by a set of Bernoulli random variables which are only correlated at consecutive time instants. Assuming that the statistical properties of such variables are not necessarily the same for all the sensors, a recursive filtering algorithm is proposed, and the performance of the estimators is illustrated by a numerical simulation example wherein a signal is estimated from correlated uncertain observations coming from two sensors with different uncertainty characteristics.
NASA Astrophysics Data System (ADS)
Gilani, Seyed-Omid; Sattarvand, Javad
2016-02-01
Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty described through a series of simulated ore bodies. Two different strategies were developed, based on a single predefined probability value (Prob) and on multiple probability values (Probnt), respectively, in order to improve the initial solutions created by the deterministic ACO procedure. Application at the Sungun copper mine in the northwest of Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time, and increase the project value. A comparison between the two strategies and the traditional approach illustrates that the multiple probability strategy is able to produce better schedules; however, the single predefined probability is more practical in projects requiring a high degree of flexibility.
Stochastic production phase design for an open pit mining complex with multiple processing streams
NASA Astrophysics Data System (ADS)
Asad, Mohammad Waqar Ali; Dimitrakopoulos, Roussos; van Eldert, Jeroen
2014-08-01
In a mining complex, the mine is a source of supply of valuable material (ore) to a number of processes that convert the raw ore to a saleable product or a metal concentrate for production of the refined metal. In this context, expected variation in metal content throughout the extent of the orebody defines the inherent uncertainty in the supply of ore, which impacts the subsequent ore and metal production targets. Traditional optimization methods for designing production phases and ultimate pit limit of an open pit mine not only ignore the uncertainty in metal content, but, in addition, commonly assume that the mine delivers ore to a single processing facility. A stochastic network flow approach is proposed that jointly integrates uncertainty in supply of ore and multiple ore destinations into the development of production phase design and ultimate pit limit. An application at a copper mine demonstrates the intricacies of the new approach. The case study shows a 14% higher discounted cash flow when compared to the traditional approach.
A New Mathematical Framework for Design Under Uncertainty
2016-05-05
blending multiple information sources via auto-regressive stochastic modeling. A computationally efficient machine learning framework is developed based on...sion and machine learning approaches; see Fig. 1. This will lead to a comprehensive description of system performance with less uncertainty than in the...Bayesian optimization of super-cavitating hydrofoils The goal of this study is to demonstrate the capabilities of statistical learning and
Paige F. B. Ferguson; Michael J. Conroy; John F. Chamblee; Jeffrey Hepinstall-Cymerman
2015-01-01
Parcelization and forest fragmentation are of concern for ecological, economic, and social reasons. Efforts to keep large, private forests intact may be supported by a decision-making process that incorporates landowners' objectives and uncertainty. We used structured decision making (SDM) with owners of large, private forests in Macon County, North Carolina....
A probabilistic approach to aircraft design emphasizing stability and control uncertainties
NASA Astrophysics Data System (ADS)
Delaurentis, Daniel Andrew
In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision making activity, and that deterministic analysis and synthesis can lead to poor, or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Imbedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained and the ability of the method to expose trends in the design space are noted as a key advantage.
An experimental system for the study of active vibration control - Development and modeling
NASA Astrophysics Data System (ADS)
Batta, George R.; Chen, Anning
A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase of uncertainties at all frequencies. Missing a low-frequency mode causes the uncertainties to be much larger at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Michael K.; O'Rourke, Patrick E.
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the expected uncertainties that should be achievable at multiple plutonium and uranium concentration levels provided instrument hardware and software upgrades being recommended by LANL and SRNL are performed.
Metrology applied to ultrasound characterization of trabecular bones using the AIB parameter
NASA Astrophysics Data System (ADS)
Braz, D. S.; Silva, C. E.; Alvarenga, A. V.; Junior, D. S.; Costa-Félix, R. P. B.
2016-07-01
Apparent Integrated Backscattering (AIB) relates the Apparent Backscatter Transfer Function to the transducer bandwidth. Replicas of trabecular bones (cubes of 20 mm side length) created by a 3D printing technique were characterized using AIB with a 2.25 MHz center frequency transducer. A mechanical scanning system was used to acquire multiple backscatter signals. A measurement uncertainty model was proposed based on the Guide to the Expression of Uncertainty in Measurement. Initial AIB results are not metrologically reliable, presenting high measurement uncertainties (sample: 5_0.2032/AIB: -15.1 dB ± 13.9 dB). It is noteworthy that the proposed uncertainty model provides an unprecedented means for the metrological assessment of trabecular bone characterization using AIB.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation study and a case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
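A condensed sketch of the recommended bootstrap approach is given below: refitting the time-to-event distribution to each resample propagates parameter uncertainty into the patient-level simulation. The Weibull model, sample sizes, and data are invented for the illustration.

```python
# Non-parametric bootstrap around a patient-level simulation; the outer
# loop carries parameter uncertainty, the inner draw carries stochastic
# uncertainty. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = stats.weibull_min.rvs(1.4, scale=12.0, size=100, random_state=rng)

mean_estimates = []
for _ in range(500):                                  # parameter uncertainty
    boot = rng.choice(times, size=times.size, replace=True)
    shape, _, scale = stats.weibull_min.fit(boot, floc=0.0)
    sim = stats.weibull_min.rvs(shape, scale=scale, size=1000,
                                random_state=rng)     # stochastic uncertainty
    mean_estimates.append(sim.mean())

lo, hi = np.percentile(mean_estimates, [2.5, 97.5])
print(f"95% interval for mean time-to-event: ({lo:.1f}, {hi:.1f})")
```

Rerunning with n = 25 instead of 100 makes occasional resamples yield extreme fitted shapes and scales, reproducing in miniature the small-sample instability discussed in the abstract.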
Hallifax, D; Houston, J B
2009-03-01
Mechanistic prediction of unbound drug clearance from human hepatic microsomes and hepatocytes correlates with in vivo clearance but is both systematically low (10 - 20 % of in vivo clearance) and highly variable, based on detailed assessments of published studies. Metabolic capacity (Vmax) of commercially available human hepatic microsomes and cryopreserved hepatocytes is log-normally distributed within wide (30 - 150-fold) ranges; Km is also log-normally distributed and effectively independent of Vmax, implying considerable variability in intrinsic clearance. Despite wide overlap, average capacity is 2 - 20-fold (dependent on P450 enzyme) greater in microsomes than hepatocytes, when both are normalised (scaled to whole liver). The in vitro ranges contrast with relatively narrow ranges of clearance among clinical studies. The high in vitro variation probably reflects unresolved phenotypical variability among liver donors and practicalities in processing of human liver into in vitro systems. A significant contribution from the latter is supported by evidence of low reproducibility (several fold) of activity in cryopreserved hepatocytes and microsomes prepared from the same cells, between separate occasions of thawing of cells from the same liver. The large uncertainty which exists in human hepatic in vitro systems appears to dominate the overall uncertainty of in vitro-in vivo extrapolation, including uncertainties within scaling, modelling and drug dependent effects. As such, any notion of quantitative prediction of clearance appears severely challenged.
High cumulants of conserved charges and their statistical uncertainties
NASA Astrophysics Data System (ADS)
Li-Zhu, Chen; Ye-Yin, Zhao; Xue, Pan; Zhi-Ming, Li; Yuan-Fang, Wu
2017-10-01
We study the influence of measured high cumulants of conserved charges on their associated statistical uncertainties in relativistic heavy-ion collisions. With a given number of events, the measured cumulants randomly fluctuate with an approximately normal distribution, while the estimated statistical uncertainties are found to be correlated with corresponding values of the obtained cumulants. Generally, with a given number of events, the larger the cumulants we measure, the larger the statistical uncertainties that are estimated. The error-weighted averaged cumulants are dependent on statistics. Despite this effect, however, it is found that the three sigma rule of thumb is still applicable when the statistics are above one million. Supported by NSFC (11405088, 11521064, 11647093), Major State Basic Research Development Program of China (2014CB845402) and Ministry of Science and Technology (MoST) (2016YFE0104800)
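The interplay between a measured cumulant and its estimated uncertainty can be reproduced with a toy bootstrap. The sketch below is illustrative only: the multiplicity distribution is a plain Poisson, not heavy-ion data.

```python
# Toy estimate of the fourth cumulant C4 of an event-by-event
# distribution and its statistical uncertainty by bootstrap.
import numpy as np

def c4(x):
    m = x - x.mean()
    return np.mean(m**4) - 3.0 * np.mean(m**2)**2   # fourth cumulant

rng = np.random.default_rng(7)
events = rng.poisson(20, size=100_000)              # synthetic "events"
boots = np.array([c4(rng.choice(events, events.size))
                  for _ in range(200)])
print(f"C4 = {c4(events):.2f} +/- {boots.std(ddof=1):.2f}")
```

For a Poisson distribution every cumulant equals the mean, so the printed C4 should scatter around 20; rerunning with different seeds shows that samples which fluctuate to larger measured cumulants also tend to return larger error estimates, the correlation noted above.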
A Bayesian method for detecting pairwise associations in compositional data
Ventz, Steffen; Huttenhower, Curtis
2017-01-01
Compositional data consist of vectors of proportions normalized to a constant sum from a basis of unobserved counts. The sum constraint makes inference on correlations between unconstrained features challenging due to the information loss from normalization. However, such correlations are of long-standing interest in fields including ecology. We propose a novel Bayesian framework (BAnOCC: Bayesian Analysis of Compositional Covariance) to estimate a sparse precision matrix through a LASSO prior. The resulting posterior, generated by MCMC sampling, allows uncertainty quantification of any function of the precision matrix, including the correlation matrix. We also use a first-order Taylor expansion to approximate the transformation from the unobserved counts to the composition in order to investigate what characteristics of the unobserved counts can make the correlations more or less difficult to infer. On simulated datasets, we show that BAnOCC infers the true network as well as previous methods while offering the advantage of posterior inference. Larger and more realistic simulated datasets further showed that BAnOCC performs well as measured by type I and type II error rates. Finally, we apply BAnOCC to a microbial ecology dataset from the Human Microbiome Project, which in addition to reproducing established ecological results revealed unique, competition-based roles for Proteobacteria in multiple distinct habitats. PMID:29140991
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
NASA Technical Reports Server (NTRS)
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring, the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation problems and provide a measure of model performance which can be used in attempts to improve such models.
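Generating the 419 input parameter sets by Latin hypercube sampling can be sketched as follows; the three "reaction rate" multipliers and their 1σ uncertainty factors are invented stand-ins for the model's actual inputs.

```python
# Latin hypercube draws mapped to lognormal rate multipliers;
# the uncertainty factors below are illustrative only.
import numpy as np
from scipy.stats import qmc, norm

uncert_factor = np.array([1.2, 1.4, 3.0])      # 1-sigma multiplicative factors
sampler = qmc.LatinHypercube(d=uncert_factor.size, seed=0)
u = sampler.random(n=419)                      # 419 runs, as in the study
multipliers = np.exp(norm.ppf(u) * np.log(uncert_factor))
print(multipliers.shape, multipliers.mean(axis=0))
```

Each row scales one 26-year model integration, and the spread of the resulting trend predictions supplies the 1σ model uncertainty quoted above.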
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. We concluded that the triple-ensemble probabilistic approach that accounts for the uncertainties from multiple important sources provide more comprehensive information for quantifying uncertainties in climate change impact assessments as compared to the conventional approaches that are deterministic or only account for the uncertainties from one or two of the uncertainty sources. © 2017 John Wiley & Sons Ltd.
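The ANOVA partition used to compare the three uncertainty sources can be illustrated on a synthetic full-factorial ensemble; the effect sizes below are invented, and only main effects are shown (interactions end up in the residual).

```python
# Sketch: share of ensemble variance from each source's main effect
# in a model x parameter-set x climate-projection factorial design.
import numpy as np

rng = np.random.default_rng(3)
M, P, C = 7, 5, 8                                  # models, param sets, GCMs
y = (rng.normal(0, 0.8, (M, 1, 1)) +               # model-structure effect
     rng.normal(0, 0.3, (1, P, 1)) +               # parameter effect
     rng.normal(0, 0.5, (1, 1, C)) +               # climate effect
     rng.normal(0, 0.2, (M, P, C)))                # interactions/noise

def main_effect_share(y, keep):
    other = tuple(a for a in range(y.ndim) if a != keep)
    return y.mean(axis=other).var() / y.var()

for name, ax in [("crop model", 0), ("parameters", 1), ("climate", 2)]:
    print(f"{name}: {100 * main_effect_share(y, ax):.0f}% of variance")
```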
Using spatial uncertainty to manipulate the size of the attention focus.
Huang, Dan; Xue, Linyan; Wang, Xin; Chen, Yao
2016-09-01
Preferentially processing behaviorally relevant information is vital for primate survival. In visuospatial attention studies, manipulating the spatial extent of attention focus is an important question. Although many studies have claimed to successfully adjust attention field size by either varying the uncertainty about the target location (spatial uncertainty) or adjusting the size of the cue orienting the attention focus, no systematic studies have assessed and compared the effectiveness of these methods. We used a multiple cue paradigm with 2.5° and 7.5° rings centered around a target position to measure the cue size effect, while the spatial uncertainty levels were manipulated by changing the number of cueing positions. We found that spatial uncertainty had a significant impact on reaction time during target detection, while the cue size effect was less robust. We also carefully varied the spatial scope of potential target locations within a small or large region and found that this amount of variation in spatial uncertainty can also significantly influence target detection speed. Our results indicate that adjusting spatial uncertainty is more effective than varying cue size when manipulating attention field size.
NASA Astrophysics Data System (ADS)
Tobin, Stephen J.; Peura, Pauli; Bélanger-Champagne, Camille; Moring, Mikael; Dendooven, Peter; Honkamaa, Tapani
2018-07-01
The performance of a passive neutron albedo reactivity (PNAR) instrument to measure the neutron multiplication of spent nuclear fuel in borated water is investigated as part of an integrated non-destructive assay safeguards system. To measure the PNAR Ratio, which is proportional to the neutron multiplication, the total neutron count rate is measured in high- and low-multiplying environments by the PNAR instrument. The integrated system also contains a load cell and a passive gamma emission tomograph, and as such meets all the recommendations of the IAEA's recent ASTOR Experts Group report. A virtual spent fuel library for VVER-440 fuel was used in conjunction with MCNP simulations of the PNAR instrument to estimate the measurement uncertainties from (1) variation in the water boron content, (2) assembly positioning in the detector, and (3) counting statistics. The estimated aggregate measurement uncertainty on the PNAR Ratio is 0.008. To put this uncertainty in context, the difference in the PNAR Ratio between a fully irradiated assembly and the same assembly when fissile isotopes only absorb neutrons, but do not emit them, is 0.106, a 13-sigma effect. The 1-sigma variation of 0.008 in the PNAR Ratio is estimated to correspond to a 3.2 GWd/tU change in assembly burnup.
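Assuming the three uncertainty components are independent, the aggregate figure combines in quadrature; the component split below is invented, chosen only so the total reproduces the 0.008 quoted above.

```python
# Quadrature combination of hypothetical PNAR Ratio uncertainty components.
import math

components = {"boron content": 0.005,
              "assembly positioning": 0.005,
              "counting statistics": 0.004}        # invented split
u_total = math.sqrt(sum(v**2 for v in components.values()))
print(f"aggregate 1-sigma uncertainty: {u_total:.3f}")            # ~0.008
print(f"0.106 signal significance: {0.106 / u_total:.0f} sigma")  # ~13
```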
Kasper, Jürgen; Köpke, Sascha; Mühlhauser, Ingrid; Heesen, Christoph
2006-07-01
This study analyses the comprehension and emotional responses of people suffering from multiple sclerosis when provided with an evidence-based information module. It is a core module of a comprehensive decision aid about immunotherapy. The core module is designed to enable patients to process scientific uncertainty without adverse effects. It considers existing standards for risk communication and presentation of data. Using a mailing approach, we investigated 169 patients with differing courses of disease in a before-after design. Items addressed competence in processing relative and absolute risk information and patients' emotional response to the tool, comprising degree of familiarity with the information, understanding, relevance, emotional arousal, and certainty. Overall, numeracy improved (p < 0.001), although 99 of 169 patients did not complete the numeracy task correctly. Understanding depended on the relevance related to the course of disease. A moderate level of uncertainty was induced. No adverse emotional responses could be shown, neither in those who did comprehend the information nor in those who did not develop numeracy skills. In conclusion, the tool supports people suffering from multiple sclerosis in processing evidence-based medical information and scientific uncertainty without burdening them emotionally. This study exemplifies the documentation of an important step in the development process of a complex intervention.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
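The first-order moment matching step rests on the standard propagation formula σ_f^2 ≈ Σ_i (∂f/∂x_i)^2 σ_i^2, which the sketch below checks against Monte Carlo for a toy function standing in for the CFD output (the quasi-1D Euler code itself is not reproduced here).

```python
# First-order moment propagation with analytic sensitivity derivatives,
# verified by Monte Carlo; f is a stand-in, not the Euler CFD code.
import numpy as np

def f(x):
    return x[0]**2 * np.sin(x[1]) + 3.0 * x[2]

mu = np.array([1.0, 0.6, 2.0])                    # input means
sig = np.array([0.05, 0.02, 0.10])                # input standard deviations

grad = np.array([2.0 * mu[0] * np.sin(mu[1]),     # df/dx0 at the mean
                 mu[0]**2 * np.cos(mu[1]),        # df/dx1 at the mean
                 3.0])                            # df/dx2
sigma_fo = np.sqrt(np.sum((grad * sig)**2))       # first-order output sigma

rng = np.random.default_rng(0)
x = rng.normal(mu[:, None], sig[:, None], size=(3, 200_000))
print(sigma_fo, f(x).std())                       # should agree closely
```

In the robust optimization described above, the same gradients feed both this moment estimate and the search direction, which is why efficient sensitivity derivatives are the enabling ingredient.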
NASA Astrophysics Data System (ADS)
Kuschmierz, R.; Czarske, J.; Fischer, A.
2014-08-01
Optical measurement techniques offer great opportunities in diverse applications, such as lathe monitoring and microfluidics. Doppler-based interferometric techniques enable simultaneous measurement of the lateral velocity and axial distance of a moving object. However, there is a complementarity between the unambiguous axial measurement range and the uncertainty of the distance. Therefore, we present an extended sensor setup, which provides an unambiguous axial measurement range of 1 mm while achieving uncertainties below 100 nm. Measurements at a calibration system are performed. When using a pinhole for emulating a single scattering particle, the tumbling motion of the rotating object is resolved with a distance uncertainty of 50 nm. For measurements at the rough surface, the distance uncertainty amounts to 280 nm due to a lower signal-to-noise ratio. Both experimental results are close to the respective Cramér-Rao bound, which is derived analytically for both surface and single particle measurements.
MOMENTS OF UNCERTAINTY: ETHICAL CONSIDERATIONS AND EMERGING CONTAMINANTS
Cordner, Alissa; Brown, Phil
2013-01-01
Science on emerging environmental health threats involves numerous ethical concerns related to scientific uncertainty about conducting, interpreting, communicating, and acting upon research findings, but the connections between ethical decision making and scientific uncertainty are under-studied in sociology. Under conditions of scientific uncertainty, researcher conduct is not fully prescribed by formal ethical codes of conduct, increasing the importance of ethical reflection by researchers, conflicts over research conduct, and reliance on informal ethical standards. This paper draws on in-depth interviews with scientists, regulators, activists, industry representatives, and fire safety experts to explore ethical considerations of moments of uncertainty using a case study of flame retardants, chemicals widely used in consumer products with potential negative health and environmental impacts. We focus on the uncertainty that arises in measuring people’s exposure to these chemicals through testing of their personal environments or bodies. We identify four sources of ethical concerns relevant to scientific uncertainty: 1) choosing research questions or methods, 2) interpreting scientific results, 3) communicating results to multiple publics, and 4) applying results for policy-making. This research offers lessons about professional conduct under conditions of uncertainty, ethical research practice, democratization of scientific knowledge, and science’s impact on policy. PMID:24249964
Pulley, S; Collins, A L
2018-09-01
The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in an increasing complexity of data processing methods rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This aims to explore the sediment provenance information provided by the tracers more comprehensively than a single model, and allows for model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK to determine if the approach used can provide a reduction in uncertainty and increase in precision. Five source group classifications were used; three formed using a k-means cluster analysis containing 2, 3 and 4 clusters, and two a-priori groups based upon catchment geology. Three different composite fingerprints were used for each classification and bi-plots, range tests, tracer variability ratios and virtual mixtures tested the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Optimization Of Ocean Color Algorithms: Application To Satellite And In Situ Data Merging. Chapter 9
NASA Technical Reports Server (NTRS)
Maritorena, Stephane; Siegel, David A.; Morel, Andre
2003-01-01
The objective of our program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project (McClain et al., 1995). The need for a merging capability is dictated by the fact that since the launch of MODIS on the Terra platform and over the next decade, several global ocean color missions from various space agencies are or will be operational simultaneously. The apparent redundancy in simultaneous ocean color missions can actually be exploited to various benefits. The most obvious benefit is improved coverage (Gregg et al., 1998; Gregg & Woodward, 1998). The patchy and uneven daily coverage from any single sensor can be improved by using a combination of sensors. Besides improved coverage of the global ocean, the merging of ocean color data should also result in new, improved, more diverse and better data products with lower uncertainties. Ultimately, ocean color data merging should result in the development of a unified, scientific-quality ocean color time series, from SeaWiFS to NPOESS and beyond. Various approaches can be used for ocean color data merging and several have been tested within the framework of the SIMBIOS program (see e.g. Kwiatkowska & Fargion, 2003; Franz et al., 2003). As part of the SIMBIOS Program, we have developed a merging method for ocean color data. In contrast to other methods, our approach does not combine end-products like the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances (LwN(λ)) from single or multiple sensors and uses them in the inversion of a semianalytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands, 2) it exploits band redundancies and band differences, 3) it accounts for uncertainties in the LwN(λ) data and, 4) it provides uncertainty estimates for the retrieved variables.
Tellinghuisen, Joel
2016-03-01
Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program. Copyright © 2015 Elsevier Inc. All rights reserved.
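For context, the standard-curve design regresses Cq against log10 dilution for each gene; the slope gives the amplification efficiency, which then adjusts the ΔΔCq-style expression ratio. The sketch below illustrates that calculation with invented Cq values and a Pfaffl-style efficiency-adjusted ratio; it is not the paper's KaleidaGraph least-squares algorithm, and the automatic propagation of AE uncertainty described above is omitted.

    import numpy as np

    # Standard-curve design: Cq measured at several 10-fold dilutions
    logdil = np.array([0.0, -1.0, -2.0, -3.0])        # log10 relative input

    # Hypothetical mean Cq values for the target and reference genes
    cq_target = np.array([20.1, 23.5, 26.9, 30.3])
    cq_ref = np.array([18.0, 21.3, 24.7, 28.0])

    def amplification_factor(cq):
        slope = np.polyfit(logdil, cq, 1)[0]          # Cq vs log10(dilution)
        return 10.0 ** (-1.0 / slope)                 # 2.0 means 100 % efficiency

    E_t, E_r = amplification_factor(cq_target), amplification_factor(cq_ref)

    # Efficiency-adjusted expression ratio (Pfaffl form), control minus treatment
    dCq_t = 24.0 - 21.5       # hypothetical Cq difference, target gene
    dCq_r = 22.0 - 21.8       # hypothetical Cq difference, reference gene
    ratio = E_t ** dCq_t / E_r ** dCq_r
    print(f"AE factors: target {E_t:.2f}, reference {E_r:.2f}; ratio {ratio:.2f}")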
Guo, P; Huang, G H
2009-01-01
In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating uncertainties expressed as multiple uncertainties of intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their incorporation; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charge-coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
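The quoted drop from +/-5 m/s to +/-0.5 m/s when averaging 100 images is the familiar 1/sqrt(N) scaling of a random (precision) uncertainty, and the component uncertainties follow from propagating the measurement covariance through the algebraic transformation. A quick sketch, with a hypothetical transformation matrix standing in for the paper's forward-scatter geometry:

    import numpy as np

    sigma_single = 5.0                       # m/s, single-image precision
    for n in (1, 25, 100):
        print(f"{n:3d} images -> +/-{sigma_single / np.sqrt(n):.1f} m/s")

    # Propagation through a linear transform v_xyz = T @ v_meas:
    # Cov_xyz = T Cov_meas T^T. T is a hypothetical viewing geometry,
    # not the paper's actual forward-scatter matrix.
    T = np.array([[0.9, 0.3, -0.2],
                  [-0.4, 1.1, 0.5],
                  [0.2, -0.6, 1.0]])
    cov_meas = np.diag([sigma_single**2] * 3)   # independent measured components
    cov_xyz = T @ cov_meas @ T.T
    print("U, V, W precision:", np.sqrt(np.diag(cov_xyz)).round(1), "m/s")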
Liang, Fan; Xie, Weihong; Yu, Yang
2017-01-01
Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively "switch" from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.
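The switching core of such a multiple-model estimator is the mode-probability recursion: each model's measurement likelihood reweights probabilities propagated through a Markov transition matrix. The sketch below shows only that recursion, with an invented transition matrix and Gaussian innovation likelihoods; the per-model state mixing of a full IMM and the signal-quality-adaptive transition probabilities described above are omitted.

    import numpy as np

    # Hypothetical mode transition matrix: row 0 = normal rhythm, 1 = arrhythmia
    P = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
    mu = np.array([0.5, 0.5])                 # initial mode probabilities

    def gauss_like(residual, sigma):
        return np.exp(-0.5 * (residual / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Pretend innovations (measurement minus each model's prediction) per step
    innov_normal = [0.1, 0.2, 1.5, 2.0, 0.1]
    innov_arrhy = [1.0, 0.9, 0.2, 0.1, 1.2]

    for r0, r1 in zip(innov_normal, innov_arrhy):
        mu = P.T @ mu                         # propagate mode probabilities
        lik = np.array([gauss_like(r0, 0.5), gauss_like(r1, 0.5)])
        mu = lik * mu
        mu /= mu.sum()                        # posterior mode probabilities
        print(mu.round(3))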
Bereby-Meyer, Yoella; Meyer, Joachim; Budescu, David V
2003-02-01
This paper assesses framing effects on decision making with internal uncertainty, i.e., partial knowledge, by focusing on examinees' behavior in multiple-choice (MC) tests with different scoring rules. In two experiments participants answered a general-knowledge MC test that consisted of 34 solvable and 6 unsolvable items. Experiment 1 studied two scoring rules involving Positive (only gains) and Negative (only losses) scores. Although answering all items was the dominating strategy for both rules, the results revealed a greater tendency to answer under the Negative scoring rule. These results are in line with the predictions derived from Prospect Theory (PT) [Econometrica 47 (1979) 263]. The second experiment studied two scoring rules, which allowed respondents to exhibit partial knowledge. Under the Inclusion-scoring rule the respondents mark all answers that could be correct, and under the Exclusion-scoring rule they exclude all answers that might be incorrect. As predicted by PT, respondents took more risks under the Inclusion rule than under the Exclusion rule. The results illustrate that the basic process that underlies choice behavior under internal uncertainty and especially the effect of framing is similar to the process of choice under external uncertainty and can be described quite accurately by PT. Copyright 2002 Elsevier Science B.V.
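The directional prediction can be reproduced from Prospect Theory's value function alone. The toy computation below uses the Tversky-Kahneman (1992) median parameter estimates with invented payoffs and ignores probability weighting; it shows risk aversion in the gain frame and risk seeking in the loss frame for an unsolvable four-option item.

    # Tversky & Kahneman (1992) median parameter estimates
    ALPHA, BETA, LAMB = 0.88, 0.88, 2.25

    def v(x):
        return x ** ALPHA if x >= 0 else -LAMB * (-x) ** BETA

    p = 0.25    # guessing probability on an unsolvable four-option item

    # Positive frame: answer -> +1 with prob p, else 0; omit -> sure +0.25
    answer_pos, omit_pos = p * v(1.0), v(0.25)
    # Negative frame: answer -> -1 with prob 1-p, else 0; omit -> sure -0.75
    answer_neg, omit_neg = (1 - p) * v(-1.0), v(-0.75)

    print(f"gain frame: answer {answer_pos:.3f} vs omit {omit_pos:.3f}")
    print(f"loss frame: answer {answer_neg:.3f} vs omit {omit_neg:.3f}")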
Estimation of Confidence Intervals for Multiplication and Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J
2009-07-17
Helium-3 tubes are used to detect thermal neutrons by charge collection using the ³He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: Some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time-tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.
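A common way to obtain such parameter uncertainties is to scan a chi-square (or -2 log-likelihood) surface over a grid in (M, ε) and take a Δχ² contour. The sketch below demonstrates the mechanics with a deliberately simplistic Poisson stand-in whose mean is proportional to M·ε; it is not a fission-chain model, and because only the product M·ε matters in this toy, the resulting region is a degenerate band, whereas the higher moments of a real count distribution break that degeneracy.

    import numpy as np

    rng = np.random.default_rng(1)
    M_true, eff_true = 2.0, 0.15

    def model_mean(M, eff):
        # Deliberately simplistic stand-in for a fission-chain count model:
        # a Poisson mean proportional to M * eff. Replace with the real model.
        return 50.0 * M * eff

    counts = rng.poisson(model_mean(M_true, eff_true), size=200)

    def chi2(M, eff):
        mu = model_mean(M, eff)
        return 2.0 * np.sum(mu - counts * np.log(mu))   # Poisson -2 ln L + const

    Ms = np.linspace(1.5, 2.5, 101)
    effs = np.linspace(0.10, 0.20, 101)
    dchi2 = np.array([[chi2(M, e) for e in effs] for M in Ms])
    dchi2 -= dchi2.min()
    inside = dchi2 <= 2.30        # 68 % joint region for two parameters
    print("M span:", Ms[inside.any(axis=1)].min(), "-", Ms[inside.any(axis=1)].max())
    print("eff span:", effs[inside.any(axis=0)].min(), "-", effs[inside.any(axis=0)].max())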
Qi, Haiping; Lorenz, Jennifer M.; Coplen, Tyler B.; Tarbox, Lauren V.; Mayer, Bernhard; Taylor, Steve
2014-01-01
RESULTS: The δ2H and δ18O values of this reference water are –150.2 ± 0.5 ‰ and –19.80 ± 0.02 ‰, respectively, relative to VSMOW on scales normalized such that the δ2H and δ18O values of SLAP reference water are, respectively, –428 and –55.5 ‰. Each uncertainty is an estimated expanded uncertainty (U = 2uc) about the reference value that provides an interval that has about a 95-percent probability of encompassing the true value. CONCLUSION: This isotopic reference material, designated as USGS47, is intended as one of two isotopic reference waters for daily normalization of stable hydrogen and stable oxygen isotopic analysis of water with a mass spectrometer or a laser absorption spectrometer.
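Daily normalization with two reference waters amounts to a two-point linear calibration: the measured δ values of the references are mapped onto their assigned VSMOW-SLAP values, and unknowns are interpolated on that line. A minimal sketch, with invented raw readings:

    # Assigned values on the VSMOW-SLAP scale (per mil)
    d2h_vsmow, d2h_slap = 0.0, -428.0

    # Hypothetical raw instrument readings for the two reference waters
    meas_vsmow, meas_slap = 1.8, -421.5

    # Two-point linear normalization: delta_norm = a * delta_meas + b
    a = (d2h_slap - d2h_vsmow) / (meas_slap - meas_vsmow)
    b = d2h_vsmow - a * meas_vsmow

    meas_sample = -146.9                     # raw reading of an unknown water
    print(f"normalized delta2H: {a * meas_sample + b:.1f} per mil")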
USDA-ARS's Scientific Manuscript database
Quantification of regional greenhouse gas (GHG) fluxes is essential for establishing mitigation strategies and evaluating their effectiveness. Here, we used multiple top-down approaches and multiple trace gas observations at a tall tower to estimate GHG regional fluxes and evaluate the GHG fluxes de...
Neural Mechanisms of Updating under Reducible and Irreducible Uncertainty.
Kobayashi, Kenji; Hsu, Ming
2017-07-19
Adaptive decision making depends on an agent's ability to use environmental signals to reduce uncertainty. However, because of multiple types of uncertainty, agents must take into account not only the extent to which signals violate prior expectations but also whether uncertainty can be reduced in the first place. Here we studied how human brains of both sexes respond to signals under conditions of reducible and irreducible uncertainty. We show behaviorally that subjects' value updating was sensitive to the reducibility of uncertainty, and could be quantitatively characterized by a Bayesian model where agents ignore expectancy violations that do not update beliefs or values. Using fMRI, we found that neural processes underlying belief and value updating were separable from responses to expectancy violation, and that reducibility of uncertainty in value modulated connections from belief-updating regions to value-updating regions. Together, these results provide insights into how agents use knowledge about uncertainty to make better decisions while ignoring mere expectancy violation. SIGNIFICANCE STATEMENT To make good decisions, a person must observe the environment carefully, and use these observations to reduce uncertainty about consequences of actions. Importantly, uncertainty should not be reduced purely based on how surprising the observations are, particularly because in some cases uncertainty is not reducible. Here we show that the human brain indeed reduces uncertainty adaptively by taking into account the nature of uncertainty and ignoring mere surprise. Behaviorally, we show that human subjects reduce uncertainty in a quasioptimal Bayesian manner. Using fMRI, we characterize brain regions that may be involved in uncertainty reduction, as well as the network they constitute, and dissociate them from brain regions that respond to mere surprise. Copyright © 2017 the authors 0270-6474/17/376972-11$15.00/0.
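The behavioral model described above can be reduced to a toy Bayesian update in which a cue moves beliefs only when it is diagnostic of the hidden state; a cue that is equally improbable under all states may be surprising but is uninformative, so the posterior stays put. A minimal sketch with invented likelihoods:

    import numpy as np

    prior = np.array([0.5, 0.5])        # belief over two hidden reward states

    def update(prior, lik):
        post = lik * prior
        return post / post.sum()

    # Reducible case: the cue is diagnostic, so beliefs (and values) move.
    lik_diagnostic = np.array([0.8, 0.2])     # P(cue | state)
    print("reducible:  ", update(prior, lik_diagnostic))

    # Irreducible case: the cue is equally likely under both states. It may be
    # surprising, but it is uninformative, so the posterior does not move.
    lik_uninformative = np.array([0.1, 0.1])
    print("irreducible:", update(prior, lik_uninformative))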
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
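The covariance that a Kalman-type estimator carries alongside its estimate is what feeds a mean-error-plus-bound uncertainty model of this kind. The minimal linear example below estimates a single constant parameter from noisy measurements; it is a stand-in for, not a reproduction of, the Discrete Extended Kalman Filter applied to structural parameters above.

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true, r = 3.0, 0.5 ** 2          # parameter and measurement variance

    x, p = 0.0, 10.0                       # initial estimate and variance
    for _ in range(50):
        z = theta_true + rng.normal(0.0, 0.5)
        k = p / (p + r)                    # Kalman gain (static parameter, no
        x = x + k * (z - x)                # process noise, H = 1)
        p = (1.0 - k) * p
    print(f"estimate {x:.3f}, 2-sigma bound ±{2 * np.sqrt(p):.3f}")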
Generating Vegetation Leaf Area Index Earth System Data Record from Multiple Sensors. Part 1: Theory
NASA Technical Reports Server (NTRS)
Ganguly, Sangram; Schull, Mitchell A.; Samanta, Arindam; Shabanov, Nikolay V.; Milesi, Cristina; Nemani, Ramakrishna R.; Knyazikhin, Yuri; Myneni, Ranga B.
2008-01-01
The generation of multi-decade long Earth System Data Records (ESDRs) of Leaf Area Index (LAI) and Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) from remote sensing measurements of multiple sensors is key to monitoring long-term changes in vegetation due to natural and anthropogenic influences. Challenges in developing such ESDRs include problems in remote sensing science (modeling of variability in global vegetation, scaling, atmospheric correction) and sensor hardware (differences in spatial resolution, spectral bands, calibration, and information content). In this paper, we develop a physically based approach for deriving LAI and FPAR products from the Advanced Very High Resolution Radiometer (AVHRR) data that are of comparable quality to the Moderate Resolution Imaging Spectroradiometer (MODIS) LAI and FPAR products, thus realizing the objective of producing a long (multi-decadal) time series of these products. The approach is based on the radiative transfer theory of canopy spectral invariants, which facilitates parameterization of the canopy spectral bidirectional reflectance factor (BRF). The methodology permits decoupling of the structural and radiometric components and obeys the energy conservation law. The approach is applicable to any optical sensor; however, it requires selection of sensor-specific values of configurable parameters, namely, the single scattering albedo and data uncertainty. According to the theory of spectral invariants, the single scattering albedo is a function of the spatial scale, and thus accounts for the variation in BRF with sensor spatial resolution. Likewise, the single scattering albedo accounts for the variation in spectral BRF with sensor bandwidths. The second adjustable parameter is data uncertainty, which accounts for the varying information content of the remote sensing measurements, i.e., Normalized Difference Vegetation Index (NDVI, low information content) vs. spectral BRF (higher information content). Implementation of this approach indicates good consistency in LAI values retrieved from NDVI (AVHRR-mode) and spectral BRF (MODIS-mode). Specific details of the implementation and evaluation of the derived products are detailed in the second part of this two-paper series.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis
2017-08-01
The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.
Reducing uncertainties for short lived cumulative fission product yields
Stave, Sean; Prinke, Amanda; Greenwood, Larry; ...
2015-09-05
Uncertainties associated with short-lived (half-lives less than 1 day) fission product yields listed in databases such as the National Nuclear Data Center’s ENDF/B-VII are large enough for certain isotopes to provide an opportunity for new precision measurements to offer significant uncertainty reductions. A series of experiments has begun where small samples of 235U are irradiated with a pulsed, fission neutron spectrum at the Nevada National Security Site and placed between two broad-energy germanium detectors. The amount of various isotopes present immediately following the irradiation can be determined given the total counts and the calibrated properties of the detector system. The uncertainty on the fission yields for multiple isotopes has been reduced by nearly an order of magnitude.
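The improvement ultimately rests on counting statistics plus calibrated detector properties. A first-order sketch of how the number of decays behind a gamma line, and its relative uncertainty, follow from net peak counts, peak efficiency, and emission probability (all values hypothetical):

    import numpy as np

    C = 40_000.0            # net peak counts
    eff = 0.035             # absolute full-energy peak efficiency
    I_gamma = 0.45          # gamma emission probability per decay

    u_C = np.sqrt(C) / C            # relative counting uncertainty
    u_eff, u_I = 0.02, 0.01         # relative calibration uncertainties

    decays = C / (eff * I_gamma)    # decays during the counting window
    u_rel = np.sqrt(u_C**2 + u_eff**2 + u_I**2)
    print(f"decays = {decays:.3e} ± {100 * u_rel:.1f} % (relative, k=1)")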
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matney, J; Lian, J; Chera, B
2015-06-15
Introduction: Geometric uncertainties in daily patient setup can lead to variations in the planned dose, especially when using highly conformal techniques such as helical Tomotherapy. To account for the potential effect of geometric uncertainty, our clinical practice is to expand critical structures by 3 mm into planning risk volumes (PRVs). The PRV concept assumes the spatial dose cloud is insensitive to patient positioning. However, no tools currently exist to determine if a Tomotherapy plan is robust to the effects of daily setup variation. We objectively quantified the impact of geometric uncertainties on the 3D doses to critical normal tissues during helical Tomotherapy. Methods: Using a Matlab-based program created and validated by Accuray (Madison, WI), the planned Tomotherapy delivery sinogram was used to recalculate dose on shifted CT datasets. Ten head and neck patients were selected for analysis. To simulate setup uncertainty, the patient anatomy was shifted ±3 mm in the longitudinal, lateral and vertical axes. For each potential shift, the recalculated doses to various critical normal tissues were compared to the doses delivered to the PRV in the original plan. Results: Eighteen shifted scenarios created from Tomotherapy plans for three patients with head and neck cancers were analyzed. For all simulated setup errors, the maximum doses to the brainstem, spinal cord, parotids and cochlea were within 0.6 Gy of the respective original PRV maximum. Despite 3 mm setup shifts, the minimum dose delivered to 95% of the CTVs and PTVs was always within 0.4 Gy of the original plan. Conclusions: For head and neck sites treated with Tomotherapy, the use of a 3 mm PRV expansion provides a reasonable estimate of the dosimetric effects of 3 mm setup uncertainties. Similarly, target coverage appears minimally affected by a 3 mm setup uncertainty. Data from a larger number of patients will be presented. Future work will include other anatomical sites.
Modeling extreme hurricane damage in the United States using generalized Pareto distribution
NASA Astrophysics Data System (ADS)
Dey, Asim Kumer
Extreme value distributions are used to understand and model natural calamities, man-made catastrophes and financial collapses. Extreme value theory has been developed to study the frequency of such events and to construct a predictive model so that one can attempt to forecast the frequency of a disaster and the amount of damage from such a disaster. In this study, hurricane damages in the United States from 1900 to 2012 have been studied. The aim of the paper is three-fold. First, normalizing hurricane damage and fitting an appropriate model to the normalized damage data. Secondly, predicting the maximum economic damage from a hurricane in the future by using the concept of return period. Finally, quantifying the uncertainty in the inference of extreme return levels of hurricane losses by using a simulated hurricane series, generated by bootstrap sampling. Normalized hurricane damage data are found to follow a generalized Pareto distribution. It is demonstrated that the standard deviation and coefficient of variation increase with the return period, which indicates an increase in uncertainty with model extrapolation.
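The workflow can be sketched directly: fit a generalized Pareto distribution to threshold exceedances, compute a return level, and bootstrap it to quantify the growth of uncertainty with return period. The example below uses synthetic exceedances in place of the normalized damage record; the threshold, rate, and return-level formula follow the standard peaks-over-threshold setup.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Synthetic exceedances above threshold u, standing in for normalized damages
    u, n_years = 1.0, 113
    exceed = rng.pareto(1.5, size=200) + 1.0          # all values are >= u

    def return_level(exc, T, u, n_years):
        c, loc, scale = stats.genpareto.fit(exc - u, floc=0.0)
        zeta = len(exc) / n_years                     # mean exceedances per year
        return u + (scale / c) * ((T * zeta) ** c - 1.0)

    rl100 = return_level(exceed, 100, u, n_years)
    boot = [return_level(rng.choice(exceed, len(exceed)), 100, u, n_years)
            for _ in range(200)]
    print(f"100-year level {rl100:.1f}; bootstrap sd {np.std(boot):.1f}, "
          f"CV {np.std(boot) / np.mean(boot):.2f}")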
NASA Technical Reports Server (NTRS)
Miller, C. G., III; Wilder, S. E.
1972-01-01
Data-reduction procedures for determining free stream and post-normal shock kinetic and thermodynamic quantities are derived. These procedures are applicable to imperfect real air flows in thermochemical equilibrium for temperatures to 15 000 K and a range of pressures from 0.25 N/sq m to 1 GN/sq m. Although derived primarily to meet the immediate needs of the 6-inch expansion tube, these procedures are applicable to any supersonic or hypersonic test facility where combinations of three of the following flow parameters are measured in the test section: (1) Stagnation pressure behind normal shock; (2) freestream static pressure; (3) stagnation point heat transfer rate; (4) free stream velocity; (5) stagnation density behind normal shock; and (6) free stream density. Limitations of the nine procedures and uncertainties in calculated flow quantities corresponding to uncertainties in measured input data are discussed. A listing of the computer program is presented, along with a description of the inputs required and a sample of the data printout.
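The equilibrium-air procedures iterate real-gas thermodynamics, but their skeleton is the set of normal-shock relations. A perfect-gas (gamma = 1.4) sketch of those relations, not the 15 000 K equilibrium-air routines of the report:

    def normal_shock(M1, gamma=1.4):
        """Perfect-gas Rankine-Hugoniot ratios across a normal shock."""
        p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
        rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
        T_ratio = p_ratio / rho_ratio
        M2 = (((gamma - 1.0) * M1**2 + 2.0)
              / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** 0.5
        return p_ratio, rho_ratio, T_ratio, M2

    print(normal_shock(6.0))   # e.g. a Mach 6 free stream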
NASA Astrophysics Data System (ADS)
Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri
2014-05-01
Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often used approach to estimate the hydrological predictive uncertainty, where a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates using the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill in various, differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation - NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with different catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include Brier's probability score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores, as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
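In this post-processing setting, quantile regression fits, for each quantile tau, a linear relation between the observation and the deterministic forecast, turning a single-valued forecast into a predictive distribution. A compact sketch of the 'classical' configuration on synthetic data (the non-crossing, NQT, and piecewise linear variants are not shown):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    forecast = rng.uniform(0.5, 5.0, 500)                        # deterministic fc
    observed = forecast + rng.normal(0.0, 0.2 + 0.1 * forecast)  # heteroscedastic

    X = sm.add_constant(forecast)
    for tau in (0.05, 0.5, 0.95):
        res = sm.QuantReg(observed, X).fit(q=tau)
        a, b = res.params
        print(f"tau={tau:.2f}: obs ~ {a:.2f} + {b:.2f} * forecast")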
Optimal radiotherapy dose schedules under parametric uncertainty
NASA Astrophysics Data System (ADS)
Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin
2016-01-01
We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
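A chance-constrained schedule of this kind can be checked by Monte Carlo: sample the uncertain radiosensitivity and sparing parameters and estimate how often a candidate schedule keeps the organ-at-risk biologically effective dose below threshold. A toy feasibility check under the linear-quadratic model, with invented parameter distributions rather than the paper's transformed two-dimensional program:

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 30, 2.0                        # fractions and dose per fraction (Gy)

    # Uncertain inter-patient parameters (hypothetical distributions)
    ab_oar = rng.normal(3.0, 0.5, 10_000)      # OAR alpha/beta ratio (Gy)
    sparing = rng.normal(0.7, 0.05, 10_000)    # OAR sparing factor

    # OAR biologically effective dose under the linear-quadratic model
    bed_oar = n * sparing * d * (1.0 + sparing * d / ab_oar)
    threshold = 80.0                           # allowable OAR BED (Gy)
    print("P(constraint satisfied) =", np.mean(bed_oar <= threshold))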
Uncertainty in quantum mechanics: faith or fantasy?
Penrose, Roger
2011-12-13
The word 'uncertainty', in the context of quantum mechanics, usually evokes an impression of an essential unknowability of what might actually be going on at the quantum level of activity, as is made explicit in Heisenberg's uncertainty principle, and in the fact that the theory normally provides only probabilities for the results of quantum measurement. These issues limit our ultimate understanding of the behaviour of things, if we take quantum mechanics to represent an absolute truth. But they do not cause us to put that very 'truth' into question. This article addresses the issue of quantum 'uncertainty' from a different perspective, raising the question of whether this term might be applied to the theory itself, despite its unrefuted huge success over an enormously diverse range of observed phenomena. There are, indeed, seeming internal contradictions in the theory that lead us to infer that a total faith in it at all levels of scale leads us to almost fantastical implications.
First Observation of a Baryonic Bs0 Decay
NASA Astrophysics Data System (ADS)
Aaij, R.; Adeva, B.; Adinolfi, M.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Archilli, F.; d'Argent, P.; Arnau Romeu, J.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Babuschkin, I.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baker, S.; Balagura, V.; Baldini, W.; Baranov, A.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Baryshnikov, F.; Baszczyk, M.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Beiter, A.; Bel, L. J.; Bellee, V.; Belloli, N.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Beranek, S.; Berezhnoy, A.; Bernet, R.; Bertolin, A.; Betancourt, C.; Betti, F.; Bettler, M.-O.; van Beuzekom, M.; Bezshyiko, Ia.; Bifani, S.; Billoir, P.; Birnkraut, A.; Bitadze, A.; Bizzeti, A.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Boettcher, T.; Bondar, A.; Bondar, N.; Bonivento, W.; Bordyuzhin, I.; Borgheresi, A.; Borghi, S.; Borisyak, M.; Borsato, M.; Bossu, F.; Boubdir, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britton, T.; Brodzicka, J.; Buchanan, E.; Burr, C.; Bursche, A.; Buytaert, J.; Cadeddu, S.; Calabrese, R.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D. H.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cavallero, G.; Cenci, R.; Chamont, D.; Charles, M.; Charpentier, Ph.; Chatzikonstantinidis, G.; Chefdeville, M.; Chen, S.; Cheung, S. F.; Chobanova, V.; Chrzaszcz, M.; Chubykin, A.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombs, G.; Coquereau, S.; Corti, G.; Corvo, M.; Costa Sobral, C. M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; Da Cunha Marinho, F.; Dall'Occo, E.; Dalseno, J.; Davis, A.; De Aguiar Francisco, O.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Serio, M.; De Simone, P.; Dean, C. T.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Dembinski, H.-P.; Demmer, M.; Dendek, A.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Nezza, P.; Dijkstra, H.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Dungs, K.; Durante, P.; Dzhelyadin, R.; Dziewiecki, M.; Dziurda, A.; Dzyuba, A.; Déléage, N.; Easo, S.; Ebert, M.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Farley, N.; Farry, S.; Fay, R.; Fazzini, D.; Ferguson, D.; Fernandez, G.; Fernandez Prieto, A.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fini, R. A.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fleuret, F.; Fohl, K.; Fontana, M.; Fontanelli, F.; Forshaw, D. C.; Forty, R.; Franco Lima, V.; Frank, M.; Frei, C.; Fu, J.; Funk, W.; Furfaro, E.; Färber, C.; Gabriel, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Martin, L. M.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Garsed, P. 
J.; Gascon, D.; Gaspar, C.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianı, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gizdov, K.; Gligorov, V. V.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gorelov, I. V.; Gotti, C.; Govorkova, E.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greim, R.; Griffith, P.; Grillo, L.; Gruber, L.; Gruberg Cazon, B. R.; Grünberg, O.; Gushchin, E.; Guz, Yu.; Gys, T.; Göbel, C.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hamilton, B.; Han, X.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hatch, M.; He, J.; Head, T.; Heister, A.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hombach, C.; Hopchev, P. H.; Huard, Z.-C.; Hulsbergen, W.; Humair, T.; Hushchyn, M.; Hutchcroft, D.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jalocha, J.; Jans, E.; Jawahery, A.; Jiang, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Karacson, M.; Kariuki, J. M.; Karodia, S.; Kecke, M.; Kelsey, M.; Kenzie, M.; Ketel, T.; Khairullin, E.; Khanji, B.; Khurewathanakul, C.; Kirn, T.; Klaver, S.; Klimaszewski, K.; Klimkovich, T.; Koliiev, S.; Kolpin, M.; Komarov, I.; Kopecna, R.; Koppenburg, P.; Kosmyntseva, A.; Kotriakhova, S.; Kozeiha, M.; Kravchuk, L.; Kreps, M.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Leflat, A.; Lefrançois, J.; Lefèvre, R.; Lemaitre, F.; Lemos Cid, E.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, T.; Li, Y.; Li, Z.; Likhomanenko, T.; Lindner, R.; Lionetto, F.; Liu, X.; Loh, D.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Lyu, X.; Machefert, F.; Maciuc, F.; Maddock, B.; Maev, O.; Maguire, K.; Malde, S.; Malinin, A.; Maltsev, T.; Manca, G.; Mancinelli, G.; Manning, P.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marinangeli, M.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massacrier, L. M.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurice, E.; Maurin, B.; Mazurov, A.; McCann, M.; McNab, A.; McNulty, R.; Meadows, B.; Meier, F.; Melnychuk, D.; Merk, M.; Merli, A.; Michielin, E.; Milanes, D. A.; Minard, M.-N.; Mitzel, D. S.; Mogini, A.; Molina Rodriguez, J.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morello, M. J.; Morgunova, O.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Mulder, M.; Mussini, M.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, T. D.; Nguyen-Mau, C.; Nieswand, S.; Niet, R.; Nikitin, N.; Nikodem, T.; Nogay, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Oldeman, R.; Onderwater, C. J. G.; Ossowska, A.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pais, P. R.; Palano, A.; Palutan, M.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. 
L.; Pappenheimer, C.; Parker, W.; Parkes, C.; Passaleva, G.; Pastore, A.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petrov, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pikies, M.; Pinci, D.; Pistone, A.; Piucci, A.; Placinta, V.; Playfer, S.; Plo Casasus, M.; Poikela, T.; Polci, F.; Poli Lener, M.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Pomery, G. J.; Ponce, S.; Popov, A.; Popov, D.; Popovici, B.; Poslavskii, S.; Potterat, C.; Price, E.; Prisciandaro, J.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Punzi, G.; Qian, C.; Qian, W.; Quagliani, R.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Ramos Pernas, M.; Rangel, M. S.; Raniuk, I.; Ratnikov, F.; Raven, G.; Ravonel Salzgeber, M.; Reboud, M.; Redi, F.; Reichert, S.; dos Reis, A. C.; Remon Alepuz, C.; Renaudin, V.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Rodriguez Perez, P.; Rogozhnikov, A.; Roiser, S.; Rollings, A.; Romanovskiy, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Rudolph, M. S.; Ruf, T.; Ruiz Valls, P.; Saborido Silva, J. J.; Sadykhov, E.; Sagidova, N.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Gonzalo, D.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schael, S.; Schellenberg, M.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schreiner, H. F.; Schubert, K.; Schubiger, M.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Sergi, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Simone, S.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Soares Lavra, l.; Sokoloff, M. D.; Soler, F. J. P.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefko, P.; Stefkova, S.; Steinkamp, O.; Stemmle, S.; Stenyakin, O.; Stevens, H.; Stoica, S.; Stone, S.; Storaci, B.; Stracka, S.; Stramaglia, M. E.; Straticiuc, M.; Straumann, U.; Sun, L.; Sutcliffe, W.; Swientek, K.; Syropoulos, V.; Szczekowski, M.; Szumlak, T.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Tellarini, G.; Teubert, F.; Thomas, E.; van Tilburg, J.; Tilley, M. J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Topp-Joergensen, S.; Toriello, F.; Tourinho Jadallah Aoude, R.; Tournefier, E.; Tourneur, S.; Trabelsi, K.; Traill, M.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tully, A.; Tuning, N.; Ukleja, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagner, A.; Vagnoni, V.; Valassi, A.; Valat, S.; Valenti, G.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vecchi, S.; van Veghel, M.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Venkateswaran, A.; Verlage, T. A.; Vernet, M.; Vesterinen, M.; Viana Barbosa, J. V.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Viemann, H.; Vilasis-Cardona, X.; Vitti, M.; Volkov, V.; Vollhardt, A.; Voneki, B.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Vázquez Sierra, C.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wang, J.; Ward, D. R.; Wark, H. M.; Watson, N. 
K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wicht, J.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Winn, M. A.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wraight, K.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yang, Z.; Yao, Y.; Yin, H.; Yu, J.; Yuan, X.; Yushchenko, O.; Zarebski, K. A.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zheng, Y.; Zhu, X.; Zhukov, V.; Zonneveld, J. B.; Zucchelli, S.; LHCb Collaboration
2017-07-01
We report the first observation of a baryonic Bs0 decay, Bs0 → pΛ̄K⁻, using proton-proton collision data recorded by the LHCb experiment at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3.0 fb⁻¹. The branching fraction is measured to be B(Bs0 → pΛ̄K⁻) + B(Bs0 → p̄ΛK⁺) = [5.46 ± 0.61 ± 0.57 ± 0.50 (B) ± 0.32 (fs/fd)] × 10⁻⁶, where the first uncertainty is statistical and the second systematic, the third uncertainty accounts for the experimental uncertainty on the branching fraction of the B0 → pΛ̄π⁻ decay used for normalization, and the fourth uncertainty relates to the knowledge of the ratio of b-quark hadronization probabilities fs/fd.
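Assuming the four quoted uncertainty components are independent (the analysis itself may treat correlations more carefully), the total uncertainty on the branching fraction follows by adding them in quadrature:

    # Quoted uncertainty components on the branching fraction (units of 10^-6)
    components = [0.61, 0.57, 0.50, 0.32]    # stat, syst, normalization, fs/fd
    total = sum(u ** 2 for u in components) ** 0.5
    print(f"combined uncertainty ~ {total:.2f} on a central value of 5.46")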
Atlas of the spectrum of a platinum/neon hollow-cathode reference lamp in the region 1130-4330 Å
NASA Technical Reports Server (NTRS)
Sansonetti, Jean E.; Reader, Joseph; Sansonetti, Craig J.; Acquista, Nicolo
1992-01-01
The spectrum of a platinum hollow-cathode lamp containing neon carrier gas was recorded photographically and photoelectrically with a 10.7 m normal-incidence vacuum spectrograph. Wavelengths and intensities were determined for about 5600 lines in the region 1130-4330 Å. An atlas of the spectrum is given, with the spectral lines marked and their intensities, wavelengths, and classifications listed. Lines of impurity species are also identified. The uncertainty of the photographically measured wavelengths is estimated to be +/- 0.0020 Å. The uncertainty of lines measured in the photoelectric scans is 0.01 Å for wavelengths shorter than 2030 Å and 0.02 Å for longer wavelengths. Ritz-type wavelengths are given for many of the classified lines of Pt II with uncertainties varying from +/- 0.0004 to +/- 0.0025 Å. The uncertainty of the relative intensities is estimated to be about 20 percent.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
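Latin hypercube sampling of normally distributed parameter uncertainties can be sketched with SciPy's quasi-Monte Carlo module (version 1.8 or later): stratified uniforms are pushed through the normal inverse CDF. The vehicle parameters and their distributions below are invented for illustration.

    import numpy as np
    from scipy.stats import qmc, norm

    sampler = qmc.LatinHypercube(d=2, optimization="random-cd", seed=1)
    u = sampler.random(n=64)               # stratified uniforms in [0, 1)^2

    # Hypothetical uncertain vehicle parameters: mass (kg), cornering stiffness
    mu = np.array([1500.0, 80_000.0])
    sd = np.array([50.0, 4_000.0])
    samples = norm.ppf(u, loc=mu, scale=sd)
    print(samples.mean(axis=0), samples.std(axis=0))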
Enhanced disease characterization through multi network functional normalization in fMRI.
Çetin, Mustafa S; Khullar, Siddharth; Damaraju, Eswar; Michael, Andrew M; Baum, Stefi A; Calhoun, Vince D
2015-01-01
Conventionally, structural topology is used for spatial normalization during the pre-processing of fMRI. The co-existence of multiple intrinsic networks that can be detected in the resting brain is well studied. These networks also exhibit temporal and spatial modulation during cognitive task vs. rest, which shows the existence of common spatial excitation patterns between these identified networks. Previous work (Khullar et al., 2011) has shown that structural and functional data may not have direct one-to-one correspondence and that functional activation patterns in a well-defined structural region can vary across subjects even for a well-defined functional task. The results of this study and the existence of neural activity patterns in multiple networks motivate us to investigate multiple resting-state networks as a single fusion template for functional normalization across multiple groups of subjects. We extend the previous approach (Khullar et al., 2011) by co-registering multiple groups of subjects (healthy controls and schizophrenia patients) and by utilizing multiple resting-state networks (instead of just one) as a single fusion template for functional normalization. In this paper we describe the initial steps toward using multiple resting-state networks as a single fusion template for functional normalization. A simple wavelet-based image fusion approach is presented in order to evaluate the feasibility of combining multiple functional networks. Our results showed improvements in both the significance of group statistics (healthy controls and schizophrenia patients) and the spatial extent of activation when multiple resting-state networks were applied as a single fusion template for functional normalization after conventional structural normalization. Our results also provided evidence that the improvement in the significance of group statistics leads to better accuracy in classifying healthy controls and schizophrenia patients.
NASA Astrophysics Data System (ADS)
Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank
2016-05-01
Concentrating solar power projects require accurate direct normal irradiance (DNI) data, including uncertainty specifications, for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated based on experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode-pyranometers only detect visible and color infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain was selected for the analysis. Average standard uncertainties for corrected 10 min data of 2 % for global horizontal irradiance (GHI), and 2.9 % for DNI (for GHI and DNI over 300 W/m²) were found for the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty of 1 min resolution data was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with results of previous empirical studies.
NASA Astrophysics Data System (ADS)
Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.
2016-11-01
Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make including model and criteria selection can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
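The analysis-of-variance decomposition used above splits the total variance of simulated flows into forcing, parameter-identification, and interaction terms. A compact two-factor sketch on a synthetic ensemble (the factor structure and magnitudes are invented):

    import numpy as np

    rng = np.random.default_rng(5)
    n_rain, n_par = 20, 15
    # Synthetic simulated flows indexed by rainfall realization i, parameter set j
    a = rng.normal(0.0, 1.0, (n_rain, 1))        # forcing effect
    b = rng.normal(0.0, 0.5, (1, n_par))         # parameter effect
    q = 10.0 + a + b + rng.normal(0.0, 0.3, (n_rain, n_par))  # + interaction/noise

    grand = q.mean()
    ss_rain = n_par * ((q.mean(axis=1) - grand) ** 2).sum()
    ss_par = n_rain * ((q.mean(axis=0) - grand) ** 2).sum()
    ss_tot = ((q - grand) ** 2).sum()
    ss_int = ss_tot - ss_rain - ss_par           # interaction plus residual noise
    for name, ss in [("forcing", ss_rain), ("parameters", ss_par),
                     ("interaction+noise", ss_int)]:
        print(f"{name}: {100 * ss / ss_tot:.1f} % of total variance")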
NASA Astrophysics Data System (ADS)
Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea
2017-11-01
Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farah, J., E-mail: jad.farah@irsn.fr; Clairand, I.; Huet, C.
2015-07-15
Purpose: To investigate the optimal use of XR-RV3 GafChromic® films to assess patient skin dose in interventional radiology while addressing the means to reduce uncertainties in dose assessment. Methods: XR-Type R GafChromic films have been shown to represent the most efficient and suitable solution to determine patient skin dose in interventional procedures. As film dosimetry can be associated with high uncertainty, this paper presents the EURADOS WG 12 initiative to carry out a comprehensive study of film characteristics with a multisite approach. The considered sources of uncertainties include scanner, film, and fitting-related errors. The work focused on studying film behavior with clinical high-dose-rate pulsed beams (previously unavailable in the literature) together with reference standard laboratory beams. Results: First, the performance analysis of six different scanner models has shown that scan uniformity perpendicular to the lamp motion axis and long-term stability are the main sources of scanner-related uncertainties. These could induce errors of up to 7% on the film readings unless regularly checked and corrected. Typically, scan uniformity correction matrices and reading normalization to the scanner-specific and daily background reading should be done. In addition, the analysis on multiple film batches has shown that XR-RV3 films have generally good uniformity within one batch (<1.5%), require 24 h to stabilize after the irradiation and their response is roughly independent of dose rate (<5%). However, XR-RV3 films showed large variations (up to 15%) with radiation quality both in standard laboratory and in clinical conditions. As such, and prior to conducting patient skin dose measurements, it is mandatory to choose the appropriate calibration beam quality depending on the characteristics of the x-ray systems that will be used clinically. In addition, yellow-side film irradiations should be preferentially used since they showed a lower dependence on beam parameters compared to white-side film irradiations. Finally, among the six different fit equations tested in this work, typically used third-order polynomials and more rational and simplistic equations, of the form dose inversely proportional to pixel value, were both found to provide satisfactory results. Fitting-related uncertainty was clearly identified as a major contributor to the overall film dosimetry uncertainty with up to 40% error on the dose estimate. Conclusions: The overall uncertainty associated with the use of XR-RV3 films to determine skin dose in the interventional environment can realistically be estimated to be around 20% (k = 1). This uncertainty can be reduced to within 5% if carefully monitoring scanner, film, and fitting-related errors or it can easily increase to over 40% if minimal care is not taken. This work demonstrates the importance of appropriate calibration, reading, fitting, and other film-related and scan-related processes, which will help improve the accuracy of skin dose measurements in interventional procedures.
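The two fit families named in the abstract can be sketched as follows. This is illustrative only, not the EURADOS fitting code: a third-order polynomial and a rational "dose inversely proportional to pixel value" form are fitted to invented calibration points.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented calibration data: scanner pixel value vs. delivered dose (Gy)
pv = np.array([48000., 42000., 36500., 31000., 26500., 23000., 20500.])
dose = np.array([0.05, 0.25, 0.60, 1.20, 2.20, 3.60, 5.00])

x = pv / 1e4                       # rescale pixel values for conditioning
poly = np.polyfit(x, dose, 3)      # third-order polynomial fit

def rational(pv, a, b, c):
    return a + b / (pv - c)        # dose ~ 1/(pixel value) form

p_rat, _ = curve_fit(rational, pv, dose, p0=[-1.0, 5e4, 1e4])

for name, pred in [("cubic", np.polyval(poly, x)),
                   ("rational", rational(pv, *p_rat))]:
    rms = np.sqrt(np.mean((dose - pred) ** 2))
    print(f"{name:8s} RMS residual: {rms:.3f} Gy")
```

Comparing residuals across candidate fit equations is one simple way to expose the fitting-related uncertainty the abstract identifies as a major error contributor.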
Cancer Risk Assessment for Space Radiation
NASA Technical Reports Server (NTRS)
Richmond, Robert C.; Cruz, Angela; Bors, Karen; Curreri, Peter A. (Technical Monitor)
2001-01-01
Predicting the occurrence of human cancer following exposure to any agent causing genetic damage is a difficult task. This is because the uncertainty of uniform exposure to the damaging agent, and the uncertainty of uniform processing of that damage within a complex set of biological variables, degrade the confidence of predicting the delayed expression of cancer as a relatively rare event within any given clinically normal individual. The radiation health research priorities for enabling long-duration human exploration of space were established in the 1996 NRC Report entitled 'Radiation Hazards to Crews of Interplanetary Missions: Biological Issues and Research Strategies'. This report emphasized that a 15-fold uncertainty in predicting radiation-induced cancer incidence must be reduced before NASA can commit humans to extended interplanetary missions. That report concluded that the great majority of this uncertainty is biologically based, while a minority is physically based due to uncertainties in radiation dosimetry and radiation transport codes. Since that report, the biologically based uncertainty has remained large, and the relatively small uncertainty associated with radiation dosimetry has increased due to the considerations raised by concepts of microdosimetry. In a practical sense, however, the additional uncertainties introduced by microdosimetry are encouraging since they are in a direction of lowered effective dose absorbed through infrequent interactions of any given cell with the high energy particle component of space radiation. Additional information is contained in the original extended abstract.
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
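A single linearized Bayesian update of the kind described can be sketched in a few lines. This is the generic Gauss-Markov form with priors on all unknowns, not the authors' implementation; the Jacobian, covariances, and data below are toy stand-ins for the arrival-time quantities.

```python
import numpy as np

# Minimal sketch of one linearized Bayesian update: parameters m (say source
# x, y, z and emission time) are constrained jointly by arrival-time data d
# and by prior estimates m0 with prior covariance Cm.
def linearized_bayes(m0, Cm, d, Cd, g_m0, G):
    Cd_inv, Cm_inv = np.linalg.inv(Cd), np.linalg.inv(Cm)
    Cpost = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)   # posterior covariance
    m_hat = m0 + Cpost @ (G.T @ Cd_inv @ (d - g_m0))   # MAP estimate
    return m_hat, Cpost

rng = np.random.default_rng(1)
m0 = np.zeros(4)                              # prior means: x, y, z, t0
Cm = np.diag([100.0, 100.0, 25.0, 1.0])       # prior uncertainties
G = rng.normal(size=(8, 4))                   # stand-in arrival-time Jacobian
m_true = np.array([3.0, -2.0, 1.0, 0.5])
Cd = 0.01 * np.eye(8)                         # arrival-time uncertainties
d = G @ m_true + rng.normal(scale=0.1, size=8)
m_hat, Cpost = linearized_bayes(m0, Cm, d, Cd, g_m0=np.zeros(8), G=G)
print("MAP estimate :", m_hat.round(2))
print("posterior std:", np.sqrt(np.diag(Cpost)).round(3))
```

In the paper the parameter vector also carries receiver and environmental unknowns; the same algebra applies, only with a larger m and Jacobian.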
NASA Technical Reports Server (NTRS)
Mueller, James L.
2001-01-01
This Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) contract supports acquisition of match-up radiometric and bio-optical data for validation of Sea-Viewing Wide Field-of-view Sensor (SeaWiFS) and other ocean color satellites, and evaluation of uncertainty budgets and protocols for in situ measurements of normalized water-leaving radiances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Chantell Lynne-Marie
Traditional nuclear materials accounting does not work well for safeguards when applied to pyroprocessing. Alternate methods such as Signature Based Safeguards (SBS) are being investigated. The goal of SBS is real-time/near-real-time detection of anomalous events in the pyroprocessing facility as they could indicate loss of special nuclear material. In high-throughput reprocessing facilities, metric tons of separated material are processed that must be accounted for. Even with very low uncertainties of accountancy measurements (<0.1%), the uncertainty of the material balances is still greater than the desired level. Novel contributions of this work are as follows: (1) significant enhancement of SBS development for the salt cleanup process by creating a new gas sparging process model, selecting sensors to monitor normal operation, identifying safeguards-significant off-normal scenarios, and simulating those off-normal events and generating sensor output; (2) further enhancement of SBS development for the electrorefiner by simulating off-normal events caused by changes in salt concentration and identifying which conditions lead to Pu and Cm not tracking throughout the rest of the system; and (3) new contribution in applying statistical techniques to analyze the signatures gained from these two models to help draw real-time conclusions on anomalous events.
Impact of measurement uncertainty from experimental load distribution factors on bridge load rating
NASA Astrophysics Data System (ADS)
Gangone, Michael V.; Whelan, Matthew J.
2018-03-01
Load rating and testing of highway bridges is important in determining the capacity of the structure. Experimental load rating utilizes strain transducers placed at critical locations of the superstructure to measure normal strains. These strains are then used in computing diagnostic performance measures (neutral axis of bending, load distribution factor) and ultimately a load rating. However, it has been shown that experimentally obtained strain measurements contain uncertainties associated with the accuracy and precision of the sensor and sensing system. These uncertainties propagate through to the diagnostic indicators and in turn into the load rating calculation. This paper analyzes the effect that measurement uncertainties have on the experimental load rating results of a 3-span multi-girder/stringer steel and concrete bridge, with the focus limited to the uncertainty associated with the experimental distribution factor estimate. For the testing discussed, strain readings were gathered at the midspan of each span of both exterior girders and the center girder. Test vehicles of known weight were positioned at specified locations on each span to generate maximum strain response for each of the five girders. The strain uncertainties were used in conjunction with a propagation formula developed by the authors to determine the standard uncertainty in the distribution factor estimates. This distribution factor uncertainty is then introduced into the load rating computation to determine the possible range of the load rating. The results show the importance of understanding measurement uncertainty in experimental load testing.
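The authors' propagation formula is not reproduced in the abstract; the sketch below applies the generic first-order (delta-method) propagation to a distribution factor defined as one girder's strain over the sum of all girder strains, assuming independent strain errors and invented readings.

```python
import numpy as np

# First-order propagation of strain uncertainty into the load distribution
# factor DF_i = eps_i / sum_j eps_j (a common definition; the paper's own
# formula may differ).
def df_uncertainty(eps, sigma_eps):
    S = eps.sum()
    # Jacobian: d(DF_i)/d(eps_k) = delta_ik / S - eps_i / S^2
    J = -np.outer(eps, np.ones_like(eps)) / S**2
    J[np.diag_indices_from(J)] += 1.0 / S
    df = eps / S
    var = (J**2) @ sigma_eps**2     # assumes independent strain errors
    return df, np.sqrt(var)

eps = np.array([210.0, 180.0, 120.0, 90.0, 60.0])   # microstrain, 5 girders
sigma = np.full(5, 4.0)                             # per-channel uncertainty
df, u = df_uncertainty(eps, sigma)
for i, (d, s) in enumerate(zip(df, u), 1):
    print(f"girder {i}: DF = {d:.3f} +/- {s:.3f}")
```

The resulting standard uncertainty on each distribution factor can then be carried into the rating equation to bound the load rating, as the paper describes.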
NASA Astrophysics Data System (ADS)
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data and information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. This methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) established to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations; the optimal number of samples; and a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
Constrained sampling experiments reveal principles of detection in natural scenes.
Sebastian, Stephen; Abrams, Jared; Geisler, Wilson S
2017-07-11
A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.
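The normalized matched-template idea can be sketched as follows. This is a sketch under stated assumptions, not the authors' exact model: the gain form and the constants k_L and k_C are illustrative placeholders standing in for the luminance- and contrast-dependent normalization the abstract describes.

```python
import numpy as np

# A matched template whose response is divided by a normalizing gain that
# grows with local background luminance L and RMS contrast C.
def template_response(patch, template, k_L=1.0, k_C=1.0):
    L = patch.mean()                       # local background luminance
    C = patch.std() / max(L, 1e-9)         # local RMS contrast
    gain = k_L * L * (1.0 + k_C * C)       # assumed normalizing gain
    return float((patch * template).sum() / gain)

rng = np.random.default_rng(2)
template = np.zeros((9, 9))
template[3:6, 3:6] = 1.0
template /= np.linalg.norm(template)       # unit-energy template
background = 50.0 + 10.0 * rng.normal(size=(9, 9))
print("target absent :", round(template_response(background, template), 3))
print("target present:", round(template_response(background + 4.0 * template,
                                                 template), 3))
```

Dividing the template response by a background-dependent gain is what makes detection accuracy roughly invariant to background bin, consistent with the linear threshold increases reported above.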
NASA Astrophysics Data System (ADS)
Leifer, Ira; Melton, Christopher; Fischer, Marc L.; Fladeland, Matthew; Frash, Jason; Gore, Warren; Iraci, Laura T.; Marrero, Josette E.; Ryoo, Ju-Mee; Tanaka, Tomoaki; Yates, Emma L.
2018-03-01
Methane (CH4) inventory uncertainties are large, requiring robust emission derivation approaches. We report on a fused airborne-surface data collection approach to derive emissions from an active oil field near Bakersfield, central California. The approach characterizes the atmosphere from the surface to above the planetary boundary layer (PBL) and combines downwind trace gas concentration anomaly (plume) above background with normal winds to derive flux. This approach does not require a well-mixed PBL; allows explicit, data-based, uncertainty evaluation; and was applied to complex topography and wind flows. In situ airborne (collected by AJAX - the Alpha Jet Atmospheric eXperiment) and mobile surface (collected by AMOG - the AutoMObile trace Gas - Surveyor) data were collected on 19 August 2015 to assess source strength. Data included an AMOG and AJAX intercomparison transect profiling from the San Joaquin Valley (SJV) floor into the Sierra Nevada (0.1-2.2 km altitude), validating a novel surface approach for atmospheric profiling by leveraging topography. The profile intercomparison found good agreement in multiple parameters for the overlapping altitude range from 500 to 1500 m for the upper 5 % of surface winds, which accounts for wind-impeding structures, i.e., terrain, trees, buildings, etc. Annualized emissions from the active oil fields were 31.3 ± 16 Gg methane and 2.4 ± 1.2 Tg carbon dioxide. Data showed the PBL was not well mixed at distances of 10-20 km downwind, highlighting the importance of the experimental design.
Diversity in Current Ecological Thinking: Implications for Environmental Management
NASA Astrophysics Data System (ADS)
Moore, Susan A.; Wallington, Tabatha J.; Hobbs, Richard J.; Ehrlich, Paul R.; Holling, C. S.; Levin, Simon; Lindenmayer, David; Pahl-Wostl, Claudia; Possingham, Hugh; Turner, Monica G.; Westoby, Mark
2009-01-01
Current ecological thinking emphasizes that systems are complex, dynamic, and unpredictable across space and time. What is the diversity in interpretation of these ideas among today’s ecologists, and what does this mean for environmental management? This study used a Policy Delphi survey of ecologists to explore their perspectives on a number of current topics in ecology. The results showed general concurrence with nonequilibrium views. There was agreement that disturbance is a widespread, normal feature of ecosystems with historically contingent responses. The importance of recognizing multiple levels of organization and the role of functional diversity in environmental change were also widely acknowledged. Views differed regarding the predictability of successional development, whether “patchiness” is a useful concept, and the benefits of shifting the focus from species to ecosystem processes. Because of their centrality to environmental management, these different views warrant special attention from both managers and ecologists. Such divergence is particularly problematic given widespread concerns regarding the poor linkages between science (here, ecology) and environmental policy and management, which have been attributed to scientific uncertainty and a lack of consensus among scientists, both jeopardizing the transfer of science into management. Several suggestions to help managers deal with these differences are provided, especially the need to interpret broader theory in the context of place-based assessments. The uncertainty created by these differences requires a proactive approach to environmental management, including clearly identifying environmental objectives, careful experimental design, and effective monitoring.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
Wildfire Decision Making Under Uncertainty
NASA Astrophysics Data System (ADS)
Thompson, M.
2013-12-01
Decisions relating to wildfire management are subject to multiple sources of uncertainty, and are made by a broad range of individuals, across a multitude of environmental and socioeconomic contexts. In this presentation I will review progress towards identification and characterization of uncertainties and how this information can support wildfire decision-making. First, I will review a typology of uncertainties common to wildfire management, highlighting some of the more salient sources of uncertainty and how they present challenges to assessing wildfire risk. This discussion will cover the expanding role of burn probability modeling, approaches for characterizing fire effects, and the role of multi-criteria decision analysis, and will provide illustrative examples of integrated wildfire risk assessment across a variety of planning scales. Second, I will describe a related uncertainty typology that focuses on the human dimensions of wildfire management, specifically addressing how social, psychological, and institutional factors may impair cost-effective risk mitigation. This discussion will encompass decision processes before, during, and after fire events, with a specific focus on active management of complex wildfire incidents. An improved ability to characterize uncertainties faced in wildfire management could lead to improved delivery of decision support, targeted communication strategies, and ultimately to improved wildfire management outcomes.
Phu, Jack; Kalloniatis, Michael; Khuu, Sieu K.
2018-01-01
Purpose: Current clinical perimetric test paradigms present stimuli randomly to various locations across the visual field (VF), inherently introducing spatial uncertainty, which reduces contrast sensitivity. In the present study, we determined the extent to which spatial uncertainty affects contrast sensitivity in glaucoma patients by minimizing spatial uncertainty through attentional cueing. Methods: Six patients with open-angle glaucoma and six healthy subjects underwent laboratory-based psychophysical testing to measure contrast sensitivity at preselected locations at two eccentricities (9.5° and 17.5°) with two stimulus sizes (Goldmann sizes III and V) under different cueing conditions: 1, 2, 4, or 8 points verbally cued. Method of Constant Stimuli and a single-interval forced-choice procedure were used to generate frequency of seeing (FOS) curves at locations with and without VF defects. Results: At locations with VF defects, cueing minimizes spatial uncertainty and improves sensitivity under all conditions. The effect of cueing was maximal when one point was cued, and rapidly diminished when more points were cued (no change from baseline with 8 points cued). The slope of the FOS curve steepened with reduced spatial uncertainty. Locations with normal sensitivity in glaucomatous eyes had similar performance to that of healthy subjects. There was a systematic increase in uncertainty with the depth of VF loss. Conclusions: Sensitivity measurements across the VF are negatively affected by spatial uncertainty, which increases with greater VF loss. Minimizing uncertainty can improve sensitivity at locations of deficit. Translational Relevance: Current perimetric techniques introduce spatial uncertainty and may therefore underestimate sensitivity in regions of VF loss. PMID:29600116
Six Degree-of-Freedom Entry Dispersion Analysis for the METEOR Recovery Module
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Braun, Robert D.; Powell, Richard W.; Engelund, Walter C.; Tartabini, Paul V.
1996-01-01
The present study performs a six degree-of-freedom entry dispersion analysis for the Multiple Experiment Transporter to Earth Orbit and Return (METEOR) mission. METEOR offered the capability of flying a recoverable science package in a microgravity environment. However, since the Recovery Module has no active control system, an accurate determination of the splashdown position is difficult because no opportunity exists to remove any errors. Hence, uncertainties in the initial conditions prior to deorbit burn initiation, during deorbit burn and exo-atmospheric coast phases, and during atmospheric flight impact the splashdown location. This investigation was undertaken to quantify the impact of the various exo-atmospheric and atmospheric uncertainties. Additionally, a Monte-Carlo analysis was performed to statistically assess the splashdown dispersion footprint caused by the multiple mission uncertainties. The Monte-Carlo analysis showed that a 3-sigma splashdown dispersion footprint with axes of 43.3 nm (long), -33.5 nm (short), and 10.0 nm (crossrange) can be constructed. A 58% probability exists that the Recovery Module will overshoot the nominal splashdown site.
Gluon shadowing and hadron production in heavy-ion collisions at LHC
NASA Astrophysics Data System (ADS)
Deng, Wei-Tian; Wang, Xin-Nian; Xu, Rong
2011-06-01
The recently published first measurement of charged hadron multiplicity density at mid-rapidity dNch/dη = 1584 ± 4 (stat.) ± 76 (sys.) in central Pb + Pb collisions at √s = 2.76 TeV by the ALICE experiment at LHC is in good agreement with the HIJING2.0 prediction within the experimental errors and theoretical uncertainties. The new data point is used to carry out a combined fit together with the RHIC data to reduce the uncertainty in the gluon shadowing parameter sg which controls the overall magnitude of gluon shadowing at small fractional momentum x in the HIJING2.0 model. Predictions on the centrality dependence of charged hadron multiplicity density at mid-rapidity with reduced uncertainties are given for Pb + Pb collisions at √s = 2.76 and 5.5 TeV. The centrality dependence is surprisingly independent of the colliding energy, similar to that in Au + Au collisions at RHIC, for most centralities starting at Npart = 50 (100) at √s = 2.76 (7) TeV. However, it becomes stronger in peripheral collisions at higher colliding energies.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.
2014-01-01
The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed, inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology has also been introduced to quantify the plausibility of a design to pass a certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
Error Analysis of CM Data Products Sources of Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole
The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.
NASA Astrophysics Data System (ADS)
Ambrosio, M.; Antolini, R.; Aramo, C.; Auriemma, G.; Baldini, A.; Barbarino, G. C.; Barish, B. C.; Battistoni, G.; Bellotti, R.; Bemporad, C.; Bernardini, P.; Bilokon, H.; Bisi, V.; Bloise, C.; Bower, C.; Bussino, S.; Cafagna, F.; Calicchio, M.; Campana, D.; Carboni, M.; Castellano, M.; Cecchini, S.; Cei, F.; Chiarella, V.; Coutu, S.; de Benedictis, L.; de Cataldo, G.; Dekhissi, H.; de Marzo, C.; de Mitri, I.; de Vincenzi, M.; di Credico, A.; Erriquez, O.; Favuzzi, C.; Forti, C.; Fusco, P.; Giacomelli, G.; Giannini, G.; Giglietto, N.; Grassi, M.; Gray, L.; Grillo, A.; Guarino, F.; Guarnaccia, P.; Gustavino, C.; Habig, A.; Hanson, K.; Hawthorne, A.; Heinz, R.; Iarocci, E.; Katsavounidis, E.; Kearns, E.; Kyriazopoulou, S.; Lamanna, E.; Lane, C.; Levin, D. S.; Lipari, P.; Longley, N. P.; Longo, M. J.; Maaroufi, F.; Mancarella, G.; Mandrioli, G.; Manzoor, S.; Margiotta Neri, A.; Marini, A.; Martello, D.; Marzari-Chiesa, A.; Mazziotta, M. N.; Mazzotta, C.; Michael, D. G.; Mikheyev, S.; Miller, L.; Monacelli, P.; Montaruli, T.; Monteno, M.; Mufson, S.; Musser, J.; Nicoló, D.; Nolty, R.; Okada, C.; Orth, C.; Osteria, G.; Palamara, O.; Patera, V.; Patrizii, L.; Pazzi, R.; Peck, C. W.; Petrera, S.; Pistilli, P.; Popa, V.; Rainó, A.; Rastelli, A.; Reynoldson, J.; Ronga, F.; Rubizzo, U.; Sanzgiri, A.; Satriano, C.; Satta, L.; Scapparone, E.; Scholberg, K.; Sciubba, A.; Serra-Lugaresi, P.; Severi, M.; Sioli, M.; Sitta, M.; Spinelli, P.; Spinetti, M.; Spurio, M.; Steinberg, R.; Stone, J. L.; Sulak, L. R.; Surdo, A.; Tarlé, G.; Togo, V.; Walter, C. W.; Webb, R.
1999-03-01
With the aim of discussing the effect of the possible sources of systematic uncertainties in simulation models, the analysis of multiple muon events from the MACRO experiment at Gran Sasso is reviewed. In particular, the predictions from different currently available hadronic interaction models are compared.
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques, residual-based methods, and the multiple-model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
NASA Astrophysics Data System (ADS)
Hermans, Thomas; Nguyen, Frédéric; Caers, Jef
2015-07-01
In inverse problems, investigating uncertainty in the posterior distribution of model parameters is as important as matching data. In recent years, most efforts have focused on techniques to sample the posterior distribution with reasonable computational costs. Within a Bayesian context, this posterior depends on the prior distribution. However, most of the studies ignore modeling the prior with realistic geological uncertainty. In this paper, we propose a workflow inspired by a Popper-Bayes philosophy that data should first be used to falsify models, then only be considered for matching. We propose a workflow consisting of three steps: (1) in defining the prior, we interpret multiple alternative geological scenarios from literature (architecture of facies) and site-specific data (proportions of facies). Prior spatial uncertainty is modeled using multiple-point geostatistics, where each scenario is defined using a training image. (2) We validate these prior geological scenarios by simulating electrical resistivity tomography (ERT) data on realizations of each scenario and comparing them to field ERT in a lower dimensional space. In this second step, the idea is to probabilistically falsify scenarios with ERT, meaning that scenarios which are incompatible receive an updated probability of zero while compatible scenarios receive a nonzero updated belief. (3) We constrain the hydrogeological model with hydraulic head and ERT using a stochastic search method. The workflow is applied to synthetic and field case studies in an alluvial aquifer. This study highlights the importance of considering and estimating prior uncertainty (without data) through a process of probabilistic falsification.
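Step (2), probabilistic falsification in a lower-dimensional space, can be sketched conceptually. The data below are synthetic stand-ins, and the dimension reduction here is plain PCA via SVD; the paper's exact reduction choice may differ.

```python
import numpy as np

# Falsify geological scenarios whose simulated ERT responses, projected
# into a low-dimensional space, cannot reach the field data.
def falsify(sim_by_scenario, field, n_dims=2, quantile=0.95):
    X = np.vstack(list(sim_by_scenario.values()))
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

    def proj(Y):
        return (Y - mean) @ Vt[:n_dims].T

    f = proj(field[None, :])[0]
    beliefs = {}
    for name, sims in sim_by_scenario.items():
        P = proj(sims)
        spread = np.linalg.norm(P - P.mean(axis=0), axis=1)
        dist = np.linalg.norm(P - f, axis=1).min()
        beliefs[name] = 1.0 if dist <= np.quantile(spread, quantile) else 0.0
    total = sum(beliefs.values()) or 1.0
    return {k: v / total for k, v in beliefs.items()}   # updated beliefs

rng = np.random.default_rng(3)
sims = {"braided": rng.normal(0.0, 1.0, (100, 20)),
        "meandering": rng.normal(3.0, 1.0, (100, 20))}
field = rng.normal(0.0, 1.0, 20)      # resembles the "braided" scenario
print(falsify(sims, field))
```

Scenarios falsified here receive zero probability and are excluded before the expensive stochastic search of step (3), which is the computational point of the Popper-Bayes ordering.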
Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie
2017-02-01
Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
Bennett, Erin R; Clausen, Jay; Linkov, Eugene; Linkov, Igor
2009-11-01
Reliable, up-front information on physical and biological properties of emerging materials is essential before making a decision and investment to formulate, synthesize, scale up, test, and manufacture a new material for use in both military and civilian applications. Multiple quantitative structure-activity relationship (QSAR) software tools are available for predicting a material's physical/chemical properties and environmental effects. Even though information on emerging materials is often limited, QSAR software output is treated without sufficient uncertainty analysis. We hypothesize that uncertainty and variability in material properties and uncertainty in model prediction can be too large to provide meaningful results. To test this hypothesis, we predicted octanol-water partition coefficients (logP) for multiple, similar compounds with limited physical-chemical properties using six different commercial logP calculators (KOWWIN, MarvinSketch, ACD/Labs, ALogP, CLogP, SPARC). The analysis was done for materials with largely uncertain properties that were similar, based on molecular formula, to military compounds (RDX, BTTN, TNT) and pharmaceuticals (Carbamazepine, Gemfibrozil). We also compared QSAR modeling results for a well-studied pesticide and a pesticide breakdown product (Atrazine, DDE). Our analysis shows that variability due to structural variations of the emerging chemicals may span several orders of magnitude. The model uncertainty across the six software packages was very high (10 orders of magnitude) for emerging materials while it was low for traditional chemicals (e.g. Atrazine). Thus, the use of QSAR models for screening emerging materials requires extensive model validation and coupling of QSAR output with available empirical data and other relevant information.
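The across-model spread analysis described can be reduced to a few lines. The prediction values below are invented for illustration; only the tool names come from the abstract.

```python
import numpy as np

# Summarize disagreement between logP predictions from several QSAR tools
# for a single compound (values invented).
predictions = {
    "KOWWIN": 0.87, "MarvinSketch": 0.60, "ACD/Labs": 0.90,
    "ALogP": 1.20, "CLogP": 0.75, "SPARC": 0.86,
}
vals = np.array(list(predictions.values()))
print("mean logP:", round(vals.mean(), 2))
print("std dev  :", round(vals.std(ddof=1), 2))
# since logP is a base-10 logarithm, the range is directly the spread of
# the partition coefficient itself in orders of magnitude
print("range [orders of magnitude]:", round(vals.max() - vals.min(), 2))
```

For a well-characterized chemical this range is small; for the emerging materials studied it reached ten orders of magnitude, which is the paper's argument for validating QSAR output before use.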
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in using different grouping strategies for uncertainty components. Variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process, the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental managers and decision-makers to formulate policies and strategies.
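The variance-based building block underlying the framework is the first-order sensitivity index. A minimal sketch of its Monte Carlo estimation with the Saltelli sampling scheme follows, using a toy function in place of the groundwater reactive transport model; grouping several parameters into one input before this step is what turns the index into an importance measure for an uncertainty component rather than a single parameter.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(X):                       # toy stand-in for the real simulator
    return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]

N, k = 100_000, 3
A = rng.uniform(-1, 1, (N, k))      # two independent sample matrices
B = rng.uniform(-1, 1, (N, k))
yA, yB = model(A), model(B)
varY = np.var(np.concatenate([yA, yB]))
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # A with column i taken from B
    # Saltelli-style estimator of the first-order index S_i
    Si = np.mean(yB * (model(ABi) - yA)) / varY
    print(f"S_{i + 1} = {Si:.3f}")
```

Each S_i estimates the fraction of output variance explained by input i alone; summing over grouped inputs is what gives the component-level importance the paper describes.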
How multiple causes combine: independence constraints on causal inference.
Liljeholm, Mimi
2015-01-01
According to the causal power view, two core constraints, that causes occur independently (i.e., no confounding) and that they influence their effects independently, serve as boundary conditions for causal induction. This study investigated how violations of these constraints modulate uncertainty about the existence and strength of a causal relationship. Participants were presented with pairs of candidate causes that were either confounded or not, and that either interacted or exerted their influences independently. Consistent with the causal power view, uncertainty about the existence and strength of causal relationships was greater when causes were confounded or interacted than when unconfounded and acting independently. An elemental Bayesian causal model captured differences in uncertainty due to confounding but not those due to an interaction. Implications of distinct sources of uncertainty for the selection of contingency information and causal generalization are discussed.
Informative graphing of continuous safety variables relative to normal reference limits.
Breder, Christopher D
2018-05-16
Interpreting graphs of continuous safety variables can be complicated because differences in age, gender, and testing-site methodologies may give rise to multiple reference limits. Furthermore, data below the lower limit of normal are compressed relative to points above the upper limit of normal. The objective of this study is to develop a graphing technique that addresses these issues and is visually intuitive. A mock dataset with multiple reference ranges is initially used to develop the graphing technique. Formulas are developed for conditions where data are above the upper limit of normal, normal, below the lower limit of normal, and below the lower limit of normal when the data value equals zero. After the formulae are developed, an anonymized dataset from an actual set of trials for an approved drug is evaluated, comparing the technique developed in this study to standard graphical methods. Formulas are derived for the novel graphing method based on multiples of the normal limits. The formula for values scaled between the upper and lower limits of normal is a novel application of a readily available scaling formula. The formula for the lower limit of normal is novel and addresses the issue of this value potentially being indeterminate when the result to be scaled as a multiple is zero. The formulae and graphing method described in this study provide a visually intuitive way to graph continuous safety data, including laboratory values and vital sign data.
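One plausible reading of such a scaling can be sketched as follows. The exact formulas are in the paper; the piecewise mapping below is an assumption for illustration, chosen so that results are comparable across differing reference ranges and so that a zero value below the lower limit maps to a finite point instead of being indeterminate.

```python
# Assumed piecewise scaling to multiples of the reference limits:
#   above range : x / ULN            (>= 1, multiples of the upper limit)
#   within range: (x - LLN)/(ULN - LLN), a linear map onto [0, 1]
#   below range : -(LLN - x)/LLN     (0 at x = LLN, -1 at x = 0; finite at 0)
def scale_to_limits(x, lln, uln):
    if x >= uln:
        return x / uln
    if x >= lln:
        return (x - lln) / (uln - lln)
    return -(lln - x) / lln

# Example: lab results against two different site-specific reference ranges
for x, lln, uln in [(120, 10, 40), (25, 10, 40), (4, 10, 40), (0, 7, 56)]:
    print(x, "->", round(scale_to_limits(x, lln, uln), 2))
```

The mapping is continuous at both limits, so points from sites with different reference ranges can share one axis without the below-range compression the abstract describes.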
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. The authors compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ≈20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
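The core of the method is minimizing the Poisson negative log-likelihood instead of χ². A minimal sketch follows, not the authors' code: the model (a Gaussian line on a flat background) and data are invented.

```python
import numpy as np
from scipy.optimize import minimize

def mu(params, x):
    # expected counts: Gaussian peak on a flat background (kept positive)
    amp, cen, wid, bkg = params
    return np.abs(bkg) + np.abs(amp) * np.exp(-0.5 * ((x - cen) / wid) ** 2) + 1e-9

def neg_log_like(params, x, n):
    # Poisson -ln L, dropping the parameter-free ln(n!) term:
    #   sum_i [ mu_i - n_i * ln(mu_i) ]
    m = mu(params, x)
    return np.sum(m - n * np.log(m))

rng = np.random.default_rng(5)
x = np.linspace(-5.0, 5.0, 60)
n = rng.poisson(mu([8.0, 0.5, 1.0, 1.5], x))    # fewer than ~20 counts per bin
fit = minimize(neg_log_like, x0=[5.0, 0.0, 1.5, 1.0], args=(x, n),
               method="Nelder-Mead")
print("fitted amp, centre, width, background:", np.round(np.abs(fit.x), 2))
```

Because the likelihood, not a Gaussian approximation to it, is maximized, bins with zero or very few counts contribute correctly, which is where χ² fits go wrong.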
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of landslide susceptibility model is decomposed and attributed to the model's criteria weights.
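The Monte Carlo step over criteria weights can be sketched generically. The sketch below perturbs a nominal weight vector with a Dirichlet distribution and uses the spread of the recomputed weighted-sum (AHP-style) score as a per-location uncertainty map; the criteria rasters, nominal weights, and Dirichlet concentration are invented, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(6)

n_cells, n_criteria = 1000, 5
criteria = rng.uniform(0, 1, (n_cells, n_criteria))   # normalized factor maps
w0 = np.array([0.35, 0.25, 0.20, 0.12, 0.08])          # nominal criteria weights

n_runs = 2000
# Dirichlet samples are jittered around w0 and always sum to one
W = rng.dirichlet(w0 * 100, size=n_runs)
scores = criteria @ W.T                    # (n_cells, n_runs) susceptibility
mean_map = scores.mean(axis=1)
uncert_map = scores.std(axis=1)            # weight-induced output uncertainty
print("mean susceptibility:", mean_map[:3].round(3))
print("weight-induced std :", uncert_map[:3].round(4))
```

Decomposing the per-cell variance by criterion (e.g., with the variance-based indices used in Global Sensitivity Analysis) is what attributes the map uncertainty to individual criteria weights, as the abstract concludes.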
How well do elderly people cope with uncertainty in a learning task?
Chasseigne, G; Grau, S; Mullet, E; Cama, V
1999-11-01
The relation between age, task complexity and learning performance in a Multiple Cue Probability Learning task was studied by systematically varying the level of uncertainty present in the task while keeping the direction of relationships constant. Four age groups were formed: young adults (mean age = 21), middle-aged adults (45), elderly people (69) and very elderly people (81). Five uncertainty levels were considered: predictability = 0.96, 0.80, 0.64, 0.48, and 0.32. All relationships involved were direct ones. A strong effect of uncertainty on 'control', a measure of the subject's consistency with respect to a linear model, was found. This effect was essentially linear: each decrement in the predictability of the task was matched by an equal decrement in participants' level of control. This level of decrement was the same regardless of the age of the participants. It can be concluded that elderly people cope with uncertainty in probability learning tasks as well as young adults do.
Study of synthesis techniques for insensitive aircraft control systems
NASA Technical Reports Server (NTRS)
Harvey, C. A.; Pope, R. E.
1977-01-01
Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and the uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.
Linear, multivariable robust control with a mu perspective
NASA Technical Reports Server (NTRS)
Packard, Andy; Doyle, John; Balas, Gary
1993-01-01
The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. The structured singular value analysis, coupled with approximate synthesis methods, make it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains make it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.
Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...
2014-11-01
This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
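The sampling step can be sketched generically. In the sketch below, run_keff() is a hypothetical stand-in for a real criticality solver (in the paper, SCALE), replaced here by a linear toy surrogate so the script runs end to end; the nuclide list, bias distributions, and reactivity worths are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

nominal = {"U235": 1.0, "Pu239": 1.0, "Sm149": 1.0}    # relative concentrations
# per-nuclide calculated/measured ratio distributions (mean, std), invented
bias = {"U235": (1.00, 0.01), "Pu239": (0.98, 0.03), "Sm149": (1.05, 0.08)}
worth = {"U235": 0.12, "Pu239": 0.09, "Sm149": -0.02}  # toy reactivity worths

def run_keff(comp):
    # hypothetical solver call, replaced by a linear surrogate
    return 0.90 + sum(worth[n] * comp[n] for n in comp)

samples = []
for _ in range(5000):
    # perturb each nuclide concentration by a draw from its bias distribution
    comp = {n: nominal[n] * rng.normal(*bias[n]) for n in nominal}
    samples.append(run_keff(comp))
samples = np.array(samples)
k_nom = run_keff(nominal)
print("bias in keff    :", round(samples.mean() - k_nom, 5))
print("bias uncertainty:", round(samples.std(ddof=1), 5))
```

The resulting bias and bias uncertainty in the multiplication factor are what the validation approach folds into the criticality safety margin.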
Short memory fuzzy fusion image recognition schema employing spatial and Fourier descriptors
NASA Astrophysics Data System (ADS)
Raptis, Sotiris N.; Tzafestas, Spyros G.
2001-03-01
Single images quite often do not bear enough information for precise interpretation, due to a variety of reasons. Multiple-image fusion and adequate integration have recently become the state of the art in the pattern recognition field. In the paper presented here, an enhanced multiple observation schema is discussed, investigating improvements to the baseline fuzzy-probabilistic image fusion methodology. The first innovation consists in considering only a limited but seemingly more effective part of the uncertainty information obtained by a certain time, restricting older uncertainty dependencies and alleviating the computational burden, which is now needed only for a short sequence (stored into memory) of samples. The second innovation essentially consists in grouping the observations into feature-blind object hypotheses. Experiment settings include a sequence of independent views obtained by a camera being moved around the investigated object.
Photoneutron cross sections for 59Co: Systematic uncertainties of data from various experiments
NASA Astrophysics Data System (ADS)
Varlamov, V. V.; Davydov, A. I.; Ishkhanov, B. S.
2017-09-01
Data on partial photoneutron reaction cross sections (γ,1n), (γ,2n), and (γ,3n) for 59Co obtained in two experiments carried out at Livermore (USA) were analyzed. The sources of radiation in both experiments were monoenergetic photon beams from the annihilation in flight of relativistic positrons. The total yield was sorted by neutron multiplicity, taking into account the differences in the neutron energy spectra for different multiplicities. The two quoted studies differ in the method of determining the neutron multiplicity. Significant systematic disagreements exist between the results of the two experiments. They are considered to be caused by large systematic uncertainties in the partial cross sections, since these do not satisfy physical criteria for data reliability. To obtain reliable cross sections of partial and total photoneutron reactions, a new method combining experimental data and theoretical evaluation was used. It is based on the experimental neutron yield cross section, which is rather independent of neutron multiplicity, and the transitional neutron multiplicity functions of the combined photonucleon reaction model (CPNRM). The model transitional multiplicity functions were used for the decomposition of the neutron yield cross section into the contributions of partial reactions. The results of the new evaluation, which noticeably differ from the partial cross sections obtained in the two experimental studies, are under discussion.
Estimating the Heading Direction Using Normal Flow
1994-01-01
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campos, E; Sisterson, Douglas
The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. The ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements. This study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty: 1) identifying how the uncertainty of individual ARM measurements is currently expressed, 2) identifying a consistent approach to measurement uncertainty, and then 3) reclassifying ARM instrument measurement uncertainties in a common framework.
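The proposed decomposition suggests a simple combination rule: under the common assumption that the three components are independent 1-sigma uncertainties, they add in quadrature (GUM-style). A minimal sketch; the function name and example values are hypothetical, and the report itself deliberately does not prescribe a computation method:

```python
import math

def total_uncertainty(u_instrument, u_field, u_retrieval=0.0):
    """Combine independent 1-sigma components in quadrature."""
    return math.sqrt(u_instrument**2 + u_field**2 + u_retrieval**2)

# e.g. a radiometer: 1.5 units calibration, 0.8 environmental, 2.0 retrieval
print(total_uncertainty(1.5, 0.8, 2.0))   # ~2.62
```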
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
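The index can be sketched with a toy Monte Carlo version of the synthetic setup described above. The model forms, parameter distributions, and equal model weights below are invented for illustration and are not the study's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000

# Two hypothetical recharge models (precipitation -> recharge) and two
# hypothetical hydraulic-conductivity fields, equal model weights assumed.
recharge_models = [
    lambda p: 0.20 * p,                            # fractional model
    lambda p: 0.25 * np.maximum(p - 150.0, 0.0),   # threshold-type model
]
K_fields = [rng.lognormal(0.0, 0.3, N), rng.lognormal(0.4, 0.6, N)]

precip = rng.normal(1000.0, 150.0, N)              # random parameter input

# Output for every (recharge model, geology model) pair; the quotient is a
# stand-in for a reactive-transport quantity of interest.
outputs = np.array([[rm(precip) / K for K in K_fields]
                    for rm in recharge_models])    # shape (2, 2, N)

# Process sensitivity of "geology": variance of the geology-conditional
# means (averaged over recharge models and parameters) over total variance,
# a Monte Carlo analogue of the paper's single summary index.
geology_means = outputs.mean(axis=(0, 2))
ps_geology = geology_means.var() / outputs.var()
print(f"process sensitivity index (geology) ~ {ps_geology:.3f}")
```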
NASA Astrophysics Data System (ADS)
Liu, W. L.; Li, Y. W.
2017-09-01
Large-scale dimensional metrology usually requires a combination of multiple measurement systems, such as laser tracking, total station, laser scanning, coordinate measuring arm and video photogrammetry. Often, the results from different measurement systems must be combined to provide useful results. Coordinate transformation is used to unify coordinate frames in such combinations; however, coordinate transformation uncertainties directly affect the accuracy of the final measurement results. In this paper, a novel method is proposed for improving the accuracy of coordinate transformation, combining the advantages of best-fit least squares and radial basis function (RBF) neural networks. First, the configuration of the coordinate transformation is introduced and a transformation matrix containing seven variables is obtained. Second, the 3D uncertainty of the transformation model and the residual error variable vector are established based on best-fit least squares. Finally, in order to optimize the uncertainty of the developed seven-variable transformation model, an RBF neural network is used to identify the dynamic and unstructured components of the uncertainty, owing to its great ability to approximate any nonlinear function to the desired accuracy. Intensive experimental studies were conducted to check the validity of the theoretical results. The results show that the mean error of coordinate transformation decreased from 0.078 mm to 0.054 mm using this method, in contrast with the GUM method.
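The best-fit least-squares stage of such a method is commonly solved in closed form. The sketch below uses an SVD-based (Kabsch/Umeyama-style) solution for the seven-parameter similarity transform (scale, rotation, translation) on synthetic common points; it illustrates only the least-squares baseline, not the paper's RBF refinement.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares 7-parameter (scale, rotation, translation) transform
    mapping points P onto Q via SVD."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    P0, Q0 = P - cp, Q - cq
    U, S, Vt = np.linalg.svd(P0.T @ Q0)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation
    s = np.trace(np.diag(S) @ D) / (P0**2).sum()    # least-squares scale
    t = cq - s * R @ cp
    return s, R, t

# Synthetic check: recover a known transform from noisy common points.
rng = np.random.default_rng(1)
P = rng.uniform(-5, 5, size=(20, 3))
th = 0.1
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
Q = 1.002 * P @ R_true.T + np.array([10.0, -3.0, 0.5])
Q += rng.normal(0, 1e-4, Q.shape)                   # measurement noise
s, R, t = best_fit_transform(P, Q)
residual = Q - (s * P @ R.T + t)
print("scale:", s, " RMS residual:", np.sqrt((residual**2).mean()))
```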
Boithias, Laurie; Terrado, Marta; Corominas, Lluís; Ziv, Guy; Kumar, Vikas; Marqués, Montse; Schuhmacher, Marta; Acuña, Vicenç
2016-02-01
Ecosystem services provide multiple benefits to human wellbeing and are increasingly considered by policy-makers in environmental management. However, the uncertainty related to the monetary valuation of these benefits is not yet adequately defined or integrated by policy-makers. Given this background, our aim was to quantify different sources of uncertainty when performing monetary valuation of ecosystem services, in order to provide a series of guidelines to reduce them. With an example of 4 ecosystem services (i.e., water provisioning, waste treatment, erosion protection, and habitat for species) provided at the river basin scale, we quantified the uncertainty associated with the following sources: (1) the number of services considered, (2) the number of benefits considered for each service, (3) the valuation metrics (i.e. valuation methods) used to value benefits, and (4) the uncertainty of the parameters included in the valuation metrics. Results indicate that the highest uncertainty was caused by the number of services considered, as well as by the number of benefits considered for each service, whereas the parametric uncertainty was similar to that related to the selection of the valuation metric. This suggests that the parametric uncertainty, the only uncertainty type commonly considered, was less critical than the structural uncertainty, which is in turn mainly dependent on the decision-making context. Given the uncertainty associated with the valuation structure, special attention should be given to the selection of services, benefits and metrics according to a given context.
The critical role of uncertainty in projections of hydrological extremes
NASA Astrophysics Data System (ADS)
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes, whilst for the high-flow extremes the climate models contribute more uncertainty than the hydrological parameters and the distribution fit. This implies that ignoring any of the three uncertainty sources may pose a great risk to the planning of adaptations to future hydrological extremes and to water resource planning and management.
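The distribution-fit component of the uncertainty can be illustrated with a GEV fit plus a simple bootstrap. The synthetic annual-maximum series, parameter values, and bootstrap settings below are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# Stand-in annual maximum flows (m3/s); a real series would come from the
# hydrological simulations.
ams = genextreme.rvs(c=-0.1, loc=100, scale=30, size=60, random_state=7)

def return_level(sample, T=100):
    """MLE GEV fit, then the T-year return level (exceeded with prob 1/T)."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.isf(1.0 / T, c, loc, scale)

# Distribution-fit uncertainty via a nonparametric bootstrap.
levels = [return_level(rng.choice(ams, size=ams.size, replace=True))
          for _ in range(500)]
q05, q50, q95 = np.percentile(levels, [5, 50, 95])
print(f"100-yr flood: median {q50:.0f}, 90% CI [{q05:.0f}, {q95:.0f}] m3/s")
```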
Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm
NASA Astrophysics Data System (ADS)
Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy
2016-01-01
The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients), and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in large earthquakes used for the analysis of Earth's normal modes is not simple. A point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently, the splitting functions) using a derivative-free parameter search, known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, and this allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of NA in synthetic cases, we apply it to observations of the inner core sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree 2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the Bolivia, 1994 earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode. Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish whether elastic structure is sufficient.
Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach
van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.
2015-01-01
Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion-related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0.17 when only one hazard is considered and a score of 0.37 when multiple hazards are considered simultaneously. The LHIs with the most predictive skill were ‘Inundation depth’ and ‘Wave attack’. The Bayesian Network approach has several advantages over the market-standard stage-damage functions: the predictive capacity of multiple indicators can be combined; probabilistic predictions can be obtained, which include uncertainty; and quantitative as well as descriptive information can be used simultaneously.
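A minimal count-based stand-in for this kind of Bayesian Network coupling might look as follows: discretized hazard indicators index a conditional probability table over damage states, learned from observation counts with a Laplace prior. The variables, bins, and synthetic data are hypothetical; a real application would use the paper's LHIs and a proper BN library.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000

# Hypothetical discretized hazard indicators per building.
inundation = rng.integers(0, 3, n)     # 0: none, 1: <1 m, 2: >=1 m
wave = rng.integers(0, 2, n)           # 0: no wave attack, 1: attack
# Hypothetical observed damage state (0..3), correlated with the hazards.
damage = np.clip(inundation + wave + rng.integers(-1, 2, n), 0, 3)

# Conditional probability table P(damage | inundation, wave) from counts,
# with a +1 (Laplace) prior so empty cells stay proper distributions.
cpt = np.ones((3, 2, 4))
for i, w, d in zip(inundation, wave, damage):
    cpt[i, w, d] += 1
cpt /= cpt.sum(axis=2, keepdims=True)

# Probabilistic prediction for deep inundation plus wave attack:
print("P(damage | inundation>=1m, wave attack):", cpt[2, 1].round(2))
```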
NASA Astrophysics Data System (ADS)
McCall, Keisha C.
Identification and monitoring of sub-tumor targets will be a critical step for optimal design and evaluation of cancer therapies in general and biologically targeted radiotherapy (dose-painting) in particular. Quantitative PET imaging may be an important tool for these applications. Currently, radiotherapy planning accounts for tumor motion by applying geometric margins. These margins create a motion envelope to encompass the most probable positions of the tumor, while also maintaining the appropriate tumor control and normal tissue complication probabilities. This motion envelope is effective for uniform dose prescriptions where the therapeutic dose is conformed to the external margins of the tumor. However, much research is needed to establish the equivalent margins for non-uniform fields, where multiple biological targets are present and each target is prescribed its own dose level. Additionally, the small size of the biological targets and their close proximity make it impractical to apply planning margins on the sub-tumor level. Also, the extent of high dose regions must be limited to avoid excessive dose to the surrounding tissue. As such, this research project is an investigation of the uncertainty within quantitative PET images of moving and displaced dose-painting targets, and an investigation of the residual errors that remain after motion management. This included characterization of the changes in PET voxel-values as objects are moved relative to the discrete sampling interval of PET imaging systems (SPECIFIC AIM 1). Additionally, the repeatability of PET distributions and of dose-painting target delineation was measured (SPECIFIC AIM 2). The effect of imaging uncertainty on the dose distributions designed using these images (SPECIFIC AIM 3) has also been investigated. This project also included analysis of methods to minimize motion during PET imaging and reduce the dosimetric impact of motion/position-induced imaging uncertainty (SPECIFIC AIM 4).
NASA Astrophysics Data System (ADS)
Reinisch, E. C.; Ali, S. T.; Cardiff, M. A.; Morency, C.; Kreemer, C.; Feigl, K. L.; Team, P.
2016-12-01
Time-dependent deformation has been observed at Brady Hot Springs using interferometric synthetic aperture radar (InSAR) [Ali et al. 2016, http://dx.doi.org/10.1016/j.geothermics.2016.01.008]. Our goal is to evaluate multiple competing hypotheses to explain the observed deformation at Brady. To do so requires statistical tests that account for uncertainty. Graph theory is useful for such an analysis of InSAR data [Reinisch, et al. 2016, http://dx.doi.org/10.1007/s00190-016-0934-5]. In particular, the normalized edge Laplacian matrix calculated from the edge-vertex incidence matrix of the graph of the pair-wise data set represents its correlation and leads to a full data covariance matrix in the weighted least squares problem. This formulation also leads to the covariance matrix of the epoch-wise measurements, representing their relative uncertainties. While the formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, the modulo-2π ambiguity of wrapped phase renders the problem non-linear. The conventional practice is to unwrap InSAR phase before modeling, which can introduce mistakes without increasing the corresponding measurement uncertainty. To address this issue, we are applying Bayesian inference. To build the likelihood, we use three different observables: (a) wrapped phase [e.g., Feigl and Thurber 2009, http://dx.doi.org/10.1111/j.1365-246X.2008.03881.x]; (b) range gradients, as defined by Ali and Feigl [2012, http://dx.doi.org/10.1029/2012GC004112]; and (c) unwrapped phase, i.e. range change in mm, which we validate using GPS data. We apply our method to InSAR data taken over Brady Hot Springs geothermal field in Nevada as part of a project entitled "Poroelastic Tomography by Adjoint Inverse Modeling of Data from Seismology, Geodesy, and Hydrology" (PoroTomo) [ http://geoscience.wisc.edu/feigl/porotomo].
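The incidence-graph formulation described above can be sketched in a few lines: each InSAR pair differences two epochs, the pair list defines an edge-vertex incidence matrix, and weighted least squares propagates the pair-wise noise into epoch-wise covariances. The pair list and noise level below are invented placeholders, not the Brady data.

```python
import numpy as np

# Epochs 0..4 and InSAR pairs (master, slave): a toy acquisition graph.
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
n_epochs = 5

# Edge-vertex incidence matrix: each pair differences two epoch values.
A = np.zeros((len(pairs), n_epochs))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = -1.0, 1.0

sigma_pair = 0.005                 # assumed 5 mm pair-wise noise, equal weights
# Fix the datum at epoch 0 by dropping its column, then propagate covariance
# through the weighted least-squares normal equations.
G = A[:, 1:]
cov_epochs = np.linalg.inv(G.T @ G / sigma_pair**2)
print("epoch-wise std (m):", np.sqrt(np.diag(cov_epochs)).round(4))
```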
Health risk assessment for nanoparticles: A case for using expert judgment
NASA Astrophysics Data System (ADS)
Kandlikar, Milind; Ramachandran, Gurumurthy; Maynard, Andrew; Murdock, Barbara; Toscano, William A.
2007-01-01
Uncertainties in conventional quantitative risk assessment typically relate to values of parameters in risk models. For many environmental contaminants, there is a lack of sufficient information about multiple components of the risk assessment framework. In such cases, the use of default assumptions and extrapolations to fill in the data gaps is a common practice. Nanoparticle risks, however, pose a new form of risk assessment challenge. Besides a lack of data, there is deep scientific uncertainty regarding every aspect of the risk assessment framework: (a) particle characteristics that may affect toxicity; (b) their fate and transport through the environment; (c) the routes of exposure and the metrics by which exposure ought to be measured; (d) the mechanisms of translocation to different parts of the body; and (e) the mechanisms of toxicity and disease. In each of these areas, there are multiple and competing models and hypotheses. These are not merely parametric uncertainties but uncertainties about the choice of the causal mechanisms themselves and the proper model variables to be used, i.e., structural uncertainties. While these uncertainties exist for PM2.5 as well, risk assessment for PM2.5 has avoided dealing with these issues because of a plethora of epidemiological studies. However, such studies do not exist for the case of nanoparticles. Even if such studies are done in the future, they will be very specific to a particular type of engineered nanoparticle and not generalizable to other nanoparticles. Therefore, risk assessment for nanoparticles will have to deal with the various uncertainties that were avoided in the case of PM2.5. Consequently, uncertainties in estimating risks due to nanoparticle exposures may be characterized as 'extreme'. This paper proposes a methodology by which risk analysts can cope with such extreme uncertainty. One way to make these problems analytically tractable is to use expert judgment approaches to study the degree of consensus and/or disagreement between experts on different parts of the exposure-response paradigm. This can be done by eliciting judgments from a wide range of experts on different parts of the risk causal chain. We also use examples to illustrate how studying expert consensus/disagreement helps in research prioritization and budget allocation exercises. The expert elicitation can be repeated over the course of several years, over which time the state of scientific knowledge will also improve and uncertainties may possibly reduce. Results from the expert elicitation exercise can be used by risk managers or managers of funding agencies as a tool for research prioritization.
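One standard way to aggregate such elicited judgments, consistent with the consensus/disagreement analysis described, is a weighted linear opinion pool. The expert probabilities and calibration weights below are invented for illustration.

```python
import numpy as np

# Hypothetical elicited probabilities from four experts over three competing
# toxicity mechanisms (each row sums to 1); weights reflect calibration.
experts = np.array([[0.6, 0.3, 0.1],
                    [0.5, 0.4, 0.1],
                    [0.2, 0.5, 0.3],
                    [0.4, 0.4, 0.2]])
weights = np.array([0.3, 0.3, 0.2, 0.2])

pooled = weights @ experts              # linear opinion pool
disagreement = experts.std(axis=0)      # spread as a crude consensus measure
print("pooled:", pooled.round(2), " spread:", disagreement.round(2))
```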
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Vidal, Jean-Philippe
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may actually be difficult if not impossible to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely to lack robustness. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on only the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is unlikely to be negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates.
For model uncertainty or response-to-uncertainty ratio estimates, the robustness ranges, conversely, from low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1. Hingray, B., Blanchet, J. (in revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
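The core bias result is easy to reproduce numerically: with m members per chain, the empirical variance of chain means overestimates model uncertainty by sigma_int^2/m, and subtracting the pooled within-chain variance divided by m corrects it. A synthetic single-time (STANOVA-like) sketch with illustrative variances:

```python
import numpy as np

rng = np.random.default_rng(11)
n_chains, n_members = 8, 2
true_model_var, internal_var = 1.0, 3.0     # internal variability dominates

# Synthetic projections: chain climate response + internal variability noise.
responses = rng.normal(0.0, np.sqrt(true_model_var), n_chains)
runs = responses[:, None] + rng.normal(0.0, np.sqrt(internal_var),
                                       (n_chains, n_members))

chain_means = runs.mean(axis=1)
naive_model_var = chain_means.var(ddof=1)           # biased upward
internal_var_hat = runs.var(axis=1, ddof=1).mean()  # pooled within-chain var
unbiased_model_var = naive_model_var - internal_var_hat / n_members

print(f"naive: {naive_model_var:.2f}  corrected: {unbiased_model_var:.2f}"
      f"  (truth: {true_model_var})")
```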
Combining information from multiple flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure an adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
NASA Astrophysics Data System (ADS)
El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.
2006-11-01
Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.
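As an example of the analytical NTCP models a tool like DREES fits, the Lyman-Kutcher-Burman form computes a generalized equivalent uniform dose from a differential DVH and passes it through a probit function. The parameter values and toy DVH below are illustrative assumptions, not DREES defaults.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, n=0.87, m=0.18, td50=24.5):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    n, m, TD50 are illustrative parameter values only."""
    v = np.asarray(volumes, float)
    v = v / v.sum()
    geud = np.sum(v * np.asarray(doses, float) ** (1.0 / n)) ** n  # gEUD
    return norm.cdf((geud - td50) / (m * td50))

# Toy differential DVH: dose bins (Gy) and fractional organ volumes.
doses = [5, 15, 25, 35]
volumes = [0.4, 0.3, 0.2, 0.1]
print(f"NTCP ~ {lkb_ntcp(doses, volumes):.2f}")
```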
Uncertainties in climate change projections for viticulture in Portugal
NASA Astrophysics Data System (ADS)
Fraga, Helder; Malheiro, Aureliano C.; Moutinho-Pereira, José; Pinto, Joaquim G.; Santos, João A.
2013-04-01
The assessment of climate change impacts on viticulture is often carried out using regional climate model (RCM) outputs. These studies rely on either multi-model ensembles or on single-model approaches. The RCM ensembles account for uncertainties inherent to the different models. In this study, using a 16-RCM ensemble under the IPCC A1B scenario, the climate change signal (future minus recent past, 2041-2070 minus 1961-2000) of 4 bioclimatic indices (Huglin Index - HI, Dryness Index - DI, Hydrothermal Index - HyI and Composite Index - CompI) over mainland Portugal is analysed. A normalized interquartile range (NIQR) of the 16-member ensemble for each bioclimatic index is assessed in order to quantify the ensemble uncertainty. The results show significant increases in the HI index over most of Portugal, with higher values in the Alentejo, Trás-os-Montes and Douro/Porto wine regions, also depicting very low uncertainty. Conversely, the decreases in the DI pattern throughout the country show large uncertainties, except in Minho (northwestern Portugal), where precipitation reaches the highest amounts in Portugal. The HyI shows significant decreases in northwestern Portugal, with relatively low uncertainty all across the country. The CompI depicts significant decreases over Alentejo and increases over Minho, though the decreases over Alentejo reveal high uncertainty, while the increases over Minho show low uncertainty. The assessment of the uncertainty in climate change projections is of great relevance for the wine industry. Quantifying this uncertainty is crucial, since different models may lead to quite different outcomes and may thereby be as crucial as climate change itself to the winemaking sector. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692.
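The NIQR itself is a one-line computation; the sketch below uses one plausible convention (IQR divided by the median magnitude), and the paper's exact normalization may differ. The 16 ensemble values are invented.

```python
import numpy as np

def normalized_iqr(ensemble, axis=0):
    """Interquartile range normalized by the median magnitude; one common
    convention, assumed here for illustration."""
    q25, q50, q75 = np.percentile(ensemble, [25, 50, 75], axis=axis)
    return (q75 - q25) / np.abs(q50)

# 16 hypothetical RCM climate-change signals of the Huglin Index at one
# grid cell: tight agreement -> low NIQR -> low ensemble uncertainty.
signal = np.array([310, 295, 305, 320, 300, 315, 290, 308,
                   302, 311, 298, 306, 297, 318, 304, 309], dtype=float)
print(f"NIQR = {normalized_iqr(signal):.3f}")
```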
Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford
2015-06-01
A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to the highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
Isotopic Compositions of the Elements, 2001
NASA Astrophysics Data System (ADS)
Böhlke, J. K.; de Laeter, J. R.; De Bièvre, P.; Hidaka, H.; Peiser, H. S.; Rosman, K. J. R.; Taylor, P. D. P.
2005-03-01
The Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry completed its last review of the isotopic compositions of the elements as determined by isotope-ratio mass spectrometry in 2001. That review involved a critical evaluation of the published literature, element by element, and forms the basis of the table of the isotopic compositions of the elements (TICE) presented here. For each element, TICE includes evaluated data from the "best measurement" of the isotope abundances in a single sample, along with a set of representative isotope abundances and uncertainties that accommodate known variations in normal terrestrial materials. The representative isotope abundances and uncertainties generally are consistent with the standard atomic weight of the element Ar(E) and its uncertainty U[Ar(E)] recommended by CAWIA in 2001.
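The representative abundances map to the standard atomic weight via Ar(E) = Σ_i f_i m_i. A minimal sketch for chlorine, using rounded literature values for the isotope masses and abundances; the uncertainty propagation assumes only the two-isotope sum-to-one constraint, a simplification of CAWIA's full evaluation.

```python
import numpy as np

# Rounded values for chlorine: isotope masses (u) and abundances.
masses = np.array([34.968853, 36.965903])   # 35Cl, 37Cl
abund = np.array([0.7576, 0.2424])
u_abund = 0.0010                            # illustrative abundance uncertainty

ar = np.sum(abund * masses)                 # standard atomic weight Ar(Cl)
# With a1 + a2 = 1, dAr/da1 = m1 - m2, so first-order propagation gives:
u_ar = abs(masses[1] - masses[0]) * u_abund
print(f"Ar(Cl) = {ar:.4f} +/- {u_ar:.4f}")  # ~35.453 +/- 0.002
```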
Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barenghi, S.; Boarin, S.; Ricotti, M. E.
2012-07-01
Small modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by the smaller size, and additional benefits from the higher degree of modularization and from deployment of multiple units on the same site (the 'economy of multiple' paradigm). Nevertheless, small modular reactors pay for a dis-economy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to high capital commitment and exceptionally long pay-back time. Investment risk arises from uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by current adverse conditions of financial markets and general economic downturn, as is the case nowadays. This work investigates both the investment profitability and the risk of alternative investments in a single large reactor (LR) or in multiple SMRs of different sizes, drawing information from the project's Internal Rate of Return (IRR) stochastic distribution; the multiple-SMR option assumes deployment on a single site with total installed power equivalent to that of a single LR. Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, the uncertainty of project profitability is not increased compared to the LR: SMRs have features able to smooth the IRR variance and control investment risk. Despite the dis-economy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in developed and mature markets, which are the traditional marketplace for LRs.
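The stochastic IRR analysis described can be sketched with a bisection IRR solver and Monte Carlo scenario draws. The cash-flow shape, cost figures, and distributions below are invented placeholders, not INCAS assumptions.

```python
import numpy as np

def irr(cashflows, lo=-0.5, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection on the NPV sign change
    (valid for one sign change: outlays first, revenues later)."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(5)

def project_irr(overnight_cost):
    """One stochastic scenario: uncertain cost overrun and revenues."""
    capex = overnight_cost * rng.uniform(1.0, 1.4)        # overrun risk
    revenue = rng.normal(1.0, 0.15) * 0.09 * 8760 * 1e6   # EUR/yr, 1 GWe
    flows = [-capex / 5] * 5 + [revenue * 0.35] * 40      # build 5 y, run 40 y
    return irr(flows)

samples = [project_irr(4.5e9) for _ in range(2000)]
print(f"IRR mean {np.mean(samples):.3f}, std {np.std(samples):.3f}")
```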
Charged-particle multiplicity at LHC energies
Grosse-Oetringhaus, Jan Fiete
2018-05-24
The talk presents the measurement of the pseudorapidity density and the multiplicity distribution with ALICE at the achieved LHC energies of 0.9 and 2.36 TeV. An overview of multiplicity measurements prior to LHC is given and the related theoretical concepts are briefly discussed. The analysis procedure is presented and the systematic uncertainties are detailed. The applied acceptance corrections and the treatment of diffraction are discussed. The results are compared with model predictions. The validity of KNO scaling in restricted phase space regions is revisited.
Essential information: Uncertainty and optimal control of Ebola outbreaks
Li, Shou-Li; Bjornstad, Ottar; Ferrari, Matthew J.; Mummah, Riley; Runge, Michael C.; Fonnesbeck, Christopher J.; Tildesley, Michael J.; Probert, William J. M.; Shea, Katriona
2017-01-01
Early resolution of uncertainty during an epidemic outbreak can lead to rapid and efficient decision making, provided that the uncertainty affects prioritization of actions. The wide range in caseload projections for the 2014 Ebola outbreak caused great concern and debate about the utility of models. By coding and running 37 published Ebola models with five candidate interventions, we found that, despite this large variation in caseload projection, the ranking of management options was relatively consistent. Reducing funeral transmission and reducing community transmission were generally ranked as the two best options. Value of information (VoI) analyses show that caseloads could be reduced by 11% by resolving all model-specific uncertainties, with information about model structure accounting for 82% of this reduction and uncertainty about caseload only accounting for 12%. Our study shows that the uncertainty that is of most interest epidemiologically may not be the same as the uncertainty that is most relevant for management. If the goal is to improve management outcomes, then the focus of study should be to identify and resolve those uncertainties that most hinder the choice of an optimal intervention. Our study further shows that simplifying multiple alternative models into a smaller number of relevant groups (here, with shared structure) could streamline the decision-making process and may allow for a better integration of epidemiological modeling and decision making for policy.
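The VoI logic reduces to comparing the best action under model uncertainty with the expected best action if the true model were known. A toy expected-value-of-perfect-information computation, with invented benefits, a reduced model set, and equal model weights:

```python
import numpy as np

# Hypothetical caseload reductions (benefit) for 5 interventions under each
# of 4 model classes; values and equal weights are illustrative only.
benefit = np.array([[0.30, 0.45, 0.20, 0.25, 0.10],   # model class 1
                    [0.35, 0.40, 0.15, 0.30, 0.12],   # model class 2
                    [0.25, 0.50, 0.22, 0.20, 0.08],   # model class 3
                    [0.40, 0.35, 0.18, 0.28, 0.15]])  # model class 4
weights = np.full(4, 0.25)

best_under_uncertainty = (weights @ benefit).max()    # commit to one action
best_if_model_known = (weights * benefit.max(axis=1)).sum()
evpi = best_if_model_known - best_under_uncertainty
print(f"expected benefit now: {best_under_uncertainty:.3f}, EVPI: {evpi:.3f}")
```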
Robustness analysis of non-ordinary Petri nets for flexible assembly systems
NASA Astrophysics Data System (ADS)
Hsieh, Fu-Shiung
2010-05-01
Non-ordinary controlled Petri nets (NCPNs) have the advantage of being able to model flexible assembly systems in which multiple identical resources may be required to perform an operation. However, existing studies on NCPNs are still limited. For example, the robustness properties of NCPNs have not been studied. This motivates us to develop an analysis method for NCPNs. Robustness analysis concerns the ability of a system to maintain operation in the presence of uncertainties. It provides an alternative way to analyse a perturbed system without reanalysis. In our previous research, we analysed the robustness properties of several subclasses of ordinary controlled Petri nets. To study the robustness properties of NCPNs, we augment NCPNs with an uncertainty model, which specifies an upper bound on the uncertainties for each reachable marking. The resulting PN models are called non-ordinary controlled Petri nets with uncertainties (NCPNU). Based on NCPNU, the problem is to characterise the maximal tolerable uncertainties for each reachable marking. The computational complexity of characterising the maximal tolerable uncertainties for each reachable marking grows exponentially with the size of the net. Instead of considering general NCPNU, we limit our scope to a subclass of PN models for assembly systems, called non-ordinary controlled flexible assembly Petri nets with uncertainties (NCFAPNU), and extend the robustness analysis to this subclass. We identify two types of uncertainties under which the liveness of NCFAPNU can be maintained.
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
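The random-sampling step of an XSUSA-style propagation can be sketched as drawing correlated multi-group perturbation factors from a covariance matrix via its Cholesky factor. The 4-group cross sections and covariance values below are illustrative, not SCALE 6.1 data.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy 4-group cross section with a relative covariance matrix, as covariance
# libraries supply it in multi-group form (values invented, must be PD).
xs_nominal = np.array([1.20, 0.80, 0.45, 0.30])   # barns
rel_cov = np.array([[0.0025, 0.0015, 0.0005, 0.0],
                    [0.0015, 0.0030, 0.0010, 0.0],
                    [0.0005, 0.0010, 0.0040, 0.0010],
                    [0.0,    0.0,    0.0010, 0.0050]])

L = np.linalg.cholesky(rel_cov)
n_samples = 1000
z = rng.standard_normal((n_samples, 4))
samples = xs_nominal * (1.0 + z @ L.T)            # correlated perturbations

# Each sampled cross-section set would drive one lattice calculation; the
# spread of the resulting k-inf values estimates the nuclear-data uncertainty.
print("group-wise relative std:", (samples.std(axis=0) / xs_nominal).round(3))
```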
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, which model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
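A skeletal version of the two-phase Monte Carlo technique named above separates an outer epistemic loop from an inner aleatory loop. The attacker parameters and loss model are invented placeholders for a notional system.

```python
import numpy as np

rng = np.random.default_rng(13)
n_outer, n_inner = 200, 1000
losses = np.empty((n_outer, n_inner))

for i in range(n_outer):
    # Outer (epistemic) draw: attacker parameters only known as intervals.
    p_success = rng.uniform(0.05, 0.25)
    loss_scale = rng.uniform(1.0, 4.0)
    # Inner (aleatory) draws: which attacks succeed and the loss of each.
    hits = rng.random(n_inner) < p_success
    losses[i] = np.where(hits, rng.exponential(loss_scale, n_inner), 0.0)

# Each outer iteration yields one aleatory loss distribution; the family of
# curves bounds the exceedance probability (a probability "box").
exceed_1 = (losses > 1.0).mean(axis=1)
print(f"P(loss > 1): between {exceed_1.min():.3f} and {exceed_1.max():.3f}")
```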
Using Bayes factors for multi-factor, biometric authentication
NASA Astrophysics Data System (ADS)
Giffin, A.; Skufca, J. D.; Lao, P. A.
2015-01-01
Multi-factor/multi-modal authentication systems are becoming the de facto industry standard. Traditional methods typically use rates that are point estimates and lack a good measure of uncertainty. Additionally, multiple factors are typically fused together in an ad hoc manner. To be consistent, as well as to establish and make proper use of uncertainties, we use a Bayesian method that will update our estimates and uncertainties as new information presents itself. Our algorithm compares competing classes (such as genuine vs. imposter) using Bayes Factors (BF). The importance of this approach is that we not only accept or reject one model (class), but compare it to others to make a decision. We show using a Receiver Operating Characteristic (ROC) curve that using BF for determining class will always perform at least as well as the traditional combining of factors, such as a voting algorithm. As the uncertainty decreases, the BF result continues to exceed the traditional methods result.
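Under a conditional-independence assumption across factors, the fusion reduces to multiplying per-factor likelihood ratios. The class-conditional score models below are invented stand-ins for calibrated distributions, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-factor match-score models (e.g. fingerprint, face,
# keystroke) for each competing class; parameters are illustrative.
genuine = [norm(0.80, 0.10), norm(0.70, 0.15), norm(0.60, 0.20)]
imposter = [norm(0.40, 0.15), norm(0.35, 0.15), norm(0.45, 0.20)]

scores = [0.75, 0.55, 0.65]          # observed scores for one attempt

# Joint Bayes factor = product of per-factor likelihood ratios.
log_bf = sum(g.logpdf(s) - i.logpdf(s)
             for g, i, s in zip(genuine, imposter, scores))
bf = np.exp(log_bf)
print(f"Bayes factor (genuine vs imposter): {bf:.1f}")

# Posterior odds update: posterior odds = prior odds * BF.
prior_odds = 0.1 / 0.9               # e.g. 10% prior P(genuine)
post_prob = prior_odds * bf / (1 + prior_odds * bf)
print(f"posterior P(genuine) = {post_prob:.3f}")
```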
van der Burg, Max Post; Tyre, Andrew J
2011-01-01
Wildlife managers often make decisions under considerable uncertainty. In the most extreme case, a complete lack of data leads to uncertainty that is unquantifiable. Information-gap decision theory deals with assessing management decisions under extreme uncertainty, but it is not widely used in wildlife management. So too, robust population management methods were developed to deal with uncertainties in multiple-model parameters. However, the two methods have not, as yet, been used in tandem to assess population management decisions. We provide a novel combination of the robust population management approach for matrix models with the information-gap decision theory framework for making conservation decisions under extreme uncertainty. We applied our model to the problem of nest survival management in an endangered bird species, the Mountain Plover (Charadrius montanus). Our results showed that matrix sensitivities suggest that nest management is unlikely to have a strong effect on population growth rate, confirming previous analyses. However, given the amount of uncertainty about adult and juvenile survival, our analysis suggested that maximizing nest marking effort was a more robust decision to maintain a stable population. Focusing on the twin concepts of opportunity and robustness in an information-gap model provides a useful method of assessing conservation decisions under extreme uncertainty.
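The information-gap robustness calculation can be sketched as expanding an uncertainty horizon alpha around nominal survival rates until the worst-case growth rate of the matrix model drops below 1. The two-stage model, nominal rates, and the stability requirement below are invented, not the Mountain Plover parameterization.

```python
import numpy as np

def growth_rate(sj, sa, fec=1.5):
    """Dominant eigenvalue of a toy 2-stage (juvenile/adult) matrix model."""
    M = np.array([[0.0, fec], [sj, sa]])
    return np.max(np.abs(np.linalg.eigvals(M)))

def robustness(sj=0.45, sa=0.75, steps=1000):
    """Info-gap robustness: largest horizon alpha such that the worst-case
    growth rate still meets the performance requirement lambda >= 1."""
    for alpha in np.linspace(0.0, 1.0, steps):
        worst = growth_rate(sj * (1 - alpha), sa * (1 - alpha))
        if worst < 1.0:
            return alpha
    return 1.0

print(f"robustness horizon alpha ~ {robustness():.3f}")
```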
Space Radiation Cancer Risks and Uncertainties for Mars Missions
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Badhwar, G. D.; Saganti, P. B.; Dicello, J. F.
2001-01-01
Projecting cancer risks from exposure to space radiation is highly uncertain because of the absence of data for humans and because of the limited radiobiology data available for estimating late effects from the high-energy and charge (HZE) ions present in the galactic cosmic rays (GCR). Cancer risk projections involve many biological and physical factors, each of which has a differential range of uncertainty due to the lack of data and knowledge. We discuss an uncertainty assessment within the linear-additivity model using the approach of Monte Carlo sampling from subjective error distributions that represent the lack of knowledge in each factor to quantify the overall uncertainty in risk projections. Calculations are performed using the space radiation environment and transport codes for several Mars mission scenarios. This approach leads to estimates of the uncertainties in cancer risk projections of 400-600% for a Mars mission. The uncertainties in the quality factors are dominant. Using safety standards developed for low-Earth orbit, long-term space missions (>90 days) outside the Earth's magnetic field are currently unacceptable if the confidence levels in risk projections are considered. Because GCR exposures involve multiple particle or delta-ray tracks per cellular array, our results suggest that the shape of the dose response at low dose rates may be an additional uncertainty for estimating space radiation risks.
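The Monte Carlo scheme described, sampling each factor of a linear-additivity risk projection from a subjective error distribution, can be sketched as follows; every distribution here is an illustrative stand-in, not NASA's actual error model.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 100_000

# Each factor of the multiplicative risk chain gets a subjective error
# distribution (all values invented for illustration).
dose = rng.normal(0.4, 0.04, n)                   # organ dose, Gy (physics)
quality = rng.lognormal(np.log(3.0), 0.5, n)      # HZE quality factor (dominant)
risk_coeff = rng.lognormal(np.log(0.05), 0.4, n)  # excess risk per Sv
transfer = rng.normal(1.0, 0.2, n)                # population transfer model

risk = dose * quality * risk_coeff * transfer     # lifetime risk (fraction)
lo, med, hi = np.percentile(risk, [2.5, 50, 97.5])
print(f"median {med:.3f}, 95% interval [{lo:.3f}, {hi:.3f}],"
      f" fold-uncertainty ~ {hi / lo:.1f}x")
```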
Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.
2012-03-01
Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical, problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatory uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
The cerebellum and decision making under uncertainty.
Blackwood, Nigel; Ffytche, Dominic; Simmons, Andrew; Bentall, Richard; Murray, Robin; Howard, Robert
2004-06-01
This study aimed to identify the neural basis of probabilistic reasoning, a type of inductive inference that aids decision making under conditions of uncertainty. Eight normal subjects performed two separate two-alternative-choice tasks (the balls in a bottle and personality survey tasks) while undergoing functional magnetic resonance imaging (fMRI). The experimental conditions within each task were chosen so that they differed only in their requirement to make a decision under conditions of uncertainty (probabilistic reasoning and frequency determination required) or under conditions of certainty (frequency determination required). The same visual stimuli and motor responses were used in the experimental conditions. We provide evidence that the neo-cerebellum, in conjunction with the premotor cortex, inferior parietal lobule and medial occipital cortex, mediates the probabilistic inferences that guide decision making under uncertainty. We hypothesise that the neo-cerebellum constructs internal working models of uncertain events in the external world, and that such probabilistic models subserve the predictive capacity central to induction. Copyright 2004 Elsevier B.V.
First Observation of a Baryonic B_{s}^{0} Decay.
Aaij, R; Adeva, B; Adinolfi, M; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Archilli, F; d'Argent, P; Arnau Romeu, J; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Babuschkin, I; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baker, S; Balagura, V; Baldini, W; Baranov, A; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Baryshnikov, F; Baszczyk, M; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Beiter, A; Bel, L J; Bellee, V; Belloli, N; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Beranek, S; Berezhnoy, A; Bernet, R; Bertolin, A; Betancourt, C; Betti, F; Bettler, M-O; van Beuzekom, M; Bezshyiko, Ia; Bifani, S; Billoir, P; Birnkraut, A; Bitadze, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Boettcher, T; Bondar, A; Bondar, N; Bonivento, W; Bordyuzhin, I; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bossu, F; Boubdir, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britton, T; Brodzicka, J; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D H; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cavallero, G; Cenci, R; Chamont, D; Charles, M; Charpentier, Ph; Chatzikonstantinidis, G; Chefdeville, M; Chen, S; Cheung, S F; Chobanova, V; Chrzaszcz, M; Chubykin, A; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombs, G; Coquereau, S; Corti, G; Corvo, M; Costa Sobral, C M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Da Cunha Marinho, F; Dall'Occo, E; Dalseno, J; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Serio, M; De Simone, P; Dean, C T; Decamp, D; Deckenhoff, M; Del Buono, L; Dembinski, H-P; Demmer, M; Dendek, A; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Nezza, P; Dijkstra, H; Dordei, F; Dorigo, M; Dosil Suárez, A; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziewiecki, M; Dziurda, A; Dzyuba, A; Déléage, N; Easo, S; Ebert, M; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Farley, N; Farry, S; Fay, R; Fazzini, D; Ferguson, D; Fernandez, G; Fernandez Prieto, A; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fini, R A; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Franco Lima, V; Frank, M; Frei, C; Fu, J; Funk, W; Furfaro, E; Färber, C; Gabriel, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garcia Martin, L M; García Pardiñas, J; Garra Tico, J; Garrido, L; Garsed, P J; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gizdov, K; Gligorov, V V; Golubkov, D; Golutvin, A; Gomes, A; Gorelov, I V; Gotti, C; Govorkova, E; Graciani Diaz, R; Granado Cardoso, L A; 
Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greim, R; Griffith, P; Grillo, L; Gruber, L; Gruberg Cazon, B R; Grünberg, O; Gushchin, E; Guz, Yu; Gys, T; Göbel, C; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hatch, M; He, J; Head, T; Heister, A; Hennessy, K; Henrard, P; Henry, L; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hombach, C; Hopchev, P H; Huard, Z-C; Hulsbergen, W; Humair, T; Hushchyn, M; Hutchcroft, D; Idzik, M; Ilten, P; Jacobsson, R; Jalocha, J; Jans, E; Jawahery, A; Jiang, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Karacson, M; Kariuki, J M; Karodia, S; Kecke, M; Kelsey, M; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Klimkovich, T; Koliiev, S; Kolpin, M; Komarov, I; Kopecna, R; Koppenburg, P; Kosmyntseva, A; Kotriakhova, S; Kozeiha, M; Kravchuk, L; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Leflat, A; Lefrançois, J; Lefèvre, R; Lemaitre, F; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, T; Li, Y; Li, Z; Likhomanenko, T; Lindner, R; Lionetto, F; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusiani, A; Lyu, X; Machefert, F; Maciuc, F; Maddock, B; Maev, O; Maguire, K; Malde, S; Malinin, A; Maltsev, T; Manca, G; Mancinelli, G; Manning, P; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marinangeli, M; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurice, E; Maurin, B; Mazurov, A; McCann, M; McNab, A; McNulty, R; Meadows, B; Meier, F; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Mogini, A; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morello, M J; Morgunova, O; Moron, J; Morris, A B; Mountain, R; Muheim, F; Mulder, M; Mussini, M; Müller, D; Müller, J; Müller, K; Müller, V; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, T D; Nguyen-Mau, C; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Nogay, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Oldeman, R; Onderwater, C J G; Ossowska, A; Otalora Goicochea, J M; Owen, P; Oyanguren, A; Pais, P R; Palano, A; Palutan, M; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parker, W; Parkes, C; Passaleva, G; Pastore, A; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petrov, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pistone, A; Piucci, A; Placinta, V; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poli Lener, M; Poluektov, A; Polyakov, I; Polycarpo, E; Pomery, G J; Ponce, S; Popov, A; Popov, D; Popovici, B; Poslavskii, S; Potterat, C; Price, E; Prisciandaro, J; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, C; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Ratnikov, F; Raven, G; Ravonel Salzgeber, M; Reboud, M; Redi, F; Reichert, S; Dos Reis, A C; 
Remon Alepuz, C; Renaudin, V; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Rollings, A; Romanovskiy, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Rudolph, M S; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sadykhov, E; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Gonzalo, D; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schellenberg, M; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schreiner, H F; Schubert, K; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Simone, S; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Soares Lavra, L; Sokoloff, M D; Soler, F J P; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefko, P; Stefkova, S; Steinkamp, O; Stemmle, S; Stenyakin, O; Stevens, H; Stoica, S; Stone, S; Storaci, B; Stracka, S; Stramaglia, M E; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Syropoulos, V; Szczekowski, M; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Tellarini, G; Teubert, F; Thomas, E; van Tilburg, J; Tilley, M J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Toriello, F; Tourinho Jadallah Aoude, R; Tournefier, E; Tourneur, S; Trabelsi, K; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tully, A; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagner, A; Vagnoni, V; Valassi, A; Valat, S; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Venkateswaran, A; Verlage, T A; Vernet, M; Vesterinen, M; Viana Barbosa, J V; Viaud, B; Vieira, D; Vieites Diaz, M; Viemann, H; Vilasis-Cardona, X; Vitti, M; Volkov, V; Vollhardt, A; Voneki, B; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Vázquez Sierra, C; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Wark, H M; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Winn, M A; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yang, Z; Yao, Y; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zarebski, K A; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zheng, Y; Zhu, X; Zhukov, V; Zonneveld, J B; Zucchelli, S
2017-07-28
We report the first observation of a baryonic B_{s}^{0} decay, B_{s}^{0}→pΛ̄K^{-}, using proton-proton collision data recorded by the LHCb experiment at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3.0 fb^{-1}. The branching fraction is measured to be B(B_{s}^{0}→pΛ̄K^{-})+B(B_{s}^{0}→p̄ΛK^{+})=[5.46±0.61±0.57±0.50(B)±0.32(f_{s}/f_{d})]×10^{-6}, where the first uncertainty is statistical and the second systematic, the third uncertainty accounts for the experimental uncertainty on the branching fraction of the B^{0}→pΛ̄π^{-} decay used for normalization, and the fourth uncertainty relates to the knowledge of the ratio of b-quark hadronization probabilities f_{s}/f_{d}.
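For orientation, the four quoted uncertainty components combine in quadrature when treated as independent (an assumption made here only for illustration):

```python
import math

# Components of the branching-fraction uncertainty, in units of 1e-6,
# as quoted in the abstract above.
components = {
    "statistical": 0.61,
    "systematic": 0.57,
    "normalization branching fraction": 0.50,
    "fs/fd": 0.32,
}

total = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined uncertainty ~{total:.2f} x 10^-6 on a central value of 5.46 x 10^-6")
```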
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Zupanski, Milija
2018-01-01
In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, by using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multiple snow-layer model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change compared to the Noah LSM, but the uncertainty in daily solid precipitation showed minimal difference between the two LSMs. The impact of LSM choice in reducing temperature uncertainty was limited to surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces uncertainties associated with land surface processes compared to the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
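The Monte Carlo weight-perturbation step can be sketched compactly; the criteria, weights, and cell scores below are hypothetical, and the real analysis adds Global Sensitivity Analysis and DST validation on top of this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AHP-style weights (slope, lithology, land use, rainfall) and
# standardized criterion scores for three map cells.
base_weights = np.array([0.40, 0.25, 0.20, 0.15])
cell_scores = np.array([[0.9, 0.4, 0.7, 0.6],
                        [0.3, 0.8, 0.5, 0.4],
                        [0.6, 0.6, 0.6, 0.6]])

# Perturb weights by +/-20%, renormalize to sum to one, and propagate through
# the weighted linear combination that produces the susceptibility score.
n = 10_000
w = base_weights * rng.uniform(0.8, 1.2, size=(n, 4))
w /= w.sum(axis=1, keepdims=True)
susceptibility = w @ cell_scores.T    # (n, 3) samples of susceptibility per cell

print("mean susceptibility:", susceptibility.mean(axis=0).round(3))
print("weight-driven std  :", susceptibility.std(axis=0).round(3))
```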
Application of data fusion modeling (DFM) to site characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, D.W.; Gibbs, B.P.; Jones, W.F.
1996-01-01
Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.
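A single linearized step of the Bayesian inverse scheme can be sketched as a Gauss-Newton (MAP) update; the toy forward model below stands in for the flow model, and the prior is a plain identity rather than a Markov random field, so this is only the skeleton of what DFM's Square Root Information Smoother does:

```python
import numpy as np

def gauss_newton_step(m, d, g, jac, R_inv, P_inv, m0):
    """One MAP update for minimizing ||d - g(m)||^2_R + ||m - m0||^2_P."""
    J = jac(m)
    H = J.T @ R_inv @ J + P_inv                 # approximate Hessian
    grad = J.T @ R_inv @ (d - g(m)) - P_inv @ (m - m0)
    m_new = m + np.linalg.solve(H, grad)
    cov = np.linalg.inv(H)                      # linearized posterior covariance
    return m_new, cov

# Toy linear forward model: heads d = A @ m for a 2-parameter conductivity field.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
d = np.array([1.2, 0.9, 1.4])
m0 = np.zeros(2)
m, cov = gauss_newton_step(m0, d, lambda m: A @ m, lambda m: A,
                           np.eye(3) / 0.05 ** 2, np.eye(2), m0)
print("estimate:", m.round(3), "| posterior std:", np.sqrt(np.diag(cov)).round(3))
```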
Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis
NASA Astrophysics Data System (ADS)
Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles
2018-03-01
We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of the maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km² in southern France with contrasting rainfall regimes, in order to be able to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore, our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
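The bootstrap branch of the comparison is easy to sketch for a single station; the GEV here is fitted directly to synthetic annual maxima (the paper's simple-scaling IDF structure and Bayesian machinery are omitted):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Synthetic annual rainfall maxima (mm) standing in for one station's record.
maxima = genextreme.rvs(c=-0.1, loc=40, scale=10, size=35, random_state=rng)

def return_level(sample, T=100):
    """Fit a GEV by maximum likelihood and return the T-year return level."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

# Nonparametric bootstrap of the 100-year return level.
boot = np.array([return_level(rng.choice(maxima, size=maxima.size, replace=True))
                 for _ in range(500)])
print(f"RL100 = {return_level(maxima):.1f} mm, bootstrap 90% CI = "
      f"({np.percentile(boot, 5):.1f}, {np.percentile(boot, 95):.1f}) mm")
```

The instability of such bootstrap intervals at long return periods is the behavior the study flags.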
NASA Astrophysics Data System (ADS)
Hyeon, Changbong; Hwang, Wonseok
2017-07-01
Using Brownian motion in periodic potentials V(x) tilted by a force f, we provide physical insight into the thermodynamic uncertainty relation, a recently conjectured principle for statistical errors and irreversible heat dissipation in nonequilibrium steady states. According to the relation, nonequilibrium output generated from dissipative processes necessarily incurs an energetic cost or heat dissipation q, and in order to limit the output fluctuation within a relative uncertainty ε, at least 2k_BT/ε² of heat must be dissipated. Our model shows that this bound is attained not only at near-equilibrium [f ≪ V'(x)] but also at far-from-equilibrium [f ≫ V'(x)], more generally when the dissipated heat is normally distributed. Furthermore, the energetic cost is maximized near the critical force when the barrier separating the potential wells is about to vanish and the fluctuation of Brownian particles is maximized. These findings indicate that the deviation of the heat distribution from Gaussianity gives rise to the inequality of the uncertainty relation, further clarifying the meaning of the uncertainty relation. Our derivation of the uncertainty relation also recognizes a bound on nonequilibrium fluctuations: the variance of the dissipated heat (σ_q²) increases with its mean (μ_q), and it cannot be smaller than 2k_BT·μ_q.
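The saturation case is simple to check numerically: with the barrier removed (pure drift-diffusion) the displacement and heat are Gaussian, and q·ε² should equal 2k_BT. A minimal sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdamped particle under constant force f: v = mu * f, D = mu * kBT (Einstein),
# so displacements over time t are Gaussian and the bound should be saturated.
kBT, mu, f, t, n = 1.0, 1.0, 2.0, 50.0, 200_000
v, D = mu * f, mu * kBT

X = v * t + np.sqrt(2 * D * t) * rng.standard_normal(n)   # displacements
eps2 = X.var() / X.mean() ** 2                            # squared relative uncertainty
q = f * X.mean()                                          # mean dissipated heat

print(f"q * eps^2 = {q * eps2:.3f} (bound: 2 kBT = {2 * kBT:.3f})")
```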
Buoyancy contribution to uncertainty of mass, conventional mass and force
NASA Astrophysics Data System (ADS)
Malengo, Andrea; Bich, Walter
2016-04-01
The conventional mass is a useful concept introduced to reduce the impact of the buoyancy correction in everyday mass measurements, thus avoiding in most cases its accurate determination, necessary in measurements of ‘true’ mass. Although usage of conventional mass is universal and standardized, the concept is considered as a sort of second-choice tool, to be avoided in high-accuracy applications. In this paper we show that this is a false belief, by elucidating the role played by covariances between volume and mass and between volume and conventional mass at the various stages of the dissemination chain and in the relationship between the uncertainties of mass and conventional mass. We arrive at somewhat counter-intuitive results: the volume of the transfer standard plays a comparatively minor role in the uncertainty budget of the standard under calibration. In addition, conventional mass is preferable to mass in normal, in-air operation, as its uncertainty is smaller than that of mass if covariance terms are properly taken into account, and the overstatement of uncertainty that (typically) results from neglecting them is less severe than that which (always) occurs with mass. The same considerations hold for force. In this respect, we show that the associated uncertainty is the same using mass or conventional mass, and, again, that the latter is preferable if covariance terms are neglected.
Macarthur, Roy; Feinberg, Max; Bertheau, Yves
2010-01-01
A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
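The labeling arithmetic in the last sentence follows directly from the 50-200% expected range; a sketch of the implied decision rule (function name and structure are ours, not the paper's):

```python
def labelling_decision(result_pct, threshold=0.9, low=0.5, high=2.0):
    """Compliant only if even the upper end of the 50-200% uncertainty range
    stays below the threshold; noncompliant only if even the lower end exceeds it."""
    if result_pct * high < threshold:
        return "compliant"       # e.g. results below 0.45% for a 0.9% threshold
    if result_pct * low > threshold:
        return "noncompliant"    # e.g. results above 1.8%
    return "inconclusive"

for r in (0.30, 0.45, 1.00, 1.90):
    print(f"{r:.2f}% GMO -> {labelling_decision(r)}")
```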
NASA Astrophysics Data System (ADS)
Jiang, Runqing
Intensity-modulated radiation therapy (IMRT) uses non-uniform beam intensities within a radiation field to provide patient-specific dose shaping, resulting in a dose distribution that conforms tightly to the planning target volume (PTV). Unavoidable geometric uncertainty arising from patient repositioning and internal organ motion can lead to a lower conformality index (CI) during treatment delivery, a decrease in tumor control probability (TCP) and an increase in normal tissue complication probability (NTCP). The CI of the IMRT plan depends heavily on steep dose gradients between the PTV and organ at risk (OAR). Geometric uncertainties reduce the planned dose gradients and result in a less steep or "blurred" dose gradient. The blurred dose gradients can be maximized by constraining the dose objective function in the static IMRT plan or by reducing geometric uncertainty during treatment with corrective verification imaging. Internal organ motion and setup error were evaluated simultaneously for 118 individual patients with implanted fiducials and MV electronic portal imaging (EPI). A Gaussian probability density function (PDF) is reasonable for modeling geometric uncertainties, as indicated by this group of 118 patients. The Gaussian PDF is patient specific, and the group standard deviation (SD) should not be used for accurate treatment planning for individual patients. In addition, individual SD should not be determined or predicted from small imaging samples because of the random nature of the fluctuations. Frequent verification imaging should be employed in situations where geometric uncertainties are expected. Cumulative PDF data can be used for re-planning to assess the accuracy of delivered dose. Group data are useful for determining the worst-case discrepancy between planned and delivered dose. The margins for the PTV should ideally represent true geometric uncertainties. The measured geometric uncertainties were used in this thesis to assess PTV coverage, dose to OAR, equivalent uniform dose per fraction (EUDf) and NTCP. The dose distribution including geometric uncertainties was determined from integration of the convolution of the static dose gradient with the PDF. Integration of the convolution of the static dose and the derivative of the PDF can also be used to determine the dose including geometric uncertainties, although this method was not investigated in detail. The local maximum dose gradient (LMDG) was determined via optimization of the dose objective function by manually adjusting DVH control points or selecting beam numbers and directions during IMRT treatment planning. The minimum SD (SDmin) is used when geometric uncertainty is corrected with verification imaging; the maximum SD (SDmax) is used when the geometric uncertainty is known to be large and difficult to manage. SDmax was 4.38 mm in the anterior-posterior (AP) direction, 2.70 mm in the left-right (LR) direction and 4.35 mm in the superior-inferior (SI) direction; SDmin was 1.1 mm in all three directions if a threshold of less than 2 mm was used for uncorrected fractions in every direction. EUDf is a useful QA parameter for interpreting the biological impact of geometric uncertainties on the static dose distribution, and it has been used as the basis for the time-course NTCP evaluation in the thesis. Relative NTCP values are useful for comparative QA checking by normalizing known complications (e.g. reported in the RTOG studies) to specific DVH control points.
For prostate cancer patients, rectal complications were evaluated from specific RTOG clinical trials and a detailed evaluation of the treatment techniques (e.g. dose prescription, DVH, number of beams, beam angles). Treatment plans that did not meet DVH constraints represented additional complication risk. Geometric uncertainties improved or worsened rectal NTCP depending on the individual internal organ motion within each patient.
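The convolution step described in the thesis lends itself to a one-dimensional sketch: the delivered ("blurred") profile is the static dose convolved with the Gaussian geometric-uncertainty PDF. The dose block and grid are illustrative; the SD values echo the quoted SDmin and AP-direction SDmax.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(-50.0, 50.0, 0.5)                       # position (mm)
static_dose = np.where(np.abs(x) < 25, 100.0, 0.0)    # idealized PTV dose block (%)

for sd_mm in (1.1, 4.38):                             # SDmin vs SDmax (AP)
    blurred = gaussian_filter1d(static_dose, sigma=sd_mm / 0.5)  # sigma in samples
    edge_gradient = np.max(np.abs(np.gradient(blurred, x)))
    print(f"SD = {sd_mm} mm -> max edge dose gradient {edge_gradient:.1f} %/mm")
```

The field-edge gradient flattens as the geometric SD grows, which is the "blurring" that degrades the conformality index.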
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
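The spreadsheet recipe translates directly to a few lines of numpy (shown here instead of Excel purely for compactness); the functional relationship and the IQC-style means and SDs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Measurand through a functional relationship y = a * x1 / x2, where 'a' is an
# empirically derived constant carrying its own standard uncertainty.
a  = rng.normal(1.23, 0.02, n)
x1 = rng.normal(95.0, 3.0, n)
x2 = rng.normal(60.0, 2.5, n)

y = a * x1 / x2
print(f"y = {y.mean():.3f}, combined standard uncertainty u(y) = {y.std(ddof=1):.3f}")
```

The standard deviation of the simulated outputs is the combined standard uncertainty, with no partial derivatives required.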
Evaluating Precipitation from Orbital Data Products of TRMM and GPM over the Indian Subcontinent
NASA Astrophysics Data System (ADS)
Jayaluxmi, I.; Kumar, D. N.
2015-12-01
The rapidly growing records of microwave-based precipitation data made available from various earth observation satellites have created a pressing need to evaluate the associated uncertainties, which arise from different sources such as retrieval error, spatial/temporal sampling error and sensor-dependent error. Pertaining to microwave remote sensing, most studies in the literature focus on gridded data products; fewer studies evaluate the uncertainty inherent in orbital data products. Evaluation of the latter is essential, as these uncertainties can be large in real-time flood forecasting studies, especially at the watershed scale. The present study evaluates the uncertainty of precipitation data derived from the orbital data products of the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the 2A12, 2A25 and 2B31 products. Case study results over the flood-prone basin of Mahanadi, India, are analyzed for precipitation uncertainty through four facets, viz.: a) uncertainty quantification using the volumetric metrics from the contingency table [Aghakouchak and Mehran 2014]; b) error characterization using additive and multiplicative error models; c) error decomposition to identify systematic and random errors; and d) comparative assessment with the orbital data from the GPM mission. The homoscedastic random errors from multiplicative error models justify a better representation of precipitation estimates by the 2A12 algorithm. It can be concluded that although the radiometer-derived 2A12 precipitation data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India testifies that they are in excellent agreement with the reference estimates for the data period considered [Indu and Kumar 2015]. References: A. AghaKouchak and A. Mehran (2014), Extended contingency table: Performance metrics for satellite observations and climate model simulations, Water Resources Research, vol. 49, 7144-7149; J. Indu and D. Nagesh Kumar (2015), Evaluation of Precipitation Retrievals from Orbital Data Products of TRMM over a Subtropical basin in India, IEEE Transactions on Geoscience and Remote Sensing, in press, doi: 10.1109/TGRS.2015.2440338.
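Facet (b), the multiplicative error model, reduces to a linear fit in log space; the data below are synthetic, not TRMM retrievals:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic satellite-vs-reference rain-rate pairs (mm/h).
ref = rng.gamma(shape=2.0, scale=2.0, size=2000)
sat = 0.9 * ref ** 1.05 * rng.lognormal(0.0, 0.3, ref.size)

# Multiplicative model sat = a * ref^b * eps, fitted by least squares in logs.
b, log_a = np.polyfit(np.log(ref), np.log(sat), 1)
resid = np.log(sat) - (log_a + b * np.log(ref))
print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}, random (residual) std = {resid.std():.2f}")

# Systematic component: the deterministic bias implied by the fitted model.
systematic = np.exp(log_a) * ref ** b - ref
print(f"mean systematic error = {systematic.mean():+.2f} mm/h")
```

Roughly constant residual spread in log space is the homoscedasticity the abstract credits to the 2A12 product.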
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to find locally optimal solutions. Simulation results show that the robust transceiver design effectively overcomes the bit-error-rate (BER) loss caused by correlated channel uncertainties and residual self-interference.
An Interval Type-2 Fuzzy Multiple Echelon Supply Chain Model
NASA Astrophysics Data System (ADS)
Miller, Simon; John, Robert
Planning resources for a supply chain is a major factor determining its success or failure. In this paper we build on previous work introducing an Interval Type-2 Fuzzy Logic model of a multiple echelon supply chain. It is believed that the additional degree of uncertainty provided by Interval Type-2 Fuzzy Logic will allow for better representation of the uncertainty and vagueness present in resource planning models. First, the subject of Supply Chain Management is introduced, then some background is given on related work using Type-1 Fuzzy Logic. A description of the Interval Type-2 Fuzzy model is given, and a test scenario detailed. A Genetic Algorithm uses the model to search for a near-optimal plan for the scenario. A discussion of the results follows, along with conclusions and details of intended further work.
Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L
2012-01-01
The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123
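A hedged sketch of the beam-specific margin idea from the closing sentence: assign each tissue segment its SPR uncertainty and combine along the path. The per-tissue sigmas are an illustrative assignment within the quoted 1.6-5.0% range, the WET breakdown is hypothetical, and segments are treated as independent, which a full analysis need not assume.

```python
import numpy as np

sigma = {"lung": 0.050, "soft": 0.016, "bone": 0.025}  # illustrative 1-sigma SPR errors
wet_mm = {"lung": 30.0, "soft": 120.0, "bone": 20.0}   # hypothetical WET per tissue

var = sum((sigma[t] * wet_mm[t]) ** 2 for t in wet_mm)
margin = 1.96 * np.sqrt(var)                           # ~95% beam-specific margin
total = sum(wet_mm.values())
print(f"total WET {total:.0f} mm -> range margin ~{margin:.1f} mm ({100 * margin / total:.1f}%)")
```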
ERIC Educational Resources Information Center
Mendoza, Cara Ann
2013-01-01
This qualitative study examines the effects of multiple years of layoff notices on first- or second-year, K-12 teachers employed in a Northern California, suburban school district in 2008-2009. During years of budget crisis in California, teachers new to the profession experienced ongoing employment uncertainty. This study endeavored to understand…
Case studies in Bayesian microbial risk assessments.
Kennedy, Marc C; Clough, Helen E; Turner, Joanne
2009-12-21
The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
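The structure of the first case study (uncertain inputs propagated by Monte Carlo to a yearly illness count) can be sketched as follows; every distribution and constant here is an illustrative placeholder, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(21)
n = 50_000

conc = rng.lognormal(mean=-3.0, sigma=1.0, size=n)   # organisms per serving
servings = 200_000                                   # servings consumed per year
slope = rng.beta(1, 2000, size=n)                    # uncertain dose-response slope
p_ill = 1 - np.exp(-slope * conc)                    # exponential dose-response
cases = servings * p_ill

print(f"expected cases/yr {cases.mean():.1f}, 95% interval "
      f"({np.percentile(cases, 2.5):.1f}, {np.percentile(cases, 97.5):.1f})")
```

Policy scenarios are then just reruns with altered input distributions, as in the on-farm pasteurisation example.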
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are a major concern for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the relative priorities given to cost and to associated risks, such as environmental risk, health risk, or risk perceived by society. Currently, management of waste streams such as computer waste relies on rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern, owing to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address the multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).
Palmer, Cameron; Pe’er, Itsik
2016-01-01
Missing data are an unavoidable component of modern statistical genetics. Different array or sequencing technologies cover different single nucleotide polymorphisms (SNPs), leading to a complicated mosaic pattern of missingness where both individual genotypes and entire SNPs are sporadically absent. Such missing data patterns cannot be ignored without introducing bias, yet cannot be inferred exclusively from nonmissing data. In genome-wide association studies, the accepted solution to missingness is to impute missing data using external reference haplotypes. The resulting probabilistic genotypes may be analyzed in the place of genotype calls. A general-purpose paradigm, called Multiple Imputation (MI), is known to model uncertainty in many contexts, yet it is not widely used in association studies. Here, we undertake a systematic evaluation of existing imputed data analysis methods and MI. We characterize biases related to uncertainty in association studies, and find that bias is introduced both at the imputation level, when imputation algorithms generate inconsistent genotype probabilities, and at the association level, when analysis methods inadequately model genotype uncertainty. We find that MI performs at least as well as existing methods or in some cases much better, and provides a straightforward paradigm for adapting existing genotype association methods to uncertain data. PMID:27310603
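The MI combining step the authors advocate is conventionally done with Rubin's rules; a minimal sketch with hypothetical per-imputation association estimates:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's rules: point estimate is the mean across imputations; total
    variance = within-imputation mean + (1 + 1/m) * between-imputation variance."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = est.size
    total_var = var.mean() + (1 + 1 / m) * est.var(ddof=1)
    return est.mean(), total_var

# Hypothetical log-odds-ratio estimates from m = 5 imputed genotype datasets.
beta, v = rubin_combine([0.21, 0.25, 0.19, 0.23, 0.22],
                        [0.004, 0.004, 0.005, 0.004, 0.004])
print(f"combined beta = {beta:.3f}, se = {np.sqrt(v):.3f}")
```

The between-imputation term is what carries the genotype uncertainty that single-pass analyses of imputed dosages can understate.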
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.
Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J
2001-09-01
In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
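The basic multiplicative Berkson ingredient of their (considerably richer) error model can be simulated directly; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# Berkson setup: the assigned dose is a group value; each subject's true dose
# scatters multiplicatively around it. Mean of the log error is set to
# -sigma^2/2 so the multiplicative error has unit expectation.
sigma = 0.4
assigned = np.repeat([0.05, 0.10, 0.20, 0.40], 500)               # Gy
true = assigned * rng.lognormal(-0.5 * sigma ** 2, sigma, assigned.size)

beta = 10.0                                                       # true excess-risk slope
risk = beta * true + rng.normal(0.0, 0.5, true.size)

# Regression on the assigned dose: Berkson error adds noise but, unlike
# classical error, leaves the slope nearly unattenuated on average.
slope = np.polyfit(assigned, risk, 1)[0]
print(f"fitted slope {slope:.2f} vs true beta {beta:.2f}")
```

This near-unbiasedness under Berkson-type error is consistent with the paper's finding that dosimetry error had only a negligible effect on the dose-response estimate.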
A model-based approach to wildland fire reconstruction using sediment charcoal records
Itter, Malcolm S.; Finley, Andrew O.; Hooten, Mevin B.; Higuera, Philip E.; Marlon, Jennifer R.; Kelly, Ryan; McLachlan, Jason S.
2017-01-01
Lake sediment charcoal records are used in paleoecological analyses to reconstruct fire history, including the identification of past wildland fires. One challenge of applying sediment charcoal records to infer fire history is the separation of charcoal associated with local fire occurrence and charcoal originating from regional fire activity. Despite a variety of methods to identify local fires from sediment charcoal records, an integrated statistical framework for fire reconstruction is lacking. We develop a Bayesian point process model to estimate the probability of fire associated with charcoal counts from individual-lake sediments and estimate mean fire return intervals. A multivariate extension of the model combines records from multiple lakes to reduce uncertainty in local fire identification and estimate a regional mean fire return interval. The univariate and multivariate models are applied to 13 lakes in the Yukon Flats region of Alaska. Both models resulted in similar mean fire return intervals (100–350 years) with reduced uncertainty under the multivariate model due to improved estimation of regional charcoal deposition. The point process model offers an integrated statistical framework for paleofire reconstruction and extends existing methods to infer regional fire history from multiple lake records with uncertainty following directly from posterior distributions.
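A much-reduced sketch of the single-lake estimation target: if identified fire intervals are treated as exponential waiting times with a conjugate Gamma prior on the fire rate, the posterior mean fire return interval follows in closed form. The fire dates and prior are hypothetical, and the paper's point-process model is far more general.

```python
import numpy as np
from scipy.stats import gamma

fire_years = np.array([150, 420, 610, 980, 1230, 1590, 1820, 2260])  # years BP
intervals = np.diff(fire_years)

# Gamma(a0, b0) prior on the fire rate; conjugate update with n intervals.
a0, b0 = 1.0, 100.0
post = gamma(a=a0 + intervals.size, scale=1.0 / (b0 + intervals.sum()))

lo, hi = post.ppf([0.025, 0.975])
print(f"mean FRI ~{1 / post.mean():.0f} yr, 95% CI ({1 / hi:.0f}, {1 / lo:.0f}) yr")
```

Pooling intervals across lakes shrinks the posterior in the same way the multivariate model reduces uncertainty in the regional interval.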
NASA Astrophysics Data System (ADS)
Blain, E.; Daskalakis, A.; Block, R. C.; Danon, Y.
2017-06-01
The prompt fission neutron spectrum from spontaneous fission of 252Cf is an integral part of several aspects of nuclear data. Not only is the spectrum itself of interest, but neutron detectors often use the spectrum for calibration, and other prompt fission neutron spectra are measured as a ratio to 252Cf. Therefore, reducing the uncertainties in this spectrum will allow for more accurate nuclear data to be available across a wide range of fields. The prompt fission neutron spectrum for the spontaneous fission of 252Cf was measured at Rensselaer Polytechnic Institute using the multiple γ tagging method with an 18.4-ng fission sample. An EJ-301 liquid scintillator fast neutron detector was used to measure the high-energy portion of the spectrum, 0.5-7 MeV, and a thin EJ-204 plastic scintillator was used to measure the low-energy portion of the spectrum, from 50 keV to 2 MeV. These spectra both show good agreement with the current evaluation of 252Cf and have low associated uncertainties, providing a new high-precision measurement that helps reduce the uncertainties in the prompt fission neutron spectrum for the spontaneous fission of 252Cf.
Xingling, Shao; Honglun, Wang
2015-07-01
This paper proposes a novel composite integrated guidance and control (IGC) law for missile interception of an unknown maneuvering target under multiple uncertainties and a control constraint. First, by using the back-stepping technique, the proposed IGC law design is separated into a guidance loop and a control loop. The unknown target maneuvers and variations of aerodynamics parameters in the guidance and control loops are viewed as uncertainties, which are estimated and compensated by a model-assisted reduced-order extended state observer (ESO). Second, based on the principle of active disturbance rejection control (ADRC), an enhanced feedback-linearization (FL) based control law is implemented for the IGC model using the estimates generated by the reduced-order ESO. In addition, performance analysis and comparisons between the ESO and the reduced-order ESO are examined. A nonlinear tracking differentiator is employed to construct the derivative of the virtual control command in the control loop. Third, the closed-loop stability for the considered system is established. Finally, the effectiveness of the proposed IGC law in enhancing interception performance, such as a smooth interception course, improved robustness against multiple uncertainties, and reduced control consumption during the initial phase, is demonstrated through simulations. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
1982-01-01
apparent coincidence that the same normalization should do for time and uncertainty with Kenneth Arrow, Michael Boskin, Frank Hahn, Hugh Rose, Amartya Sen, and John Wise at various times, and the possible relationship between the structure of a criterion function and an information tree such as that
Enhanced Estimation of Terrestrial Loadings for TMDLs: Normalization Approach
USDA-ARS?s Scientific Manuscript database
TMDL implementation plans to remediate pathogen-impaired streams are usually based on deterministic terrestrial fate and transport (DTFT) models. A novel protocol is proposed that can effectively, efficiently, and explicitly capture the predictive uncertainty of DTFT models used to establish terres...
Percentiles of the product of uncertainty factors for establishing probabilistic reference doses.
Gaylor, D W; Kodell, R L
2000-04-01
Exposure guidelines for potentially toxic substances are often based on a reference dose (RfD) that is determined by dividing a no-observed-adverse-effect level (NOAEL), lowest-observed-adverse-effect level (LOAEL), or benchmark dose (BD) corresponding to a low level of risk, by a product of uncertainty factors. The uncertainty factors for animal-to-human extrapolation, variable sensitivities among humans, extrapolation from measured subchronic effects to unknown results for chronic exposures, and extrapolation from a LOAEL to a NOAEL can be thought of as random variables that vary from chemical to chemical. Selected databases are examined that provide distributions across chemicals of inter- and intraspecies effects, ratios of LOAELs to NOAELs, and differences in acute and chronic effects, to illustrate the determination of percentiles for uncertainty factors. The distributions of uncertainty factors tend to be approximately lognormal. The logarithm of the product of independent uncertainty factors is approximately distributed as the sum of normally distributed variables, making it possible to estimate percentiles for the product. Hence, the size of the product of uncertainty factors can be selected to provide adequate safety for a large percentage (e.g., approximately 95%) of RfDs. For the databases used to describe the distributions of uncertainty factors, using values of 10 appears to be reasonable and conservative. For the databases examined, the following simple "Rule of 3s" is suggested that exceeds the estimated 95th percentile of the product of uncertainty factors: if only a single uncertainty factor is required use 33, for any two uncertainty factors use 3 × 33 ≈ 100, for any three uncertainty factors use a combined factor of 3 × 100 = 300, and if all four uncertainty factors are needed use a total factor of 3 × 300 = 900. If near the 99th percentile is desired, use another factor of 3. An additional factor may be needed for inadequate data, or a modifying factor for other uncertainties (e.g., different routes of exposure) not covered above.
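A minimal simulation, assuming each factor is lognormal with median 1 and the value 10 at its 95th percentile (one plausible reading of the text, not the paper's fitted distributions), shows how product percentiles compare with the Rule of 3s:

```python
import numpy as np

rng = np.random.default_rng(13)
n = 1_000_000
sigma = np.log(10) / 1.645        # puts 10 at the 95th percentile of one factor
rule_of_3s = {1: 33, 2: 100, 3: 300, 4: 900}

for k in (1, 2, 3, 4):
    product = np.exp(rng.normal(0.0, sigma, size=(n, k)).sum(axis=1))
    p95 = np.percentile(product, 95)
    print(f"{k} factor(s): simulated 95th percentile ~{p95:,.0f} (rule: {rule_of_3s[k]})")
```

Because the log-product's standard deviation grows only with the square root of the number of factors, the product's 95th percentile grows far more slowly than 10^k, which is why the rule's values stay comfortably conservative under these assumptions.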
Satellite lidar and radar: Key components of the future climate observing system
NASA Astrophysics Data System (ADS)
Winker, D. M.
2017-12-01
Cloud feedbacks represent the dominant source of uncertainties in estimates of climate sensitivity, and aerosols represent the largest source of uncertainty in climate forcing. Both observation of long-term changes and observational constraints on the processes responsible for those changes are necessary. However, the existing 30-year record of passive satellite observations has not yet provided constraints that significantly reduce these uncertainties. We now have more than a decade of experience with active sensors flying in the A-Train. These new observations have demonstrated the strengths of active sensors and the benefits of continued and more advanced active sensors. This talk will discuss the multiple roles for active sensors as an essential component of a global climate observing system.
Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir
2008-04-01
We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF, and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements, in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect, strengthening similar observations based on numerical regularization methods.
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Strydom, Gerhard
2013-01-01
The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
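The 95%/95% tolerance-interval language used here maps onto Wilks' nonparametric order-statistics rule. A minimal sketch of that standard formula (the abstract does not describe the SUSA implementation, so this is illustrative only): find the smallest sample size n for which the sample extremes bound at least 95% of the output population with 95% confidence.

```python
# Wilks' rule for a two-sided nonparametric tolerance interval built from
# the sample minimum and maximum: P(coverage >= g) = 1 - n*g**(n-1) + (n-1)*g**n.
gamma, beta = 0.95, 0.95  # required coverage and confidence

def two_sided_confidence(n: int, g: float) -> float:
    return 1.0 - n * g ** (n - 1) + (n - 1) * g ** n

n = 2
while two_sided_confidence(n, gamma) < beta:
    n += 1
print(n)  # 93: far fewer runs than the 800 calculations quoted above
```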
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grantham, K; Santanam, L; Goddu, S
Purpose: We retrospectively evaluate the dosimetric impact of a 3.5% range uncertainty on CTV coverage and normal organ toxicity for a cohort of brain patients. Methods: Twenty treatment plans involving 20 brain cancer patients treated with the Mevion S250 were reviewed. Forty uncertain plans were made by changing the ranges in the original plans by ±3.5% while keeping all devices unchanged. Fidelity to the original plans was evaluated with the gamma index. Changes in generalized equivalent uniform dose (gEUD) were reported for the following structures: CTV, brainstem, optic chiasm, and optic nerves. Comparisons were made by plotting the relevant endpoints from the uncertain plans as a function of the same endpoints from the original clinical plan. Results: Gamma-index analysis resulted in a 50% pass rate of the uncertain plans using a 90% passing-rate threshold and a 3%/3 mm criterion. A 9.5% decrease in the slope of the gEUD plot for the CTV was observed for the 3.5% downward range shift. However, the change in slope did not result in a gEUD change greater than 1.1% for the CTV. The slopes of the gEUD plots for normal structures increased by 3.1%, 3.9%, 2.4%, and 0.2% for the chiasm, brainstem, left optic nerve, and right optic nerve, respectively. The maximum deviation from the gEUD of the clinical plan for normal structures was 64% in the chiasm, 31% for the brainstem, and 19% for both optic nerves. Conclusion: A retrospective review shows a moderate radiobiological impact of range uncertainty in passively scattered proton therapy, with sporadic catastrophic deviations. The linear regression analysis on the statistical data indicates a systematic deviation of gEUD from treatment planning in light of range uncertainty.
Aaij, R; Adeva, B; Adinolfi, M; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Alfonso Albero, A; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Archilli, F; d'Argent, P; Arnau Romeu, J; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Babuschkin, I; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baker, S; Balagura, V; Baldini, W; Baranov, A; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Baryshnikov, F; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Beiter, A; Bel, L J; Beliy, N; Bellee, V; Belloli, N; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Beranek, S; Berezhnoy, A; Bernet, R; Berninghoff, D; Bertholet, E; Bertolin, A; Betancourt, C; Betti, F; Bettler, M-O; van Beuzekom, M; Bezshyiko, Ia; Bifani, S; Billoir, P; Birnkraut, A; Bitadze, A; Bizzeti, A; Bjørn, M; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Boettcher, T; Bondar, A; Bondar, N; Bonivento, W; Bordyuzhin, I; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bossu, F; Boubdir, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britton, T; Brodzicka, J; Brundu, D; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Byczynski, W; Cadeddu, S; Cai, H; Calabrese, R; Calladine, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D H; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cavallero, G; Cenci, R; Chamont, D; Chapman, M G; Charles, M; Charpentier, Ph; Chatzikonstantinidis, G; Chefdeville, M; Chen, S; Cheung, S F; Chitic, S-G; Chobanova, V; Chrzaszcz, M; Chubykin, A; Ciambrone, P; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collins, P; Colombo, T; Comerma-Montells, A; Contu, A; Cook, A; Coombs, G; Coquereau, S; Corti, G; Corvo, M; Costa Sobral, C M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Currie, R; D'Ambrosio, C; Da Cunha Marinho, F; Dall'Occo, E; Dalseno, J; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Serio, M; De Simone, P; Dean, C T; Decamp, D; Del Buono, L; Dembinski, H-P; Demmer, M; Dendek, A; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Nezza, P; Dijkstra, H; Dordei, F; Dorigo, M; Dosil Suárez, A; Douglas, L; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziewiecki, M; Dziurda, A; Dzyuba, A; Déléage, N; Easo, S; Ebert, M; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Farley, N; Farry, S; Fay, R; Fazzini, D; Federici, L; Ferguson, D; Fernandez, G; Fernandez Declara, P; Fernandez Prieto, A; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fini, R A; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Franco Lima, V; Frank, M; Frei, C; Fu, J; Funk, W; Furfaro, E; Färber, C; Gabriel, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garcia Martin, L M; García Pardiñas, J; Garra Tico, J; Garrido, L; Garsed, P J; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; 
Gibson, V; Girard, O G; Giubega, L; Gizdov, K; Gligorov, V V; Golubkov, D; Golutvin, A; Gomes, A; Gorelov, I V; Gotti, C; Govorkova, E; Grabowski, J P; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greim, R; Griffith, P; Grillo, L; Gruber, L; Gruberg Cazon, B R; Grünberg, O; Gushchin, E; Guz, Yu; Gys, T; Göbel, C; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hamilton, B; Han, X; Hancock, T H; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hasse, C; Hatch, M; He, J; Hecker, M; Heinicke, K; Heister, A; Hennessy, K; Henrard, P; Henry, L; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hombach, C; Hopchev, P H; Huard, Z C; Hulsbergen, W; Humair, T; Hushchyn, M; Hutchcroft, D; Ibis, P; Idzik, M; Ilten, P; Jacobsson, R; Jalocha, J; Jans, E; Jawahery, A; Jiang, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Karacson, M; Kariuki, J M; Karodia, S; Kazeev, N; Kecke, M; Kelsey, M; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Klimkovich, T; Koliiev, S; Kolpin, M; Komarov, I; Kopecna, R; Koppenburg, P; Kosmyntseva, A; Kotriakhova, S; Kozeiha, M; Kravchuk, L; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Leflat, A; Lefrançois, J; Lefèvre, R; Lemaitre, F; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, P-R; Li, T; Li, Y; Li, Z; Likhomanenko, T; Lindner, R; Lionetto, F; Liu, X; Loh, D; Loi, A; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusiani, A; Lyu, X; Machefert, F; Maciuc, F; Macko, V; Mackowiak, P; Maddrell-Mander, S; Maev, O; Maguire, K; Maisuzenko, D; Majewski, M W; Malde, S; Malinin, A; Maltsev, T; Manca, G; Mancinelli, G; Manning, P; Marangotto, D; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marinangeli, M; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurice, E; Maurin, B; Mazurov, A; McCann, M; McNab, A; McNulty, R; Mead, J V; Meadows, B; Meaux, C; Meier, F; Meinert, N; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Millard, E; Minard, M-N; Minzoni, L; Mitzel, D S; Mogini, A; Molina Rodriguez, J; Mombächer, T; Monroy, I A; Monteil, S; Morandin, M; Morello, M J; Morgunova, O; Moron, J; Morris, A B; Mountain, R; Muheim, F; Mulder, M; Mussini, M; Müller, D; Müller, J; Müller, K; Müller, V; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, T D; Nguyen-Mau, C; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Nogay, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Oldeman, R; Onderwater, C J G; Ossowska, A; Otalora Goicochea, J M; Owen, P; Oyanguren, A; Pais, P R; Palano, A; Palutan, M; Papanestis, A; Pappagallo, M; Pappalardo, L L; Parker, W; Parkes, C; Passaleva, G; Pastore, A; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petrov, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pisani, F; Pistone, A; Piucci, A; Placinta, V; Playfer, S; Plo Casasus, M; Polci, F; Poli Lener, M; 
Poluektov, A; Polyakov, I; Polycarpo, E; Pomery, G J; Ponce, S; Popov, A; Popov, D; Poslavskii, S; Potterat, C; Price, E; Prisciandaro, J; Prouve, C; Pugatch, V; Puig Navarro, A; Pullen, H; Punzi, G; Qian, W; Quagliani, R; Quintana, B; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Ratnikov, F; Raven, G; Ravonel Salzgeber, M; Reboud, M; Redi, F; Reichert, S; Dos Reis, A C; Remon Alepuz, C; Renaudin, V; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Rollings, A; Romanovskiy, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Rudolph, M S; Ruf, T; Ruiz Valls, P; Ruiz Vidal, J; Saborido Silva, J J; Sadykhov, E; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarpis, G; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schellenberg, M; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schreiner, H F; Schubert, K; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Simone, S; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Soares Lavra, L; Sokoloff, M D; Soler, F J P; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefko, P; Stefkova, S; Steinkamp, O; Stemmle, S; Stenyakin, O; Stepanova, M; Stevens, H; Stone, S; Storaci, B; Stracka, S; Stramaglia, M E; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Syropoulos, V; Szczekowski, M; Szumlak, T; Szymanski, M; T'Jampens, S; Tayduganov, A; Tekampe, T; Tellarini, G; Teubert, F; Thomas, E; van Tilburg, J; Tilley, M J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Toriello, F; Tourinho Jadallah Aoude, R; Tournefier, E; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tully, A; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagner, A; Vagnoni, V; Valassi, A; Valat, S; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Venkateswaran, A; Verlage, T A; Vernet, M; Vesterinen, M; Viana Barbosa, J V; Viaud, B; Vieira, D; Vieites Diaz, M; Viemann, H; Vilasis-Cardona, X; Vitti, M; Volkov, V; Vollhardt, A; Voneki, B; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Vázquez Sierra, C; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Wark, H M; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Winn, M; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yang, Z; Yao, Y; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zarebski, K A; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zheng, Y; Zhu, X; Zhukov, V; Zonneveld, J B; Zucchelli, S
2018-04-27
The ratio of branching fractions R(D^{*-})≡B(B^{0}→D^{*-}τ^{+}ν_{τ})/B(B^{0}→D^{*-}μ^{+}ν_{μ}) is measured using a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3 fb^{-1}. For the first time, R(D^{*-}) is determined using the τ-lepton decays with three charged pions in the final state. The B^{0}→D^{*-}τ^{+}ν_{τ} yield is normalized to that of the B^{0}→D^{*-}π^{+}π^{-}π^{+} mode, providing a measurement of B(B^{0}→D^{*-}τ^{+}ν_{τ})/B(B^{0}→D^{*-}π^{+}π^{-}π^{+})=1.97±0.13±0.18, where the first uncertainty is statistical and the second systematic. The value of B(B^{0}→D^{*-}τ^{+}ν_{τ})=(1.42±0.094±0.129±0.054)% is obtained, where the third uncertainty is due to the limited knowledge of the branching fraction of the normalization mode. Using the well-measured branching fraction of the B^{0}→D^{*-}μ^{+}ν_{μ} decay, a value of R(D^{*-})=0.291±0.019±0.026±0.013 is established, where the third uncertainty is due to the limited knowledge of the branching fractions of the normalization and B^{0}→D^{*-}μ^{+}ν_{μ} modes. This measurement is in agreement with the standard model prediction and with previous results.
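The quoted uncertainty components combine in quadrature. A short sketch reproducing the totals; note that B(B0→D*-μ+νμ) is not quoted in the abstract, so the value printed below is simply B_tau/R reconstructed from the quoted numbers, for illustration only:

```python
import math

# Combine independent uncertainty components (stat, syst, external) in quadrature.
B_tau = 1.42e-2
dB = math.hypot(0.094e-2, 0.129e-2, 0.054e-2)   # stat, syst, normalization mode
R = 0.291
dR = math.hypot(0.019, 0.026, 0.013)            # stat, syst, external BFs

print(f"B(B0 -> D*- tau+ nu) = ({B_tau * 100:.2f} +/- {dB * 100:.2f})%")
print(f"R(D*-) = {R:.3f} +/- {dR:.3f} (total)")
print(f"implied B(B0 -> D*- mu+ nu) ~ {B_tau / R * 100:.2f}%")  # reconstructed, not quoted
```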
Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.
Vilhelmsen, Troels N; Ferré, Ty P A
2018-05-01
Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in the data used to derive model structure and to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given common financial restrictions, it is critical that we identify data with maximal information content with respect to the forecasts of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to an optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.
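In a linear-Gaussian setting, the data-worth calculation described above reduces to a covariance (Schur-complement) update, which makes the joint selection of multiple measurements explicit. A minimal sketch with invented operators and covariances, not the authors' code:

```python
import numpy as np
from itertools import combinations

# Forecast s = y @ p; candidate observations d_i = H_i @ p with noise Cd.
# Posterior forecast variance for an observation set follows from the
# Schur complement of the joint covariance (linear-Gaussian assumption).
rng = np.random.default_rng(0)
n_par, n_obs = 6, 5
Cp = np.diag(rng.uniform(0.5, 2.0, n_par))   # prior parameter covariance
H = rng.normal(size=(n_obs, n_par))          # candidate observation operators
Cd = 0.1 * np.eye(n_obs)                     # observation-error covariance
y = rng.normal(size=n_par)                   # forecast sensitivity vector

def forecast_var(subset):
    idx = list(subset)
    if not idx:
        return y @ Cp @ y
    Hs = H[idx]
    S = Hs @ Cp @ Hs.T + Cd[np.ix_(idx, idx)]
    gain = Cp @ Hs.T @ np.linalg.solve(S, Hs @ Cp)
    return y @ (Cp - gain) @ y

# Joint selection of a measurement pair (contrast with greedy one-at-a-time):
best_pair = min(combinations(range(n_obs), 2), key=forecast_var)
print("best pair of new measurements:", best_pair, forecast_var(best_pair))
```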
Wavelengths and intensities of a platinum/neon hollow cathode lamp in the region 1100-4000 A
NASA Technical Reports Server (NTRS)
Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Sansonetti, Jean E.
1990-01-01
The spectrum of a platinum hollow cathode lamp containing neon carrier gas was recorded photographically and photoelectrically with a 10.7 m normal-incidence vacuum spectrograph. Wavelengths and intensities were determined for about 3000 lines in the region 1100-4000 A. The uncertainty of the measured wavelengths is estimated to be ±0.0020 A. Ritz-type wavelengths are given for about 550 classified lines of Pt II with uncertainties varying from ±0.0004 A to ±0.0025 A. The uncertainty of the relative intensities is estimated to be about 20 percent.
NASA Astrophysics Data System (ADS)
Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.
2012-05-01
The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at the EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of atomic nitrogen ionization is found to be (6.2 ± 3.3) × 10^11 cm^3 mol^-1 s^-1 at 10,000 K.
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single-acquisition schemes, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is in fact the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
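For readers unfamiliar with model-based resampling, here is a minimal residual-bootstrap sketch for a generic linear fit, standing in for the (log-linear) diffusion tensor regression; the sqrt(n/(n-p)) residual inflation mirrors the bias correction that motivates the bootknife variants, and all numbers are invented:

```python
import numpy as np

# Residual bootstrap for y = X b + e: resample residuals, not raw data.
rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, n)

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_hat
# Inflate residuals to offset the downward bias of fitted residuals
# (the same bias the abstract attributes to naive resampling).
resid *= np.sqrt(n / (n - X.shape[1]))

boot = []
for _ in range(2000):
    y_star = X @ b_hat + rng.choice(resid, size=n, replace=True)
    b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    boot.append(b_star)
print("bootstrap standard errors:", np.std(boot, axis=0))
```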
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
NASA Astrophysics Data System (ADS)
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many and which training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate-of-change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
Neudecker, D.; Talou, P.; Kawano, T.; ...
2015-08-01
We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The one-standard-deviation intervals of k_eff overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1.
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
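The FOSM propagation itself is a one-line matrix product once the sensitivities are in hand. A sketch with invented sensitivities for two inputs (transmissivity and recharge) and three head locations; none of these numbers come from the paper:

```python
import numpy as np

# First-order second moment: C_head = J @ C_in @ J.T, where J holds the
# model-computed sensitivities of heads to the uncertain inputs.
J = np.array([[0.8, 0.1],    # dh1/dT, dh1/dR  (illustrative values)
              [0.5, 0.3],
              [0.2, 0.6]])
C_in = np.array([[0.04, 0.00],   # variances of transmissivity and recharge,
                 [0.00, 0.01]])  # assumed uncorrelated here
C_head = J @ C_in @ J.T
print("head standard deviations:", np.sqrt(np.diag(C_head)))
```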
Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Weaver, Jesse R.
In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which provide an effective balance between the complete representation of the full complexity of uncertainty claims in their interaction, while satisfying the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
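Focusing a Dempster-Shafer structure onto a single event yields exactly the ternary belief/disbelief/uncertainty split of Jøsang's subjective logic. A toy sketch (the frame, masses, and base rate are invented; this is not the authors' FBM implementation):

```python
# Focus a Dempster-Shafer mass assignment onto event A = {a}:
# belief sums masses of subsets of A, plausibility sums masses intersecting A,
# and the gap pl - bel becomes the subjective-logic uncertainty component.
masses = {frozenset("a"): 0.5, frozenset("b"): 0.2, frozenset("ab"): 0.3}
A = frozenset("a")

belief = sum(m for s, m in masses.items() if s <= A)
plaus = sum(m for s, m in masses.items() if s & A)
opinion = {"b": belief, "d": 1 - plaus, "u": plaus - belief}  # b + d + u = 1

base_rate = 0.5
expected = opinion["b"] + base_rate * opinion["u"]  # E = b + a*u
print(opinion, expected)
```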
Experimental and modeling uncertainties in the validation of lower hybrid current drive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poli, F. M.; Bonoli, P. T.; Chilenski, M.
2016-07-28
Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and, at times, compensate each other. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.
Highly efficient evaluation of a gas mixer using a hollow waveguide based laser spectral sensor
NASA Astrophysics Data System (ADS)
Du, Z.; Yang, X.; Li, J.; Yang, Y.; Qiao, C.
2017-05-01
This paper aims to provide a fast, sensitive, and accurate characterization of a Mass Flow Controller (MFC) based gas mixer. The gas mixer was evaluated by using a hollow waveguide based laser spectral sensor with high efficiency. Benefiting from the sensor's fast response, high sensitivity, and continuous operation, multiple key parameters of the mixer, including mixing uncertainty, linearity, and response time, were acquired in a single test run. The test results show that the mixer can blend multi-compound gases quite efficiently, with an uncertainty of 1.44% occurring at a flow rate of 500 ml/min, a linearity of 0.99843, and a response time of 92.6 s. The reliability of the results was confirmed by relative measurement of gas concentration, in which the sensor's own uncertainty was isolated. The measured uncertainty agrees well with the theoretical uncertainties of the mixer, which proves the method to be a reliable characterization. Consequently, the wide applicability of this kind of laser-based characterization to the evaluation of gas analyzers is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonance self-shielding calculations such as DRAGONv4. (authors)
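The stratified sampling step can be sketched in a few lines with scipy's quasi-Monte Carlo module. The cross-section means and the 2% relative standard deviation below are invented, and a real workflow would map each sampled row into a perturbed DRAGLIB library rather than print it:

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin hypercube draws for cross sections treated as normal random variables.
means = np.array([1.0, 0.5, 2.0])        # illustrative group cross sections
rel_sd = 0.02                            # assumed 2% relative standard deviation
sampler = qmc.LatinHypercube(d=len(means), seed=42)
u = sampler.random(n=500)                # stratified uniforms in [0, 1)
samples = norm.ppf(u) * (rel_sd * means) + means
# Each row would parameterize one of the 500 lattice-code runs.
print(samples.shape, samples.mean(axis=0))
```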
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
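A one-dimensional illustration of the expansion described above: an output is written in Hermite polynomials He_k of a standard normal variable, the coefficients are fitted by collocation, and the orthogonality relation E[He_j He_k] = k! δ_jk yields the output moments directly. The stand-in "model" is invented; the paper's multivariate workflow is more involved:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(7)
model = lambda xi: np.exp(0.3 * xi)   # invented stand-in for a model output
xi = rng.standard_normal(200)         # collocation points, xi ~ N(0, 1)
y = model(xi)

degree = 4
Psi = He.hermevander(xi, degree)      # design matrix: He_0(xi) ... He_4(xi)
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Orthogonality under N(0,1) gives mean and variance from the coefficients:
mean = coef[0]
var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, degree + 1))
print(mean, var)                      # compare with exact lognormal 1.046, 0.103
```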
Optimal Redundancy Management in Reconfigurable Control Systems Based on Normalized Nonspecificity
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Klir, George J.
1998-01-01
In this paper the notion of normalized nonspecificity is introduced. Nonspecificity measures the uncertainty of the estimated parameters that reflect impairment in a controlled system. Based on this notion, a quantity called the reconfiguration coverage is calculated. It represents the likelihood of success of a control reconfiguration action. This coverage links the overall system reliability to the achievable and required control performance, as well as to the diagnostic performance. The coverage, when calculated on-line, is used for managing the redundancy in the system.
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
A Framework for Understanding Experiments
2008-06-01
operations. Experiments that emphasize free play and uncertainty in scenarios reflect conditions found in existent operations and satisfy external...validity Requirement 4, the ability to relate results. Conversely, experiments emphasizing similar conditions with diminished free play across multiple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Qingliang; Guo Zhouyi; Wei Huajiang
2011-10-31
Depth-resolved monitoring with differentiation and quantification of glucose diffusion in healthy and abnormal esophagus tissues has been studied in vitro. Experiments have been performed using human normal esophagus and esophageal squamous cell carcinoma (ESCC) tissues by optical coherence tomography (OCT). The images have been continuously acquired for 120 min in the experiments, and the depth-resolved and average permeability coefficients of the 40% glucose solution have been calculated by the OCT amplitude (OCTA) method. We demonstrate the capability of the OCT technique for depth-resolved monitoring, differentiation, and quantification of glucose diffusion in normal esophagus and ESCC tissues. It is found that the permeability coefficients of the 40% glucose solution are not uniform throughout the normal esophagus and ESCC tissues and increase from (3.30 ± 0.09) × 10^-6 and (1.57 ± 0.05) × 10^-5 cm s^-1 at the mucous membrane of normal esophagus and ESCC tissues to (1.82 ± 0.04) × 10^-5 and (3.53 ± 0.09) × 10^-5 cm s^-1 at the submucous layer approximately 742 μm away from the epithelial surface of normal esophagus and ESCC tissues, respectively.
Can hydraulic-modelled rating curves reduce uncertainty in high flow data?
NASA Astrophysics Data System (ADS)
Westerberg, Ida; Lam, Norris; Lyon, Steve W.
2017-04-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than in those of water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times-median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties. These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
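A hedged sketch of how gauging uncertainty propagates into a power-law rating curve Q = a(h - h0)^b, a standard functional form; the stage-discharge pairs, datum offset, and 8% gauging error below are invented, and the paper's hydraulic model is more involved than this curve fit:

```python
import numpy as np

# Monte Carlo propagation: perturb the gauged discharges, refit the power-law
# rating curve in log-log space, and read the spread of the extrapolated flow.
rng = np.random.default_rng(3)
h = np.array([0.4, 0.8, 1.5])          # gauged stages (m), invented
q = np.array([1.2, 6.0, 30.0])         # gauged discharges (m3/s), invented
h0, q_rel_sd = 0.1, 0.08               # assumed datum offset and gauging error

preds = []
h_new = 2.5                            # a high, ungauged stage
for _ in range(5000):
    q_pert = q * (1 + q_rel_sd * rng.standard_normal(q.size))
    b, log_a = np.polyfit(np.log(h - h0), np.log(q_pert), 1)
    preds.append(np.exp(log_a) * (h_new - h0) ** b)
lo, hi = np.percentile(preds, [5, 95])
print(f"Q({h_new} m): 90% band {lo:.1f}-{hi:.1f} m3/s")
```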
Integrating data types to enhance shoreline change assessments
NASA Astrophysics Data System (ADS)
Long, J.; Henderson, R.; Plant, N. G.; Nelson, P. R.
2016-12-01
Shorelines represent the variable boundary between terrestrial and marine environments. Assessment of geographic and temporal variability in shoreline position and related variability in shoreline change rates are an important part of studies and applications related to impacts from sea-level rise and storms. The results from these assessments are used to quantify future ecosystem services and coastal resilience and guide selection of appropriate coastal restoration and protection designs. But existing assessments typically fail to incorporate all available shoreline observations because they are derived from multiple data types and have different or unknown biases and uncertainties. Shoreline-change research and assessments often focus on either the long-term trajectory using sparse data over multiple decades or shorter-term evolution using data collected more frequently but over a shorter period of time. The combination of data collected with significantly different temporal resolution is not often considered. Also, differences in the definition of the shoreline metric itself can occur, whether using a single or multiple data source(s), due to variation in the signal being detected in the data (e.g. instantaneous land/water interface, swash zone, wrack line, or topographic contours). Previous studies have not explored whether more robust shoreline change assessments are possible if all available data are utilized and all uncertainties are considered. In this study, we test the hypothesis that incorporating all available shoreline data will both improve historical assessments and enhance the predictive capability of shoreline-change forecasts. Using over 250 observations of shoreline position at Dauphin Island, Alabama over the last century, we compare shoreline-change rates derived from individual data sources (airborne lidar, satellite, aerial photographs) with an assessment using the combination of all available data. Biases or simple uncertainties in the shoreline metric from different data types and varying temporal/spatial resolution of the data are examined. As part of this test, we also demonstrate application of data assimilation techniques to predict shoreline position by accurately including the uncertainty in each type of data.
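One simple way to "use all the data" while honoring per-datum uncertainty is a weighted least-squares rate estimate; the observations and sigmas below are invented stand-ins for map-, photo-, and lidar-derived shorelines (the paper's assimilation approach is more sophisticated):

```python
import numpy as np

# Hypothetical shoreline positions (cross-shore, m) with 1-sigma
# uncertainties that differ by data type (maps/photos vs. lidar).
t   = np.array([1950.0, 1974.0, 1998.0, 2005.0, 2010.0, 2015.0])
x   = np.array([120.0, 108.0, 95.0, 93.0, 90.0, 88.0])
sig = np.array([10.0, 8.0, 3.0, 0.5, 0.5, 0.5])

# Weighted least squares for x = x0 + rate * (t - mean(t))
w = 1.0 / sig**2
A = np.column_stack([np.ones_like(t), t - t.mean()])
cov = np.linalg.inv(A.T @ (w[:, None] * A))   # parameter covariance
x0, rate = cov @ A.T @ (w * x)
print(f"shoreline-change rate: {rate:.2f} +/- {np.sqrt(cov[1, 1]):.2f} m/yr")
```

Down-weighting the older, less certain observations keeps them in the record without letting their biases dominate the fitted trend.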
Significance of structural changes in proteins: expected errors in refined protein structures.
Stroud, R. M.; Fauman, E. B.
1995-01-01
A quantitative expression key to evaluating significant structural differences or induced shifts between any two protein structures is derived. Because crystallography leads to reports of a single (or sometimes dual) position for each atom, the significance of any structural change based on comparison of two structures depends critically on knowing the expected precision of each median atomic position reported, and on extracting it, for each atom, from the information provided in the Protein Data Bank and in the publication. The differences between structures of protein molecules that should be identical, and that are normally distributed (indicating that they are not affected by crystal contacts), were analyzed with respect to many potential indicators of structure precision, so as to extract, essentially by "machine learning" principles, a generally applicable expression involving the highest correlates. Eighteen refined crystal structures from the Protein Data Bank, in which there are multiple molecules in the crystallographic asymmetric unit, were selected and compared. The thermal B factor, the connectivity of the atom, and the ratio of the number of reflections to the number of atoms used in refinement correlate best with the magnitude of the positional differences between regions of the structures that otherwise would be expected to be the same. These results are embodied in a six-parameter equation that can be applied to any crystallographically refined structure to estimate the expected uncertainty in position of each atom. Structure change in a macromolecule can thus be referenced to the expected uncertainty in atomic position as reflected in the variance between otherwise identical structures with the observed values of correlated parameters. PMID:8563637
NASA Astrophysics Data System (ADS)
Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.
2013-10-01
The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
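The entropy criteria themselves are not spelled out in the abstract; under the stated matrix-normal assumption, the joint (differential) entropy of any subset of stations follows from the Gaussian entropy formula, so a small network can be screened exhaustively. A minimal sketch with an invented covariance:

```python
import numpy as np
from itertools import combinations

def gaussian_entropy(cov):
    """Entropy (nats) of a multivariate normal with covariance `cov`."""
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

# Hypothetical covariance of a water-quality variable across 6 stations
rng = np.random.default_rng(0)
L = rng.normal(size=(6, 6))
cov = L @ L.T + 6 * np.eye(6)

# Retain the 3 stations whose joint entropy is largest (most informative),
# i.e. emphasize information richness rather than station count.
best = max(combinations(range(6), 3),
           key=lambda s: gaussian_entropy(cov[np.ix_(s, s)]))
print("retain stations:", best)
```

For larger networks the exhaustive search is replaced by greedy or optimization-based selection, and violation-specific entropies weight the same machinery toward standard exceedances.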
Isotopic compositions of the elements, 2001
Böhlke, J.K.; De Laeter, J. R.; De Bievre, P.; Hidaka, H.; Peiser, H.S.; Rosman, K.J.R.; Taylor, P.D.P.
2005-01-01
The Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry completed its last review of the isotopic compositions of the elements as determined by isotope-ratio mass spectrometry in 2001. That review involved a critical evaluation of the published literature, element by element, and forms the basis of the table of the isotopic compositions of the elements (TICE) presented here. For each element, TICE includes evaluated data from the “best measurement” of the isotope abundances in a single sample, along with a set of representative isotope abundances and uncertainties that accommodate known variations in normal terrestrial materials. The representative isotope abundances and uncertainties generally are consistent with the standard atomic weight of the element Ar(E) and its uncertainty U[Ar(E)] recommended by CAWIA in 2001.
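As a minimal illustration of how a representative isotopic composition maps to a standard atomic weight and its uncertainty (the masses and abundances below are representative numbers for chlorine; the abundance uncertainties are illustrative, not CAWIA's):

```python
# Standard atomic weight as the abundance-weighted mean of isotope masses,
# with a linear propagation of the abundance uncertainty.
masses     = [34.968853, 36.965903]   # 35Cl, 37Cl atomic masses (u)
abundances = [0.7576, 0.2424]         # representative isotope abundances
abund_unc  = 0.0010                   # illustrative abundance uncertainty

ar = sum(m * x for m, x in zip(masses, abundances))
# Abundances sum to 1, so a shift +dx in 35Cl implies -dx in 37Cl;
# the atomic-weight uncertainty then scales with the mass difference.
u_ar = abs(masses[1] - masses[0]) * abund_unc
print(f"Ar(Cl) = {ar:.4f} +/- {u_ar:.4f}")
```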
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
NASA Astrophysics Data System (ADS)
Abe, K.; Amey, J.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Ashida, Y.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Barry, C.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Bienstock, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Cartwright, S. L.; Catanesi, M. G.; Cervera, A.; Chappell, A.; Checchia, C.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cudd, A.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Di Lodovico, F.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dunne, P.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Fiorillo, G.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Garcia, A.; Giganti, C.; Gizzarelli, F.; Golan, T.; Gonin, M.; Hadley, D. R.; Haegel, L.; Haigh, J. T.; Hansen, D.; Harada, J.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Hillairet, A.; Hiraki, T.; Hiramoto, A.; Hirota, S.; Hogan, M.; Holeczek, J.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Koller, P. P.; Konaka, A.; Kormos, L. L.; Koshio, Y.; Kowalik, K.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Lamoureux, M.; Lasorak, P.; Laveder, M.; Lawe, M.; Licciardi, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Lou, T.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Maret, L.; Marino, A. D.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Morrison, J.; Mueller, Th. A.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakanishi, Y.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Paudyal, P.; Pavin, M.; Payne, D.; Petrov, Y.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Pritchard, A.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Rayner, M. A.; Reinherz-Aronis, E.; Riccio, C.; Rodrigues, P. A.; Rondio, E.; Rossi, B.; Roth, S.; Ruggeri, A. C.; Rychter, A.; Sakashita, K.; Sánchez, F.; Scantamburlo, E.; Scholberg, K.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takeda, A.; Takeuchi, Y.; Tamura, R.; Tanaka, H. K.; Tanaka, H. A.; Thakore, T.; Thompson, L. 
F.; Tobayama, S.; Toki, W.; Tomura, T.; Tsukamoto, T.; Tzanov, M.; Vagins, M.; Vallari, Z.; Vasseur, G.; Vilela, C.; Vladisavljevic, T.; Wachala, T.; Walter, C. W.; Wark, D.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Wret, C.; Yamada, Y.; Yamamoto, K.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; T2K Collaboration
2017-11-01
The T2K experiment reports an updated analysis of neutrino and antineutrino oscillations in appearance and disappearance channels. A sample of electron neutrino candidates at Super-Kamiokande in which a pion decay has been tagged is added to the four single-ring samples used in previous T2K oscillation analyses. Through combined analyses of these five samples, simultaneous measurements of four oscillation parameters, |Δm²₃₂|, sin²θ₂₃, sin²θ₁₃, and δCP, and of the mass ordering are made. A set of studies of simulated data indicates that the sensitivity to the oscillation parameters is not limited by neutrino interaction model uncertainty. Multiple oscillation analyses are performed, and frequentist and Bayesian intervals are presented for combinations of the oscillation parameters with and without the inclusion of reactor constraints on sin²θ₁₃. When combined with reactor measurements, the hypothesis of CP conservation (δCP = 0 or π) is excluded at 90% confidence level. The 90% confidence region for δCP is [-2.95, -0.44] ([-1.47, -1.27]) for normal (inverted) ordering. The central values and 68% confidence intervals for the other oscillation parameters for normal (inverted) ordering are Δm²₃₂ = 2.54 ± 0.08 (2.51 ± 0.08) × 10⁻³ eV²/c⁴ and sin²θ₂₃ = 0.55 +0.05/−0.09 (0.55 +0.05/−0.08), compatible with maximal mixing. In the Bayesian analysis, the data weakly prefer normal ordering (Bayes factor 3.7) and the upper octant for sin²θ₂₃ (Bayes factor 2.4).
Gibson, F; Hibbins, S; Grew, T; Morgan, S; Pearce, S; Stark, D; Fern, L A
2016-11-01
Young people with cancer exhibit unique needs. During a time of normal physical and psychological change, multiple disease and treatment-related symptoms cause short and long-term physical and psychosocial effects. Little is known about how young people cope with the impact of cancer and its treatment on daily routines and their strategies to manage the challenges of cancer and treatments. We aimed to determine how young people describe these challenges through a social media site. Using the principles of virtual ethnography and watching videos on a social media site, we gathered data from young people describing their cancer experience. Qualitative content analysis was employed to analyse and interpret the narrative from longitudinal 'video diaries' by 18 young people, equating to 156 films and 27 h and 49 min of recording. Themes were described, then organized and clustered into typologies grouping commonalities across themes. Four typologies emerged reflective of the cancer trajectory: treatment and relenting side effects, rehabilitation and getting on with life, relapse, facing more treatment and coming to terms with dying. This study confirms the need for young people to strive towards normality and creating a new normal, even where uncertainty prevailed. Strategies young people used to gain mastery over their illness and the types of stories they choose to tell provide the focus of the main narrative. Social media sites can be examined as a source of data, to supplement or substitute for more traditional routes of data collection known to be practically challenging with this population. Copyright © 2016 John Wiley & Sons, Ltd.
A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.
NASA Astrophysics Data System (ADS)
Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.
2017-12-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework using hydraulic modelling for developing SD rating curves and estimating their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the information available in the stage-discharge observation data (gaugings). This method provides a direct estimation of the hydraulic configuration (slope, bed roughness and vegetation roughness). Discharge time series are estimated by propagating stage records through posterior rating curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings for the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimations. A sensitivity analysis was performed. We focused analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken.
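The final step, propagating a stage record through posterior rating-curve results, can be sketched as follows, with an invented posterior parameter ensemble standing in for the framework's actual output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior ensemble of rating-curve parameters (a, b, h0),
# e.g. from sampling over gaugings and hydraulic-configuration priors.
post = np.column_stack([rng.normal(10.0, 0.8, 2000),    # coefficient a
                        rng.normal(1.7, 0.05, 2000),    # exponent b
                        rng.normal(0.25, 0.02, 2000)])  # cease-to-flow h0

stage = np.array([0.6, 0.9, 1.4, 2.2, 2.8])             # stage record (m)

# Each posterior draw defines one plausible discharge time series
q = post[:, 0, None] * np.maximum(stage - post[:, 2, None], 0.0) ** post[:, 1, None]
q5, q50, q95 = np.percentile(q, [5, 50, 95], axis=0)
for h, lo, med, hi in zip(stage, q5, q50, q95):
    print(f"h={h:.1f} m  Q={med:6.1f} [{lo:6.1f}, {hi:6.1f}] m^3/s")
```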
NASA Technical Reports Server (NTRS)
Desormeaux, Yves; Rossow, William B.; Brest, Christopher L.; Campbell, G. G.
1993-01-01
Procedures are described for normalizing the radiometric calibration of image radiances obtained from geostationary weather satellites that contributed data to the International Satellite Cloud Climatology Project. The key step is comparison of coincident and collocated measurements made by each satellite and the concurrent AVHRR on the 'afternoon' NOAA polar-orbiting weather satellite at the same viewing geometry. The results of this comparison allow transfer of the AVHRR absolute calibration, which has been established over the whole series, to the radiometers on the geostationary satellites. Results are given for Meteosat-2, 3, and 4, for GOES-5, 6, and 7, for GMS-2, 3, and 4 and for Insat-1B. The relative stability of the calibrations of these radiance data is estimated to be within ±3 percent; the uncertainty of the absolute calibrations is estimated to be less than 10 percent. The remaining uncertainties are at least two times smaller than for the original radiance data.
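A minimal sketch of the calibration-transfer step: regress coincident, collocated reference radiances on the geostationary sensor's counts and apply the fitted relation to the full record (all values invented; the ISCCP procedure additionally controls for viewing geometry and the stability of the AVHRR series):

```python
import numpy as np

# Hypothetical coincident, collocated pairs: AVHRR radiances (reference,
# absolutely calibrated) vs. a geostationary radiometer's raw counts.
avhrr_rad  = np.array([21.5, 48.0, 87.2, 132.4, 180.1])
geo_counts = np.array([60.0, 118.0, 205.0, 301.0, 405.0])

# Linear calibration transfer: radiance = gain * counts + offset
gain, offset = np.polyfit(geo_counts, avhrr_rad, 1)
print(f"gain = {gain:.4f}, offset = {offset:.2f}")

# Apply the transferred calibration to the rest of the geostationary record
new_counts = np.array([90.0, 250.0, 380.0])
print("normalized radiances:", gain * new_counts + offset)
```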
Håkstad, Ragnhild B; Obstfelder, Aud; Øberg, Gunn Kristin
2016-08-01
Having a preterm infant is a life-altering event for parents. The use of interventions intended to support the parents is recommended. In this study, we investigated how parents' perceptions of physiotherapy in primary health care influenced their adaptation to caring for a preterm child. We conducted 17 interviews involving parents of seven infants, at infants' corrected age (CA) 3, 6, and 12 months. The analysis was a systematic text condensation, connecting to theory of participatory sense-making. The parents described a progression toward a new normalcy in the setting of persistent uncertainty. Physiotherapists can ameliorate this uncertainty and support the parents' progression toward normalization, by providing knowledge and acknowledging both the child as subject and the parent-child relationship. Via embodied interaction and the exploration of their child's capacity, the parents learn about their children's individuality and gain the confidence necessary to support and care for their children in everyday life. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
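The ECCP formulation itself is not given in the abstract; the sketch below shows only the textbook deterministic equivalent of a single normal chance constraint, plus one simplified reading of the bi-random (normally distributed mean) layer. All symbols and numbers are hypothetical:

```python
from scipy.stats import norm

# Deterministic equivalent of a normal chance constraint
#   Pr( allocation x <= cap b ) >= alpha,  b ~ N(mu_b, sigma_b^2)
# reduces to  x <= mu_b + sigma_b * z_(1-alpha).
mu_b, sigma_b, alpha = 100.0, 8.0, 0.95
cap = mu_b + sigma_b * norm.ppf(1.0 - alpha)   # = mu_b - 1.645 * sigma_b
print(f"allocate at most {cap:.1f} units of CO2 emissions")

# A bi-random variable adds a second layer: mu_b is itself normal.
# One simplified reading fixes mu_b at a chosen confidence level first,
# then applies the same reduction (not necessarily the paper's algorithm).
mu_mu, sigma_mu, beta = 100.0, 3.0, 0.90
mu_b_eff = mu_mu + sigma_mu * norm.ppf(1.0 - beta)
cap2 = mu_b_eff + sigma_b * norm.ppf(1.0 - alpha)
print(f"bi-random cap: {cap2:.1f}")
```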
Gunn, Mark A.; Roark, D. Michael
2014-01-01
A seepage investigation was conducted by the U.S. Geological Survey, in cooperation with the New Mexico Interstate Stream Commission, along an approximately 106-mile reach of the Rio Grande from below Caballo Reservoir, New Mexico, to El Paso, Texas, during June 26–28, 2012, to determine gain or loss of streamflow due to seepage to or from the river channel. Discharge measurements were made during the irrigation season at high flow including 5 sites along the Rio Grande, 5 diversions, and 63 inflows. The net gain or loss of flow in the river channel was computed for four reaches within the 106-mile reach of the Rio Grande. The normalized percentage difference was computed for each reach to determine the difference between discharge measured at upstream and downstream sites, and the normalized percentage uncertainty was computed to determine if a computed gain or loss exceeded cumulative uncertainty associated with measurement of discharge.
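A minimal sketch of the gain/loss bookkeeping the abstract describes; the discharge values and the per-measurement uncertainty figure are invented, and the normalized-percentage formulas are plausible readings rather than the USGS definitions:

```python
# Seepage gain/loss check for one reach (illustrative numbers, ft^3/s).
q_up, q_down = 850.0, 790.0        # discharge at upstream/downstream sites
inflows, diversions = 40.0, 75.0   # summed tributary inflows, diversions

net = q_down - (q_up + inflows - diversions)     # channel gain (+) / loss (-)
npd = 100.0 * net / q_up                         # normalized % difference

meas_unc = 0.08                                  # assumed relative uncertainty
npu = 100.0 * meas_unc * (q_up + q_down) / q_up  # crude cumulative bound
verdict = "significant" if abs(npd) > npu else "within measurement uncertainty"
print(f"NPD = {npd:.1f}%, NPU = {npu:.1f}% -> {verdict}")
```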
Subatmospheric vapor pressures evaluated from internal-energy measurements
NASA Astrophysics Data System (ADS)
Duarte-Garza, H. A.; Magee, J. W.
1997-01-01
Vapor pressures were evaluated from measured internal-energy changes in the vapor+liquid two-phase region, ΔU^(2). The method employed a thermodynamic relationship between the derivative quantity (∂U^(2)/∂V)_T and the vapor pressure (p_σ) and its temperature derivative (∂p/∂T)_σ. This method was applied at temperatures between the triple point and the normal boiling point of three substances: 1,1,1,2-tetrafluoroethane (R134a), pentafluoroethane (R125), and difluoromethane (R32). Agreement with experimentally measured vapor pressures near the normal boiling point (101.325 kPa) was within the experimental uncertainty of approximately ±0.04 kPa (±0.04%). The method was applied to R134a to test the thermodynamic consistency of a published p-ρ-T equation of state with an equation for p_σ for this substance. It was also applied to evaluate published p_σ data which are in disagreement by more than their claimed uncertainty.
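The relationship the method rests on can be written compactly; this is the standard two-phase internal-energy identity, restated here with the abstract's notation:

```latex
\[
  \left(\frac{\partial U^{(2)}}{\partial V}\right)_{T}
  = T \left(\frac{\partial p}{\partial T}\right)_{\sigma} - p_{\sigma}
\]
```

Measured ΔU^(2) values fix the left-hand side, so the identity becomes a first-order differential equation for p_σ(T) that can be integrated between the triple point and the normal boiling point.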
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
McKibbon, K Ann; Fridsma, Douglas B; Crowley, Rebecca S
2007-04-01
The research sought to determine if primary care physicians' attitudes toward risk taking or uncertainty affected how they sought information and used electronic information resources when answering simulated clinical questions. Using physician-supplied data collected from existing risk and uncertainty scales, twenty-five physicians were classified as risk seekers (e.g., enjoying adventure), risk neutral, or risk avoiders (e.g., cautious) and stressed or unstressed by uncertainty. The physicians then answered twenty-three multiple-choice, clinically focused questions and selected two to pursue further using their own information resources. Think-aloud protocols were used to collect searching process and outcome data (e.g., searching time, correctness of answers, searching techniques). No differences in searching outcomes were observed between the groups. Physicians who were risk avoiding and those who reported stress when faced with uncertainty each showed differences in searching processes (e.g., actively analyzing retrieval, using searching heuristics or rules). Physicians who were risk avoiding tended to use resources that provided answers and summaries, such as Cochrane or UpToDate, less than risk-seekers did. Physicians who reported stress when faced with uncertainty showed a trend toward less frequent use of MEDLINE, when compared with physicians who were not stressed by uncertainty. Physicians' attitudes towards risk taking and uncertainty were associated with different searching processes but not outcomes. Awareness of differences in physician attitudes may be key in successful design and implementation of clinical information resources.
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishma
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
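A toy version of the 'source-specific' dressing step, assuming a lognormal hydrological-error model (the distributional form and all numbers are invented; the paper's post-processors are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical raw ensemble streamflow forecast (m^3/s) for one lead time,
# carrying meteorological uncertainty only.
ens = np.array([120.0, 135.0, 150.0, 160.0, 142.0])

# Dress each member with realizations of the hydrological-error
# distribution (here an assumed lognormal multiplier).
n_dress = 200
hydro_mult = rng.lognormal(mean=0.0, sigma=0.15, size=(n_dress, ens.size))
total = (ens * hydro_mult).ravel()    # estimate of the 'total uncertainty'

print(f"median {np.median(total):.0f} m^3/s, 90% interval "
      f"[{np.percentile(total, 5):.0f}, {np.percentile(total, 95):.0f}] m^3/s")
```

The 'lumped' alternative would instead dress a single deterministic forecast with one error distribution fitted against verifying observations, absorbing both uncertainty sources at once.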
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
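The IMM equations are not reproduced in the abstract; the compact scalar sketch below shows the generic IMM cycle (mixing, per-model Kalman update, model-probability update) with two process-noise models, which is the mechanism the abstract describes for tracking abrupt resistivity changes. All numbers are invented:

```python
import numpy as np

# Scalar IMM: two Kalman filters share one measurement model but differ
# in process noise Q (model 0: slow drift, model 1: abrupt jumps).
F, H, R = 1.0, 1.0, 0.5**2
Q = np.array([1e-4, 1.0])
P_trans = np.array([[0.95, 0.05],      # Markov model-switching matrix
                    [0.05, 0.95]])

x = np.zeros(2); P = np.ones(2); mu = np.array([0.5, 0.5])

def imm_step(z, x, P, mu):
    # 1) mixing: blend each filter's prior with the others'
    c = P_trans.T @ mu
    w = (P_trans * mu[:, None]) / c                 # w[i, j] = Pr(i | j)
    x0 = w.T @ x
    P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j])**2)) for j in range(2)])
    # 2) per-model Kalman predict/update, recording the innovation likelihood
    lik = np.empty(2)
    for j in range(2):
        xp, Pp = F * x0[j], F * P0[j] * F + Q[j]
        S = H * Pp * H + R
        K = Pp * H / S
        x[j], P[j] = xp + K * (z - H * xp), (1 - K * H) * Pp
        lik[j] = np.exp(-0.5 * (z - H * xp)**2 / S) / np.sqrt(2 * np.pi * S)
    # 3) model-probability update and combined estimate
    mu = lik * c
    mu /= mu.sum()
    return x, P, mu, float(mu @ x)

for z in [0.1, 0.0, 0.2, 5.0, 5.1, 5.2]:    # abrupt change at the 4th step
    x, P, mu, est = imm_step(z, x, P, mu)
    print(f"z={z:4.1f}  estimate={est:5.2f}  mu={mu.round(2)}")
```

After the jump, probability shifts to the high-Q model, so the combined estimate reacts quickly without sacrificing smoothness beforehand.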
Development of the smartphone-based colorimetry for multi-analyte sensing arrays.
Hong, Jong Il; Chang, Byoung-Yong
2014-05-21
Here we report development of a smartphone app (application) that digitizes the colours of a colorimetric sensor array. A conventional colorimetric sensor array consists of multiple paper-based sensors, and reports the detection results in terms of colour change. Evaluation of the colour changes is normally done by the naked eye, which may cause uncertainties due to personal subjectivity and the surrounding conditions. Solutions have been particularly sought in smartphones as they are capable of spectrometric functions. Our report specifically focuses on development of a practical app for immediate point-of-care (POC) multi-analyte sensing without additional devices. First, the individual positions of the sensors are automatically identified by the smartphone; second, the colours measured at each sensor are digitized based on a correction algorithm; and third, the corrected colours are converted to concentration values by pre-loaded calibration curves. All through these sequential processes, the sensor array taken in a smartphone snapshot undergoes laboratory-level spectrometry. The advantages of inexpensive and convenient paper-based colorimetry and the ubiquitous smartphone are tied to achieve a ready-to-go POC diagnosis.
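The app's correction algorithm is not detailed in the abstract; the sketch below shows one simple assumed scheme (white-patch normalization plus a pre-loaded linear calibration curve), with invented readings:

```python
import numpy as np

# Hypothetical calibration: mean G-channel intensity of one paper sensor
# vs. known analyte concentration, measured under reference lighting.
conc      = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # mmol/L standards
g_channel = np.array([0.92, 0.78, 0.66, 0.47, 0.25])

def correct(raw_g, white_g, white_ref=0.95):
    """Normalize a reading by a white reference patch in the same shot,
    compensating for ambient illumination (a simple assumed scheme)."""
    return raw_g * white_ref / white_g

# Pre-loaded calibration curve: concentration from corrected colour
slope, intercept = np.polyfit(g_channel, conc, 1)

sample = correct(raw_g=0.60, white_g=0.90)
print(f"estimated concentration: {slope * sample + intercept:.2f} mmol/L")
```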
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; the F-test, lack-of-fit test and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
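For readers unfamiliar with Sobol' variance decomposition, the estimators fit in a few lines; this sketch uses the standard Saltelli (first-order) and Jansen (total-order) Monte Carlo estimators on a stand-in response function, not the paper's fitted regression models:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Stand-in response surface (e.g. predicted mass loss)."""
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

d, n = 3, 100_000
A, B = rng.uniform(size=(n, d)), rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # A with column i taken from B
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var         # first-order (Saltelli)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-order (Jansen)
    print(f"x{i+1}: S1={S1:.3f}  ST={ST:.3f}")
```

The gap between ST and S1 for a factor measures how much of its influence comes through interactions, which is exactly what the second-order indices in the paper quantify.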
García-Diego, Fernando-Juan; Sánchez-Quinche, Angel; Merello, Paloma; Beltrán, Pedro; Peris, Cristófol
2013-01-01
In this study we propose an electronic system for linear positioning of a magnet independent of its modulus, which could vary because of aging, different fabrication processes, etc. The system comprises a linear array of 24 Hall effect sensors with proportional response. The data from all sensors are subject to a pretreatment (normalization) by row (position), making them independent of temporal variation in the magnet's field strength. We analyze the particular case of individual flow in the milking of goats. Multiple regression analysis allowed us to calibrate the electronic system with a percentage of explanation R² = 99.96%. In our case, the uncertainty in the linear position of the magnet is 0.51 mm, which represents 0.019 L of goat milk. The farm test compared the results obtained by direct reading of the volume with those obtained by the proposed calibrated electronic system, achieving a percentage of explanation of 99.05%. PMID:23793020
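The row normalization can be illustrated in a few lines; the readings and calibration coefficients below are invented, standing in for the paper's 24-sensor data and multiple-regression calibration:

```python
import numpy as np

# Hypothetical 24-sensor Hall-array reading for one magnet position.
raw = np.array([2, 3, 5, 9, 18, 34, 52, 61, 52, 33, 17, 9,
                5, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1], dtype=float)

# Row normalization removes the dependence on the magnet's absolute
# field strength (ageing, batch differences), keeping only the profile shape.
norm = raw / raw.sum()
centroid = np.sum(np.arange(24) * norm)     # weighted sensor index

# Position via an assumed linear calibration (illustrative coefficients)
mm_per_index, offset = 4.0, 1.5
print(f"magnet position ~ {mm_per_index * centroid + offset:.2f} mm")
```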
Value Focused Thinking in Developing Aerobatic Aircraft Selection Model for Turkish Air Force
2012-03-22
...many reasons. Most problems in decision-making involve multiple objectives and uncertainties. The number of alternatives can be significant and make ...and Republic of Turkey all around the world". This is a clear and concise statement of the most basic reason for decision. After making interview... Hwang, C.-L. (1995). Multiple Attribute Decision Making: An Introduction. California: Sage Publications.
ICPL: Intelligent Cooperative Planning and Learning for Multi-agent Systems
2012-02-29
...objective was to develop a new planning approach for teams of multiple UAVs that tightly integrates learning and cooperative control algorithms at multiple levels of the planning architecture. The research results enabled a team of mobile agents to learn to adapt and react to uncertainty in... expressive representation that incorporates feature conjunctions. Our algorithm is simple to implement, fast to execute, and can be combined with any...
Strategy, Theory, Tactical Possibilities and the Design of Amphibious Concepts
2012-05-17
...the process of learning and pierce the veil of uncertainty that lies between the protagonists, a gambit must be made—sufficient energy must be... non-military means was again challenging amphibious operations advocates. From 1990–2010, the Marine Corps conducted approximately 104... power for political ends by using multiple means and approaches to attack multiple centers of gravity and thus collapse an adversary's system. The...
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgements are associated with the limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm of solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
Uncertainties in Air Exchange using Continuous-Injection, Long-Term Sampling Tracer-Gas Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherman, Max H.; Walker, Iain S.; Lunden, Melissa M.
2013-12-01
The PerFluorocarbon Tracer (PFT) method is a low-cost approach commonly used for measuring air exchange in buildings using tracer gases. It is a specific application of the more general Continuous-Injection, Long-Term Sampling (CILTS) method. The technique is widely used, but there has been little work on understanding the uncertainties (both precision and bias) associated with its use, particularly given that it is typically deployed by untrained or lightly trained people to minimize experimental costs. In this article we conduct a first-principles error analysis to estimate the uncertainties and then compare that analysis to CILTS measurements that were over-sampled, through the use of multiple tracers and emitter and sampler distribution patterns, in three houses. We find that the CILTS method can have an overall uncertainty of 10-15 percent in ideal circumstances, but that even in highly controlled field experiments done by trained experimenters expected uncertainties are about 20 percent. In addition, there are many field conditions (such as open windows) where CILTS is not likely to provide any quantitative data. Even avoiding the worst situations of assumption violations, CILTS should be considered as having something like a 'factor of two' uncertainty for the broad field trials in which it is typically used. We provide guidance on how to deploy CILTS and design the experiment to minimize uncertainties.
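A minimal steady-state reading of the CILTS estimate, with a first-order combination of relative uncertainties (all values and uncertainty assignments are invented; note the combined figure lands near the roughly 20 percent quoted above):

```python
import numpy as np

# Steady-state CILTS: airflow Q = E / C, air-change rate = Q / V.
E = 5.0        # tracer emission rate, mL/h (illustrative)
C = 5.0e-8     # measured tracer concentration, mL tracer per mL air
V = 250.0      # house volume, m^3

Q = (E / C) * 1e-6          # mL/h of air -> m^3/h
ach = Q / V                 # air changes per hour

# First-order (root-sum-square) combination of relative uncertainties
rel = {"emitter": 0.05, "sampler": 0.10, "volume": 0.10, "mixing": 0.10}
u_rel = np.sqrt(sum(r**2 for r in rel.values()))
print(f"ACH = {ach:.2f} per hour, +/- {100 * u_rel:.0f}% (1-sigma, idealized)")
```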
Analysis of laser fluorosensor systems for remote algae detection and quantification
NASA Technical Reports Server (NTRS)
Browell, E. V.
1977-01-01
The development and performance of single- and multiple-wavelength laser fluorosensor systems for use in the remote detection and quantification of algae are discussed. The appropriate equation for the fluorescence power received by a laser fluorosensor system is derived in detail. Experimental development of a single wavelength system and a four wavelength system, which selectively excites the algae contained in the four primary algal color groups, is reviewed, and test results are presented. A comprehensive error analysis is reported which evaluates the uncertainty in the remote determination of the chlorophyll a concentration contained in algae by single- and multiple-wavelength laser fluorosensor systems. Results of the error analysis indicate that the remote quantification of chlorophyll a by a laser fluorosensor system requires optimum excitation wavelength(s), remote measurement of marine attenuation coefficients, and supplemental instrumentation to reduce uncertainties in the algal fluorescence cross sections.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
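As a simplified sketch of the surrogate idea (without the paper's adaptive limit-state design step), a Gaussian process fitted to a handful of expensive-model runs can stand in for the model during Monte Carlo failure-probability estimation; the limit-state function and design points here are invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def limit_state(x):
    """Toy 'expensive' model: the system fails where g(x) < 0."""
    return 4.0 - x[:, 0]**2 - x[:, 1]

# Fit the surrogate to a small design (stand-in for costly simulations)
X = rng.uniform(-3.0, 3.0, size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-8).fit(X, limit_state(X))

# Cheap Monte Carlo on the surrogate instead of the expensive model
Xmc = rng.normal(0.0, 1.0, size=(50_000, 2))
p_fail = np.mean(gp.predict(Xmc) < 0.0)
print(f"estimated failure probability: {p_fail:.4f}")
```

The paper's contribution is in choosing the design points adaptively near the failure boundary, several at a time, rather than the space-filling design used in this sketch.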
Speeded induction under uncertainty: the influence of multiple categories and feature conjunctions.
Newell, Ben R; Paton, Helen; Hayes, Brett K; Griffiths, Oren
2010-12-01
When people are uncertain about the category membership of an item (e.g., Is it a dog or a dingo?), research shows that they tend to rely only on the dominant or most likely category when making inductions (e.g., How likely is it to befriend me?). An exception has been reported using speeded induction judgments where participants appeared to use information from multiple categories to make inductions (Verde, Murphy, & Ross, 2005). In two speeded induction studies, we found that participants tended to rely on the frequency with which features co-occurred when making feature predictions, independently of category membership. This pattern held whether categories were considered implicitly (Experiment 1) or explicitly (Experiment 2) prior to feature induction. The results converge with other recent work suggesting that people often rely on feature conjunction information, rather than category boundaries, when making inductions under uncertainty.
Multiple Stressors and the Functioning of Coral Reefs.
Harborne, Alastair R; Rogers, Alice; Bozec, Yves-Marie; Mumby, Peter J
2017-01-03
Coral reefs provide critical services to coastal communities, and these services rely on ecosystem functions threatened by stressors. By summarizing the threats to the functioning of reefs from fishing, climate change, and decreasing water quality, we highlight that these stressors have multiple, conflicting effects on functionally similar groups of species and their interactions, and that the overall effects are often uncertain because of a lack of data or variability among taxa. The direct effects of stressors on links among functional groups, such as predator-prey interactions, are particularly uncertain. Using qualitative modeling, we demonstrate that this uncertainty of stressor impacts on functional groups (whether they are positive, negative, or neutral) can have significant effects on models of ecosystem stability, and reducing uncertainty is vital for understanding changes to reef functioning. This review also provides guidance for future models of reef functioning, which should include interactions among functional groups and the cumulative effect of stressors.
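A toy example of the qualitative-modelling logic: fix the sign structure of a small reef web, draw random interaction magnitudes, and ask how often the equilibrium is stable (the web and magnitudes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sign structure of a toy 3-group reef web: predator (P), herbivore (H),
# algae (A). Entry [i, j] is the sign of group j's effect on group i.
signs = np.array([[-1,  1,  0],   # P: self-limited, benefits from H
                  [-1, -1,  1],   # H: eaten by P, grazes A
                  [ 0, -1, -1]])  # A: grazed by H, self-limited

# Draw random magnitudes consistent with the signs; count stable cases
# (all eigenvalues of the community matrix with negative real part).
n, stable = 10_000, 0
for _ in range(n):
    J = signs * rng.uniform(0.1, 1.0, size=(3, 3))
    if np.all(np.linalg.eigvals(J).real < 0):
        stable += 1
print(f"fraction of stable parameterizations: {stable / n:.2f}")

# A sign-ambiguous stressor effect can be explored by flipping one
# entry's sign across runs and comparing the stable fractions.
```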
Progress toward Consensus Estimates of Regional Glacier Mass Balances for IPCC AR5
NASA Astrophysics Data System (ADS)
Arendt, A. A.; Gardner, A. S.; Cogley, J. G.
2011-12-01
Glaciers are potentially large contributors to rising sea level. Since the last IPCC report in 2007 (AR4), there has been a widespread increase in the use of geodetic observations from satellite and airborne platforms to complement field observations of glacier mass balance, as well as significant improvements in the global glacier inventory. Here we summarize our ongoing efforts to integrate data from multiple sources to arrive at a consensus estimate for each region, and to quantify uncertainties in those estimates. We will use examples from Alaska to illustrate methods for combining Gravity Recovery and Climate Experiment (GRACE), elevation differencing and field observations into a single time series with related uncertainty estimates. We will pay particular attention to reconciling discrepancies between GRACE estimates from multiple processing centers. We will also investigate the extent to which improvements in the glacier inventory affect the accuracy of our regional mass balances.
Experiences and perspectives on the GIST patient journey.
Macdonald, Nancy; Shapiro, Ari; Bender, Christina; Paolantonio, Marc; Coombs, John
2012-01-01
The tyrosine kinase inhibitor (TKI) imatinib has improved outcomes for patients with unresectable or metastatic gastrointestinal stromal tumors (GIST), and for patients receiving adjuvant therapy following GIST resection. This qualitative study explored the experiences and emotions of patients through GIST diagnosis, treatment initiation, disease control, and in some patients, loss of response and therapy switch. Ethnographic investigations were conducted, including semi-structured qualitative interviews of patients with resected or metastatic/unresectable GIST and their caregivers, from Canada (n = 15); the United States (n = 10); and Brazil, France, Germany, Russia, and Spain (n = 5 each). Some interviewees also kept 7-day photo journals. Responses were qualitatively analyzed to identify gaps and unmet needs where communication about disease, treatments, and adherence could be effective. Patients shared common experiences during each stage of disease management (crisis, hope, adaptation, new normal, and uncertainty). Patients felt a sense of crisis during diagnosis, followed by hope upon TKI therapy initiation. Over time, they came to adapt to their new lives (new normal) with cancer. With each follow-up, patients confronted the uncertainty of becoming TKI resistant and the possible need to switch therapy. During periods of uncertainty, many patients sought new information regarding GIST. Cases of disease progression and drug switching caused patients to revert to crisis and restart their emotional journey. Patients with primary or unresectable/metastatic GIST shared similar journeys, especially regarding uncertainty, although differences in the scope and timing of phases were observed. Strategies patients used to remain adherent included obtaining family support, setting reminder mechanisms, taking medicine at routine times, and storing medicine in prominent places. Physicians and support staff can manage patient expectations and encourage adherence to therapy, which may facilitate optimal patient outcomes. Patient education about current GIST developments and adherence across all phases of the patient journey are of benefit.
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
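One way to realize such correlations in practice is to give every resonance normalized to the same reference a shared lognormal factor; the sketch below is a schematic of that idea with invented strengths and uncertainty factors, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(11)

# Two resonance strengths (arbitrary units) normalized to one reference
# resonance: part of each uncertainty is the common reference uncertainty.
strengths   = np.array([1.0e-3, 4.0e-4])
f_total     = np.array([1.20, 1.25])   # total lognormal uncertainty factors
f_reference = 1.10                     # shared normalization factor

n = 100_000
shared = rng.normal(size=n)                       # one draw for the reference
own = rng.normal(size=(n, 2))                     # independent residual parts
s_ref = np.log(f_reference)
s_own = np.sqrt(np.log(f_total)**2 - s_ref**2)    # residual lognormal spread
samples = strengths * np.exp(s_ref * shared[:, None] + s_own * own)

print("sample correlation:",
      np.corrcoef(samples[:, 0], samples[:, 1])[0, 1].round(2))
```

Splitting each uncertainty into a shared and an independent lognormal part is what lets overlapping-resonance rates pick up the large correlated broadening the paper reports.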
NASA Astrophysics Data System (ADS)
Aaij, R.; Adeva, B.; Adinolfi, M.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Alfonso Albero, A.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Archilli, F.; d'Argent, P.; Arnau Romeu, J.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Babuschkin, I.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baker, S.; Balagura, V.; Baldini, W.; Baranov, A.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Baryshnikov, F.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Beiter, A.; Bel, L. J.; Beliy, N.; Bellee, V.; Belloli, N.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Beranek, S.; Berezhnoy, A.; Bernet, R.; Berninghoff, D.; Bertholet, E.; Bertolin, A.; Betancourt, C.; Betti, F.; Bettler, M.-O.; van Beuzekom, M.; Bezshyiko, Ia.; Bifani, S.; Billoir, P.; Birnkraut, A.; Bitadze, A.; Bizzeti, A.; Bjørn, M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Boettcher, T.; Bondar, A.; Bondar, N.; Bonivento, W.; Bordyuzhin, I.; Borgheresi, A.; Borghi, S.; Borisyak, M.; Borsato, M.; Bossu, F.; Boubdir, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britton, T.; Brodzicka, J.; Brundu, D.; Buchanan, E.; Burr, C.; Bursche, A.; Buytaert, J.; Byczynski, W.; Cadeddu, S.; Cai, H.; Calabrese, R.; Calladine, R.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D. H.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cavallero, G.; Cenci, R.; Chamont, D.; Chapman, M. G.; Charles, M.; Charpentier, Ph.; Chatzikonstantinidis, G.; Chefdeville, M.; Chen, S.; Cheung, S. F.; Chitic, S.-G.; Chobanova, V.; Chrzaszcz, M.; Chubykin, A.; Ciambrone, P.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collins, P.; Colombo, T.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombs, G.; Coquereau, S.; Corti, G.; Corvo, M.; Costa Sobral, C. M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Currie, R.; D'Ambrosio, C.; Da Cunha Marinho, F.; Dall'Occo, E.; Dalseno, J.; Davis, A.; De Aguiar Francisco, O.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Serio, M.; De Simone, P.; Dean, C. T.; Decamp, D.; Del Buono, L.; Dembinski, H.-P.; Demmer, M.; Dendek, A.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Nezza, P.; Dijkstra, H.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Douglas, L.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Dungs, K.; Durante, P.; Dzhelyadin, R.; Dziewiecki, M.; Dziurda, A.; Dzyuba, A.; Déléage, N.; Easo, S.; Ebert, M.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Farley, N.; Farry, S.; Fay, R.; Fazzini, D.; Federici, L.; Ferguson, D.; Fernandez, G.; Fernandez Declara, P.; Fernandez Prieto, A.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fini, R. A.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fleuret, F.; Fohl, K.; Fontana, M.; Fontanelli, F.; Forshaw, D. 
C.; Forty, R.; Franco Lima, V.; Frank, M.; Frei, C.; Fu, J.; Funk, W.; Furfaro, E.; Färber, C.; Gabriel, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Martin, L. M.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Garsed, P. J.; Gascon, D.; Gaspar, C.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianı, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gizdov, K.; Gligorov, V. V.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gorelov, I. V.; Gotti, C.; Govorkova, E.; Grabowski, J. P.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greim, R.; Griffith, P.; Grillo, L.; Gruber, L.; Gruberg Cazon, B. R.; Grünberg, O.; Gushchin, E.; Guz, Yu.; Gys, T.; Göbel, C.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hamilton, B.; Han, X.; Hancock, T. H.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hasse, C.; Hatch, M.; He, J.; Hecker, M.; Heinicke, K.; Heister, A.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hombach, C.; Hopchev, P. H.; Huard, Z. C.; Hulsbergen, W.; Humair, T.; Hushchyn, M.; Hutchcroft, D.; Ibis, P.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jalocha, J.; Jans, E.; Jawahery, A.; Jiang, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Karacson, M.; Kariuki, J. M.; Karodia, S.; Kazeev, N.; Kecke, M.; Kelsey, M.; Kenzie, M.; Ketel, T.; Khairullin, E.; Khanji, B.; Khurewathanakul, C.; Kirn, T.; Klaver, S.; Klimaszewski, K.; Klimkovich, T.; Koliiev, S.; Kolpin, M.; Komarov, I.; Kopecna, R.; Koppenburg, P.; Kosmyntseva, A.; Kotriakhova, S.; Kozeiha, M.; Kravchuk, L.; Kreps, M.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Leflat, A.; Lefrançois, J.; Lefèvre, R.; Lemaitre, F.; Lemos Cid, E.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, P.-R.; Li, T.; Li, Y.; Li, Z.; Likhomanenko, T.; Lindner, R.; Lionetto, F.; Liu, X.; Loh, D.; Loi, A.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Lyu, X.; Machefert, F.; Maciuc, F.; Macko, V.; Mackowiak, P.; Maddrell-Mander, S.; Maev, O.; Maguire, K.; Maisuzenko, D.; Majewski, M. W.; Malde, S.; Malinin, A.; Maltsev, T.; Manca, G.; Mancinelli, G.; Manning, P.; Marangotto, D.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marinangeli, M.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massacrier, L. M.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurice, E.; Maurin, B.; Mazurov, A.; McCann, M.; McNab, A.; McNulty, R.; Mead, J. V.; Meadows, B.; Meaux, C.; Meier, F.; Meinert, N.; Melnychuk, D.; Merk, M.; Merli, A.; Michielin, E.; Milanes, D. A.; Millard, E.; Minard, M.-N.; Minzoni, L.; Mitzel, D. S.; Mogini, A.; Molina Rodriguez, J.; Mombächer, T.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morello, M. J.; Morgunova, O.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Mulder, M.; Mussini, M.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, T. 
D.; Nguyen-Mau, C.; Nieswand, S.; Niet, R.; Nikitin, N.; Nikodem, T.; Nogay, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Oldeman, R.; Onderwater, C. J. G.; Ossowska, A.; Otalora Goicochea, J. M.; Owen, P.; Oyanguren, A.; Pais, P. R.; Palano, A.; Palutan, M.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Parker, W.; Parkes, C.; Passaleva, G.; Pastore, A.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petrov, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pikies, M.; Pinci, D.; Pisani, F.; Pistone, A.; Piucci, A.; Placinta, V.; Playfer, S.; Plo Casasus, M.; Polci, F.; Poli Lener, M.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Pomery, G. J.; Ponce, S.; Popov, A.; Popov, D.; Poslavskii, S.; Potterat, C.; Price, E.; Prisciandaro, J.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Pullen, H.; Punzi, G.; Qian, W.; Quagliani, R.; Quintana, B.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Ramos Pernas, M.; Rangel, M. S.; Raniuk, I.; Ratnikov, F.; Raven, G.; Ravonel Salzgeber, M.; Reboud, M.; Redi, F.; Reichert, S.; dos Reis, A. C.; Remon Alepuz, C.; Renaudin, V.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Rodriguez Perez, P.; Rogozhnikov, A.; Roiser, S.; Rollings, A.; Romanovskiy, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Rudolph, M. S.; Ruf, T.; Ruiz Valls, P.; Ruiz Vidal, J.; Saborido Silva, J. J.; Sadykhov, E.; Sagidova, N.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarpis, G.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schael, S.; Schellenberg, M.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schreiner, H. F.; Schubert, K.; Schubiger, M.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Sergi, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Simone, S.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Soares Lavra, l.; Sokoloff, M. D.; Soler, F. J. P.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefko, P.; Stefkova, S.; Steinkamp, O.; Stemmle, S.; Stenyakin, O.; Stepanova, M.; Stevens, H.; Stone, S.; Storaci, B.; Stracka, S.; Stramaglia, M. E.; Straticiuc, M.; Straumann, U.; Sun, L.; Sutcliffe, W.; Swientek, K.; Syropoulos, V.; Szczekowski, M.; Szumlak, T.; Szymanski, M.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Tellarini, G.; Teubert, F.; Thomas, E.; van Tilburg, J.; Tilley, M. J.; Tisserand, V.; Tobin, M.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Toriello, F.; Tourinho Jadallah Aoude, R.; Tournefier, E.; Traill, M.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tully, A.; Tuning, N.; Ukleja, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagner, A.; Vagnoni, V.; Valassi, A.; Valat, S.; Valenti, G.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vecchi, S.; van Veghel, M.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Venkateswaran, A.; Verlage, T. A.; Vernet, M.; Vesterinen, M.; Viana Barbosa, J. 
V.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Viemann, H.; Vilasis-Cardona, X.; Vitti, M.; Volkov, V.; Vollhardt, A.; Voneki, B.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Vázquez Sierra, C.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wang, J.; Ward, D. R.; Wark, H. M.; Watson, N. K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wicht, J.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Winn, M.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wraight, K.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yang, Z.; Yao, Y.; Yin, H.; Yu, J.; Yuan, X.; Yushchenko, O.; Zarebski, K. A.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zheng, Y.; Zhu, X.; Zhukov, V.; Zonneveld, J. B.; Zucchelli, S.; LHCb Collaboration
2018-04-01
The ratio of branching fractions R(D*-) ≡ B(B0→D*-τ+ντ)/B(B0→D*-μ+νμ) is measured using a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3 fb⁻¹. The τ lepton is reconstructed with three charged pions in the final state. A novel method is used that exploits the different vertex topologies of signal and backgrounds to isolate samples of semitauonic decays of b hadrons with high purity. Using the B0→D*-π+π-π+ decay as the normalization channel, the ratio B(B0→D*-τ+ντ)/B(B0→D*-π+π-π+) is measured to be 1.97 ± 0.13 ± 0.18, where the first uncertainty is statistical and the second systematic. An average of branching fraction measurements for the normalization channel is used to derive B(B0→D*-τ+ντ) = (1.42 ± 0.094 ± 0.129 ± 0.054)%, where the third uncertainty is due to the limited knowledge of B(B0→D*-π+π-π+). A test of lepton flavor universality is performed using the well-measured branching fraction B(B0→D*-μ+νμ) to compute R(D*-) = 0.291 ± 0.019 ± 0.026 ± 0.013, where the third uncertainty originates from the uncertainties on B(B0→D*-π+π-π+) and B(B0→D*-μ+νμ). This measurement is in agreement with the Standard Model prediction and with previous measurements.
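For readers who want to trace the arithmetic, the quoted numbers can be reproduced by straightforward error propagation. The sketch below assumes approximate world-average values for the external branching fractions B(B0→D*-π+π-π+) and B(B0→D*-μ+νμ), which are not quoted in the abstract, and combines relative uncertainties in quadrature:

    import math

    # Measured ratio from the LHCb analysis (central value, statistical, systematic)
    ratio, ratio_stat, ratio_syst = 1.97, 0.13, 0.18

    # External branching fractions: approximate world-average values,
    # assumed here for illustration only (not given in the abstract).
    B_norm, dB_norm = 7.21e-3, 0.29e-3   # B(B0 -> D*- pi+ pi- pi+)
    B_mu,   dB_mu   = 4.88e-2, 0.10e-2   # B(B0 -> D*- mu+ nu_mu)

    # Semitauonic branching fraction B(B0 -> D*- tau+ nu_tau) = ratio x normalization
    B_tau = ratio * B_norm
    stat  = B_tau * ratio_stat / ratio   # statistical component scales with the ratio
    syst  = B_tau * ratio_syst / ratio   # systematic component scales with the ratio
    ext   = B_tau * dB_norm / B_norm     # external (normalization-channel) component
    print(f"B_tau = ({100*B_tau:.2f} ± {100*stat:.3f} ± {100*syst:.3f} ± {100*ext:.3f})%")

    # Lepton-flavor-universality ratio R(D*-) = B_tau / B(B0 -> D*- mu+ nu_mu);
    # the third uncertainty combines both external inputs in quadrature.
    R = B_tau / B_mu
    R_ext = R * math.hypot(dB_norm / B_norm, dB_mu / B_mu)
    print(f"R(D*-) = {R:.3f} ± {R*ratio_stat/ratio:.3f} ± {R*ratio_syst/ratio:.3f} ± {R_ext:.3f}")

Within rounding, this reproduces B(B0→D*-τ+ντ) = (1.42 ± 0.094 ± 0.130 ± 0.057)% and R(D*-) = 0.291 ± 0.019 ± 0.027 ± 0.013; the small residual differences from the quoted values come from the rounded external inputs assumed above.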
Potential New Lidar Observations for Cloud Studies
NASA Technical Reports Server (NTRS)
Winker, Dave; Hu, Yong; Nehrir, Amin; Cai, Xia
2015-01-01
The response of clouds to global warming represents a major uncertainty in estimating climate sensitivity. These uncertainties have been traced to shallow marine clouds in the tropics and subtropics. CALIOP observations have already been used extensively to evaluate model predictions of shallow cloud fraction and top height (Leahy et al. 2013; Nam et al. 2012). Tools are needed to probe the lowest levels of the troposphere. The large footprint of satellite lidars produces strong multiple scattering from clouds, which presents new possibilities for cloud retrievals to constrain model predictions.