Dispersion analysis for baseline reference mission 2
NASA Technical Reports Server (NTRS)
Snow, L. S.
1975-01-01
A dispersion analysis considering uncertainties (or perturbations) in platform, vehicle, and environmental parameters was performed for baseline reference mission (BRM) 2. The dispersion analysis is based on the nominal trajectory for BRM 2. The analysis was performed to determine state vector and performance dispersions (or variations) which result from the indicated uncertainties. The dispersions are determined at major mission events and fixed times from liftoff (time slices). The dispersion results will be used to evaluate the capability of the vehicle to perform the mission within a specified level of confidence and to determine flight performance reserves.
NASA Astrophysics Data System (ADS)
Camacho Suarez, V. V.; Shucksmith, J.; Schellart, A.
2016-12-01
Analytical and numerical models can be used to represent the advection-dispersion processes governing the transport of pollutants in rivers (Fan et al., 2015; Van Genuchten et al., 2013). Simplifications, assumptions and parameter estimations in these models result in various uncertainties within the modelling process and the estimated pollutant concentrations. In this study, we explore both (1) the structural uncertainty due to the one-dimensional simplification of the Advection Dispersion Equation (ADE) and (2) the parameter uncertainty due to the semi-empirical estimation of the longitudinal dispersion coefficient. The relative significance of these uncertainties has not previously been examined. By analysing both the relative structural uncertainty of analytical solutions of the ADE, and the parameter uncertainty due to the longitudinal dispersion coefficient via a Monte Carlo analysis, an evaluation of the dominant uncertainties for a case study in the river Chillan, Chile is presented over a range of spatial scales.
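A minimal sketch of the parameter-uncertainty side of such an analysis, assuming an instantaneous point-source analytical solution of the 1-D ADE and a lognormal spread for the longitudinal dispersion coefficient; all numbers are illustrative, not values from the Chillan study:

```python
import numpy as np

def ade_point_source(x, t, M, A, u, K):
    """Analytical 1-D ADE solution for an instantaneous point source:
    C(x,t) = M / (A*sqrt(4*pi*K*t)) * exp(-(x - u*t)**2 / (4*K*t))."""
    return M / (A * np.sqrt(4 * np.pi * K * t)) * np.exp(-(x - u * t) ** 2 / (4 * K * t))

rng = np.random.default_rng(1)
x, t = 500.0, 1800.0        # observation point (m) and time (s), illustrative
M, A, u = 1.0, 12.0, 0.35   # released mass (kg), cross-section (m^2), velocity (m/s)

# Parameter uncertainty: sample K from a lognormal whose 90% interval spans a
# factor of 3 around the median, an assumed stand-in for the scatter among
# semi-empirical dispersion-coefficient predictors.
K = rng.lognormal(mean=np.log(5.0), sigma=np.log(3.0) / 1.645, size=10_000)
C = ade_point_source(x, t, M, A, u, K)

print("median concentration:", np.median(C))
print("5th-95th pct band   :", np.percentile(C, [5, 95]))
```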
Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
Lognormal Uncertainty Estimation for Failure Rates
NASA Technical Reports Server (NTRS)
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain. Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This presentation will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
NASA Technical Reports Server (NTRS)
Kuhn, A. E.
1975-01-01
A dispersion analysis considering 3 sigma uncertainties (or perturbations) in platform, vehicle, and environmental parameters was performed for the baseline reference mission (BRM) 1 of the space shuttle orbiter. The dispersion analysis is based on the nominal trajectory for the BRM 1. State vector and performance dispersions (or variations) which result from the indicated 3 sigma uncertainties were studied. The dispersions were determined at major mission events and fixed times from lift-off (time slices) and the results will be used to evaluate the capability of the vehicle to perform the mission within a 3 sigma level of confidence and to determine flight performance reserves. A computer program is given that was used for dynamic flight simulations of the space shuttle orbiter.
Six Degree-of-Freedom Entry Dispersion Analysis for the METEOR Recovery Module
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Braun, Robert D.; Powell, Richard W.; Engelund, Walter C.; Tartabini, Paul V.
1996-01-01
The present study performs a six degree-of-freedom entry dispersion analysis for the Multiple Experiment Transporter to Earth Orbit and Return (METEOR) mission. METEOR offered the capability of flying a recoverable science package in a microgravity environment. However, since the Recovery Module has no active control system, an accurate determination of the splashdown position is difficult because no opportunity exists to remove any errors. Hence, uncertainties in the initial conditions prior to deorbit burn initiation, during deorbit burn and exo-atmospheric coast phases, and during atmospheric flight impact the splashdown location. This investigation was undertaken to quantify the impact of the various exo-atmospheric and atmospheric uncertainties. Additionally, a Monte-Carlo analysis was performed to statistically assess the splashdown dispersion footprint caused by the multiple mission uncertainties. The Monte-Carlo analysis showed that a 3-sigma splashdown dispersion footprint with axes of 43.3 nm (long), -33.5 nm (short), and 10.0 nm (crossrange) can be constructed. A 58% probability exists that the Recovery Module will overshoot the nominal splashdown site.
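A toy version of extracting such a footprint from Monte Carlo output; the dispersed splashdown errors below are synthetic normal draws, not METEOR data (the bias and scales are chosen only to mimic the reported overshoot tendency), and the long/short axes are taken as one-sided 3-sigma-equivalent percentiles:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Synthetic downrange/crossrange splashdown errors (nm); the +2.4 nm downrange
# bias is an assumption that reproduces a ~58% overshoot probability.
downrange = rng.normal(loc=2.4, scale=12.0, size=n)
crossrange = rng.normal(loc=0.0, scale=3.3, size=n)

long_3s  = np.percentile(downrange, 99.865)   # one-sided 3-sigma equivalent, long
short_3s = np.percentile(downrange, 0.135)    # one-sided 3-sigma equivalent, short
cross_3s = 3.0 * crossrange.std(ddof=1)

print(f"footprint: +{long_3s:.1f} / {short_3s:.1f} nm, ±{cross_3s:.1f} nm crossrange")
print("P(overshoot) =", (downrange > 0).mean())
```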
Trajectory Dispersed Vehicle Process for Space Launch System
NASA Technical Reports Server (NTRS)
Statham, Tamara; Thompson, Seth
2017-01-01
The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as liftoff thrust-to-weight ratio, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a three-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a design of experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a maximum likelihood estimate (MLE) process to target the uncertainty biases. These vehicles represent various missions and configurations which are used as key inputs into a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.
Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters
NASA Astrophysics Data System (ADS)
Kim, A. G.
2011-02-01
I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
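A sketch of the single-fit idea, assuming a toy standard-candle model mu(z) = a + b log10(z) in place of the full cosmological expression; sigma_int enters the per-point variance, and the log(variance) term in the likelihood keeps it from being inflated arbitrarily:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic Hubble-diagram data with measurement error and a hidden 0.10 mag
# intrinsic dispersion; all numbers are illustrative.
rng = np.random.default_rng(3)
z = rng.uniform(0.02, 1.0, 200)
sigma_meas = np.full_like(z, 0.12)
mu_obs = 43.2 + 5.0 * np.log10(z) + rng.normal(0, np.hypot(sigma_meas, 0.10))

def neg_log_like(theta):
    a, b, log_sig = theta
    var = sigma_meas**2 + np.exp(2 * log_sig)      # intrinsic term in the variance
    r = mu_obs - (a + b * np.log10(z))
    return 0.5 * np.sum(r**2 / var + np.log(var))  # log(var) penalizes inflating sigma

fit = minimize(neg_log_like, x0=[43.0, 5.0, np.log(0.15)], method="Nelder-Mead")
a, b, log_sig = fit.x
print(f"a={a:.2f}  b={b:.2f}  sigma_int={np.exp(log_sig):.3f}")
```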
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, F.T.; Young, M.L.; Miller, L.A.
Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.
Substructure Versus Property-Level Dispersed Modes Calculation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Peck, Jeff A.; Bush, T. Jason; Fulcher, Clay W.
2016-01-01
This paper calculates the effect of perturbed finite element mass and stiffness values on the eigenvectors and eigenvalues of the finite element model. The structure is perturbed in two ways: at the "subelement" level and at the material property level. In the subelement eigenvalue uncertainty analysis, the mass and stiffness of each subelement are perturbed by a factor before being assembled into the global matrices. In the property-level eigenvalue uncertainty analysis, all material density and stiffness parameters of the structure are perturbed prior to the eigenvalue analysis. The eigenvalue and eigenvector dispersions of each analysis (subelement and property-level) are also calculated using an analytical sensitivity approximation. Two structural models are used to compare these methods: a cantilevered beam model, and a model of the Space Launch System. For each structural model it is shown how well the analytical sensitivity modes approximate the exact modes when the uncertainties are applied at the subelement level and at the property level.
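A minimal numerical analogue of the subelement-level perturbation study, assuming a ten-element fixed-free spring-mass chain rather than the paper's beam and SLS models; the perturbation sigmas are invented:

```python
import numpy as np
from scipy.linalg import eigh

def assemble(k_els, m_nodes):
    """Assemble global stiffness/mass matrices for a fixed-free spring-mass chain."""
    n = len(m_nodes)
    K = np.zeros((n, n))
    for e, k in enumerate(k_els):      # spring e connects node e-1 to node e
        if e > 0:
            K[e-1, e-1] += k; K[e, e] += k
            K[e-1, e]   -= k; K[e, e-1] -= k
        else:
            K[0, 0] += k               # first spring ties node 0 to ground
    return K, np.diag(m_nodes)

rng = np.random.default_rng(4)
k0, m0 = np.full(10, 1e6), np.full(10, 2.0)
freqs = []
for _ in range(2000):
    # Subelement-style perturbation: each element stiffness and nodal mass gets
    # its own random factor (3% and 2% 1-sigma, assumed values).
    K, M = assemble(k0 * rng.normal(1, 0.03, 10), m0 * rng.normal(1, 0.02, 10))
    w2 = eigh(K, M, eigvals_only=True)          # generalized eigenvalue problem
    freqs.append(np.sqrt(w2[:3]) / (2 * np.pi))

freqs = np.array(freqs)
print("first three modes, mean (Hz):", freqs.mean(axis=0))
print("first three modes, 1-sigma  :", freqs.std(axis=0, ddof=1))
```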
NASA Technical Reports Server (NTRS)
Butler, C. F.
1979-01-01
A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a two-dimensional model. The Fuego volcanic event of 1974 was used. The aerosol dispersion processes included are transport, sedimentation, gas-phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete within several months after the eruption and cannot explain the differences between measured and modeled results.
Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...
A New Aerodynamic Data Dispersion Method for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T.
2011-01-01
A novel method for implementing aerodynamic data dispersion analysis is herein introduced. A general mathematical approach combined with physical modeling tailored to the aerodynamic quantity of interest enables the generation of more realistically relevant dispersed data and, in turn, more reasonable flight simulation results. The method simultaneously allows for the aerodynamic quantities and their derivatives to be dispersed given a set of non-arbitrary constraints, which stresses the controls model in more ways than with the traditional bias up or down of the nominal data within the uncertainty bounds. The adoption and implementation of this new method within the NASA Ares I Crew Launch Vehicle Project has resulted in significant increases in predicted roll control authority, and lowered the induced risks for flight test operations. One direct impact on launch vehicles is a reduced size for auxiliary control systems, and the possibility of an increased payload. This technique has the potential of being applied to problems in multiple areas where nominal data together with uncertainties are used to produce simulations using Monte Carlo type random sampling methods. It is recommended that a tailored physics-based dispersion model be delivered with any aerodynamic product that includes nominal data and uncertainties, in order to make flight simulations more realistic and allow for leaner spacecraft designs.
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
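A sketch of the conventional filter-bank baseline the report starts from; here a population of group-velocity estimates is formed by repeating the measurement over several filter bandwidths, a crude stand-in for (not a reproduction of) the Slepian multiwavelet construction, on a synthetic dispersed arrival:

```python
import numpy as np
from scipy.signal import hilbert

def group_velocity(trace, fs, dist, f_centers, rel_bw):
    """Filter-bank group velocity: Gaussian band-pass in the frequency domain,
    envelope peak time t_peak, then v_g = dist / t_peak per center frequency."""
    n = len(trace)
    F = np.fft.rfft(trace)
    f = np.fft.rfftfreq(n, d=1 / fs)
    t = np.arange(n) / fs
    v = []
    for fc in f_centers:
        H = np.exp(-0.5 * ((f - fc) / (rel_bw * fc)) ** 2)     # narrow-band filter
        env = np.abs(hilbert(np.fft.irfft(F * H, n)))          # analytic envelope
        v.append(dist / t[np.argmax(env)])
    return np.asarray(v)

fs, dist = 100.0, 10_000.0                       # Hz and metres, synthetic setup
t = np.arange(0, 60, 1 / fs)
trace = np.cos(2 * np.pi * (1.0 + 0.05 * t) * t) * np.exp(-0.5 * ((t - 30) / 8) ** 2)

vs = np.array([group_velocity(trace, fs, dist, np.linspace(2.0, 5.0, 10), bw)
               for bw in (0.05, 0.10, 0.15)])
print("group velocity mean (m/s):", vs.mean(axis=0).round(1))
print("spread across bandwidths :", vs.std(axis=0, ddof=1).round(1))
```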
NASA Astrophysics Data System (ADS)
Lach, Zbigniew T.
2017-08-01
A possibility is shown of a non-disruptive estimation of chromatic dispersion in a fiber of an intensity modulation communication line under work conditions. Uncertainty of the chromatic dispersion estimates is analyzed and quantified with the use of confidence intervals.
Sanyal, Doyeli; Rani, Anita; Alam, Samsul; Gujral, Seema; Gupta, Ruchi
2011-11-01
Simple and efficient multi-residue analytical methods were developed and validated for the determination of 13 organochlorine and 17 organophosphorus pesticides in soil, spinach and eggplant. Accelerated solvent extraction and dispersive SPE were used for sample preparation. Recovery studies were carried out by spiking the samples at three concentration levels (1× the limit of quantification (LOQ), 5× LOQ, and 10× LOQ). The methods were subjected to a thorough validation procedure. Mean recoveries for soil, spinach and eggplant were in the range of 70-120% with median CV (%) below 10%. The total uncertainty was evaluated taking into consideration four main independent sources, viz. weighing, purity of the standard, the GC calibration curve, and repeatability. The expanded uncertainty was well below 10% for most of the pesticides, and the rest fell in the range of 10-20%.
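The uncertainty-budget arithmetic described above reduces to a quadrature sum of relative standard uncertainties followed by a coverage factor; the component values here are invented:

```python
import numpy as np

# Combined standard uncertainty from independent relative components
# (weighing, standard purity, GC calibration, repeatability), then the
# expanded uncertainty with coverage factor k = 2 (~95% confidence).
u_rel = np.array([0.010, 0.005, 0.028, 0.025])   # illustrative relative u's
u_c = np.sqrt(np.sum(u_rel**2))
U = 2 * u_c
print(f"u_c = {100*u_c:.1f} %   U(k=2) = {100*U:.1f} %")
```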
Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.
Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan
2016-07-01
This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
NASA Technical Reports Server (NTRS)
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
Code System for Performance Assessment Ground-water Analysis for Low-level Nuclear Waste.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozak, Matthew W.
1994-02-09
Version 00. The PAGAN code system is a part of the performance assessment methodology developed for use by the U.S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1, has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which are used as semi-analytical solutions to the convective-dispersion equation. The system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.
NASA Astrophysics Data System (ADS)
Al-Hashimi, M. H.; Wiese, U.-J.
2009-12-01
We consider wave packets of free particles with a general energy-momentum dispersion relation E(p). The spreading of the wave packet is determined by the velocity v = ∂E/∂p. The position-velocity uncertainty relation Δx Δv ≥ ½|⟨∂²E/∂p²⟩| is saturated by minimal uncertainty wave packets Φ(p) = A exp(−αE(p) + βp). In addition to the standard minimal Gaussian wave packets corresponding to the non-relativistic dispersion relation E(p) = p²/2m, analytic calculations are presented for the spreading of wave packets with minimal position-velocity uncertainty product for the lattice dispersion relation E(p) = −cos(pa)/(ma²) as well as for the relativistic dispersion relation E(p) = √(p² + m²). The boost properties of moving relativistic wave packets as well as the propagation of wave packets in an expanding Universe are also discussed.
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
NASA Astrophysics Data System (ADS)
Dong, Jingtao; Lu, Rongsheng
2018-04-01
The principle of retrieving the thickness and refractive index dispersion of a parallel glass plate is reported based on single interferogram recording and phase analysis. With the parallel plate illuminated by a convergent light sheet, the transmitted light interfering in both the spectral and angular domains is recorded. The phase recovered from the single interferogram by Fourier analysis is used to retrieve the thickness and refractive index dispersion without periodic ambiguity. Experimental results for an optical substrate standard show that the accuracy of the refractive index dispersion is less than 2.5 × 10⁻⁵ and the relative uncertainty of the thickness is 6 × 10⁻⁵ (3σ). The method is confirmed to be robust against intensity noise, indicating the capability of stable and accurate measurement.
Trajectory-Based Loads for the Ares I-X Test Flight Vehicle
NASA Technical Reports Server (NTRS)
Vause, Roland F.; Starr, Brett R.
2011-01-01
In trajectory-based loads, the structural engineer treats each point on the trajectory as a load case. Distributed aero, inertial, and propulsion forces are developed for the structural model which are equivalent to the integrated values of the trajectory model. Free-body diagrams are then used to solve for the internal forces, or loads, that keep the applied aero, inertial, and propulsion forces in dynamic equilibrium. There are several advantages to using trajectory-based loads. First, consistency is maintained between the integrated equilibrium equations of the trajectory analysis and the distributed equilibrium equations of the structural analysis. Second, the structural loads equations are tied to the uncertainty model for the trajectory systems analysis model. Atmosphere, aero, propulsion, mass property, and controls uncertainty models all feed into the dispersions that are generated for the trajectory systems analysis model. Changes in any of these input models will affect structural loads response. The trajectory systems model manages these inputs as well as the output from the structural model over thousands of dispersed cases. Large structural models with hundreds of thousands of degrees of freedom would execute too slowly to be an efficient part of several thousand system analyses. Trajectory-based loads provide a means for the structures discipline to be included in the integrated systems analysis. Successful applications of trajectory-based loads methods for the Ares I-X vehicle are covered in this paper. Preliminary design loads were based on 2000 trajectories using Monte Carlo dispersions. Range safety loads were tied to 8423 malfunction turn trajectories. In addition, active control system loads were based on 2000 preflight trajectories using Monte Carlo dispersions.
Using demography and movement behavior to predict range expansion of the southern sea otter.
Tinker, M.T.; Doak, D.F.; Estes, J.A.
2008-01-01
In addition to forecasting population growth, basic demographic data combined with movement data provide a means for predicting rates of range expansion. Quantitative models of range expansion have rarely been applied to large vertebrates, although such tools could be useful for restoration and management of many threatened but recovering populations. Using the southern sea otter (Enhydra lutris nereis) as a case study, we utilized integro-difference equations in combination with a stage-structured projection matrix that incorporated spatial variation in dispersal and demography to make forecasts of population recovery and range recolonization. In addition to these basic predictions, we emphasize how to make these modeling predictions useful in a management context through the inclusion of parameter uncertainty and sensitivity analysis. Our models resulted in hind-cast (1989–2003) predictions of net population growth and range expansion that closely matched observed patterns. We next made projections of future range expansion and population growth, incorporating uncertainty in all model parameters, and explored the sensitivity of model predictions to variation in spatially explicit survival and dispersal rates. The predicted rate of southward range expansion (median = 5.2 km/yr) was sensitive to both dispersal and survival rates; elasticity analysis indicated that changes in adult survival would have the greatest potential effect on the rate of range expansion, while perturbation analysis showed that variation in subadult dispersal contributed most to variance in model predictions. Variation in survival and dispersal of females at the south end of the range contributed most of the variance in predicted southward range expansion. Our approach provides guidance for the acquisition of further data and a means of forecasting the consequence of specific management actions. Similar methods could aid in the management of other recovering populations.
Lowe, Winsor H; McPeek, Mark A
2014-08-01
Dispersal is difficult to quantify and often treated as purely stochastic and extrinsically controlled. Consequently, there remains uncertainty about how individual traits mediate dispersal and its ecological effects. Addressing this uncertainty is crucial for distinguishing neutral versus non-neutral drivers of community assembly. Neutral theory assumes that dispersal is stochastic and equivalent among species. This assumption can be rejected on principle, but common research approaches tacitly support the 'neutral dispersal' assumption. Theory and empirical evidence that dispersal traits are under selection should be broadly integrated in community-level research, stimulating greater scrutiny of this assumption. A tighter empirical connection between the ecological and evolutionary forces that shape dispersal will enable richer understanding of this fundamental process and its role in community assembly. Copyright © 2014 Elsevier Ltd. All rights reserved.
Uncertainty of Comparative Judgments and Multidimensional Structure
ERIC Educational Resources Information Center
Sjoberg, Lennart
1975-01-01
An analysis of preferences with respect to silhouette drawings of nude females is presented. Systematic intransitivities were discovered. The dispersions of differences (comparatal dispersions) were shown to reflect the multidimensional structure of the stimuli, a finding expected on the basis of prior work. (Author)
NASA Astrophysics Data System (ADS)
Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.
2009-04-01
Evaluation of human exposure to atmospheric pollution usually requires the knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economic status on relationships between air pollution and short-term health effects, the concentrations of gas and particle pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. There are several sources of uncertainty: those related to input data and parameters, i.e. fields used to execute the model such as meteorological fields, boundary conditions and emissions; those related to the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and those inherent to the stochastic behavior of the atmosphere and human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. In this scope, the first step consisted in identifying the input data and model parameters that contribute most effectively to the space and time variability of predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominating influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamic process. The second step of the characterization and quantification of uncertainties consists in attributing a probability distribution to each input datum and model parameter, and in propagating the joint distribution of all data and parameters through the model, so as to associate a probability distribution with the modeled concentrations. Several analytical and numerical methods exist to perform an uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and slow convergence. To speed up convergence we used the method of antithetic variables, which takes advantage of the symmetry of probability laws. The air quality model simulations were carried out by the Association for the Study and Monitoring of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org) in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A.
Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
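A self-contained illustration of the antithetic-variables device mentioned above: each normal draw z is paired with -z, exploiting the symmetry of the distribution, and the paired averages have lower variance for a monotone response. The exponential response here is an invented stand-in for a dispersion-model output:

```python
import numpy as np

rng = np.random.default_rng(5)

def model(z):
    """Stand-in for a dispersion-model response, monotone in its input."""
    return np.exp(0.8 * z)

n = 20_000
z = rng.standard_normal(n)
plain = model(z)

# Antithetic variables: average each pair (model(z), model(-z)); the negative
# correlation within a pair reduces the variance of the mean estimator.
anti = 0.5 * (model(z[: n // 2]) + model(-z[: n // 2]))

print("plain MC   : mean %.4f, std err %.4f" % (plain.mean(), plain.std(ddof=1) / np.sqrt(n)))
print("antithetic : mean %.4f, std err %.4f" % (anti.mean(), anti.std(ddof=1) / np.sqrt(n // 2)))
```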
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while satisfying the requirements of flexibility, accuracy, and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
NASA Astrophysics Data System (ADS)
Kheiri, R.
2016-09-01
As an undergraduate exercise, in an article (2012 Am. J. Phys. 80 780), quantum and classical uncertainties for dimensionless variables of position and momentum were evaluated in three potentials: infinite well, bouncing ball, and harmonic oscillator. While the original quantum uncertainty products depend on ℏ and the number of states (n), a dimensionless approach makes the comparison between quantum uncertainty and classical dispersion possible by excluding ℏ. But the question is whether the uncertainty still remains dependent on quantum number n. In the above-mentioned article, there lies this contrast: on the one hand, the dimensionless quantum uncertainty of the potential box approaches classical dispersion only in the limit of large quantum numbers (n → ∞), consistent with the correspondence principle. On the other hand, similar evaluations for the bouncing ball and harmonic oscillator potentials are equal to their classical counterparts independent of n. This equality may hide the quantum feature of low energy levels. In the current study, we change the potential intervals in order to make them symmetric for the linear potential and non-symmetric for the quadratic potential. As a result, it is shown in this paper that the dimensionless quantum uncertainty of these potentials in the new potential intervals is expressed in terms of quantum number n. In other words, the uncertainty requires the correspondence principle in order to approach the classical limit. Therefore, it can be concluded that the dimensionless analysis, as a useful pedagogical method, does not take away the quantum feature of the n-dependence of quantum uncertainty in general. Moreover, our numerical calculations include higher powers of the position for the potentials.
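For the infinite well the n-dependence under discussion is explicit in closed form: (Δx/L)² = 1/12 − 1/(2π²n²), approaching the classical uniform-distribution value 1/12 only as n → ∞. A quick numerical check:

```python
import numpy as np

# Dimensionless position dispersion for the infinite square well versus the
# classical uniform-distribution value sqrt(1/12).
n = np.array([1, 2, 5, 10, 100])
dx_quantum = np.sqrt(1 / 12 - 1 / (2 * np.pi**2 * n**2))
dx_classical = np.sqrt(1 / 12)
for ni, dq in zip(n, dx_quantum):
    print(f"n={ni:4d}  quantum {dq:.5f}  classical {dx_classical:.5f}")
```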
Uncertainty in spatially explicit animal dispersal models
Mooij, Wolf M.; DeAngelis, Donald L.
2003-01-01
Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
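A sketch of the maximum-likelihood treatment for the simplest of the three models (the event-based binomial), with a profile-likelihood confidence interval; the counts are invented:

```python
import numpy as np
from scipy.stats import chi2

# Event-based dispersal survival: k of n dispersers arrive. MLE and a
# profile-likelihood 95% confidence interval for the survival probability s.
k, n = 14, 40                                    # illustrative field counts

def log_like(s):
    return k * np.log(s) + (n - k) * np.log(1 - s)

s_hat = k / n
s_grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
# Values of s whose likelihood-ratio statistic stays under the chi-square cutoff:
inside = 2 * (log_like(s_hat) - log_like(s_grid)) <= chi2.ppf(0.95, df=1)
print(f"MLE s = {s_hat:.3f}, 95% CI = [{s_grid[inside].min():.3f}, {s_grid[inside].max():.3f}]")
```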
Dispersive approach to two-photon exchange in elastic electron-proton scattering
Blunden, P. G.; Melnitchouk, W.
2017-06-14
We examine the two-photon exchange corrections to elastic electron-nucleon scattering within a dispersive approach, including contributions from both nucleon and Δ intermediate states. The dispersive analysis avoids off-shell uncertainties inherent in traditional approaches based on direct evaluation of loop diagrams, and guarantees the correct unitary behavior in the high energy limit. Using empirical information on the electromagnetic nucleon elastic and NΔ transition form factors, we compute the two-photon exchange corrections both algebraically and numerically. Finally, results are compared with recent measurements of e⁺p to e⁻p cross section ratios from the CLAS, VEPP-3 and OLYMPUS experiments.
NASA Astrophysics Data System (ADS)
Rodriguez Pretelin, Abelardo; Nowak, Wolfgang
2017-04-01
Wellhead protection areas (WHPAs) are frequently used as safety measures for drinking water wells, protecting them from pollution by restricting land use activities in their proximity. Two sources of uncertainty are involved during delineation: (1) uncertainty in aquifer parameters and (2) time-varying groundwater flow scenarios with their own inherent uncertainties. The former has been studied by Enzenhoefer et al. (2012 [1], 2014 [2]) as a probabilistic risk version of WHPA delineation. The latter is frequently neglected and replaced by steady-state assumptions, thereby ignoring time-variant flow conditions triggered either by anthropogenic causes or by climatic conditions. In this study we analyze the influence of transient flow considerations on WHPA delineation, following annual seasonality, with transiency represented by four transient conditions: (I) regional groundwater flow direction, (II) strength of the regional hydraulic gradient, (III) natural recharge to the groundwater, and (IV) pumping rate. Addressing WHPA delineation under transient flow scenarios is computationally expensive. Thus, we develop an efficient method using a dynamic superposition of steady-state flow solutions coupled with a reversed formulation of advective-dispersive transport based on Lagrangian particle tracking with continuous injection. This analysis results in a time-frequency map of pixel-wise membership to the well catchment. In addition to transient flow conditions, we recognize two sources of uncertainty: inexact knowledge of the transient drivers and of the aquifer parameters. The uncertainties are accommodated through Monte Carlo simulation. With the help of a global sensitivity analysis, we investigate the impact of transiency on WHPA solutions. In particular, we evaluate: (1) among all considered transients, which are the most influential; and (2) how influential the transience-related uncertainty is in WHPA delineation compared to aquifer parameter uncertainty. Literature: [1] R. Enzenhoefer, W. Nowak, and R. Helmig. Probabilistic exposure risk assessment with advective-dispersive well vulnerability criteria. Advances in Water Resources, 36:121-132, 2012. [2] R. Enzenhoefer, T. Bunk, and W. Nowak. Nine steps to risk-informed wellhead protection and management: a case study. Ground Water, 52:161-174, 2014.
Ensemble Simulation of the Atmospheric Radionuclides Discharged by the Fukushima Nuclear Accident
NASA Astrophysics Data System (ADS)
Sekiyama, Thomas; Kajino, Mizuo; Kunii, Masaru
2013-04-01
Enormous amounts of radionuclides were discharged into the atmosphere by a nuclear accident at the Fukushima Daiichi nuclear power plant (FDNPP) after the earthquake and tsunami on 11 March 2011. The radionuclides were dispersed from the power plant and deposited mainly over eastern Japan and the North Pacific Ocean. Many numerical simulations of the radionuclide dispersion and deposition have been attempted since the nuclear accident; however, none of them were able to perfectly simulate the distribution of dose rates observed after the accident over eastern Japan. This was partly due to errors in the wind vectors and precipitation used in the numerical simulations; unfortunately, such deterministic simulations could not deal with the probability distribution of the simulation results and errors. Therefore, an ensemble simulation of the atmospheric radionuclides was performed using the ensemble Kalman filter (EnKF) data assimilation system coupled with the Japan Meteorological Agency (JMA) non-hydrostatic mesoscale model (NHM); this mesoscale model has been used operationally for daily weather forecasts by JMA. Meteorological observations were provided to the EnKF data assimilation system from the JMA operational-weather-forecast dataset. Through this ensemble data assimilation, twenty members of the meteorological analysis over eastern Japan from 11 to 31 March 2011 were successfully obtained. Using these meteorological ensemble analysis members, the radionuclide behavior in the atmosphere, such as advection, convection, diffusion, dry deposition, and wet deposition, was simulated. This ensemble simulation provided multiple results for the radionuclide dispersion and distribution. Because a large ensemble deviation indicates low accuracy of the numerical simulation, probabilistic information is obtainable from the ensemble simulation results. For example, the uncertainty of precipitation triggered the uncertainty of wet deposition; the uncertainty of wet deposition triggered the uncertainty of atmospheric radionuclide amounts. The remaining radionuclides were then transported downwind; consequently, the uncertainty signal of the radionuclide amounts was propagated downwind. The signal propagation was seen in the ensemble simulation by tracking the large-deviation areas of radionuclide concentration and deposition. These statistics can provide information useful for the probabilistic prediction of radionuclides.
NASA Astrophysics Data System (ADS)
Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.
2002-05-01
Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.
Newsvendor problem under complete uncertainty: a case of innovative products.
Gaspars-Wieloch, Helena
2017-01-01
The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extremes values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.
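A simplified stand-in for the hybrid rule (not the paper's exact procedure, which also scores payoff dispersion and asymmetry): blend a Hurwicz score, weighting best and worst scenario payoffs by an optimism coefficient, with a Bayes equal-weight average. All payoffs and weights are invented:

```python
import numpy as np

# Scenario payoff matrix under complete uncertainty
# (rows: candidate order quantities, columns: demand scenarios).
payoff = np.array([[120, 120, 120],
                   [ 90, 180, 180],
                   [ 40, 140, 240]], dtype=float)
alpha, w = 0.4, 0.5     # optimism coefficient; weight placed on the Hurwicz part

hurwicz = alpha * payoff.max(axis=1) + (1 - alpha) * payoff.min(axis=1)
bayes = payoff.mean(axis=1)          # equal scenario weights, no probabilities assumed
score = w * hurwicz + (1 - w) * bayes
print("best order index:", int(np.argmax(score)), "scores:", np.round(score, 1))
```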
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, M.S.Y.
1990-12-01
The PAGAN code system is a part of the performance assessment methodology developed for use by the U.S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1, has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which are used as semi-analytical solutions to the convective-dispersion equation. The system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.
A systematic uncertainty analysis for liner impedance eduction technology
NASA Astrophysics Data System (ADS)
Zhou, Lin; Bodén, Hans
2015-11-01
The so-called impedance eduction technology is widely used for obtaining acoustic properties of liners used in aircraft engines. The measurement uncertainties for this technology are still not well understood though it is essential for data quality assessment and model validation. A systematic framework based on multivariate analysis is presented in this paper to provide 95 percent confidence interval uncertainty estimates in the process of impedance eduction. The analysis is made using a single mode straightforward method based on transmission coefficients involving the classic Ingard-Myers boundary condition. The multivariate technique makes it possible to obtain an uncertainty analysis for the possibly correlated real and imaginary parts of the complex quantities. The results show that the errors in impedance results at low frequency mainly depend on the variability of transmission coefficients, while the mean Mach number accuracy is the most important source of error at high frequencies. The effect of Mach numbers used in the wave dispersion equation and in the Ingard-Myers boundary condition has been separated for comparison of the outcome of impedance eduction. A local Mach number based on friction velocity is suggested as a way to reduce the inconsistencies found when estimating impedance using upstream and downstream acoustic excitation.
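The multivariate confidence-region idea for the correlated real and imaginary parts can be sketched directly; the impedance replicates below are synthetic draws standing in for repeated eduction results:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
# Synthetic educed-impedance replicates (real, imaginary) at one frequency,
# with correlation between the two parts; mean and covariance are invented.
z = rng.multivariate_normal(mean=[2.1, -0.7],
                            cov=[[0.020, 0.008], [0.008, 0.012]], size=40)

mean = z.mean(axis=0)
cov = np.cov(z, rowvar=False)
# 95% confidence ellipse of the replicate scatter for the correlated pair:
vals, vecs = np.linalg.eigh(cov)
semi_axes = np.sqrt(vals * chi2.ppf(0.95, df=2))
print("mean impedance   :", mean)
print("ellipse semi-axes:", semi_axes)
```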
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
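A numpy-only sketch of variance-based first-order Sobol' indices using a pick-freeze estimator, with a cheap toy function standing in for the Gaussian-process emulator of Polyphemus/Polair3D; the coefficients merely mimic the qualitative finding that the emitted amount dominates:

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """Pick-freeze estimate of first-order Sobol' indices for f on [0,1]^d."""
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # resample only input i
        S[i] = np.mean(fA * (f(ABi) - fB)) / var
    return S

# Toy emulator stand-in: source amount dominates, wind perturbation matters
# less, release height least (invented coefficients, not the study's model).
def model(X):
    amount, wind, height = X[:, 0], X[:, 1], X[:, 2]
    return 3.0 * amount + 1.0 * np.sin(2 * np.pi * wind) + 0.3 * height

rng = np.random.default_rng(7)
print("first-order Sobol' indices:",
      np.round(sobol_first_order(model, 3, 200_000, rng), 3))
```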
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions were greater when considering the near-field experimental uncertainty.
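A sketch of the partial-sum Fourier dispersion idea, assuming a toy N-wave signature and a hard-limited perturbation band; the mode amplitudes roll off with mode number, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 512
x = np.linspace(0, 1, n, endpoint=False)
# Toy nominal near-field N-wave signature (dp/p, invented amplitude):
signature = np.where(x < 0.5, 1 - 2 * x, -2 * (x - 0.5)) * 0.02

# Dispersed realization: add a random partial Fourier sum, scaled so the
# perturbation stays inside an assumed +/- uncertainty band.
band, n_modes = 0.002, 12
phases = rng.uniform(0, 2 * np.pi, n_modes)
amps = rng.uniform(0, 1, n_modes) / np.arange(1, n_modes + 1)   # 1/k roll-off
perturb = sum(a * np.sin(2 * np.pi * k * x + p)
              for k, (a, p) in enumerate(zip(amps, phases), start=1))
perturb *= band / np.max(np.abs(perturb))        # hard-limit to the band
dispersed = signature + perturb
print("max |perturbation| =", np.max(np.abs(perturb)))
```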
Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database
NASA Technical Reports Server (NTRS)
Hanke, Jeremy L.
2011-01-01
The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
Optical spectroscopy and velocity dispersions of galaxy clusters from the SPT-SZ survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruel, J.; Bayliss, M.; Bazin, G.
2014-09-01
We present optical spectroscopy of galaxies in clusters detected through the Sunyaev-Zel'dovich (SZ) effect with the South Pole Telescope (SPT). We report our own measurements of 61 spectroscopic cluster redshifts, and 48 velocity dispersions each calculated with more than 15 member galaxies. This catalog also includes 19 dispersions of SPT-observed clusters previously reported in the literature. The majority of the clusters in this paper are SPT-discovered; of these, most have been previously reported in other SPT cluster catalogs, and five are reported here as SPT discoveries for the first time. By performing a resampling analysis of galaxy velocities, we find that unbiased velocity dispersions can be obtained from a relatively small number of member galaxies (≲ 30), but with increased systematic scatter. We use this analysis to determine statistical confidence intervals that include the effect of membership selection. We fit scaling relations between the observed cluster velocity dispersions and mass estimates from SZ and X-ray observables. In both cases, the results are consistent with the scaling relation between velocity dispersion and mass expected from dark-matter simulations. We measure a ∼30% log-normal scatter in dispersion at fixed mass, and a ∼10% offset in the normalization of the dispersion-mass relation when compared to the expectation from simulations, which is within the expected level of systematic uncertainty.
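A resampling analysis of the kind described, estimating confidence intervals on a velocity dispersion from a modest number of member galaxies, can be illustrated with a simple bootstrap. Everything below is hypothetical (Gaussian member velocities, a plain standard-deviation estimator); real cluster work typically uses robust estimators such as the biweight.

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity_dispersion(v):
    # Simple Gaussian estimator; robust estimators are usually
    # preferred for real cluster member samples.
    return np.std(v, ddof=1)

# Hypothetical line-of-sight velocities (km/s) of 25 member galaxies.
v_members = rng.normal(0.0, 1000.0, size=25)

boot = np.array([
    velocity_dispersion(rng.choice(v_members, size=v_members.size, replace=True))
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [16, 84])
print(f"sigma = {velocity_dispersion(v_members):.0f} km/s, 68% CI [{lo:.0f}, {hi:.0f}]")
```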
The Value of Learning about Natural History in Biodiversity Markets
Bruggeman, Douglas J.
2015-01-01
Markets for biodiversity have generated much controversy because of the often unstated and untested assumptions included in transaction rules. Simple trading rules are favored to reduce transaction costs, but others have argued that this leads to markets that favor development and erode biodiversity. Here, I describe how embracing complexity and uncertainty within a tradable credit system for the Red-cockaded Woodpecker (Picoides borealis) creates opportunities to achieve financial and conservation goals simultaneously. Reversing the effects of habitat fragmentation is one of the main reasons for developing markets. I include uncertainty in habitat fragmentation effects by evaluating market transactions using five alternative dispersal models that were able to approximate observed patterns of occupancy and movement. Further, because dispersal habitat is often not included in market transactions, I contrast how changes in breeding versus dispersal habitat affect credit values. I use an individually-based, spatially-explicit population model for the Red-cockaded Woodpecker (Picoides borealis) to predict spatial and temporal influences of landscape change on species occurrence and genetic diversity. Results indicated that the probability of no net loss of abundance and genetic diversity responded differently to the transient dynamics in breeding and dispersal habitat. Trades that do not violate the abundance cap may simultaneously violate the cap for the erosion of genetic diversity. To highlight how economic incentives may help reduce uncertainty, I demonstrate tradeoffs between the value of tradable credits and the value of information needed to predict the influence of habitat trades on population viability. For the trade with the greatest uncertainty regarding the change in habitat fragmentation, I estimate that the value of using 13 years of data to reduce uncertainty in dispersal behaviors is $6.2 million. Future guidance for biodiversity markets should at least encourage the use of spatially- and temporally-explicit techniques that include population genetic estimates and the influence of uncertainty. PMID:26675488
Mars Exploration Rovers Landing Dispersion Analysis
NASA Technical Reports Server (NTRS)
Knocke, Philip C.; Wawrzyniak, Geoffrey G.; Kennedy, Brian M.; Desai, Prasun N.; Parker, Timothy J.; Golombek, Matthew P.; Duxbury, Thomas C.; Kass, David M.
2004-01-01
Landing dispersion estimates for the Mars Exploration Rover missions were key elements in the site targeting process and in the evaluation of landing risk. This paper addresses the process and results of the landing dispersion analyses performed for both Spirit and Opportunity. The several contributors to landing dispersions (navigation and atmospheric uncertainties, spacecraft modeling, winds, and margins) are discussed, as are the analysis tools used. JPL's MarsLS program, a MATLAB-based landing dispersion visualization and statistical analysis tool, was used to calculate the probability of landing within hazardous areas. By convolving this with the probability of landing within flight system limits (in-spec landing) for each hazard area, a single overall measure of landing risk was calculated for each landing ellipse. In-spec probability contours were also generated, allowing a more synoptic view of site risks, illustrating the sensitivity to changes in landing location, and quantifying the possible consequences of anomalies such as incomplete maneuvers. Data and products required to support these analyses are described, including the landing footprints calculated by NASA Langley's POST program and JPL's AEPL program, cartographically registered base maps and hazard maps, and flight system estimates of in-spec landing probabilities for each hazard terrain type. Various factors encountered during operations, including evolving navigation estimates and changing atmospheric models, are discussed and final landing points are compared with approach estimates.
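The core hazard computation, the probability that a landing point drawn from the dispersion ellipse falls inside a hazardous area, reduces to a Monte Carlo integral. The sketch below assumes a bivariate-normal landing dispersion and a toy rectangular hazard patch; the actual analysis used MarsLS with cartographically registered hazard maps, so every number here is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-sigma landing dispersions (km) along/across track and
# their correlation, defining the landing ellipse.
sigma_along, sigma_cross, rho = 40.0, 5.0, 0.2
cov = np.array([[sigma_along**2, rho * sigma_along * sigma_cross],
                [rho * sigma_along * sigma_cross, sigma_cross**2]])

samples = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

def in_hazard(xy):
    # Placeholder hazard map: a rectangular hazardous patch 20-35 km
    # downtrack, 0-4 km crosstrack; real analyses use registered maps.
    x, y = xy[:, 0], xy[:, 1]
    return (x > 20) & (x < 35) & (y > 0) & (y < 4)

p = in_hazard(samples).mean()
print(f"P(landing in hazard area) = {p:.3%}")
```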
NASA Astrophysics Data System (ADS)
Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano
2015-04-01
The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason analyses that quantify the effect each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure, and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters (total mass, total grain-size distribution, plume thickness, shape of the eruption column, sedimentation model, and diffusion coefficient), perform thousands of simulations, and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted a few minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns rising up to several kilometers above sea level and lasted some hours. The sensitivity analyses and uncertainty estimates help identify the measurements that volcanologists should make during a volcanic crisis to reduce model uncertainty.
Koornneef, Joris; Spruijt, Mark; Molag, Menso; Ramírez, Andrea; Turkenburg, Wim; Faaij, André
2010-05-15
A systematic assessment, based on an extensive literature review, of the impact of gaps and uncertainties on the results of quantitative risk assessments (QRAs) for CO2 pipelines is presented. Sources of uncertainty that have been assessed are: failure rates, pipeline pressure, temperature, section length, diameter, orifice size, type and direction of release, meteorological conditions, jet diameter, vapour mass fraction in the release, and the dose-effect relationship for CO2. A sensitivity analysis with these parameters is performed using release, dispersion and impact models. The results show that the knowledge gaps and uncertainties have a large effect on the accuracy of the assessed risks of CO2 pipelines. In this study it is found that the individual risk contour can vary between 0 and 204 m from the pipeline depending on the assumptions made. In existing studies this range is found to be between <1 m and 7.2 km. Mitigating the relevant risks is part of current practice, making them controllable. It is concluded that QRA for CO2 pipelines can be improved by validation of release and dispersion models for high-pressure CO2 releases, definition and adoption of a universal dose-effect relationship, and development of a good practice guide for QRAs for CO2 pipelines.
NASA Astrophysics Data System (ADS)
Connor, C.; Connor, L.; White, J.
2015-12-01
Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification is implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and here is applied using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 eruption of Cerro Negro (Nicaragua), the 2011 eruption of Kirishima-Shinmoedake (Japan), and the 1913 eruption of Colima (Mexico). These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust methods provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.
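The inversion machinery described here (Levenberg-Marquardt least squares plus a linearized posterior covariance) can be demonstrated on a deliberately simplified forward model. The sketch below swaps Tephra2 for a toy exponential-thinning thickness model and uses SciPy's `least_squares`; the covariance formula s²(JᵀJ)⁻¹ is the usual linearized approximation, not PEST++'s full regularized scheme, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def thickness_model(params, r):
    # Toy exponential-thinning forward model: T(r) = T0 * exp(-r / L).
    # Tephra2 solves full advection-diffusion; this is only a stand-in.
    log_T0, L = params
    return np.exp(log_T0) * np.exp(-r / L)

r_obs = np.linspace(1.0, 30.0, 15)                   # km from vent
t_obs = thickness_model((np.log(80.0), 8.0), r_obs)  # synthetic "data" (cm)
t_obs *= np.exp(rng.normal(0.0, 0.2, r_obs.size))    # multiplicative noise

def residuals(params):
    # Log residuals, since deposit thickness spans orders of magnitude.
    return np.log(thickness_model(params, r_obs)) - np.log(t_obs)

fit = least_squares(residuals, x0=(np.log(10.0), 20.0), method="lm")

# Linearized posterior covariance ~ s^2 (J^T J)^(-1).
J = fit.jac
s2 = 2 * fit.cost / (r_obs.size - len(fit.x))
cov = s2 * np.linalg.inv(J.T @ J)
print("estimates:", fit.x, "1-sigma:", np.sqrt(np.diag(cov)))
```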
Sommerfreund, J; Arhonditsis, G B; Diamond, M L; Frignani, M; Capodaglio, G; Gerino, M; Bellucci, L; Giuliani, S; Mugnai, C
2010-03-01
A Monte Carlo analysis is used to quantify environmental parametric uncertainty in a multi-segment, multi-chemical model of the Venice Lagoon. Scientific knowledge, expert judgment and observational data are used to formulate prior probability distributions that characterize the uncertainty pertaining to 43 environmental system parameters. The propagation of this uncertainty through the model is then assessed by a comparative analysis of the moments (central tendency, dispersion) of the model output distributions. We also apply principal component analysis in combination with correlation analysis to identify the most influential parameters, thereby gaining mechanistic insights into the ecosystem functioning. We found that modeled concentrations of Cu, Pb, OCDD/F and PCB-180 varied by up to an order of magnitude, exhibiting both contaminant- and site-specific variability. These distributions generally overlapped with the measured concentration ranges. We also found that the uncertainty of the contaminant concentrations in the Venice Lagoon was characterized by two modes of spatial variability, mainly driven by the local hydrodynamic regime, which separate the northern and central parts of the lagoon and the more isolated southern basin. While spatial contaminant gradients in the lagoon were primarily shaped by hydrology, our analysis also shows that the interplay amongst the in-place historical pollution in the central lagoon, the local suspended sediment concentrations and the sediment burial rates exerts significant control on the variability of the contaminant concentrations. We conclude that the probabilistic analysis presented herein is valuable for quantifying uncertainty and probing its cause in over-parameterized models, while some of our results can be used to indicate where additional data collection efforts should focus and the directions that future model refinement should follow.
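The basic workflow, drawing parameter sets from prior distributions, running the model, and ranking parameters by their association with the output, looks roughly like the sketch below. The surrogate model and the three lognormal priors are invented for illustration and bear no quantitative relation to the Venice Lagoon model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Hypothetical priors for three environmental parameters (lognormal,
# as often assumed for positive-valued rates).
burial = rng.lognormal(np.log(0.3), 0.5, n)   # sediment burial rate
sed    = rng.lognormal(np.log(20.), 0.4, n)   # suspended sediment conc.
inflow = rng.lognormal(np.log(1.0), 0.3, n)   # hydrodynamic flushing

def model(burial, sed, inflow):
    # Toy contaminant mass-balance surrogate, not the lagoon model.
    return 10.0 * sed**0.5 / (inflow * (1.0 + burial))

c = model(burial, sed, inflow)
print("median, IQR:", np.median(c), np.percentile(c, [25, 75]))

# Rank influence by Spearman-style rank correlation with the output.
ranks_c = np.argsort(np.argsort(c))
for name, x in [("burial", burial), ("sed", sed), ("inflow", inflow)]:
    r = np.corrcoef(np.argsort(np.argsort(x)), ranks_c)[0, 1]
    print(f"{name}: rank correlation {r:+.2f}")
```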
Representing uncertainty in a spatial invasion model that incorporates human-mediated dispersal
Frank H. Koch; Denys Yemshanov; Robert A. Haack
2013-01-01
Most modes of human-mediated dispersal of invasive species are directional and vector-based. Classical spatial spread models usually depend on probabilistic dispersal kernels that emphasize distance over direction and have limited ability to depict rare but influential long-distance dispersal events. These aspects are problematic if such models are used to estimate...
Environmental Studies: Mathematical, Computational and Statistical Analyses
1993-03-03
mathematical analysis addresses the seasonally and longitudinally averaged circulation which is under the influence of a steady forcing located asymmetrically...employed, as has been suggested for some situations. A general discussion of how interfacial phenomena influence both the original contamination process...describing the large-scale advective and dispersive behaviour of contaminants transported by groundwater and the uncertainty associated with field-scale
Bothe, Jameson R.; Stein, Zachary W.; Al-Hashimi, Hashim M.
2014-01-01
Spin relaxation in the rotating frame (R1ρ) is a powerful NMR technique for characterizing fast microsecond-timescale exchange processes directed toward short-lived excited states in biomolecules. At the limit of fast exchange, only kex = k1 + k−1 and Φex = pGpE(Δω)² can be determined from R1ρ data, limiting the ability to characterize the structure and energetics of the excited-state conformation. Here, we use simulations to examine the uncertainty with which exchange parameters can be determined for two-state systems in intermediate-to-fast exchange using off-resonance R1ρ relaxation dispersion. R1ρ data computed by solving the Bloch-McConnell equations reveal small but significant asymmetry with respect to offset (R1ρ(ΔΩ) ≠ R1ρ(−ΔΩ)), which is a hallmark of slow-to-intermediate exchange, even under conditions of fast exchange for free-precession chemical exchange line broadening (kex/Δω > 10). A grid search analysis combined with bootstrap and Monte Carlo-based statistical approaches for estimating uncertainty in exchange parameters reveals that both the sign and magnitude of Δω can be determined at a useful level of uncertainty for systems in fast exchange (kex/Δω < 10), but that this depends on the uncertainty in the R1ρ data and requires a thorough examination of the multidimensional variation of χ² as a function of exchange parameters. Results from simulations are complemented by analysis of experimental R1ρ data measured in three nucleic acid systems with exchange processes occurring on the slow (kex/Δω = 0.2; pE = ~0.7%), fast (kex/Δω = ~10–16; pE = ~13%) and very fast (kex = 39,000 s−1) chemical shift timescales. PMID:24819426
Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models
NASA Astrophysics Data System (ADS)
Chang, Joseph C.
This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (~20 km in scale) in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. The two error components, variability due to random turbulence and uncertainty due to input-data errors, were found to be comparable for the DP26 field data, with variability more important than uncertainty closer to the source, and less important farther away from the source. Therefore, reducing data errors for input meteorology may not necessarily increase model accuracy, due to random turbulence. DP26 was a research-grade field experiment, where the source, meteorological, and concentration data were all well measured. Another typical application of dispersion modeling is a forensic study where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had stopped reporting weather data to the World Meteorological Organization since the Iran-Iraq war in 1981. Therefore the meteorological fields inside Iraq had to be estimated by models such as prognostic mesoscale meteorological models, based on observational data from areas outside of Iraq, and using the global fields simulated by the global meteorological models as the initial and boundary conditions for the mesoscale models. It was found that while comparing model predictions to observations in areas outside of Iraq, the predicted surface wind directions had errors between 30 and 90 deg, but the inter-model differences (or uncertainties) in the predicted surface wind directions inside Iraq, where there were no onsite data, were fairly constant at about 70 deg. (Abstract shortened by UMI.)
Wagner, Brian J.; Harvey, Judson W.
1997-01-01
Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI ≫ 1.0), solute exchange rates are fast relative to stream-water velocity and all solute is exchanged with the storage zone over the experimental reach. As DaI increases, tracer dispersion caused by hyporheic exchange eventually reaches an equilibrium condition and storage-zone exchange parameters become essentially nonidentifiable.
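The experimental Damkohler number is easy to compute when planning a reach length. The snippet below uses the commonly written transient-storage form DaI = α(1 + A/As)L/u with invented parameter values; per the study, designs with DaI near 1.0 give the most reliable storage-zone estimates.

```python
# Experimental Damkohler number for a transient-storage tracer test:
# DaI = alpha * (1 + A/As) * L / u. Values near 1 minimize parameter
# uncertainty; DaI << 1 means little tracer interacts with storage,
# DaI >> 1 means exchange equilibrates over the reach.
alpha = 5.0e-5     # stream-storage exchange rate (1/s), hypothetical
A, As = 1.2, 0.3   # stream and storage-zone cross-sections (m^2)
u = 0.25           # stream-water velocity (m/s)

for L in (100.0, 1_000.0, 10_000.0):   # candidate reach lengths (m)
    DaI = alpha * (1 + A / As) * L / u
    print(f"L = {L:7.0f} m -> DaI = {DaI:.2f}")
```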
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. A measurement uncertainty value should accompany every reported parameter from accredited institutions; this value shows the reliability of the measurement. The GUM, published by NIST, provides guidance on uncertainty estimation. Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model by which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) rectangular distribution, which gives limits (e.g., on a certificate) without specifying a level of confidence (u(x) = a/√3); (2) triangular distribution, in which values cluster around a central point (u(x) = a/√6); (3) normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variance CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) confidence interval.
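The distribution-to-standard-uncertainty rules quoted above translate directly into code. A minimal sketch, with an assumed half-interval a and invented repeat-measurement values:

```python
import math

# Standard uncertainty u(x) implied by each assumed distribution for a
# quoted half-interval a, following the GUM/Eurachem rules above.
a = 0.5  # half-width of the stated interval, in measurement units

u_rect = a / math.sqrt(3)   # rectangular: limits only, no confidence level
u_tri  = a / math.sqrt(6)   # triangular: values cluster near the centre
print(f"rectangular u = {u_rect:.3f}, triangular u = {u_tri:.3f}")

# Normal case: uncertainty quoted as a standard deviation s of n repeats.
s, n = 0.8, 10
u_norm = s / math.sqrt(n)
print(f"standard uncertainty of the mean = {u_norm:.3f}")
```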
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada, for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
NASA Astrophysics Data System (ADS)
Riccio, A.; Giunta, G.; Galmarini, S.
2007-04-01
In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis, rooted in Bayes' theorem, and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called the "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and health impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX derives. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the input variables' probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on matching the timing of emission peaks was developed to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
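The Morris screening method used here is cheap to implement: it averages one-at-a-time elementary effects along random trajectories. The sketch below is a minimal version for a made-up three-input scalar model, not the Polyphemus/Polair3D configuration; μ* ranks overall influence, while σ flags nonlinearity and interactions.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # Cheap stand-in for one scalar output of a dispersion model.
    return x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2] + x[0] * x[1]

k, r, delta = 3, 50, 0.25              # inputs, trajectories, step size
effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, k)   # trajectory start point
    y0 = model(x)
    for i in rng.permutation(k):           # perturb inputs one at a time
        x_new = x.copy()
        x_new[i] += delta
        y1 = model(x_new)
        effects[i].append((y1 - y0) / delta)
        x, y0 = x_new, y1

for i, ee in enumerate(effects):
    ee = np.asarray(ee)
    # mu* (mean |EE|) ranks influence; sigma flags interactions.
    print(f"x{i}: mu* = {np.abs(ee).mean():.2f}, sigma = {ee.std(ddof=1):.2f}")
```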
NASA Astrophysics Data System (ADS)
Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen
2018-01-01
Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010 there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development and observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. Furthermore it can also be used to inform the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate, configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
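The emulation idea, fitting a cheap statistical surrogate to a handful of expensive simulator runs and then predicting (with uncertainty) at new parameter choices, can be sketched with an off-the-shelf Gaussian process. The code below uses scikit-learn rather than the Bayesian linear emulator of the paper, and `slow_simulator` is an invented stand-in for NAME.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

def slow_simulator(x):
    # Stand-in for an expensive NAME run: inputs could be (plume height,
    # mass eruption rate), output a column loading in one region.
    return np.sin(3 * x[:, 0]) * x[:, 1] ** 2

X_train = rng.uniform(0, 1, (30, 2))     # few, carefully chosen runs
y_train = slow_simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(0, 1, (5, 2))
mean, std = gp.predict(X_new, return_std=True)   # prediction + uncertainty
for m, s in zip(mean, std):
    print(f"emulated output {m:+.3f} ± {s:.3f}")
```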
Diffusion, Dispersion, and Uncertainty in Anisotropic Fractal Porous Media
NASA Astrophysics Data System (ADS)
Monnig, N. D.; Benson, D. A.
2007-12-01
Motivated by field measurements of aquifer hydraulic conductivity (K), recent techniques were developed to construct anisotropic fractal random fields, in which the scaling, or self-similarity parameter, varies with direction and is defined by a matrix. Ensemble numerical results are analyzed for solute transport through these 2-D "operator-scaling" fractional Brownian motion (fBm) ln(K) fields. Contrary to some analytic stochastic theories for monofractal K fields, the plume growth rates never exceed Mercado's (1967) purely stratified aquifer growth rate of plume apparent dispersivity proportional to mean distance. Apparent super-stratified growth must be the result of other demonstrable factors, such as initial plume size. The addition of large local dispersion and diffusion does not significantly change the effective longitudinal dispersivity of the plumes. In the presence of significant local dispersion or diffusion, the concentration coefficient of variation CV = σ_c/⟨c⟩ remains large at the leading edge of the plumes. This indicates that even with considerable mixing due to dispersion or diffusion, there is still substantial uncertainty in the leading edge of a plume moving in fractal porous media.
NASA Astrophysics Data System (ADS)
Shen, W.; Schulte-Pelkum, V.; Ritzwoller, M. H.
2011-12-01
The joint inversion of surface wave dispersion and receiver functions was proven feasible on a station by station basis more than a decade ago. Joint application to a large number of stations across a broad region such as the western US is more challenging, however, because of the different resolutions of the two methods. Improvements in resolution in surface wave studies derived from ambient noise and array-based methods applied to earthquake data now allow surface wave dispersion and receiver functions to be inverted simultaneously across much of the Earthscope/USArray Transportable Array (TA), and we have developed a Monte Carlo procedure for this purpose. As a proof of concept we applied this procedure to a region containing 186 TA stations in the intermountain west, including a variety of tectonic settings such as the Colorado Plateau, the Basin and Range, the Rocky Mountains, and the Great Plains. This work has now been expanded to encompass all TA stations in the western US. Our approach includes three main components. (1) We enlarge the Earthscope Automated Receiver Survey (EARS) receiver function database by adding more events within a quality control procedure. A back-azimuth-independent receiver function and its associated uncertainties are constructed using a harmonic stripping algorithm. (2) Rayleigh wave dispersion curves are generated from eikonal tomography applied to ambient noise cross-correlation data and Helmholtz tomography applied to teleseismic surface wave data to yield dispersion maps from 8 sec to 80 sec period. (3) We apply a Metropolis Monte Carlo algorithm to invert for the average velocity structure beneath each station. Simple kriging is applied to interpolate the discrete results into a continuous 3-D model. This method has now been applied to over 1,000 TA stations in the western US. We show that the receiver functions and surface wave dispersion data can be reconciled beneath more than 80% of the stations using a smooth parameterization of both crustal and uppermost mantle structure. After the inversion, a 3-D model for the crust and uppermost mantle to a depth of 150 km is constructed for this region. Compared with using surface wave data alone, uncertainty in crustal thickness is much lower and, as a result, the lower crustal velocity is better constrained given a smaller depth-velocity trade-off. The new 3-D model, including Moho depth with attendant uncertainties, provides the basis for further analysis on radial anisotropy and geodynamics in the western US, and also forms a starting point for other seismological studies such as body wave tomography and receiver function CCP analysis.
The treatment of uncertainties in reactive pollution dispersion models at urban scales.
Tomlin, A S; Ziehn, T; Goodman, P; Tate, J E; Dixon, N S
2016-07-18
The ability to predict NO2 concentrations ([NO2]) within urban street networks is important for the evaluation of strategies to reduce exposure to NO2. However, models aiming to make such predictions involve the coupling of several complex processes: traffic emissions under different levels of congestion; dispersion via turbulent mixing; and chemical processes of relevance at the street scale. Parameterisations of these processes are challenging to quantify with precision. Predictions are therefore subject to uncertainties which should be taken into account when using models within decision making. This paper presents an analysis of mean [NO2] predictions from such a complex modelling system applied to a street canyon within the city of York, UK, including the treatment of model uncertainties and their causes. The model system consists of a micro-scale traffic simulation and emissions model, and a Reynolds-averaged turbulent flow model coupled to a reactive Lagrangian particle dispersion model. The analysis focuses on the sensitivity of predicted in-street increments of [NO2] at different locations in the street to uncertainties in the model inputs. These include physical characteristics such as background wind direction, temperature and background ozone concentrations; traffic parameters such as overall demand and primary NO2 fraction; as well as model parameterisations such as roughness lengths, turbulent time- and length-scales and chemical reaction rate coefficients. Predicted [NO2] is shown to be relatively robust with respect to model parameterisations, although there are significant sensitivities to the activation energy for the reaction NO + O3 as well as the canyon wall roughness length. Under off-peak traffic conditions, demand is the key traffic parameter. Under peak conditions where the network saturates, road-side [NO2] is relatively insensitive to changes in demand and more sensitive to the primary NO2 fraction. The most important physical parameter was found to be the background wind direction. The study highlights the key parameters required for reliable [NO2] estimations, suggesting that accurate reference measurements for wind direction should be a critical part of air quality assessments for in-street locations. It also highlights the importance of street-scale chemical processes in forming road-side [NO2], particularly for regions of high NOx emissions such as close to traffic queues.
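The street-scale chemistry the study highlights is dominated by NO + O3 -> NO2, whose rate constant carries the activation-energy sensitivity noted above. A back-of-envelope sketch follows; the Arrhenius coefficients are typical literature values quoted from memory, and all concentrations and the residence time are invented.

```python
import math

# Crude estimate of the NO2 formed from NO + O3 -> NO2 + O2 during the
# residence time of air in a street canyon. Coefficient values below
# are typical literature numbers, used here only for illustration.
T = 293.15                            # K
k = 1.4e-12 * math.exp(-1310.0 / T)   # cm^3 molecule^-1 s^-1 (Arrhenius)

no  = 2.5e11   # molecule cm^-3, hypothetical road-side NO
o3  = 7.5e11   # molecule cm^-3, hypothetical background ozone
tau = 30.0     # s, hypothetical in-canyon residence time

# First-order estimate of NO2 formed before air exits the canyon;
# the exp(-E/RT) factor is why the activation energy matters so much.
d_no2 = k * no * o3 * tau
print(f"k(T) = {k:.2e}, NO2 formed = {d_no2:.2e} molecule cm^-3")
```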
Uncertainties in Emissions Inputs for Near-Road Assessments
Emissions, travel demand, and dispersion models are all needed to obtain temporally and spatially resolved pollutant concentrations. Current methodology combines these three models in a bottom-up approach based on hourly traffic and emissions estimates, and hourly dispersion conc...
Sensitivity test and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy
NASA Astrophysics Data System (ADS)
Selva, J.; Costa, A.; De Natale, G.; Di Vito, M. A.; Isaia, R.; Macedonio, G.
2018-02-01
We present the results of a statistical study on tephra dispersal in the case of a reactivation of the Campi Flegrei volcano. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered. Excluding the lava emission, three classes are explosive (Small, Medium, and Large) and can produce a significant quantity of volcanic ash. Hazard assessments were made through simulations of atmospheric dispersion of ash and lapilli, considering the full variability of winds and eruptive vents. The results are presented in the form of conditional hazard curves given the occurrence of specific eruptive sizes (representative members of each size class), which are then combined to quantify the conditional hazard given an eruption of any size. The main focus of this analysis was to constrain the epistemic uncertainty (i.e., the uncertainty associated with the level of scientific knowledge of the phenomena), in order to provide unbiased hazard estimations. The epistemic uncertainty on the estimation of hazard curves was quantified, making use of scientifically acceptable alternatives to be aggregated in the final results. The choice of such alternative models was made after a comprehensive sensitivity analysis which considered different weather databases, alternative modelling of submarine eruptive vents, tephra total grain-size distributions (TGSD) with different relative mass fractions of fine ash, and the effect of ash aggregation. The results showed that the dominant uncertainty arises from the combined effect of the uncertainty in the fraction of fine particles relative to the total mass and in how ash aggregation is modelled. The latter is particularly relevant in the case of magma-water interactions during explosive eruptive phases, when a large fraction of fine ash can form accretionary lapilli that can contribute significantly to increasing the tephra load in the proximal areas. The variability induced by the use of different meteorological databases and by the approach selected for modelling offshore eruptions was relatively insignificant. The uncertainty arising from the alternative implementations, which would have been neglected in standard (Bayesian) quantifications, was finally quantified by ensemble modelling, and is represented by hazard and probability maps produced at different confidence levels.
Pion polarizabilities from a γ γ → π π analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Ling-Yun; Pennington, Michael R.
2016-12-30
Here, we present results for pion polarizabilities predicted using dispersion relations from our earlier Amplitude Analysis of world data on two-photon production of meson pairs. The helicity-zero polarizabilities are rather stable and insensitive to uncertainties in cross-channel exchanges. The need is first to confirm the recent result on (α1−β1) for the charged pion by COMPASS at CERN to an accuracy of 10% by measuring the γγ→π+π− cross-section to an uncertainty of ~1%. Then the same polarizability, but for the π0, is fixed to be (α1−β1)_{π0} = (0.9±0.2)×10⁻⁴ fm³. By analyzing the correlation between uncertainties in the meson polarizability and those in γγ cross-sections, we suggest experiments need to measure these cross-sections between √s ≃ 350 and 600 MeV. The π0π0 cross-section then makes (α2−β2)_{π0} the easiest helicity-two polarizability to determine.
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
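The speed advantage comes from propagating a covariance matrix once instead of running thousands of trajectories. Below is a minimal sketch of one propagation step for a toy two-state (position, velocity) model, checked against a Monte Carlo ensemble; the dynamics and noise levels are invented, whereas the thesis applies the same idea to a full 6-DOF GN&C system.

```python
import numpy as np

rng = np.random.default_rng(7)

# One linear covariance step, P_next = F P F^T + Q, versus Monte Carlo.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])       # state transition (position, velocity)
Q = np.diag([0.0, 0.04])         # random acceleration disturbance

P = np.diag([25.0, 1.0])         # initial dispersion covariance
P_lincov = F @ P @ F.T + Q       # one matrix product replaces many runs

x = rng.multivariate_normal([0.0, 0.0], P, size=100_000)
x_next = x @ F.T + rng.multivariate_normal([0.0, 0.0], Q, size=x.shape[0])
P_mc = np.cov(x_next.T)

print("linear covariance:\n", P_lincov)
print("Monte Carlo:\n", P_mc)
```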
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jee, I.; Komatsu, E.; Suyu, S.H., E-mail: ijee@mpa-garching.mpg.de, E-mail: komatsu@mpa-garching.mpg.de, E-mail: suyu@asiaa.sinica.edu.tw
The distance-redshift relation plays a fundamental role in constraining cosmological models. In this paper, we show that measurements of positions and time delays of strongly lensed images of a background galaxy, as well as those of the velocity dispersion and mass profile of a lens galaxy, can be combined to extract the angular diameter distance of the lens galaxy. Physically, as the velocity dispersion and the time delay give a gravitational potential (GM/r) and a mass (GM) of the lens, respectively, dividing them gives a physical size (r) of the lens. Comparing the physical size with the image positions of a lensed galaxy gives the angular diameter distance to the lens. A mismatch between the exact locations at which these measurements are made can be corrected by measuring a local slope of the mass profile. We expand on the original idea put forward by Paraficz and Hjorth, who analyzed singular isothermal lenses, by allowing for an arbitrary slope of a power-law spherical mass density profile, an external convergence, and an anisotropic velocity dispersion. We find that the effect of external convergence cancels out when dividing the time delays and velocity dispersion measurements. We derive a formula for the uncertainty in the angular diameter distance in terms of the uncertainties in the observables. As an application, we use two existing strong lens systems, B1608+656 (z_L = 0.6304) and RXJ1131−1231 (z_L = 0.295), to show that the uncertainty in the inferred angular diameter distances is dominated by that in the velocity dispersion, σ², and its anisotropy. We find that the current data on these systems should yield about 16% uncertainty in D_A per object. This improves to 13% when we measure σ² at the so-called sweet-spot radius. Achieving 7% is possible if we can determine σ² with 5% precision.
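The physical argument (σ² gives GM/r, the time delay gives GM, their ratio gives r, and r over the image separation gives a distance) collapses to a one-line formula in the simplest singular-isothermal-sphere case treated by Paraficz and Hjorth. The sketch below uses that SIS formula with invented numbers, not the B1608+656 or RXJ1131−1231 measurements, and omits the profile-slope and anisotropy corrections the paper develops.

```python
import math

# SIS estimate of the angular diameter distance to the lens:
# D_A = c^3 * dt / (4 * pi * sigma^2 * (1 + z_L) * (theta1 - theta2)).
# All numbers below are illustrative only.
c = 2.998e8                      # m/s
arcsec = math.pi / (180 * 3600)  # radians per arcsecond

z_L = 0.30
sigma = 250e3                    # line-of-sight velocity dispersion, m/s
dt = 30 * 86400.0                # time delay between images, s
theta1, theta2 = 1.5 * arcsec, 1.0 * arcsec   # image offsets from lens

D_A = c**3 * dt / (4 * math.pi * sigma**2 * (1 + z_L) * (theta1 - theta2))
Mpc = 3.0857e22
print(f"D_A = {D_A / Mpc:.0f} Mpc")

# Since D_A scales as 1/sigma^2, a 5% error in sigma alone gives ~10%
# in D_A, consistent with dispersion dominating the error budget.
```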
Identification of Preferential Groundwater Flow Pathways from Local Tracer Breakthrough Curves
NASA Astrophysics Data System (ADS)
Kokkinaki, A.; Sleep, B. E.; Dearden, R.; Wealthall, G.
2009-12-01
Characterizing preferential groundwater flow paths in the subsurface is a key factor in the design of in situ remediation technologies. When applying reaction-based remediation methods, such as enhanced bioremediation, preferential flow paths result in fast solute migration and potentially ineffective delivery of reactants, thereby adversely affecting treatment efficiency. The presence of such subsurface conduits was observed at the SABRe (Source Area Bioremediation) research site. Non-uniform migration of contaminants and electron donor during the field trials of enhanced bioremediation supported this observation. To better determine the spatial flow field of the heterogeneous aquifer, a conservative tracer test was conducted. Breakthrough curves were obtained at a reference plane perpendicular to the principal groundwater flow direction. The resulting dataset was analyzed using three different methods: peak arrival times, analytical solution fitting, and moment analysis. Interpretation using the peak arrival time method indicated areas of fast plume migration. However, some of the high velocities are supported by single data points, thus adding considerable uncertainty to the estimated velocity distribution. Observation of complete breakthrough curves indicated different types of solute breakthrough, corresponding to different transport mechanisms. Sharp peaks corresponded to high-conductivity preferential flow pathways, whereas more dispersed breakthrough curves with long tails were characteristic of significant dispersive mixing and dilution. While analytical solutions adequately quantified flow characteristics for the first type of curve, they failed to do so for the second type, in which case they gave unrealistic results. Therefore, a temporal moment analysis was performed to obtain complete spatial distributions of mass recovery, velocity and dispersivity. Though the results of the moment analysis qualitatively agreed with those of the previous methods, more realistic estimates of velocities were obtained and the presence of one major preferential flow pathway was confirmed. However, low mass recovery and deviations from the 10% scaling rule for dispersivities indicate that insufficient spatial and temporal monitoring, as well as interpolation and truncation errors, introduced uncertainty in the flow and transport parameters estimated by the method of moments. The results of the three analyses are valuable for enhancing the understanding of mass transport and remediation performance. Comparing the different interpretation methods showed that, as the amount of concentration data considered in the analysis increased, the derived velocity fields became smoother and the estimated local velocities and dispersivities more realistic. In conclusion, moment analysis represents a smoothed average of the velocity across the entire breakthrough curve, whereas the peak arrival time, which may be a less well constrained estimate, represents the physical peak arrival and typically yields a higher velocity than the moment analysis. This is an important distinction when applying the results of the tracer test to field sites.
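A temporal moment analysis of a single breakthrough curve reduces to a few numerical integrals. The sketch below uses a synthetic Gaussian curve and the small-dispersion relation D ≈ σt²v³/(2L) for a pulse input; real curves with long tails need the truncation corrections the abstract warns about.

```python
import numpy as np

def trap(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Synthetic breakthrough curve at a well L metres downgradient: mean
# arrival gives velocity, temporal variance gives a dispersion estimate.
t = np.linspace(0.0, 200.0, 400)                  # hours
L = 10.0                                          # m to observation well
c = np.exp(-((t - 60.0) ** 2) / (2 * 15.0 ** 2))  # invented Gaussian curve

m0 = trap(c, t)                                   # zeroth temporal moment
mean_t = trap(t * c, t) / m0                      # mean arrival time
var_t = trap((t - mean_t) ** 2 * c, t) / m0       # temporal variance

v = L / mean_t
D = var_t * v**3 / (2 * L)                        # small-dispersion ADE relation
print(f"v = {v:.3f} m/h, D = {D:.3f} m^2/h, dispersivity = {D / v:.2f} m")
```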
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and it is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed both in experimental and numerical predictions of some flow quantities, which are extremely sensitive to various uncertainties, which may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of the uncertainties in the following set-up parameters: the angle of incidence, the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDF of the set-up parameters are also compared.
Tactical Dispersal of Fighter Aircraft: Risk, Uncertainty, and Policy Recommendations.
1987-02-01
...against armour and soft-skinned vehicles, parked aircraft, and personnel, and are distributed evenly within the pattern... 147 bomblets are carried... initiated using such techniques as tone-down paint schemes and camouflage netting. Active defenses have been enhanced. Patriot is replacing Nike, and... But we must ourselves take care not to acquire a Maginot dependence upon ground-based static systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
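The VBA functions themselves are not reproduced in the abstract; as a rough stand-in, the sketch below implements one classical analytical solution coded in CXTFIT-style tools (the Ogata-Banks step-input solution of the 1-D equilibrium convection-dispersion equation) and fits v and D to breakthrough data with SciPy's least-squares machinery. The data values, travel distance, and starting guesses are illustrative assumptions:

import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

X_OBS = 1.0  # travel distance to the observation point (assumed, consistent units)

def cde_step(t, v, D):
    # Ogata-Banks solution for a continuous step input: relative concentration C/C0.
    # Note: the exp term can overflow for strongly advective cases (v*x/D >> 1).
    a = (X_OBS - v * t) / (2.0 * np.sqrt(D * t))
    b = (X_OBS + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * (erfc(a) + np.exp(v * X_OBS / D) * erfc(b))

t_obs = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])     # illustrative observations
c_obs = np.array([0.02, 0.18, 0.45, 0.66, 0.88, 0.96])
p_opt, p_cov = curve_fit(cde_step, t_obs, c_obs, p0=[1.0, 0.1])
p_err = np.sqrt(np.diag(p_cov))                      # 1-sigma parameter uncertainties

The parameter covariance returned by the optimizer is the simplest form of the uncertainty quantification the spreadsheet tool exposes through its macros.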
Forensic Uncertainty Quantification of Explosive Dispersal of Particles
NASA Astrophysics Data System (ADS)
Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho
2017-06-01
In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued by poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments to discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience to date has been that making an analogy to crime scene investigation when looking at validation experiments can yield valuable insights. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with model structures of varying complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte Carlo simulations to obtain behavioral parameter sets that ensure predictive accuracy, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavioral parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the simpler representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
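A minimal sketch of the GLUE core that the CV-GLUE procedure builds on is given below (Python; the model interface, Nash-Sutcliffe likelihood, and threshold are hypothetical choices). The coefficient of variation of the behavioral predictions is the quantity the paper uses to compare model structures:

import numpy as np

def glue_cv(model, t_obs, y_obs, prior_sampler, n=10000, threshold=0.7):
    # Monte Carlo sampling, likelihood scoring, and retention of 'behavioral' sets
    y_obs = np.asarray(y_obs, dtype=float)
    behavioral, preds = [], []
    for _ in range(n):
        theta = prior_sampler()                 # draw a candidate parameter set
        y_sim = model(theta, t_obs)
        nse = 1.0 - np.sum((y_sim - y_obs) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
        if nse >= threshold:                    # keep only behavioral parameter sets
            behavioral.append(theta)
            preds.append(y_sim)
    preds = np.array(preds)
    cv = preds.std(axis=0) / preds.mean(axis=0) # coefficient of variation per output
    return behavioral, cv

A simpler model with fewer tunable parameters can easily produce a wider behavioral prediction band, which is consistent with the paper's finding that lower complexity does not imply lower predictive uncertainty.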
Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino
2018-02-22
CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of the intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, applied to a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as the reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help highlight methodological best practices that can serve as guidelines for future experiments.
Cassette, Philippe; Altzitzoglou, Timotheos; Antohe, Andrei; Rossi, Mario; Arinc, Arzu; Capogni, Marco; Galea, Raphael; Gudelis, Arunas; Kossert, Karsten; Lee, K B; Liang, Juncheng; Nedjadi, Youcef; Oropesa Verdecia, Pilar; Shilnikova, Tanya; van Wyngaardt, Winifred; Ziemek, Tomasz; Zimmerman, Brian
2018-04-01
A comparison of calculations of the activity of a ³H₂O liquid scintillation source using the same experimental data set collected at the LNE-LNHB with a triple-to-double coincidence ratio (TDCR) counter was completed. A total of 17 laboratories calculated the activity and standard uncertainty of the LS source using the files with experimental data provided by the LNE-LNHB. The results as well as relevant information on the computation techniques are presented and analysed in this paper. All results are compatible, even if there is a significant dispersion between the reported uncertainties. An output of this comparison is the estimation of the dispersion of TDCR measurement results when measurement conditions are well defined. Copyright © 2017 Elsevier Ltd. All rights reserved.
Uncertainty Quantification of Multi-Phase Closures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadiga, Balasubramanya T.; Baglietto, Emilio
In the ensemble-averaged dispersed phase formulation used for CFD of multiphase flows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty. Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quantification (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of different phenomena. As such, this approach serves as a valuable aid in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion (forces that lead to transfer of momentum between the liquid and gas phases) is examined in the framework of validation against the adiabatic but turbulent experiments of Liu and Bankoff, 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of superficial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coefficients associated with the closures. We argue that such an extent of variation is a measure of the uncertainty induced by the chosen set of closures. We also find that while mean fluid velocity and void fraction profiles are properly fit, the fluctuating fluid velocity may or may not be properly fit. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and is undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL, which remove the ad-hoc elements. We use non-intrusive methodologies for sensitivity analysis and calibration (using Dakota) to study sensitivities of the CFD representation (STAR-CCM+) of fluid velocity profiles and void fraction profiles in the context of the Shaver and Podowski, 2015 correction to lift, and the Lubchenko et al., 2017 formulation of wall lubrication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alarcón, J. M.; Hiller Blin, A. N.; Vicente Vacas, M. J.
2017-05-08
The baryon electromagnetic form factors are expressed in terms of two-dimensional densities describing the distribution of charge and magnetization in transverse space at fixed light-front time. In this paper, we calculate the transverse densities of the spin-1/2 flavor-octet baryons at peripheral distances b = O(M_π⁻¹) using methods of relativistic chiral effective field theory (χEFT) and dispersion analysis. The densities are represented as dispersive integrals over the imaginary parts of the form factors in the timelike region (spectral functions). The isovector spectral functions on the two-pion cut t > 4M_π² are calculated using relativistic χEFT including octet and decuplet baryons. The χEFT calculations are extended into the ρ meson mass region using an N/D method that incorporates the pion electromagnetic form factor data. The isoscalar spectral functions are modeled by vector meson poles. We compute the peripheral charge and magnetization densities in the octet baryon states, estimate the uncertainties, and determine the quark flavor decomposition. Finally, the approach can be extended to baryon form factors of other operators and the moments of generalized parton distributions.
Estimating species-specific survival and movement when species identification is uncertain
Runge, J.P.; Hines, J.E.; Nichols, J.D.
2007-01-01
Incorporating uncertainty in the investigation of ecological studies has been the topic of an increasing body of research. In particular, mark-recapture methodology has shown that incorporating uncertainty in the probability of detecting individuals in populations enables accurate estimation of population-level processes such as survival, reproduction, and dispersal. Recent advances in mark-recapture methodology have included estimating population-level processes for biologically important groups despite the misassignment of individuals to those groups. Examples include estimating rates of apparent survival despite less than perfect accuracy when identifying individuals to gender or breeding state. Here we introduce a method for estimating apparent survival and dispersal in species that co-occur but that are difficult to distinguish. We use data from co-occurring populations of meadow voles (Microtus pennsylvanicus) and montane voles (M. montanus) in addition to simulated data to show that ignoring species uncertainty can lead to biased estimates of population processes. The incorporation of species uncertainty in mark-recapture studies should aid future research investigating ecological concepts such as interspecific competition, niche differentiation, and spatial population dynamics in sibling species.
Uncertainty, ensembles and air quality dispersion modeling: applications and challenges
NASA Astrophysics Data System (ADS)
Dabberdt, Walter F.; Miller, Erik
The past two decades have seen significant advances in mesoscale meteorological modeling research and applications, such as the development of sophisticated and now widely used advanced mesoscale prognostic models, large eddy simulation models, four-dimensional data assimilation, adjoint models, adaptive and targeted observational strategies, and ensemble and probabilistic forecasts. Some of these advances are now being applied to urban air quality modeling and applications. Looking forward, the high-priority air quality issues for the near-to-intermediate future will likely include: (1) routine operational forecasting of adverse air quality episodes; (2) real-time high-level support to emergency response activities; and (3) quantification of model uncertainty. Special attention is focused here on the quantification of model uncertainty through the use of ensemble simulations. Application to emergency-response dispersion modeling is illustrated using an actual event that involved the accidental release of the toxic chemical oleum. Both surface footprints of mass concentration and the associated probability distributions at individual receptors are seen to provide valuable quantitative indicators of the range of expected concentrations and their associated uncertainty.
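The receptor-level probability distributions mentioned above follow directly from the set of per-member dispersion runs; a minimal sketch (assuming an array of member concentrations at fixed receptors and an illustrative exceedance threshold) is:

import numpy as np

def receptor_statistics(concs, probs=(5, 50, 95), level=1.0e-6):
    # concs: array of shape (n_members, n_receptors), one dispersion run per member
    concs = np.asarray(concs, dtype=float)
    pcts = np.percentile(concs, probs, axis=0)   # e.g. 5th/50th/95th percentiles
    p_exceed = (concs > level).mean(axis=0)      # fraction of members above `level`
    return pcts, p_exceed

The spread between the percentile surfaces is the quantitative indicator of forecast uncertainty that a single deterministic run cannot provide.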
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
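A compact sketch of the chain described above (a quadratic response surface with single and two-factor interaction terms, interrogated by Monte Carlo simulation) follows; the coefficient vector, input means, and dispersions are hypothetical placeholders rather than the study's fitted values:

import numpy as np

rng = np.random.default_rng(1)

def quadratic_rs(x, beta):
    # Quadratic response surface in three inputs with two-factor interactions:
    # basis = [1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3]
    x1, x2, x3 = x
    basis = np.array([1.0, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3])
    return basis @ beta

beta = np.array([0.30, 0.02, 0.01, 0.05, -0.004, -0.002, -0.006, 0.003, 0.001, 0.002])
# Dispersed operating/mixture inputs (hypothetical means and standard deviations)
samples = rng.normal([0.5, 0.3, 0.2], [0.05, 0.03, 0.02], size=(10_000, 3))
r_dot = np.array([quadratic_rs(s, beta) for s in samples])  # dispersed regression rates
print(r_dot.mean(), np.percentile(r_dot, [2.5, 97.5]))      # central value and spread

Because the response surface is cheap to evaluate, the Monte Carlo loop can sweep the full mixture and operating space, which is what makes the optimization-under-uncertainty interrogation tractable.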
Asymmetric Uncertainty Expression for High Gradient Aerodynamics
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T
2012-01-01
When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), the measurements that are performed in a simulated environment like a wind tunnel test or a computational simulation will most likely incorrectly predict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, a new asymmetric aerodynamic uncertainty expression containing an extra term to account for a phase-uncertainty, the magnitude of which is emphasized in the high-gradient aerodynamic regions, is proposed in this paper. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.
Synchronic interval Gaussian mixed-integer programming for air quality management.
Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong
2015-12-15
To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approximating the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region of Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. The interests of many stakeholders are reasonably coordinated. Harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can help decision makers mitigate potential risks, e.g., insufficient pollutant treatment capacity, exceedance of air quality standards, shortfalls in pollution control funding, or imbalanced economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.
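The deterministic kernel inside such an interval Gaussian dispersion model is the familiar Gaussian plume relation; a minimal sketch (steady point source, ground-level receptor with full ground reflection, and illustrative power-law dispersion coefficients rather than the paper's interval-valued ones) is:

import numpy as np

def gaussian_plume(x, y, Q, u, H, a=0.08, b=0.894, c=0.06, d=0.905):
    # Ground-level concentration from a steady point source of strength Q (g/s)
    # at effective stack height H (m), wind speed u (m/s), receptor at (x, y) m.
    # The sigma power laws below are illustrative stand-ins for
    # stability-class-dependent curves.
    sigma_y = a * x ** b               # lateral dispersion parameter (m)
    sigma_z = c * x ** d               # vertical dispersion parameter (m)
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * np.exp(-H ** 2 / (2.0 * sigma_z ** 2)))

In the SIGMIP setting, parameters such as u and the sigma coefficients would be replaced by intervals that vary by season, so each receptor concentration becomes an interval rather than a single number.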
Radial position-momentum uncertainties for the infinite circular well and Fisher entropy
NASA Astrophysics Data System (ADS)
Torres-Arenas, Ariadna J.; Dong, Qian; Sun, Guo-Hua; Dong, Shi-Hai
2018-07-01
We show how the product of the radial position and momentum uncertainties can be obtained analytically for the infinite circular well potential. Some interesting features are found. First, the uncertainty Δr increases with the radius R and with the quantum number n, which indexes the n-th root of the Bessel function. The variation of Δr is almost independent of n and approaches a constant for large n, say n > 4. Second, we find that the relative dispersion Δr / 〈 r 〉 is independent of the radius R. Moreover, the relative dispersion increases with the quantum number n but decreases with the azimuthal quantum number m. Third, the momentum uncertainty Δp decreases with the radius R and increases with the quantum numbers m > 1 and n. Fourth, the product ΔrΔp_r of the position-momentum uncertainty relation is independent of the radius R and increases with the quantum numbers m and n. Finally, we present the analytical expression for the Fisher entropy. Notice that the Fisher entropy decreases with the radius R and increases with the quantum numbers m > 0 and n. Also, we find that the Cramer-Rao uncertainty relation is satisfied, and it, too, increases with the quantum numbers m > 0 and n.
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of the solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacterial growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács model) and one based on parabolic (the more recently presented Plósz model) partial differential equations. The former model, which relies on numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion of both SST models are calculated. Second, simulation results for effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty from the settler model structure to the biokinetic model, the impact of the SST model as a sub-model in a plant-wide model on the overall model performance is evaluated. A long-term simulation of a bulking event is conducted that spans the temperature evolution of a summer/winter sequence. The model prediction of nitrogen removal, solids inventory in the bioreactors, and solids retention time as a function of the solids settling behaviour is investigated. It is found that the settler behaviour, simulated by the hyperbolic model, can introduce significant errors into the approximation of the solids retention time and thus the solids inventory of the system. We demonstrate that these impacts can potentially cause deterioration of the predictive power of the biokinetic model, evidenced by an evaluation of the system's nitrogen removal efficiency. The convection-dispersion model exhibits superior behaviour, and the use of this type of model is thus highly recommended, especially bearing in mind future challenges, e.g., the explicit representation of uncertainty in WWTP models.
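The distinction drawn above between explicit (physical) dispersion and numerical dispersion can be made concrete with a small finite-difference sketch (Python, periodic boundaries for brevity): the first-order upwind advection term contributes an artificial dispersion of roughly (v*dx/2)*(1 - v*dt/dx) on top of the explicit coefficient D, which is exactly the kind of artifact a settler-model comparison must separate from the physical term:

import numpy as np

def advect_disperse(c0, v, D, dx, dt, n_steps):
    # Explicit first-order upwind advection plus central-difference dispersion.
    assert v * dt / dx <= 1.0, "Courant condition violated"
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(n_steps):
        adv = -v * (c - np.roll(c, 1)) / dx                       # upwind advection
        dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2  # explicit dispersion
        c = c + dt * (adv + dif)
    return c

Running the sketch with D = 0 still smears a sharp front, illustrating how a hyperbolic settler model can appear to behave realistically purely through numerical dispersion.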
NASA Technical Reports Server (NTRS)
Sifon, Cristobal; Battaglia, Nick; Hasselfield, Matthew; Menanteau, Felipe; Barrientos, L. Felipe; Bond, J. Richard; Crichton, Devin; Devlin, Mark J.; Dunner, Rolando; Hilton, Matt;
2016-01-01
We present galaxy velocity dispersions and dynamical mass estimates for 44 galaxy clusters selected via the Sunyaev-Zeldovich (SZ) effect by the Atacama Cosmology Telescope. Dynamical masses for 18 clusters are reported here for the first time. Using N-body simulations, we model the different observing strategies used to measure the velocity dispersions and account for systematic effects resulting from these strategies. We find that the galaxy velocity distributions may be treated as isotropic, and that an aperture correction of up to 7 per cent in the velocity dispersion is required if the spectroscopic galaxy sample is sufficiently concentrated towards the cluster centre. Accounting for the radial profile of the velocity dispersion in simulations enables consistent dynamical mass estimates regardless of the observing strategy. Cluster masses M200 are in the range (1-15) × 10^14 solar masses. Comparing with masses estimated from the SZ distortion assuming a gas pressure profile derived from X-ray observations gives a mean SZ-to-dynamical mass ratio of 1.10 ± 0.13, but there is an additional 0.14 systematic uncertainty due to the unknown velocity bias; the statistical uncertainty is dominated by the scatter in the mass-velocity dispersion scaling relation. This ratio is consistent with previous determinations at these mass scales.
Constant-Elasticity-of-Substitution Simulation
NASA Technical Reports Server (NTRS)
Reiter, G.
1986-01-01
Program simulates constant elasticity-of-substitution (CES) production function. CES function used by economic analysts to examine production costs as well as uncertainties in production. User provides such input parameters as price of labor, price of capital, and dispersion levels. CES minimizes expected cost to produce capital-uncertainty pair. By varying capital-value input, one obtains series of capital-uncertainty pairs. Capital-uncertainty pairs then used to generate several cost curves. CES program menu driven and features specific print menu for examining selected output curves. Program written in BASIC for interactive execution and implemented on IBM PC-series computer.
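The CES functional form at the core of the program is compact enough to sketch (Python here rather than the program's BASIC; parameter values are illustrative):

def ces_output(K, L, A=1.0, delta=0.4, rho=0.5):
    # Constant-elasticity-of-substitution production function;
    # the elasticity of substitution is 1 / (1 + rho).
    return A * (delta * K ** (-rho) + (1.0 - delta) * L ** (-rho)) ** (-1.0 / rho)

# Sweeping the capital input at fixed labor traces out the output levels that the
# program's cost-minimization step pairs with dispersion (uncertainty) levels.
for K in (1.0, 2.0, 4.0):
    print(K, ces_output(K, L=2.0))

Varying the capital value while holding the other inputs fixed is what generates the series of capital-uncertainty pairs described in the abstract.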
Preliminary Performance Analyses of the Constellation Program ARES 1 Crew Launch Vehicle
NASA Technical Reports Server (NTRS)
Phillips, Mark; Hanson, John; Shmitt, Terri; Dukemand, Greg; Hays, Jim; Hill, Ashley; Garcia, Jessica
2007-01-01
By the time NASA's Exploration Systems Architecture Study (ESAS) report had been released to the public in December 2005, engineers at NASA's Marshall Space Flight Center had already initiated the first of a series of detailed design analysis cycles (DACs) for the Constellation Program Crew Launch Vehicle (CLV), which has been given the name Ares I. As a major component of the Constellation Architecture, the CLV's initial role will be to deliver crew and cargo aboard the newly conceived Crew Exploration Vehicle (CEV) to a staging orbit for eventual rendezvous with the International Space Station (ISS). However, the long-term goal and design focus of the CLV will be to provide launch services for a crewed CEV in support of lunar exploration missions. Key to the success of the CLV design effort and an integral part of each DAC is a detailed performance analysis tailored to assess nominal and dispersed performance of the vehicle, to determine performance sensitivities, and to generate design-driving dispersed trajectories. Results of these analyses provide valuable design information to the program for the current design as well as provide feedback to engineers on how to adjust the current design in order to maintain program goals. This paper presents a condensed subset of the CLV performance analyses performed during the CLV DAC-1 cycle. Deterministic studies include development of the CLV DAC-1 reference trajectories, identification of vehicle stage impact footprints, an assessment of launch window impacts to payload performance, and the computation of select CLV payload partials. Dispersion studies include definition of input uncertainties, Monte Carlo analysis of trajectory performance parameters based on input dispersions, assessment of CLV flight performance reserve (FPR), assessment of orbital insertion accuracy, and an assessment of bending load indicators due to dispersions in vehicle angle of attack and side slip angle. A short discussion of the various customers for the dispersion results, along with the results and ramifications of each study, is also provided.
At the request of the US EPA Oil Program Center, ERD is developing an oil spill model that focuses on fate and transport of oil components under various response scenarios. This model includes various simulation options, including the use of chemical dispersing agents on oil sli...
USDA-ARS?s Scientific Manuscript database
The backward Lagrangian stochastic (bLS) inverse-dispersion technique has been used to measure fugitive gas emissions from livestock operations. The accuracy of the bLS technique, as indicated by the percentages of gas recovery in various tracer-release experiments, has generally been within ± 10% o...
Wavelength-resolved emission spectroscopy of the alkoxy and alkylthio radicals in a supersonic jet
NASA Technical Reports Server (NTRS)
Misra, Prabhakar; Zhu, Xinming; Hsueh, Ching-Yu; Kamal, Mohammed M.
1993-01-01
Wavelength-resolved emission spectra of methoxy (CH3O) and methylthio (CH3S) radicals have been obtained in a supersonic jet environment with a resolution of 0.3 nm by dispersing the total laser-induced fluorescence with a 0.6 m monochromator. A detailed analysis of the single vibronic level dispersed fluorescence spectra yields the following vibrational frequencies for CH3O in the X ²E state: ν₁″ = 2953 cm⁻¹, ν₂″ = 1375 cm⁻¹, ν₃″ = 1062 cm⁻¹, ν₄″ = 2869 cm⁻¹, ν₅″ = 1528 cm⁻¹, and ν₆″ = 688 cm⁻¹. A similar analysis of the wavelength-resolved emission spectra of CH3S provides the following ground-state vibrational frequencies: ν₂″ = 1329 cm⁻¹, ν₃″ = 739 cm⁻¹, and ν₆″ = 601 cm⁻¹. An experimental uncertainty of 20 cm⁻¹ is estimated for the assigned frequencies.
SRNL PARTICIPATION IN THE MULTI-SCALE ENSEMBLE EXERCISES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R
2007-10-29
Consequence assessment during emergency response often requires atmospheric transport and dispersion modeling to guide decision making. A statistical analysis of the ensemble of results from several models is a useful way of estimating the uncertainty for a given forecast. ENSEMBLE is a European Union program that utilizes an internet-based system to ingest transport results from numerous modeling agencies. A recent set of exercises required output on three distinct spatial and temporal scales. The Savannah River National Laboratory (SRNL) uses a regional prognostic model nested within a larger-scale synoptic model to generate the meteorological conditions, which are in turn used in a Lagrangian particle dispersion model. A discussion of SRNL participation in these exercises is given, with particular emphasis on requirements for provision of results in a timely manner with regard to the various spatial scales.
Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J
2013-05-01
Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
Terminal altitude maximization for Mars entry considering uncertainties
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Zhao, Zeduan; Yu, Zhengshi; Dai, Juan
2018-04-01
Uncertainties present in the Mars atmospheric entry process may cause state deviations from the nominal designed values, which will lead to unexpected performance degradation if the trajectory is designed merely on the basis of a deterministic dynamic model. In this paper, a linear-covariance-based entry trajectory optimization method is proposed that accounts for the uncertainties present in the initial states and parameters. By extending the elements of the state covariance matrix as augmented states, the statistical behavior of the trajectory is captured to reformulate the performance metrics and path constraints. The optimization problem is solved with the GPOPS-II toolbox in the MATLAB environment. Monte Carlo simulations are also conducted to demonstrate the capability of the proposed method. A primary trade-off between the nominal deployment altitude and its dispersion can be observed by modulating the weights on the dispersion penalty, and a compromise result that maximizes the 3σ lower bound of the terminal altitude is achieved. The resulting path constraints also show better satisfaction in a disturbed environment compared with the nominal situation.
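The linear covariance mechanics underlying such a method can be sketched briefly (Python; the transition matrices F_k and process noise Q_k would come from linearizing the entry dynamics, which is not shown here):

import numpy as np

def propagate_covariance(P0, F_list, Q_list):
    # Discrete linear covariance propagation: P_{k+1} = F_k P_k F_k^T + Q_k.
    # This is how the optimizer 'sees' state dispersions along the trajectory.
    P = np.array(P0, dtype=float)
    history = [P.copy()]
    for F, Q in zip(F_list, Q_list):
        P = F @ P @ F.T + Q
        history.append(P.copy())
    return history

# After propagation, a 3-sigma lower bound on terminal altitude is
# h_nom - 3.0 * np.sqrt(P_hist[-1][i_alt, i_alt]), with i_alt the altitude state index;
# maximizing that bound is the compromise objective described in the abstract.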
Facilitating climate-change-induced range shifts across continental land-use barriers.
Robillard, Cassandra M; Coristine, Laura E; Soares, Rosana N; Kerr, Jeremy T
2015-12-01
Climate changes impose requirements for many species to shift their ranges to remain within environmentally tolerable areas, but near-continuous regions of intense human land use stretching across continental extents diminish dispersal prospects for many species. We reviewed the impact of habitat loss and fragmentation on species' abilities to track changing climates and existing plans to facilitate species dispersal in response to climate change through regions of intensive land uses, drawing on examples from North America and elsewhere. We identified an emerging analytical framework that accounts for variation in species' dispersal capacities relative to both the pace of climate change and habitat availability. Habitat loss and fragmentation hinder climate change tracking, particularly for specialists, by impeding both propagule dispersal and population growth. This framework can be used to identify prospective modern-era climatic refugia, where the pace of climate change has been slower than surrounding areas, that are defined relative to individual species' needs. The framework also underscores the importance of identifying and managing dispersal pathways or corridors through semi-continental land use barriers that can benefit many species simultaneously. These emerging strategies to facilitate range shifts must account for uncertainties around population adaptation to local environmental conditions. Accounting for uncertainties in climate change and dispersal capabilities among species and expanding biological monitoring programs within an adaptive management paradigm are vital strategies that will improve species' capacities to track rapidly shifting climatic conditions across landscapes dominated by intensive human land use. © 2015 Society for Conservation Biology.
Fast radio bursts as a cosmic probe?
NASA Astrophysics Data System (ADS)
Zhou, Bei; Li, Xiang; Wang, Tao; Fan, Yi-Zhong; Wei, Da-Ming
2014-05-01
We discuss the possibility of using fast radio bursts (FRBs), if cosmological, as a viable cosmic probe. We find that the contribution of the host galaxies to the detected dispersion measures can be negligible for FRBs that do not originate from galaxy centers or star-forming regions. The inhomogeneity of the intergalactic medium (IGM), however, causes significant deviation of the dispersion measure from that predicted by the simplified homogeneous IGM model for an individual event. Fortunately, with sufficient FRBs along different sightlines but within a very narrow redshift interval (e.g., Δz ≈ 0.05), the mean obtained by averaging the observed dispersion measures does not suffer from this problem and hence may be used as a cosmic probe. We show that in the optimistic case (e.g., about 20 FRBs measured in each Δz; the most distant FRBs at redshift ≥ 3; the host galaxies and the FRB sources contributing little to the detected dispersion measures) and with all the uncertainties (i.e., the inhomogeneity of the IGM, the contribution and uncertainty of host galaxies, and the evolution and error of f_IGM) considered, FRBs could help constrain the equation of state of dark energy.
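The "simplified homogeneous IGM model" referred to above is usually written as an integral over redshift; a hedged numerical sketch (flat ΛCDM, with commonly adopted parameter values that are assumptions here, not the paper's) is:

import numpy as np
from scipy.integrate import quad
from scipy.constants import c, G, m_p

MPC = 3.0857e22                      # metres per megaparsec
H0 = 70.0e3 / MPC                    # Hubble constant (s^-1), assuming 70 km/s/Mpc
Om, OL = 0.3, 0.7                    # flat LCDM density parameters (assumed)
Ob, f_igm, chi_e = 0.049, 0.83, 7.0 / 8.0   # baryon density, IGM fraction, e- per baryon

def mean_dm_igm(z):
    # Mean dispersion measure (pc cm^-3) of a homogeneous, fully ionized IGM
    E = lambda zp: np.sqrt(Om * (1.0 + zp) ** 3 + OL)
    I, _ = quad(lambda zp: (1.0 + zp) / E(zp), 0.0, z)
    dm_si = 3.0 * c * H0 * Ob * f_igm * chi_e / (8.0 * np.pi * G * m_p) * I  # m^-2
    return dm_si / 3.0857e22          # 1 pc cm^-3 = 3.0857e22 m^-2

print(mean_dm_igm(1.0))               # roughly 900 pc cm^-3 for these choices

Averaging many sightlines at fixed z recovers this mean despite the sightline-to-sightline scatter caused by IGM inhomogeneity, which is the paper's central point.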
Gene expression models for prediction of longitudinal dispersion coefficient in streams
NASA Astrophysics Data System (ADS)
Sattar, Ahmed M. A.; Gharabaghi, Bahram
2015-05-01
Longitudinal dispersion is the key hydrologic process that governs the transport of pollutants in natural streams. It is critical for spill action centers to be able to predict the pollutant travel time and break-through curves accurately following accidental spills in urban streams. This study presents a novel gene expression model for longitudinal dispersion developed using 150 published data sets of geometric and hydraulic parameters in natural streams in the United States, Canada, Europe, and New Zealand. The training and testing of the model were accomplished using randomly selected 67% (100 data sets) and 33% (50 data sets) of the data, respectively. Gene expression programming (GEP) is used to develop empirical relations between the longitudinal dispersion coefficient and various control variables, including the Froude number, which reflects the effect of reach slope, aspect ratio, and bed material roughness on the dispersion coefficient. Two GEP models have been developed, and the prediction uncertainties of the developed GEP models are quantified and compared with those of existing models, showing improved prediction accuracy in favor of the GEP models. Finally, a parametric analysis is performed for further verification of the developed GEP models. The main reason for the higher accuracy of the GEP models compared to the existing regression models is that the exponents of the key variables (aspect ratio and bed material roughness) are not constants but a function of the Froude number. The proposed relations are both simple and accurate and can be effectively used to predict longitudinal dispersion coefficients in natural streams.
Using heat as a tracer to estimate spatially distributed mean residence times in the hyporheic zone
NASA Astrophysics Data System (ADS)
Naranjo, R. C.; Pohll, G. M.; Stone, M. C.; Niswonger, R. G.; McKay, W. A.
2013-12-01
Biogeochemical reactions that occur in the hyporheic zone are highly dependent on the time solutes are in contact with riverbed sediments. In this investigation, we developed a two-dimensional longitudinal flow and solute transport model to estimate the spatial distribution of mean residence time in the hyporheic zone along a riffle-pool sequence, to gain a better understanding of nitrogen reactions. The model accounts for the mixing of ages under advection and dispersion and was calibrated using observations of temperature and pressure. Uncertainty in the flow and transport parameters was evaluated using standard Monte Carlo analysis and the generalized likelihood uncertainty estimation method. Results of the parameter estimation indicate the presence of a low-permeability zone in the riffle area that induced horizontal flow at shallow depth within the riffle area. This establishes shallow, localized flow paths and limits deep vertical exchange. From the optimal model, mean residence times were found to be relatively long (9-40 days). The uncertainty in hydraulic conductivity resulted in a mean interquartile range of 13 days across all piezometers, which was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in mean interquartile ranges of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity when simulating mean residence times in a riffle-pool sequence. It is demonstrated that the spatially variable mean residence time beneath a riffle-pool system does not conform to simple conceptual models of hyporheic flow through a riffle-pool sequence. Rather, the mixing behavior between the river and the hyporheic flow is largely controlled by layered heterogeneity and anisotropy of the subsurface.
Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.
NASA Astrophysics Data System (ADS)
Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin
1998-11-01
Numerous numerical models are developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model manages to predict the measured puff evolution concerning shape and time of arrival to a fairly high extent, up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.
The Hubble Constant from Supernovae
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Macri, Lucas M.
The decades-long quest to obtain a precise and accurate measurement of the local expansion rate of the universe (the Hubble Constant or H0) has greatly benefited from the use of supernovae (SNe). Starting from humble beginnings (dispersions of ˜0.5 mag in the Hubble flow in the late 1960s/early 1970s), the increasingly sophisticated understanding, classification, and analysis of these events turned type Ia SNe into the premier choice for a secondary distance indicator by the early 1990s. While some systematic uncertainties specific to SNe and to Cepheid-based distances to the calibrating host galaxies still contribute to the H0 error budget, the major emphasis over the past two decades has been on reducing the statistical uncertainty by obtaining ever-larger samples of distances to SN hosts. Building on early efforts with the first-generation instruments on the Hubble Space Telescope, recent observations with the latest instruments on this facility have reduced the estimated total uncertainty on H0 to 2.4% and shown a path to reach a 1% measurement by the end of the decade, aided by Gaia and the James Webb Space Telescope.
Infrared radiation and stealth characteristics prediction for supersonic aircraft with uncertainty
NASA Astrophysics Data System (ADS)
Pan, Xiaoying; Wang, Xiaojun; Wang, Ruixing; Wang, Lei
2015-11-01
The infrared radiation (IR) intensity is generally used to characterize the stealth performance of a supersonic aircraft, which directly affects its survivability in warfare. Under such circumstances, research on the IR signature, as an important branch of stealth technology, is significant for overcoming this threat and enhancing survivability. Considering the existence of uncertainties in material and environment, the IR intensity is indeed a range rather than a specific value. In this paper, an analytic process comprising uncertainty propagation and reliability evaluation is investigated, in which the temperature of the object, the atmospheric transmittance, and the spectral emissivity of materials are all regarded as uncertain parameters. For one thing, the vertex method is used to analyze and estimate the dispersion of the IR intensity; for another, the safety assessment of the stealth performance of the aircraft is conducted by non-probabilistic reliability analysis. For comparison and verification, a Monte Carlo simulation is discussed as well. The validity, usage, and efficiency of the developed methodology are demonstrated by two application examples.
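The vertex method itself is simple to sketch: evaluate the response at every corner of the interval parameter box and take the extremes, which is exact when the response is monotonic in each parameter and otherwise only bounds part of the true output interval. The grey-body response below is a hypothetical stand-in for the paper's IR-intensity model:

import itertools

def vertex_bounds(model, intervals):
    # Evaluate the response at all 2^n vertices of the interval box [lo, hi]^n
    vals = [model(v) for v in itertools.product(*intervals)]
    return min(vals), max(vals)

# Hypothetical IR proxy: emissivity * transmittance * sigma * T^4 (Stefan-Boltzmann)
ir_proxy = lambda p: p[2] * p[1] * 5.67e-8 * p[0] ** 4
lo, hi = vertex_bounds(ir_proxy,
                       [(280.0, 320.0),   # object temperature interval (K)
                        (0.6, 0.8),       # atmospheric transmittance interval
                        (0.85, 0.95)])    # spectral emissivity interval

Because only 2^n model runs are needed, the vertex method is far cheaper than Monte Carlo sampling, which is why the paper uses the latter only for verification.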
Transverse Densities of Octet Baryons from Chiral Effective Field Theory
Alarcón, Jose Manuel; Hiller Blin, Astrid N.; Weiss, Christian
2017-03-24
Transverse densities describe the distribution of charge and current at fixed light-front time and provide a frame-independent spatial representation of hadrons as relativistic systems. In this paper, we calculate the transverse densities of the octet baryons at peripheral distances b = O(M_π⁻¹) in an approach that combines chiral effective field theory (χEFT) and dispersion analysis. The densities are represented as dispersive integrals of the imaginary parts of the baryon electromagnetic form factors in the timelike region (spectral functions). The spectral functions on the two-pion cut at t > 4M_π² are computed using relativistic χEFT with octet and decuplet baryons in the extended on-mass-shell renormalization scheme. The calculations are extended into the ρ-meson mass region using a dispersive method that incorporates the timelike pion form-factor data. The approach allows us to construct densities at distances b > 1 fm with controlled uncertainties. Finally, our results provide insight into the peripheral structure of nucleons and hyperons and can be compared with empirical densities and lattice-QCD calculations.
ANDROMEDA DWARFS IN LIGHT OF MODIFIED NEWTONIAN DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGaugh, Stacy; Milgrom, Mordehai
We compare the recently published velocity dispersions for 17 Andromeda dwarf spheroidals with estimates of the modified Newtonian dynamics predictions, based on the luminosities of these dwarfs, with reasonable stellar mass-to-light values and no dark matter. We find that the two are consistent within the uncertainties. We further predict the velocity dispersions of another 10 dwarfs for which only photometric data are currently available.
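For an isolated system in the deep-MOND regime, the predicted line-of-sight velocity dispersion follows from the stellar mass alone via the standard relation σ⁴ = (4/81) G M a₀; the sketch below (Python) applies it with a luminosity and an assumed stellar mass-to-light ratio, both of which are illustrative inputs rather than values from the paper:

G = 4.301e-6     # gravitational constant, kpc (km/s)^2 / Msun
a0 = 3.7e3       # MOND acceleration scale (~1.2e-10 m/s^2) in (km/s)^2 / kpc

def sigma_mond_isolated(L_V, m_to_l=2.0):
    # Deep-MOND dispersion estimate for an isolated dwarf spheroidal:
    # sigma^4 = (4/81) G M a0, with stellar mass M = (M/L) * L_V (solar units)
    M = m_to_l * L_V
    return (4.0 / 81.0 * G * M * a0) ** 0.25   # km/s

print(sigma_mond_isolated(1.0e6))   # a dwarf of 10^6 Lsun gives sigma of a few km/s

Dwarfs dominated by the external field of M31 require a different estimator, which is one reason the comparison in the abstract carries per-object uncertainties.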
Brian K. Hand; Samuel A. Cushman; Erin L. Landguth; John Lucotch
2014-01-01
Quantifying the effects of landscape change on population connectivity is compounded by uncertainties about population size and distribution and a limited understanding of dispersal ability for most species. In addition, the effects of anthropogenic landscape change and sensitivity to regional climatic conditions interact to strongly affect habitat...
NASA Astrophysics Data System (ADS)
Poppeliers, C.; Preston, L. A.
2017-12-01
Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group and phase velocity, a series of narrow-band filters is applied to surface wave seismograms. Frequency-dependent arrival times of surface waves can then be identified from the resulting suite of narrow-band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique has no means of estimating the uncertainty of the measured surface wave velocities, and consequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method of estimating surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized with respect to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group and phase velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties on resolved Earth structure. As proof of concept, we apply our technique to four seismic ambient noise correlograms collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group and phase velocities, as well as their uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities obtained by inverting dispersion curves estimated from a conventional Gaussian filter bank.
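The "conventional Gaussian filter bank" benchmark mentioned above can be sketched compactly (Python; the relative bandwidth and envelope-peak picking rule are illustrative choices, not the study's settings):

import numpy as np
from scipy.signal import hilbert

def group_arrivals(trace, dt, freqs, rel_bw=0.1):
    # Band-pass the trace around each centre frequency with a Gaussian window,
    # take the envelope via the analytic signal, and pick its peak time.
    n = len(trace)
    spec = np.fft.rfft(trace)
    f = np.fft.rfftfreq(n, dt)
    picks = []
    for fc in freqs:
        gauss = np.exp(-0.5 * ((f - fc) / (rel_bw * fc)) ** 2)   # narrow-band window
        narrow = np.fft.irfft(spec * gauss, n)
        env = np.abs(hilbert(narrow))                             # envelope
        picks.append(np.argmax(env) * dt)                         # group arrival time
    return np.array(picks)   # group velocity = interstation distance / picks

Unlike the multiwavelet transform, this single-taper scheme yields one arrival pick per frequency and therefore no internal estimate of its own uncertainty.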
Islam, M T; Trevorah, R M; Appadoo, D R T; Best, S P; Chantler, C T
2017-04-15
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr² analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Manvi, R.; Fujita, T.
1978-01-01
A preliminary comparative evaluation of dispersed solar thermal power plants utilizing advanced technologies available in the 1985-2000 time frame is under way at JPL. The solar power plants, of 50 kWe to 10 MWe size, are equipped with two-axis tracking parabolic dish concentrator systems operating at temperatures in excess of 1000°F. The energy conversion schemes under consideration include advanced steam, open- and closed-cycle gas turbines, Stirling, and combined cycles. The energy storage systems include advanced batteries, liquid metal, and chemical storage. This paper outlines a simple methodology for a probabilistic assessment of such systems. Sources of uncertainty in the development of advanced systems are identified, and a Monte Carlo computer simulation is exercised to permit an analysis of the tradeoffs between the risk of failure and the potential for large gains. Frequency distributions of energy cost for several alternatives are presented.
Regional crop yield forecasting: a probabilistic approach
NASA Astrophysics Data System (ADS)
de Wit, A.; van Diepen, K.; Boogaard, H.
2009-04-01
Information on the outlook on yield and production of crops over large regions is essential for government services dealing with import and export of food crops, for agencies with a role in food relief, for international organizations with a mandate in monitoring the world food production and trade, and for commodity traders. Process-based mechanistic crop models are an important tool for providing such information, because they can integrate the effect of crop management, weather and soil on crop growth. When properly integrated in a yield forecasting system, the aggregated model output can be used to predict crop yield and production at regional, national and continental scales. Nevertheless, given the scales at which these models operate, the results are subject to large uncertainties due to poorly known weather conditions and crop management. Current yield forecasting systems are generally deterministic in nature and provide no information about the uncertainty bounds on their output. To improve on this situation we present an ensemble-based approach where uncertainty bounds can be derived from the dispersion of results in the ensemble. The probabilistic information provided by this ensemble-based system can be used to quantify uncertainties (risk) on regional crop yield forecasts and can therefore be an important support to quantitative risk analysis in a decision making process.
Numerical simulations of LNG vapor dispersion in Brayton Fire Training Field tests with ANSYS CFX.
Qi, Ruifeng; Ng, Dedy; Cormier, Benjamin R; Mannan, M Sam
2010-11-15
Federal safety regulations require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. One tool being developed in industry for exclusion zone determination and LNG vapor dispersion modeling is computational fluid dynamics (CFD). This paper uses the ANSYS CFX CFD code to model LNG vapor dispersion in the atmosphere. Discussed are important parameters that are essential inputs to the ANSYS CFX simulations, including the atmospheric conditions, LNG evaporation rate and pool area, turbulence in the source term, ground surface temperature and roughness height, and effects of obstacles. A sensitivity analysis was conducted to illustrate uncertainties in the simulation results arising from the mesh size and source-term turbulence intensity. In addition, a set of medium-scale LNG spill tests was performed at the Brayton Fire Training Field to collect data for validating the ANSYS CFX predictions. A comparison of test data with simulation results demonstrated that CFX was able to describe the dense-gas behavior of the LNG vapor cloud, and its predictions of downwind gas concentrations close to ground level were in approximate agreement with the test data.
Kovalets, Ivan V; Asker, Christian; Khalchenkov, Alexander V; Persson, Christer; Lavrova, Tatyana V
2017-06-01
Simulations of atmospheric dispersion of radon around the uranium mill tailings of the former Pridneprovsky Chemical Plant (PChP) in Ukraine were carried out with the aid of two atmospheric dispersion models: the Airviro Grid Model and the CALMET/CALPUFF model chain. The available measurements of radon emission rates taken in the territories and the close vicinity of the tailings were used in the simulations. The simulation results were compared to yearly averaged concentration measurements. Both models were able to reasonably reproduce the average radon concentration at the Sukhachivske site using averaged measured emission rates as input together with the measured meteorological data. At the same time, both models significantly underestimated concentrations compared to measurements collected at the PChP industrial site. According to the results of both dispersion models, only the addition of a significant radon emission rate from the whole territory of PChP, beyond the emission rates from the tailings, could explain the observed concentration measurements. With the aid of the uncertainty analysis, the radon emission rate from the whole territory of PChP was estimated to be between 1.5 and 3.5 Bq·m⁻²·s⁻¹.
NASA Technical Reports Server (NTRS)
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high-contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per-pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to select the higher-quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
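The core registration step described above can be illustrated with a brief sketch: find the integer translation of a sub-image that maximizes its cross-correlation with a reference scene, and read that offset off as the navigation error. This is a minimal stand-in assuming simple mean-removed correlation, not the paper's upsampling/downsampling chain; the function and test arrays are hypothetical.

```python
import numpy as np
from scipy.signal import correlate2d

def registration_shift(sub_img, ref_img):
    """Translation (dy, dx) of sub_img that maximizes correlation with ref_img.

    Both inputs are 2-D arrays of the same shape; means are removed so the
    peak reflects scene structure rather than brightness offsets."""
    a = sub_img - sub_img.mean()
    b = ref_img - ref_img.mean()
    corr = correlate2d(a, b, mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Offset of the correlation peak from the zero-shift position.
    dy = peak[0] - (ref_img.shape[0] - 1)
    dx = peak[1] - (ref_img.shape[1] - 1)
    return dy, dx

# Toy scene: the 'ABI' sub-image is the 'Landsat' scene shifted by (2, -3),
# so the recovered shift plays the role of the navigation error.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
shifted = np.roll(truth, shift=(2, -3), axis=(0, 1))
print(registration_shift(shifted, truth))  # -> (2, -3)
```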
NASA Astrophysics Data System (ADS)
Selva, Jacopo; Scollo, Simona; Costa, Antonio; Brancato, Alfonso; Prestifilippo, Michele
2015-04-01
Tephra dispersal, even in small amounts, may heavily affect public health and critical infrastructure, such as airports, train and road networks, and electric power supply systems. Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at managing and mitigating the risk posed by volcanic activity during crises and eruptions. Short-term PVHA (over time intervals on the order of hours to a few days) must account for rapidly changing information coming from the monitoring system as well as updated wind forecasts, and it must be accomplished in near-real-time. In addition, while during unrest the primary goal is to forecast potential eruptions, during eruptions it is also fundamental to correctly account for the real-time status of the eruption and of tephra dispersal, as well as their potential evolution in the short term. Here, we present a preliminary application of the BET_VHst model (Selva et al. 2014) for Mt. Etna. The model has its roots in present-day deterministic procedures, and it deals with the large uncertainty that such procedures typically ignore, such as uncertainty in the potential position of the vent and the eruptive size, in the possible evolution of volcanological inputs during ongoing eruptions, and in the wind field. Uncertainty is treated by making use of Bayesian inference, alternative modeling procedures for tephra dispersal, and statistical mixing of long- and short-term analyses. References: Selva J., Costa A., Sandri L., Macedonio G., Marzocchi W. (2014) Probabilistic short-term volcanic hazard in phases of unrest: a case study for tephra fallout, J. Geophys. Res., 119, doi:10.1002/2014JB011252.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Urban water quality management often requires the use of numerical models allowing the evaluation of the cause-effect relationship between the input(s) (i.e. rainfall, pollutant concentrations on the catchment surface and in the sewer system) and the resulting water quality response. The conventional approach to the system (i.e. sewer system, wastewater treatment plant and receiving water body), considering each component separately, does not enable optimisation of the whole system. However, recent gains in understanding and modelling make it possible to represent the system as a whole and optimise its overall performance. Indeed, integrated urban drainage modelling is of growing interest as a tool to cope with Water Framework Directive requirements. Two different approaches can be employed for modelling the whole urban drainage system: detailed and simplified. Each has its advantages and disadvantages. Specifically, detailed approaches can offer a higher level of reliability in the model results, but can be very time consuming from the computational point of view. Simplified approaches are faster but may lead to greater model uncertainty due to over-simplification. To gain insight into this problem, two different modelling approaches have been compared with respect to their uncertainty. The first integrated urban drainage model uses the Saint-Venant equations and the 1D advection-dispersion equation for the quantity and quality aspects, respectively. The second approach consists of a simplified reservoir model. The analysis used a parsimonious bespoke model developed in previous studies. For the uncertainty analysis, the Generalised Likelihood Uncertainty Estimation (GLUE) procedure was used. Model reliability was evaluated on the basis of the capability to globally limit the uncertainty. Both models fit the experimental data well, suggesting that the adopted approaches are equivalent for both quantity and quality. The detailed approach is more robust and presents less uncertainty in terms of uncertainty bands. On the other hand, the simplified river water quality model shows higher uncertainty and may be unsuitable for receiving water body quality assessment.
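The GLUE procedure mentioned above can be sketched in a few lines: sample parameter sets from a prior, score each against observations with a likelihood measure, keep the 'behavioural' sets, and read uncertainty bands off the retained simulations. The one-parameter recession model below is a hypothetical stand-in for the urban drainage models, under the usual GLUE assumptions.

```python
import numpy as np

# Minimal GLUE sketch: uniform prior sampling, Nash-Sutcliffe-type
# likelihood, behavioural cut, and uncertainty bands from retained runs.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 20)
observed = np.exp(-0.8 * t) + rng.normal(0.0, 0.02, t.size)

params = rng.uniform(0.0, 2.0, size=5000)            # prior samples
sims = np.exp(-np.outer(params, t))                  # one run per sample
nse = 1.0 - ((sims - observed) ** 2).mean(axis=1) / observed.var()
behavioural = nse > 0.5                              # acceptability cut
weights = nse[behavioural] / nse[behavioural].sum()  # GLUE likelihood weights

mean_pred = weights @ sims[behavioural]              # weighted prediction
# 5-95% bands over the behavioural simulations at each time step
# (weighted quantiles would refine this).
bands = np.percentile(sims[behavioural], [5, 95], axis=0)
print(behavioural.sum(), "behavioural sets; band at t=0:", bands[:, 0])
```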
On the generation of climate model ensembles
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.
2014-10-01
Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.
NASA Astrophysics Data System (ADS)
Olugboji, T. M.; Lekic, V.; McDonough, W.
2017-07-01
We present a new approach for evaluating existing crustal models using ambient noise data sets and their associated uncertainties. We use a transdimensional hierarchical Bayesian inversion approach to invert ambient noise surface wave phase dispersion maps for Love and Rayleigh waves using measurements obtained from Ekström (2014). Spatiospectral analysis shows that our results are comparable to those of a linear least-squares inverse approach (except at higher harmonic degrees), but the procedure has additional advantages: (1) it yields an autoadaptive parameterization that follows Earth structure without making restrictive assumptions on model resolution (regularization or damping) and data errors; (2) it can recover non-Gaussian phase velocity probability distributions while quantifying the sources of uncertainties in the data measurements and modeling procedure; and (3) it enables statistical assessments of different crustal models (e.g., CRUST1.0, LITHO1.0, and NACr14) using variable-resolution residual and standard deviation maps estimated from the ensemble. These assessments show that in the stable old crust of the Archean, the misfits are statistically negligible, requiring no significant update to crustal models from the ambient noise data set. In other regions of the U.S., significant updates to regionalization and crustal structure are expected, especially in shallow sedimentary basins and tectonically active regions, where the differences between model predictions and data are statistically significant.
Zollanvari, Amin; Dougherty, Edward R
2016-12-01
In classification, prior knowledge is incorporated in a Bayesian framework by assuming that the feature-label distribution belongs to an uncertainty class of feature-label distributions governed by a prior distribution. A posterior distribution is then derived from the prior and the sample data. An optimal Bayesian classifier (OBC) minimizes the expected misclassification error relative to the posterior distribution. From an application perspective, prior construction is critical. The prior distribution is formed by mapping a set of mathematical relations among the features and labels, the prior knowledge, into a distribution governing the probability mass across the uncertainty class. In this paper, we consider prior knowledge in the form of stochastic differential equations (SDEs). We consider a vector SDE in integral form involving a drift vector and dispersion matrix. Having constructed the prior, we develop the optimal Bayesian classifier between two models and examine, via synthetic experiments, the effects of uncertainty in the drift vector and dispersion matrix. We apply the theory to a set of SDEs for the purpose of differentiating the evolutionary history between two species.
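As a toy illustration of the OBC idea, not the paper's SDE construction, the sketch below uses an uncertainty class of Gaussian class-conditional densities with unknown means and classifies with the posterior-averaged ('effective') densities; all priors, data, and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Toy optimal Bayesian classifier: each class mean is uncertain with a
# zero-mean Gaussian prior; the OBC uses the predictive (effective) class
# densities obtained by averaging over the conjugate posterior of the mean.
rng = np.random.default_rng(8)
sigma, tau = 1.0, 2.0                    # known noise sd, prior sd on means
x0 = rng.normal(-1.0, sigma, 15)         # training samples, class 0
x1 = rng.normal(+1.0, sigma, 15)         # training samples, class 1

def posterior_mean_var(x):
    # Conjugate normal posterior for the unknown class mean (prior N(0, tau^2)).
    var = 1.0 / (x.size / sigma**2 + 1.0 / tau**2)
    return var * x.sum() / sigma**2, var

def effective_density(z, x):
    mu_n, var_n = posterior_mean_var(x)
    # Predictive density: Normal(mu_n, sigma^2 + var_n).
    return norm.pdf(z, mu_n, np.sqrt(sigma**2 + var_n))

z = 0.3
label = int(effective_density(z, x1) > effective_density(z, x0))
print("OBC label for z = 0.3:", label)
```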
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has attracted increasing attention in recent decades. An assessment of groundwater contamination risk is desired to provide a sound basis for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered uncertain with known fuzzy membership functions, and intrinsic permeability is considered an interval number with unknown distribution information. A factorial design is conducted to evaluate the interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site in western Canada. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainty associated with simulation and risk assessment efforts.
Black hole complementarity with the generalized uncertainty principle in Gravity's Rainbow
NASA Astrophysics Data System (ADS)
Gim, Yongwan; Um, Hwajin; Kim, Wontae
2018-02-01
When gravitation is combined with quantum theory, the Heisenberg uncertainty principle could be extended to the generalized uncertainty principle, which entails a minimal length. To see how the generalized uncertainty principle works in the context of black hole complementarity, we calculate the energy required to duplicate information for the Schwarzschild black hole. It shows that the duplication of information is not allowed, and black hole complementarity remains valid even assuming the generalized uncertainty principle. On the other hand, the generalized uncertainty principle with the minimal length could lead to a modification of the conventional dispersion relation in light of Gravity's Rainbow, where the minimal length is invariant in the same way as the speed of light. Revisiting the gedanken experiment, we show that the no-cloning theorem for black hole complementarity can be made valid in the regime of Gravity's Rainbow for a certain combination of parameters.
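For readers unfamiliar with the two ingredients named above, a standard textbook form of each is sketched below; the parameters β, f, and g are generic placeholders, not the specific choices of this paper.

```latex
% One common form of the GUP with a minimal length, and a rainbow-type
% modified dispersion relation (Magueijo-Smolin form).
\[
  \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\!\left[1 + \beta\,\ell_p^{2}\,\frac{(\Delta p)^{2}}{\hbar^{2}}\right]
  \quad\Longrightarrow\quad
  \Delta x_{\min} \sim \ell_p \sqrt{\beta},
\]
\[
  E^{2} f^{2}\!\left(E/E_p\right) - p^{2} c^{2}\, g^{2}\!\left(E/E_p\right) = m^{2} c^{4},
  \qquad f, g \to 1 \ \text{for}\ E \ll E_p .
\]
```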
Evaluation of the ERP dispersion model using Darlington tracer-study data. Report No. 90-200-K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S.C.
1990-01-01
In this study, site-boundary atmospheric dilution factors calculated by the atmospheric dispersion model used in the ERP (Emergency Response Planning) computer code were compared to data collected during the Darlington tracer study. The purpose of this comparison was to obtain estimates of model uncertainty under a variety of conditions. This report provides background on ERP, the ERP dispersion model and the Darlington tracer study. Model evaluation techniques are discussed briefly, and the results of the comparison of model calculations with the field data are presented and reviewed.
Accounting for uncertainty in marine reserve design.
Halpern, Benjamin S; Regan, Helen M; Possingham, Hugh P; McCarthy, Michael A
2006-01-01
Ecosystems and the species and communities within them are highly complex systems that defy predictions with any degree of certainty. Managing and conserving these systems in the face of uncertainty remains a daunting challenge, particularly with respect to developing networks of marine reserves. Here we review several modelling frameworks that explicitly acknowledge and incorporate uncertainty, and then use these methods to evaluate reserve spacing rules given increasing levels of uncertainty about larval dispersal distances. Our approach finds spacing rules similar to those proposed elsewhere - roughly 20-200 km - but highlights several advantages provided by uncertainty modelling over more traditional approaches to developing these estimates. In particular, we argue that uncertainty modelling can allow for (1) an evaluation of the risk associated with any decision based on the assumed uncertainty; (2) a method for quantifying the costs and benefits of reducing uncertainty; and (3) a useful tool for communicating to stakeholders the challenges in managing highly uncertain systems. We also argue that incorporating rather than avoiding uncertainty will increase the chances of successfully achieving conservation and management goals.
NASA Astrophysics Data System (ADS)
Foster, Jonathan B.; Cottaar, Michiel; Covey, Kevin R.; Arce, Héctor G.; Meyer, Michael R.; Nidever, David L.; Stassun, Keivan G.; Tan, Jonathan C.; Chojnowski, S. Drew; da Rio, Nicola; Flaherty, Kevin M.; Rebull, Luisa; Frinchaboy, Peter M.; Majewski, Steven R.; Skrutskie, Michael; Wilson, John C.; Zasowski, Gail
2015-02-01
The initial velocity dispersion of newborn stars is a major unconstrained aspect of star formation theory. Using near-infrared spectra obtained with the APOGEE spectrograph, we show that the velocity dispersion of young (1-2 Myr) stars in NGC 1333 is 0.92 ± 0.12 km s⁻¹ after correcting for measurement uncertainties and the effect of binaries. This velocity dispersion is consistent with the virial velocity of the region and the diffuse gas velocity dispersion, but significantly larger than the velocity dispersion of the dense, star-forming cores, which have a subvirial velocity dispersion of 0.5 km s⁻¹. Since the NGC 1333 cluster is dynamically young and deeply embedded, this measurement provides a strong constraint on the initial velocity dispersion of newly formed stars. We propose that the difference in velocity dispersion between stars and dense cores may be due to the influence of a 70 μG magnetic field acting on the dense cores or be the signature of a cluster with initial substructure undergoing global collapse.
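The correction for measurement uncertainty amounts to subtracting the mean measurement variance from the observed variance in quadrature; a minimal sketch with hypothetical numbers follows, ignoring the paper's binary correction and any likelihood modelling.

```python
import numpy as np

# Hypothetical radial velocities (km/s) with per-star measurement errors;
# the intrinsic dispersion is recovered by quadrature subtraction of the
# mean measurement variance from the observed variance.
rng = np.random.default_rng(3)
true_sigma = 0.92
v = rng.normal(0.0, true_sigma, 120) + rng.normal(0.0, 0.3, 120)
err = np.full(120, 0.3)                      # reported per-star errors

sigma_obs = v.std(ddof=1)
sigma_int = np.sqrt(max(sigma_obs**2 - np.mean(err**2), 0.0))
print(f"observed {sigma_obs:.2f} km/s -> intrinsic {sigma_int:.2f} km/s")
```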
NASA Astrophysics Data System (ADS)
Abesamis, Rene A.; Saenz-Agudelo, Pablo; Berumen, Michael L.; Bode, Michael; Jadloc, Claro Renato L.; Solera, Leilani A.; Villanoy, Cesar L.; Bernardo, Lawrence Patrick C.; Alcala, Angel C.; Russ, Garry R.
2017-09-01
Networks of no-take marine reserves (NTMRs) are a widely advocated strategy for managing coral reefs. However, uncertainty about the strength of population connectivity between individual reefs and NTMRs through larval dispersal remains a major obstacle to effective network design. In this study, larval dispersal among NTMRs and fishing grounds in the Philippines was inferred by conducting genetic parentage analysis on a coral-reef fish (Chaetodon vagabundus). Adult and juvenile fish were sampled intensively in an area encompassing approximately 90 km of coastline. Thirty-seven true parent-offspring pairs were accepted after screening 1978 juveniles against 1387 adults. The data showed all types of dispersal connections that may occur in NTMR networks, with assignments suggesting connectivity among NTMRs and fishing grounds (n = 35) far outnumbering those indicating self-recruitment (n = 2). Critically, half (51%) of the inferred occurrences of larval dispersal linked reefs managed by separate, independent municipalities and constituent villages, emphasising the need for nested collaborative management arrangements across management units to sustain NTMR networks. Larval dispersal appeared to be influenced by wind-driven seasonal reversals in the direction of surface currents. The best-fit larval dispersal kernel estimated from the parentage data predicted that 50% of larvae originating from a population would attempt to settle within 33 km, and 95% within 83 km. Mean larval dispersal distance was estimated to be 36.5 km. These results suggest that creating a network of closely spaced (less than a few tens of km apart) NTMRs can enhance recruitment for protected and fished populations throughout the NTMR network. The findings underscore major challenges for regional coral-reef management initiatives that must be addressed with priority: (1) strengthening management of NTMR networks across political or customary boundaries; and (2) achieving adequate population connectivity via larval dispersal to sustain reef-fish populations within these networks.
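To see how the quoted kernel quantiles constrain a dispersal scale, one can match a simple two-parameter kernel to them; the Weibull family used below is an illustrative assumption, not necessarily the kernel family fitted in the study.

```python
import numpy as np
from scipy.special import gamma

# Match a two-parameter Weibull kernel, F(d) = 1 - exp(-(d/a)^k), to the
# reported quantiles: 50% of larvae settle within 33 km, 95% within 83 km.
d50, d95 = 33.0, 83.0
k = np.log(np.log(20) / np.log(2)) / np.log(d95 / d50)   # shape parameter
a = d50 / np.log(2) ** (1 / k)                           # scale parameter

mean_dispersal = a * gamma(1 + 1 / k)                    # Weibull mean
print(f"shape k = {k:.2f}, scale a = {a:.1f} km, mean ~ {mean_dispersal:.1f} km")
# The implied mean (~37 km) is close to the 36.5 km reported in the study.
```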
Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi
2017-02-20
With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error, so it becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short-baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD models are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are smaller than those of the DD model by a factor of two, and the dispersions of the SD2 model are smaller by more than a factor of two. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four, which makes SD2 more advantageous for attitude determination in sheltered environments.
Uncertainty and dispersion in air parcel trajectories near the tropical tropopause
NASA Astrophysics Data System (ADS)
Bergman, John; Jensen, Eric; Pfister, Leonhard; Bui, Thoapaul
2016-04-01
The Tropical Tropopause Layer (TTL) is important as the gateway to the stratosphere for chemical constituents produced at the Earth's surface. As such, understanding the processes that transport air through the upper tropical troposphere is important for a number of current scientific issues, such as the impact of stratospheric water vapor on the global radiative budget and the depletion of ozone by both anthropogenically- and naturally-produced halocarbons. Compared to the lower troposphere, transport in the TTL is relatively unaffected by turbulent motion. Consequently, Lagrangian particle models are thought to provide reasonable estimates of parcel pathways through the TTL. However, there are complications that make trajectory analyses difficult to interpret; uncertainty in the wind data used to drive these calculations and trajectory dispersion are among the most important. These issues are examined using ensembles of backward air parcel trajectories that are initially tightly grouped near the tropical tropopause, built with three approaches: a Monte Carlo ensemble, in which different members use identical resolved wind fluctuations but different realizations of stochastic, multi-fractal simulations of unresolved winds; perturbed initial-location ensembles, in which members use identical resolved wind fields but initial locations are displaced 2° in latitude and longitude; and a multi-model ensemble that uses identical initial conditions but different resolved wind fields and/or trajectory formulations. Comparisons among the approaches distinguish, to some degree, physical dispersion from that due to data uncertainty, and the impact of unresolved wind fluctuations from that of resolved variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Michael K.; O'Rourke, Patrick E.
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy-dispersive X-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. The data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the uncertainties that should be achievable at multiple plutonium and uranium concentration levels, provided the instrument hardware and software upgrades recommended by LANL and SRNL are performed.
Detecting aircraft with a low-resolution infrared sensor.
Jakubowicz, Jérémie; Lefebvre, Sidonie; Maire, Florian; Moulines, Eric
2012-06-01
Existing computer simulations of aircraft infrared signature (IRS) do not account for dispersion induced by uncertainty in input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little use for estimating the detection performance of IR optronic systems; in this case, the scenario encompasses many possible situations that must indeed be addressed but cannot be simulated one by one. In this paper, we focus on low-resolution infrared sensors and propose a methodological approach for predicting the simulated IRS dispersion of poorly known aircraft and performing aircraft detection on the resulting set of low-resolution infrared images. It is based on a sensitivity analysis, which identifies inputs that have negligible influence on the computed IRS and can be set to constant values; on a quasi-Monte Carlo survey of the code output dispersion; and on a new detection test taking advantage of level-set estimation. The method is illustrated in a typical scenario, i.e., a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 90,000 simulated aircraft images. Assuming a white-noise or fractional Brownian background model, detection performance is very promising.
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2017-12-01
Seismic imaging utilizing complementary seismic data provides unique insight into the formation, evolution and current structure of continental lithosphere. While numerous efforts have improved the resolution of seismic structure, the quantification of uncertainties remains challenging due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate seismic observables including Rayleigh and Love wave dispersion and Ps and Sp receiver functions to invert for shear velocity (Vs), compressional velocity (Vp), density, and radial anisotropy of the lithospheric structure. The Bayesian nature and transdimensionality of this approach allow the quantification of model parameter uncertainties while keeping the models parsimonious. Both synthetic tests and inversions of actual Ps and Sp receiver function data are performed. We quantify the information gained in different inversions by calculating the Kullback-Leibler divergence. Furthermore, we explore the ability of Rayleigh and Love wave dispersion data to constrain radial anisotropy. We show that when multiple types of model parameters (Vsv, Vsh, and Vp) are inverted simultaneously, the constraints on radial anisotropy are limited by relatively large data uncertainties and trade off strongly with Vp. We then perform a joint inversion of surface wave dispersion (SWD) and Ps and Sp receiver functions, and show that the constraints on both isotropic Vs and radial anisotropy are significantly improved. To achieve faster convergence of the rjMcMC, we propose a progressive inclusion scheme, and invert SWD measurements and receiver functions from about 400 USArray stations in the Northern Great Plains. We start by using only the SWD data, due to their fast convergence rate. We then use the average of the ensemble as a starting model for the joint inversion, which is able to resolve distinct seismic signatures of geological structures including the Trans-Hudson orogen, Wyoming craton and Yellowstone hotspot. Various analyses are performed to assess the uncertainties of the seismic velocities and Moho depths. We also address the importance of careful data processing of receiver functions by illustrating artifacts due to unmodelled sediment reverberations.
A high stellar velocity dispersion for a compact massive galaxy at redshift z = 2.186.
van Dokkum, Pieter G; Kriek, Mariska; Franx, Marijn
2009-08-06
Recent studies have found that the oldest and most luminous galaxies in the early Universe are surprisingly compact, having stellar masses similar to present-day elliptical galaxies but much smaller sizes. This finding has attracted considerable attention, as it suggests that massive galaxies have grown in size by a factor of about five over the past ten billion years (10 Gyr). A key test of these results is a determination of the stellar kinematics of one of the compact galaxies: if the sizes of these objects are as extreme as has been claimed, their stars are expected to have much higher velocities than those in present-day galaxies of the same mass. Here we report a measurement of the stellar velocity dispersion of a massive compact galaxy at redshift z = 2.186, corresponding to a look-back time of 10.7 Gyr. The velocity dispersion is very high, at 510 km s⁻¹, consistent with the mass and compactness of the galaxy inferred from photometric data. This would indicate significant recent structural and dynamical evolution of massive galaxies over the past 10 Gyr. The uncertainty in the dispersion was determined from simulations that include the effects of noise and template mismatch. However, we cannot exclude the possibility that some subtle systematic effect may have influenced the analysis, given the low signal-to-noise ratio of our spectrum.
Sensitivity tests and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy
NASA Astrophysics Data System (ADS)
Selva, Jacopo; Costa, Antonio; De Natale, Giuseppe; Di Vito, Mauro; Isaia, Roberto; Macedonio, Giovanni
2017-04-01
We present the results of a statistical study on tephra dispersion in the case of reactivation of the Campi Flegrei volcano. We considered the full spectrum of possible eruptions in terms of size and position of eruptive vents. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered, of which only three are explosive (small, medium, and large) and can produce a significant quantity of volcanic ash. Hazard assessments are made through dispersion simulations of ash and lapilli, considering the full variability of winds, eruptive vents, and eruptive sizes. The results are presented in the form of four families of hazard curves conditioned on the occurrence of an eruption: 1) small eruptive size from any vent; 2) medium eruptive size from any vent; 3) large eruptive size from any vent; 4) any size from any vent. The epistemic uncertainty (i.e. that associated with the level of scientific knowledge of the phenomena) in the estimation of the hazard curves was quantified by making use of alternative, scientifically acceptable approaches. The choice of such alternative models was made after a comprehensive sensitivity analysis that considered different weather databases, alternative modelling of the possible opening of eruptive vents, tephra total grain-size distributions (TGSD), the relative mass of fine particles, and the effect of aggregation. The results of this sensitivity analysis show that the dominant uncertainty is related to the choice of TGSD, the mass of fine ash, and the potential effects of ash aggregation. The latter is particularly relevant in the case of magma-water interaction during an eruptive phase, when most of the fine ash can form accretionary lapilli that contribute significantly to increasing the tephra load in the proximal region. Relatively insignificant is the variability induced by the use of different weather databases. The hazard curves, together with the quantification of epistemic uncertainty, were finally calculated through a statistical model based on ensemble mixing of selected alternative models, e.g. different choices for the estimate of the total erupted mass, the mass of fine ash, the effects of aggregation, etc. Hazard and probability maps were produced at different confidence levels with respect to the epistemic uncertainty (mean, median, 16th percentile, and 84th percentile).
Numerical Simulation and Quantitative Uncertainty Assessment of Microchannel Flow
NASA Astrophysics Data System (ADS)
Debusschere, Bert; Najm, Habib; Knio, Omar; Matta, Alain; Ghanem, Roger; Le Maitre, Olivier
2002-11-01
This study investigates the effect of uncertainty in physical model parameters on computed electrokinetic flow of proteins in a microchannel with a potassium phosphate buffer. The coupled momentum, species transport, and electrostatic field equations give a detailed representation of electroosmotic and pressure-driven flow, including sample dispersion mechanisms. The chemistry model accounts for pH-dependent protein labeling reactions as well as detailed buffer electrochemistry in a mixed finite-rate/equilibrium formulation. To quantify uncertainty, the governing equations are reformulated using a pseudo-spectral stochastic methodology, which uses polynomial chaos expansions to describe uncertain/stochastic model parameters, boundary conditions, and flow quantities. Integration of the resulting equations for the spectral mode strengths gives the evolution of all stochastic modes for all variables. Results show the spatiotemporal evolution of uncertainties in predicted quantities and highlight the dominant parameters contributing to these uncertainties during various flow phases. This work is supported by DARPA.
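A minimal, non-intrusive version of the polynomial chaos idea can be sketched for a single uncertain input: expand an output quantity in Hermite polynomials of a standard normal variable and read the mean and variance from the mode strengths. The exponential 'model response' is a hypothetical stand-in for the coupled flow equations.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# Non-intrusive PCE for f(xi), xi ~ N(0,1), using probabilists' Hermite
# polynomials He_k and Gauss-Hermite(e) quadrature.
f = lambda xi: np.exp(0.3 * xi)              # toy model response
order = 6
x, w = He.hermegauss(40)                     # nodes/weights, weight e^{-x^2/2}

coeffs = []
for k in range(order + 1):
    basis = He.hermeval(x, [0] * k + [1])            # He_k at the nodes
    norm = np.sqrt(2 * np.pi) * factorial(k)         # <He_k, He_k>
    coeffs.append(np.sum(w * f(x) * basis) / norm)   # mode strength c_k

mean = coeffs[0]                                      # output mean = c_0
var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"PCE mean {mean:.4f} (exact {np.exp(0.045):.4f}), var {var:.5f}")
```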
NASA Astrophysics Data System (ADS)
Armand, P.; Brocheton, F.; Poulet, D.; Vendel, F.; Dubourg, V.; Yalamas, T.
2014-10-01
This paper is an original contribution to uncertainty quantification in atmospheric transport and dispersion (AT&D) at the local scale (1-10 km). We propose to account for the imprecise knowledge of the meteorological and release conditions in the case of an accidental hazardous atmospheric emission. The aim is to produce probabilistic risk maps instead of a deterministic toxic-load map in order to help stakeholders make their decisions. Given the urgency attached to such situations, the proposed methodology is able to produce such maps in a limited amount of time. It resorts to a Lagrangian particle dispersion model (LPDM) using wind fields interpolated from a pre-established database that collects the results of a computational fluid dynamics (CFD) model. This decouples the CFD simulations from the dispersion analysis, yielding a considerable saving of computational time. In order to make the Monte-Carlo-sampling-based estimation of the probability field even faster, we also propose to use a vector Gaussian process surrogate model together with high-performance computing (HPC) resources. The Gaussian process (GP) surrogate modelling technique is coupled with a probabilistic principal component analysis (PCA) to reduce the number of GP predictors to fit, store and predict. The design of experiments (DOE) from which the surrogate model is built is run over a cluster of PCs to make the total production time as short as possible. The use of GP predictors is validated by comparing the results produced by this technique with those obtained by crude Monte Carlo sampling.
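The surrogate construction can be sketched as follows, assuming hypothetical dimensions and a toy dispersion response: PCA compresses the vector-valued output, one GP is fitted per retained component (the paper couples the GP with a probabilistic PCA), and cheap Monte Carlo over the surrogate yields per-cell exceedance probabilities for the risk map.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(80, 3))              # wind speed/dir, source rate
def toy_dispersion(x):                           # 200-cell toxic-load 'map'
    grid = np.linspace(0, 1, 200)
    return x[2] * np.exp(-((grid - x[0]) ** 2) / (0.05 + 0.1 * x[1]))
Y = np.array([toy_dispersion(x) for x in X])     # DOE model runs

pca = PCA(n_components=5).fit(Y)                 # output-space reduction
Z = pca.transform(Y)                             # reduced GP targets
gps = [GaussianProcessRegressor(kernel=RBF([0.2, 0.2, 0.2])).fit(X, Z[:, i])
       for i in range(Z.shape[1])]               # one GP per component

# Monte Carlo over uncertain inputs -> probabilistic map statistics.
Xs = rng.uniform(0, 1, size=(1000, 3))
Zs = np.column_stack([gp.predict(Xs) for gp in gps])
maps = pca.inverse_transform(Zs)
p_exceed = (maps > 0.5).mean(axis=0)             # per-cell exceedance risk
print(p_exceed.max())
```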
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants in the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of releases of radionuclides from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. The paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters that represent the corresponding models. Soft computing in this domain basically means using fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
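A minimal sketch of the fuzzy (alpha-cut) propagation described above follows; the triangular membership numbers and the toy concentration model are hypothetical.

```python
import numpy as np

# Alpha-cut propagation of triangular fuzzy parameters through a toy
# ground-level concentration model; real assessments propagate many more
# parameters through full dispersion models.
def tri_cut(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return low + alpha * (mode - low), high - alpha * (high - mode)

def concentration(u, sigma):
    """Toy plume-like response, monotonically decreasing in both inputs."""
    return 1.0 / (2.0 * np.pi * u * sigma**2)

wind = (1.0, 3.0, 6.0)     # triangular membership for wind speed (m/s)
sig = (20.0, 40.0, 70.0)   # triangular membership for dispersion param (m)

for alpha in (0.0, 0.5, 1.0):
    u_lo, u_hi = tri_cut(*wind, alpha)
    s_lo, s_hi = tri_cut(*sig, alpha)
    # Monotonicity means the output interval comes from the interval ends.
    c_hi = concentration(u_lo, s_lo)
    c_lo = concentration(u_hi, s_hi)
    print(f"alpha={alpha}: concentration in [{c_lo:.2e}, {c_hi:.2e}]")
```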
NASA Astrophysics Data System (ADS)
Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.
2016-07-01
The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One advantage of analytical method validation is that it translates into a level of confidence in the measurement results reported for a specific objective. Elemental composition determination by wavelength-dispersive spectrometer (WDS) microanalysis has been used in extremely wide-ranging areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a set of Cu-Au binary alloys of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
NASA Astrophysics Data System (ADS)
Łuszczak, Katarzyna; Persano, Cristina; Stuart, Finlay; Brown, Roderick
2016-04-01
Apatite (U-Th-Sm)/He (AHe) thermochronometry is a powerful technique for deciphering denudation of the uppermost crust. However, age dispersion among single grains from the same rock is typical, and this hampers establishing accurate thermal histories when low grain numbers are analysed. Dispersion arising from the analysis of broken crystal fragments [1] has been proposed as an important cause of age dispersion, along with grain size and radiation damage. A new tool, Helfrag [2], allows constraints to be placed on the low-temperature history derived from the analysis of apatite crystal fragments. However, the age dispersion model has not yet been fully tested on natural samples. We have performed AHe analysis of multiple (n = 20-25) grains from four rock samples from the Scottish Southern Uplands, which were subjected to the same exhumation episodes, although the amount of exhumation varied between the localities. This is evident from the range of AFT ages (~60 to ~200 Ma) and variable thermal histories showing strong, moderate, or no support for a rapid cooling event at ~60 Ma. Different apatite sizes and fragment geometries were analysed in order to maximise age dispersion. In general, the age dispersion increases with increasing AFT age (from 47% to 127%), consistent with the prediction of the fragmentation model. Thermal histories obtained using Helfrag were compared with those obtained by standard codes based on the spherical approximation. In one case, the Helfrag model was capable of resolving the higher complexity of the thermal history of the rock, constraining several heating/cooling events that are not predicted by the standard models but are in good agreement with the regional geology. In the other cases, the thermal histories are similar for both Helfrag and standard models and the age predictions for Helfrag are only slightly better than for the standard model, implying that grain size has the dominant role in generating the age dispersion. Rather than suggesting that grain size is the predominant factor controlling age dispersion in all data sets, our results may be linked to the actual size of the picked grains; for grain widths smaller than 100 μm, the He profile within the crystal may not be differentiated enough to produce a dispersion measurable outside the uncertainty associated with the age. It is also easier for long-thin and short-thick grains than for long-thick and short-thin grains to be preserved; this minimises the age dispersion that can be generated from fragmentation. We suggest that, in order to obtain valuable information from both fragmentation and grain size, >20 large (width >100 μm) grain fragments of variable length have to be analysed, together with a few smaller grains. Our results point to a strategy that favours multiple single-grain AHe age determinations on carefully selected samples, with good-quality apatite crystals of variable dimensions, rather than fewer determinations on many samples. [1] Brown, R. et al. (2013) Geochim. Cosmochim. Acta 122, 478-497. [2] Beucher, R. et al. (2013) Geochim. Cosmochim. Acta 120, 395-416.
Discrimination of sweeteners based on the refractometric analysis
NASA Astrophysics Data System (ADS)
Bodurov, I.; Vlaeva, I.; Viraneva, A.; Yovcheva, T.
2017-01-01
In the present work, the refractive characteristics of aqueous solutions of several sweeteners are investigated. These data, in combination with data from other sensors, should find application in the rapid determination of sweetener content in food and the dynamic monitoring of food quality. The refractive indices of pure (distilled) water and aqueous solutions of several commonly used natural and artificial sweeteners (glucose, fructose, sucrose, lactose, sorbitol [E420], isomalt [E953], saccharin sodium [E950], cyclamate sodium and glycerol [E422]) at 10 wt.% concentration are accurately measured at 405 nm, 532 nm and 632.8 nm wavelengths. The measurements are carried out using a three-wavelength laser microrefractometer based on the total internal reflection method. The critical angle is determined by the disappearance of the diffraction orders from a metal grating. The experimental uncertainty is less than ±0.0001. The dispersion dependences of the refractive indices are obtained using the one-term Sellmeier model. Based on the experimental data, additional refractive and dispersion characteristics are calculated.
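The one-term Sellmeier step can be sketched directly: with indices at the three laser wavelengths, fit the two Sellmeier constants and interpolate the dispersion curve. The index values below are illustrative round numbers near pure water, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# One-term Sellmeier model: n^2(lambda) = 1 + B*lambda^2 / (lambda^2 - C).
lam = np.array([405.0, 532.0, 632.8]) * 1e-3     # wavelengths in micrometres
n_meas = np.array([1.3428, 1.3352, 1.3320])      # illustrative indices

def sellmeier(l, B, C):
    return np.sqrt(1.0 + B * l**2 / (l**2 - C))

(B, C), _ = curve_fit(sellmeier, lam, n_meas, p0=(0.75, 0.01),
                      bounds=([0.0, 0.0], [2.0, 0.1]))
print(f"B = {B:.4f}, C = {C:.5f} um^2")
print(f"interpolated n(589 nm) = {sellmeier(0.589, B, C):.4f}")
```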
A robust method to forecast volcanic ash clouds
Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin
2012-01-01
Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
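The forecast machinery described above amounts to likelihood-weighted Bayesian updating over source parameters, which can be sketched as follows; the 'transport model', priors, and satellite loads are all hypothetical stand-ins for the real simulation and data assimilation.

```python
import numpy as np

# Sample source parameters from priors, score each model run against
# observed ash loads, and form a normalized posterior for the forecast.
rng = np.random.default_rng(7)
obs = np.array([0.8, 1.6, 0.9])                 # satellite ash loads (toy)

def transport_model(height, mer):               # toy 3-pixel 'ash cloud'
    return mer * np.array([0.5, 1.0, 0.6]) * height / 8.0

heights = rng.uniform(4.0, 12.0, 2000)          # plume-height prior (km)
mers = rng.lognormal(0.0, 0.5, 2000)            # mass-eruption-rate prior

sims = np.array([transport_model(h, m) for h, m in zip(heights, mers)])
misfit = ((sims - obs) ** 2).sum(axis=1)
like = np.exp(-0.5 * misfit / 0.1**2)           # Gaussian likelihood
post = like / like.sum()                        # Bayes: normalized posterior

# Posterior-weighted forecast of the ash load and its uncertainty.
mean_fc = post @ sims
sd_fc = np.sqrt(post @ (sims - mean_fc) ** 2)
print(np.round(mean_fc, 2), np.round(sd_fc, 2))
```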
NASA Astrophysics Data System (ADS)
Dacre, H.; Prata, A.; Shine, K. P.; Irvine, E.
2017-12-01
The volcanic ash clouds produced by the Icelandic volcano Eyjafjallajökull in April/May 2010 resulted in 'no fly zones' which paralysed European aircraft activity and cost the airline industry an estimated £1.1 billion. In response to the crisis, the Civil Aviation Authority (CAA), in collaboration with Rolls-Royce, produced the 'safe-to-fly' chart. As ash concentrations are the primary output of dispersion model forecasts, the chart was designed to illustrate how engine damage progresses as a function of ash concentration. Concentration thresholds were subsequently derived based on previous ash encounters. Research scientists and aircraft manufacturers have since recognised the importance of volcanic ash dosages: the concentration accumulated over time. Dosages are an improvement on concentrations as they can be used to identify pernicious situations where ash concentrations are acceptably low but the exposure time is long enough to cause damage to aircraft engines. Here we present a proof-of-concept volcanic ash dosage calculator: an innovative, web-based research tool, developed in close collaboration with operators and regulators, which utilises interactive data visualisation to communicate the uncertainty inherent in dispersion model simulations and subsequent dosage calculations. To calculate dosages, we use NAME (Numerical Atmospheric-dispersion Modelling Environment) to simulate several Icelandic eruption scenarios which result in tephra dispersal across the North Atlantic, UK and Europe. Ash encounters are simulated based on flight-optimal routes derived from aircraft routing software. Key outputs of the calculator include the along-flight dosage, exposure time and peak concentration. The design of the tool allows users to explore the key areas of uncertainty in the dosage calculation and to visualise how these change as the planned flight path is varied. We expect that this research will lead to better informed decisions by key stakeholders during volcanic ash events through a deeper understanding of the associated uncertainties in dosage calculations.
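The dosage quantity itself is simple to compute once concentrations are sampled along a route: it is the time integral of concentration. A minimal sketch follows, with a made-up concentration profile standing in for values interpolated from NAME output along a planned flight path.

```python
import numpy as np

# Along-flight dosage: Dose = integral of c(t) dt along the route.
t_h = np.linspace(0.0, 3.0, 181)                        # 1-minute samples (h)
conc = 2.0e-4 * np.exp(-((t_h - 1.5) / 0.4) ** 2)       # ash conc. (g/m^3)

t_s = t_h * 3600.0                                      # seconds
dosage = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(t_s))  # trapezoid rule
peak = conc.max()
exposure_h = (conc > 0.1 * peak).sum() / 60.0           # time above 10% peak
print(f"dosage {dosage:.2f} g s/m^3, peak {peak:.1e} g/m^3, "
      f"exposure {exposure_h:.2f} h")
```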
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.
2017-12-01
The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore/Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric GHG observations coupled to an atmospheric transport and dispersion model. The magnitude of the update depends upon the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices. The assumed structure and magnitude of the specified errors can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We also demonstrate that differences between the modeled and observed meteorology can be used to predict uncertainties associated with atmospheric transport and dispersion modeling, which can help improve the skill of an inversion at urban scales.
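The update step of such a Kalman filter inversion can be sketched in a few lines; the footprint matrix, covariances, and flux values below are illustrative, not those of the NEC-B/W or INFLUX systems.

```python
import numpy as np

# One synthetic Kalman-filter update: prior flux state x (per sector),
# observation operator H from transport-model footprints, prior error
# covariance Q, and model-data mismatch covariance R.
rng = np.random.default_rng(5)
n_flux, n_obs = 6, 12
x_prior = np.full(n_flux, 10.0)                  # prior emissions
H = rng.uniform(0, 0.2, size=(n_obs, n_flux))    # footprint sensitivities
truth = np.array([12, 9, 14, 8, 10, 11.0])
y = H @ truth + rng.normal(0, 0.3, n_obs)        # synthetic enhancements

Q = 4.0 * np.eye(n_flux)                         # prior error covariance
R = 0.09 * np.eye(n_obs)                         # obs + transport error

S = H @ Q @ H.T + R
K = Q @ H.T @ np.linalg.inv(S)                   # Kalman gain
x_post = x_prior + K @ (y - H @ x_prior)         # updated fluxes
P_post = (np.eye(n_flux) - K @ H) @ Q            # posterior uncertainty
print(np.round(x_post, 1), np.round(np.sqrt(np.diag(P_post)), 2))
```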
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krakowiak, Konrad J.; Wilson, William; James, Simon
2015-01-15
A novel approach for the chemo-mechanical characterization of cement-based materials is presented, which combines the classical grid indentation technique with elemental mapping by scanning electron microscopy-energy dispersive X-ray spectrometry (SEM-EDS). It is illustrated through application to an oil-well cement system with siliceous filler. The characteristic X-rays of the major elements (silicon, calcium and aluminum) are measured over the indentation region and mapped back onto the indentation points. Measured intensities, together with indentation hardness and modulus, are considered in a clustering analysis within the framework of finite mixture models with Gaussian component density functions. The method is able to successfully isolate the calcium-silicate-hydrate gel at the indentation scale from its mixtures with other products of cement hydration and anhydrous phases, thus providing a convenient means to link mechanical response to the calcium-to-silicon ratio quantified independently via X-ray wavelength-dispersive spectroscopy. A discussion of the uncertainty quantification of the estimated chemo-mechanical properties and phase volume fractions, as well as the effect of the chemical observables on phase assessment, is also included.
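The clustering step can be sketched with a Gaussian mixture over joint mechanical-chemical observables; the synthetic two-phase data below stand in for real grid-indentation and SEM-EDS maps.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Each indentation point carries mechanical (modulus M, hardness H) and
# chemical (Ca, Si, Al X-ray intensity) observables; a Gaussian mixture
# separates the phases and yields their surface (volume) fractions.
rng = np.random.default_rng(6)
csh = rng.normal([30, 1.0, 40, 25, 3], [4, 0.2, 5, 4, 1], size=(300, 5))
filler = rng.normal([90, 8.0, 5, 60, 1], [10, 1.0, 2, 6, 1], size=(150, 5))
data = np.vstack([csh, filler])                 # columns: M, H, Ca, Si, Al

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(data)
labels = gmm.predict(data)
fractions = np.bincount(labels) / labels.size   # phase fractions
print("phase fractions:", np.round(fractions, 2))
print("phase means (M, H, Ca, Si, Al):\n", np.round(gmm.means_, 1))
```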
An uncertainty analysis of air pollution externalities from road transport in Belgium in 2010.
Int Panis, L; De Nocker, L; Cornelis, E; Torfs, R
2004-12-01
Although stricter standards for vehicles will reduce emissions to air significantly by 2010, a number of problems will remain, especially those related to particulate concentrations in cities, ground-level ozone, and CO₂. To evaluate the impacts of new policy measures, tools need to be available that assess the potential benefits of these measures in terms of the vehicle fleet, fuel choice, modal choice, kilometers driven, emissions, and the impacts on public health and related external costs. The ExternE accounting framework offers the most up-to-date and comprehensive methodology for assessing the marginal external costs of energy-related pollutants. It combines emission models and air dispersion models at local and regional scales with dose-response functions and valuation rules. Vito has extended this accounting framework with data and models related to the future composition of the vehicle fleet and transportation demand to evaluate the impact of new policy proposals on air quality and aggregated (total) external costs by 2010. Special attention was given to uncertainty analysis. The uncertainties in more than 100 different parameters were combined in Monte Carlo simulations to assess the range of possible outcomes and the main drivers of these results. Although the impacts of emission standards and total fleet mileage look dominant at first, a number of other factors were found to be important as well, including the number of diesel vehicles, inspection and maintenance (high-emitter cars), use of air conditioning, and heavy-duty transit traffic.
Experiments on Nucleation in Different Flow Regimes
NASA Technical Reports Server (NTRS)
Bayuzick, R. J.; Hofmeister, W. H.; Morton, C. M.; Robinson, M. B.
1999-01-01
The vast majority of metallic engineering materials are solidified from the liquid phase. Understanding the solidification process is essential to control microstructure, which in turn, determines the properties of materials. The genesis of solidification is nucleation, where the first stable solid forms from the liquid phase. Nucleation kinetics determine the degree of undercooling and phase selection. As such, it is important to understand nucleation phenomena in order to control solidification or glass formation in metals and alloys. Early experiments in nucleation kinetics were accomplished by droplet dispersion methods. Dilatometry was used by Turnbull and others, and more recently differential thermal analysis and differential scanning calorimetry have been used for kinetic studies. These techniques have enjoyed success; however, there are difficulties with these experiments. Since materials are dispersed in a medium, the character of the emulsion/metal interface affects the nucleation behavior. Statistics are derived from the large number of particles observed in a single experiment, but dispersions have a finite size distribution which adds to the uncertainty of the kinetic determinations. Even though temperature can be controlled quite well before the onset of nucleation, the release of the latent heat of fusion during nucleation of particles complicates the assumption of isothermality during these experiments. Containerless processing has enabled another approach to the study of nucleation kinetics. With levitation techniques it is possible to undercool one sample to nucleation repeatedly in a controlled manner, such that the statistics of the nucleation process can be derived from multiple experiments on a single sample. The authors have fully developed the analysis of nucleation experiments on single samples following the suggestions of Skripov. The advantage of these experiments is that the samples are directly observable. The nucleation temperature can be measured by noncontact optical pyrometry, the mass of the sample is known, and post processing analysis can be conducted on the sample. The disadvantages are that temperature measurement must have exceptionally high precision, and it is not possible to isolate specific heterogeneous sites as in droplet dispersions.
The development of a 3D risk analysis method.
I, Yet-Pole; Cheng, Te-Lung
2008-05-01
Much attention has been paid to quantitative risk analysis (QRA) research in recent years due to the increasingly severe disasters that have happened in the process industries. Owing to the computational complexity involved, very few software packages, such as SAFETI, can make the risk presentation meet practical requirements. However, the traditional risk presentation method, like the individual risk contour in SAFETI, is mainly based on the consequence analysis results of dispersion modeling, which usually assumes that the vapor cloud disperses over a constant ground roughness on flat terrain with no obstructions or concentration fluctuations; these conditions are quite different from the real situation of a chemical process plant. All these models usually over-predict the hazardous regions in order to remain conservative, which also increases the uncertainty of the simulation results. On the other hand, a more rigorous model such as a computational fluid dynamics (CFD) model can resolve the previous limitations; however, it cannot resolve the complexity of the risk calculations. In this research, a conceptual three-dimensional (3D) risk calculation method is proposed that combines the results of a series of CFD simulations with post-processing procedures to obtain 3D individual-risk iso-surfaces. It is believed that such a technique will not only be applicable to risk analysis at ground level, but can also be extended to aerial, submarine, or space risk analyses in the near future.
Bonsall, Michael B; Dooley, Claire A; Kasparson, Anna; Brereton, Tom; Roy, David B; Thomas, Jeremy A
2014-01-01
Conservation of endangered species necessitates a full appreciation of the ecological processes affecting the regulation, limitation, and persistence of populations. These processes are influenced by birth, death, and dispersal events, and characterizing them requires careful accounting of both the deterministic and stochastic processes operating at both local and regional population levels. We combined ecological theory and observations on Allee effects by linking mathematical analysis and the spatial and temporal population dynamics patterns of a highly endangered butterfly, the high brown fritillary, Argynnis adippe. Our theoretical analysis showed that the role of density-dependent feedbacks in the presence of local immigration can influence the strength of Allee effects. Linking this theory to the analysis of the population data revealed strong evidence for both negative density dependence and Allee effects at the landscape or regional scale. These regional dynamics are predicted to be highly influenced by immigration. Using a Bayesian state-space approach, we characterized the local-scale births, deaths, and dispersal effects together with measurement and process uncertainty in the metapopulation. Some form of an Allee effect influenced almost three-quarters of these local populations. Our joint analysis of the deterministic and stochastic dynamics suggests that a conservation priority for this species would be to increase resource availability in currently occupied and, more importantly, in unoccupied sites.
NASA Astrophysics Data System (ADS)
Song, Young-Joo; Bae, Jonghee; Kim, Young-Rok; Kim, Bang-Yeop
2016-12-01
In this study, the uncertainty requirements for orbit, attitude, and burn performance were estimated and analyzed for the execution of the 1st lunar orbit insertion (LOI) maneuver of the Korea Pathfinder Lunar Orbiter (KPLO) mission. During the early design phase of the system, the associated analysis is an essential design factor, as the 1st LOI maneuver is the largest burn that utilizes the onboard propulsion system; the success of the lunar capture is directly affected by the performance achieved. For the analysis, the spacecraft is assumed to have already approached the periselene with a hyperbolic arrival trajectory around the Moon. In addition, diverse arrival conditions and mission constraints were considered, such as varying periselene approach velocity, altitude, and orbital period of the capture orbit after execution of the 1st LOI maneuver. The current analysis assumed an impulsive LOI maneuver, and two-body equations of motion were adopted to simplify the problem for a preliminary analysis. Monte Carlo simulations were performed to statistically analyze the diverse uncertainties that might arise at the moment the maneuver is executed. As a result, three major requirements were analyzed and estimated for the early design phase. First, the minimum burn performance requirements for capture around the Moon were estimated. Second, the requirements for orbit, attitude, and maneuver burn performance were simultaneously estimated and analyzed to maintain the 1st elliptical orbit achieved around the Moon within the specified orbital period. Finally, the dispersion requirements on the B-plane aiming at target points to meet the target insertion goal were analyzed and can be utilized as reference target guidelines for a mid-course correction (MCC) maneuver during the transfer. More detailed system requirements for the KPLO mission, particularly for the spacecraft bus itself and for the flight dynamics subsystem at the ground control center, are expected to be prepared and established based on the current results, including a contingency trajectory design plan.
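The statistical analysis can be illustrated with a simple Monte Carlo capture check under the same impulsive-burn, two-body simplifications. The approach speed, periselene altitude, nominal burn, and execution-error magnitudes below are assumed placeholders, not KPLO requirements.

```python
import numpy as np

rng = np.random.default_rng(2)
MU_MOON = 4902.8            # km^3/s^2
r_p = 1737.4 + 100.0        # periselene radius, km (100 km altitude, assumed)
v_arr = 2.45                # hyperbolic approach speed at periselene, km/s (assumed)
dv_nom = 0.80               # nominal retrograde burn, km/s (assumed)

n = 100_000
# Hypothetical 1-sigma execution errors: 1% magnitude, 1 deg pointing.
dv = dv_nom * (1 + rng.normal(0, 0.01, n))
point = np.radians(rng.normal(0, 1.0, n))

# Speed after the burn (only the along-track component brakes the orbit).
v_post = np.hypot(v_arr - dv * np.cos(point), dv * np.sin(point))
energy = 0.5 * v_post**2 - MU_MOON / r_p       # vis-viva specific energy

captured = energy < 0
a = -MU_MOON / (2 * energy[captured])          # semi-major axis of capture orbit
T_hr = 2 * np.pi * np.sqrt(a**3 / MU_MOON) / 3600.0

print(f"capture probability: {captured.mean():.4f}")
print(f"orbit period, 5/50/95%: {np.percentile(T_hr, [5, 50, 95]).round(2)} hr")
```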
NASA Astrophysics Data System (ADS)
Vogel, Andreas; Durant, Adam; Sytchkova, Anna; Diplas, Spyros; Bonadonna, Costanza; Scarnato, Barbara; Krüger, Kirstin; Kylling, Arve; Kristiansen, Nina; Stohl, Andreas
2016-04-01
Explosive volcanic eruptions emit up to 50 wt.% (total erupted mass) of fine ash particles (<63 microns), which individually can have theoretical atmospheric lifetimes that span hours to days. Depending on the injection height, fine ash may be subsequently transported and dispersed by the atmosphere over 100s - 1000s km and can pose a major threat for aviation operations. Recent volcanic eruptions, such as the 2010 Icelandic Eyjafjallajökull event, illustrated how volcanic ash can severely impact commercial air traffic. In order to manage the threat, it is important to have accurate forecast information on the spatial extent and absolute quantity of airborne volcanic ash. Such forecasts are constrained by empirically-derived estimates of the volcanic source term and the nature of the constituent volcanic ash properties. Consequently, it is important to include a quantitative assessment of measurement uncertainties of ash properties to provide realistic ash forecast uncertainty. Currently, information on volcanic ash physicochemical and optical properties is derived from a small number of somewhat dated publications. In this study, we provide a reference dataset for physical (size distribution and shape), chemical (bulk vs. surface chemistry) and optical properties (complex refractive index in the UV-vis-NIR range) of a representative selection of volcanic ash samples from 10 different volcanic eruptions covering the full variability in silica content (40-75 wt.% SiO2). Through the combination of empirical analytical methods (e.g., image analysis, Energy Dispersive Spectroscopy, X-ray Photoelectron Spectroscopy, Transmission Electron Microscopy and UV/Vis/NIR/FTIR Spectroscopy) and theoretical models (e.g., Bruggeman effective medium approach), it was possible to fully capture the natural variability of ash physicochemical and optical characteristics. The dataset will be applied in atmospheric measurement retrievals and atmospheric transport modelling to determine the sensitivity to uncertainty in ash particle characteristics.
HZETRN radiation transport validation using balloon-based experimental data
NASA Astrophysics Data System (ADS)
Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.
2018-05-01
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
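A sketch of the interval-based idea on synthetic data: the metric is zero when the model falls inside the measurement uncertainty interval and is otherwise the relative distance to the nearest interval edge. For simplicity only flux uncertainty is handled here (the paper's metric also accounts for energy uncertainty); the flux values, the 10% measurement uncertainty, and the error distribution are illustrative, not the HZETRN/balloon data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical measured differential fluxes with uncertainty intervals,
# and model predictions for the same points (values illustrative only).
n = 500
meas = rng.lognormal(0.0, 1.0, n)
sigma = 0.10 * meas                       # assumed 10% measurement uncertainty
model = meas * rng.lognormal(-0.15, 0.35, n)

# Interval-based metric: zero inside the interval, otherwise the relative
# distance to the nearest interval edge.
lo, hi = meas - sigma, meas + sigma
err = np.where(model < lo, (model - lo) / meas,
      np.where(model > hi, (model - hi) / meas, 0.0))

print("median |metric|:", np.median(np.abs(err)).round(3))
print("fraction beyond +/-40%:", np.mean(np.abs(err) > 0.4).round(3))
print("fraction under-predicting:", np.mean(err < 0).round(3))
```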
NASA Astrophysics Data System (ADS)
Boichu, Marie; Clarisse, Lieven; Khvorostyanov, Dmitry; Clerbaux, Cathy
2014-04-01
Forecasting the dispersal of volcanic clouds during an eruption is of primary importance, especially for ensuring aviation safety. As volcanic emissions are characterized by rapid variations of emission rate and height, the (generally) high level of uncertainty in the emission parameters represents a critical issue that limits the robustness of volcanic cloud dispersal forecasts. An inverse modeling scheme, combining satellite observations of the volcanic cloud with a regional chemistry-transport model, allows reconstructing this source term at high temporal resolution. We demonstrate here how a progressive assimilation of freshly acquired satellite observations, via such an inverse modeling procedure, allows for delivering robust sulfur dioxide (SO2) cloud dispersal forecasts during the eruption. This approach provides a computationally cheap estimate of the expected location and mass loading of volcanic clouds, including the identification of SO2-rich parts.
McGavran, P D; Rood, A S; Till, J E
1999-01-01
Beryllium was released into the air from routine operations and three accidental fires at the Rocky Flats Plant (RFP) in Colorado from 1958 to 1989. We evaluated environmental monitoring data, developed estimates of airborne concentrations and their uncertainties, and calculated lifetime cancer risks and risks of chronic beryllium disease to hypothetical receptors. This article discusses exposure-response relationships for lung cancer and chronic beryllium disease. We assigned a distribution to cancer slope factor values based on the relative risk estimates from an occupational epidemiologic study used by the U.S. Environmental Protection Agency (EPA) to determine the slope factors. We used the Regional Atmospheric Transport Code for Hanford Emission Tracking atmospheric transport model for the exposure calculations because it is particularly well suited for long-term annual-average dispersion estimates and it incorporates spatially varying meteorologic and environmental parameters. We accounted for model prediction uncertainty by using several multiplicative stochastic correction factors that accounted for uncertainty in the dispersion estimate, the meteorology, deposition, and plume depletion. We used Monte Carlo techniques to propagate model prediction uncertainty through to the final risk calculations. We developed nine exposure scenarios of hypothetical but typical residents of the RFP area to consider the lifestyle, time spent outdoors, location, age, and sex of people who may have been exposed. We determined geometric mean incremental lifetime cancer incidence risk estimates for beryllium inhalation for each scenario. The risk estimates were < 10^-6. Predicted air concentrations were well below the current reference concentration derived by the EPA for beryllium sensitization. PMID:10464074
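A minimal sketch of how multiplicative stochastic correction factors propagate through Monte Carlo sampling to a risk estimate. The geometric standard deviations, base concentration, and slope-factor distribution below are placeholders, not the values used in the RFP study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical geometric standard deviations for each multiplicative
# correction factor applied to the dispersion-model prediction.
chi = 1.0e-6                    # base modeled air concentration (arbitrary units)
gsd = {"dispersion": 2.0, "meteorology": 1.5, "deposition": 1.3, "depletion": 1.2}

conc = np.full(n, chi)
for g in gsd.values():
    # Lognormal factor with geometric mean 1 and geometric std dev g.
    conc *= rng.lognormal(mean=0.0, sigma=np.log(g), size=n)

# Propagate through a linear risk model with an uncertain slope factor.
slope = rng.lognormal(np.log(1.0e-3), np.log(3.0), n)   # assumed distribution
risk = conc * slope

print("geometric mean risk:", np.exp(np.mean(np.log(risk))))
print("5th/95th percentiles:", np.percentile(risk, [5, 95]))
```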
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates radiological measurements with atmospheric dispersion modeling, yielding more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high-explosive dispersal of 239Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimating ADPIC model input parameters such as the particle-size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of 239Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
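A toy version of the regression step, with a crude stand-in forward model rather than the ARAC/ADPIC codes: scipy's least_squares adjusts a median diameter, a size-spread parameter, and a coupling coefficient to match synthetic sampler data in log space. All function forms, names, and values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

samplers = np.linspace(1.0, 30.0, 12)          # downwind distance, km

def forward(theta, x):
    """Toy forward model for sampler concentrations vs. distance."""
    d50, spread, coupling = theta
    settle = np.exp(-0.002 * d50 * x)          # crude settling-loss term
    dilution = 1.0 + spread * np.log1p(x)      # crude size-spread dilution
    return coupling * settle / (x * dilution)

theta_true = np.array([20.0, 0.6, 5.0])
obs = forward(theta_true, samplers) * rng.lognormal(0, 0.2, samplers.size)

# Fit log-residuals so factor-of-N errors are weighted symmetrically.
res = lambda th: np.log(forward(th, samplers)) - np.log(obs)
fit = least_squares(res, x0=[40.0, 0.3, 2.0],
                    bounds=([1.0, 0.05, 0.1], [200.0, 1.5, 20.0]))

ratio = forward(fit.x, samplers) / obs
print("estimated [d50, spread, coupling]:", fit.x.round(2))
print("max predicted/observed ratio:", ratio.max().round(2))
```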
AIR DISPERSION MODELING AT THE WASTE ISOLATION PILOT PLANT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucker, D.F.
2000-08-01
One concern at the Waste Isolation Pilot Plant (WIPP) is the amount of alpha-emitting radionuclides or hazardous chemicals that can become airborne at the facility and reach the Exclusive Use Area boundary as the result of a release from the Waste Handling Building (WHB) or from the underground during waste emplacement operations. The WIPP Safety Analysis Report (SAR), WIPP RCRA Permit, and WIPP Emergency Preparedness Hazards Assessments include air dispersion calculations to address this issue. Meteorological conditions at the WIPP facility will dictate the direction, speed, and dilution of a contaminant plume of respirable material due to chronic releases or during an accident. Due to the paucity of meteorological information at the WIPP site prior to September 1996, Department of Energy (DOE) reports had to rely largely on unqualified climatic data from the site and neighboring Carlsbad, which is situated approximately 40 km (26 miles) to the west of the site. This report examines the validity of the DOE air dispersion calculations using new meteorological data measured and collected at the WIPP site since September 1996. The air dispersion calculations in this report include both chronic and acute releases. Chronic release calculations were conducted with the EPA-approved code CAP88PC, and the calculations showed that in order for a violation of 40 CFR 61 (NESHAPS) to occur, approximately 15 mCi/yr of 239Pu would have to be released from the exhaust stack or from the WHB. This is an extremely high value; hence, it is unlikely that NESHAPS would be violated. A site-specific air dispersion coefficient was evaluated for comparison with that used in acute dose calculations. The calculations presented in Sections 3.2 and 3.3 show that one could expect a slightly less dispersive plume (larger air dispersion coefficient) given greater confidence in the meteorological data, i.e., 95% worst-case meteorological conditions. Calculations show that dispersion will decrease slightly if a more stable wind class is assumed, where very little vertical mixing occurs. It is recommended that previous reports which used fixed values for calculating the air dispersion coefficient, such as the WIPP Safety Analysis Report and the WIPP Emergency Preparedness Hazards Assessment, be updated to reflect the new meteorological data. It is also recommended that uncertainty be incorporated into the calculations so that a more meaningful assessment of risk during accidents can be achieved.
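For orientation, a minimal Gaussian-plume sketch showing how the assumed stability class changes the ground-level dispersion factor chi/Q: a more stable class (less vertical mixing) gives a higher centerline concentration. The power-law sigma coefficients and the distance are rough illustrative approximations, not the WIPP or regulatory values.

```python
import numpy as np

def chi_over_q(x, u, stability="F"):
    """Ground-level centerline chi/Q (s/m^3) for a ground-level release,
    using power-law sigma-y/sigma-z curves. Coefficients are illustrative
    approximations, not the values used in the WIPP analyses."""
    coef = {"D": (0.08, 0.9, 0.06, 0.85),   # neutral
            "F": (0.04, 0.9, 0.016, 0.7)}   # very stable, little mixing
    a, p, b, q = coef[stability]
    sig_y, sig_z = a * x**p, b * x**q
    return 1.0 / (np.pi * sig_y * sig_z * u)

x = 300.0   # m, hypothetical distance to the Exclusive Use Area boundary
for cls in ("D", "F"):
    print(cls, f"chi/Q = {chi_over_q(x, u=2.0, stability=cls):.2e} s/m^3")
```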
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from species distribution models (SDMs) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues for the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs assuming unlimited dispersal; and to assess the effects of model extrapolation, we compared predictive accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, the most widely used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in the predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce the uncertainties of extinction risk assessments from global change; and (2) as geographical areas subject to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs. In doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios.
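A small sketch of the TSS computation used to score binary SDM predictions. The presence/absence vectors are invented to show how suppressing false presences, as dispersal constraints do, raises the TSS.

```python
import numpy as np

def tss(obs, pred):
    """True Skill Statistic = sensitivity + specificity - 1 for binary
    presence/absence predictions."""
    obs, pred = np.asarray(obs, bool), np.asarray(pred, bool)
    sens = (obs & pred).sum() / obs.sum()
    spec = (~obs & ~pred).sum() / (~obs).sum()
    return sens + spec - 1

# Illustrative presence/absence validation data (not the R. darwinii records).
obs  = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
free = [1, 1, 1, 1, 1, 0, 1, 1, 1, 0]   # unlimited-dispersal prediction
disp = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # dispersal-constrained prediction

print("TSS, unlimited dispersal:  ", round(tss(obs, free), 2))
print("TSS, dispersal-constrained:", round(tss(obs, disp), 2))
```

The constrained prediction misses one true presence but removes most false presences, so its specificity gain outweighs the sensitivity loss.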
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and has impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement of the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
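A compact illustration of why a robust M-estimator matters here, assuming a toy linear arrival-time model with two grossly biased observations. scipy's least_squares with loss="cauchy" plays the role of a Cauchy-type M-estimator; the distances, velocity, and error magnitudes are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Toy 1-D arrival-time model: t_arrival = t0 + distance / velocity, with two
# grossly biased observations standing in for data insufficiencies.
dist = np.linspace(500, 4000, 15)                  # m
v_true, t0_true = 0.25, 800.0                      # m/day, days
t_obs = t0_true + dist / v_true + rng.normal(0, 150, dist.size)
t_obs[[3, 11]] += 4000.0                           # outliers

res = lambda th: (th[1] + dist / th[0]) - t_obs
x0, bnds = [0.1, 0.0], ([0.01, -1e4], [2.0, 1e4])

ls = least_squares(res, x0, bounds=bnds)                   # L2-estimator
rob = least_squares(res, x0, bounds=bnds, loss="cauchy",   # M-estimator
                    f_scale=300.0)

print("true   (v, t0):", (v_true, t0_true))
print("L2     (v, t0):", ls.x.round(3))
print("Cauchy (v, t0):", rob.x.round(3))
```

The L2 fit is dragged by the outliers, shifting the estimated velocity and hence the early-arrival tail; the Cauchy loss down-weights them.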
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. When multiple simulations are available, the dispersion variances of blocks can be thought of as capturing technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is interpolation variance, which considers data locations and grades. The problem is expressed as a minimax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations with minimization of interpolation variance, and drill hole simulations with maximization of interpolation variance. The two spaces interact to find a minimax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.
NASA Astrophysics Data System (ADS)
Tsai, M.; Lee, C.; Yu, H.
2013-12-01
In the last 20 years, the Yunlin offshore industrial park has contributed significantly to the economic development of Taiwan; its annual production value reached almost 12% of Taiwan's GDP in 2012. The offshore industrial park has also helped balance urban and rural development. However, the offshore industrial park is considered a major source of air pollution in nearby counties, especially through the emission of volatile organic compounds (VOCs). Studies have found that exposure to high levels of some VOCs causes adverse health effects on both humans and ecosystems. Since both the health and ecological effects of air pollution have been the subject of numerous studies in recent years, estimating VOC emissions is a critical issue. Current emission estimation techniques usually rely on emission factors; because this methodology considers the totality of equipment activities based on statistical assumptions, large uncertainties are associated with these coefficients. This study attempts to estimate the VOC emissions of the Yunlin offshore industrial park using an inverse atmospheric dispersion model. The inverse modeling approach combines dispersion modeling results driven by a unit emission rate with observations at air quality stations in Yunlin. The American Meteorological Society-Environmental Protection Agency Regulatory Model (AERMOD) is chosen as the dispersion modeling tool in this study. Observed concentrations of VOCs are collected by the Taiwanese Environmental Protection Administration (TW EPA). In addition, the study analyzes meteorological data including wind speed, wind direction, pressure, and temperature. VOC emission estimates from the inverse atmospheric dispersion model will be compared to the official statistics released by the Yunlin offshore industrial park. Comparison of estimated concentrations from inverse dispersion modeling with official statistics will give a better understanding of the uncertainty in the regulatory methodology. The model results will be discussed in the context of the importance of evaluating air pollution exposure in risk assessment.
Uncertainties in prescribed fire emissions and their impact on smoke dispersion predictions
Prescribed burning (PB) is practiced throughout the Southeastern U.S. for its important ecological and safety benefits such as preparing for tree seeding and planting, controlling disease and tree competition, managing understory debris, perpetuating fire-dependent plant species,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions (height, profile of particle location, volcanic vent parameters) are known only approximately at best, and other features of the governing system, such as the windfield, are stochastic. These uncertainties make forecasting plume motion difficult. As a result, ash advisories based on a deterministic approach tend to be conservative and often over- or under-estimate the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. The framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14-16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed input parameter probability distributions, and a probabilistic spatio-temporal estimate of ash presence, are computed.
Inter-model variability in hydrological extremes projections for Amazonian sub-basins
NASA Astrophysics Data System (ADS)
Andres Rodriguez, Daniel; Garofolo, Lucas; Lázaro de Siqueira Júnior, José; Samprogna Mohor, Guilherme; Tomasella, Javier
2014-05-01
Irreducible uncertainties due to the limitations of knowledge, the chaotic nature of the climate system, and the human decision-making process drive uncertainties in climate change projections. Such uncertainties affect impact studies, mainly when associated with extreme events, and complicate the decision-making processes aimed at mitigation and adaptation. However, these uncertainties also open the possibility of exploratory analyses of a system's vulnerability to different scenarios. Using projections from several climate models allows uncertainty issues to be addressed through multiple runs that explore a wide range of potential impacts and their implications for potential vulnerabilities. Statistical approaches for the analysis of extreme values are usually based on stationarity assumptions. However, nonstationarity is relevant at the time scales considered for extreme value analyses and can have great implications in dynamic complex systems, particularly under climate change. Because of this, nonstationarity in the statistical distribution parameters must be considered. We carried out a study of the dispersion in projections of hydrological extremes, using climate change projections from several climate models to feed the Distributed Hydrological Model of the National Institute for Space Research, MHD-INPE, applied to Amazonian sub-basins. This large-scale hydrological model uses a TOPMODEL approach to solve runoff generation processes at the grid-cell scale. The MHD-INPE model was calibrated for 1970-1990 using observed meteorological data and comparing observed and simulated discharges with several performance coefficients. Hydrological model integrations were performed for the historical period (1970-1990) and for the future period (2010-2100). Because climate models simulate the variability of the climate system in statistical terms rather than reproducing the historical behavior of climate variables, the performance of the model runs during the historical period, when fed with climate model data, was tested using descriptors of the flow duration curves. The analyses of projected extreme values were carried out considering nonstationarity of the GEV distribution parameters and compared with extreme events in the present climate. The results show inter-model variability as a broad dispersion of projected extreme values. Such dispersion implies different degrees of socio-economic impact associated with extreme hydrological events. Although no single optimum result exists, this variability allows the analysis of adaptation strategies and their potential vulnerabilities.
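A sketch of a nonstationary GEV fit in which the location parameter varies linearly with time, estimated by maximum likelihood on synthetic annual maxima. The trend, scale, and shape values are invented for illustration; note scipy's sign convention for the GEV shape parameter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Synthetic annual-maximum discharges with a linear trend in the GEV
# location parameter (all values illustrative).
years = np.arange(1970, 2100)
t = (years - years[0]) / 100.0                     # time in centuries
q_max = genextreme.rvs(c=-0.1, loc=1000.0 + 400.0 * t, scale=150.0,
                       random_state=rng)

def nll(p):
    """Negative log-likelihood of a GEV with location mu0 + mu1*t."""
    mu0, mu1, log_sig, c = p
    ll = genextreme.logpdf(q_max, c=c, loc=mu0 + mu1 * t,
                           scale=np.exp(log_sig)).sum()
    return -ll if np.isfinite(ll) else 1e12        # guard against bad support

fit = minimize(nll, x0=[900.0, 0.0, np.log(100.0), -0.05],
               method="Nelder-Mead", options={"maxiter": 5000})
mu0, mu1, log_sig, c = fit.x
print(f"location trend: {mu1:.0f} per century (true value 400)")
print(f"scale: {np.exp(log_sig):.0f} (true 150), shape: {c:.2f} (true -0.10)")
```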
Uncertainty estimation of water levels for the Mitch flood event in Tegucigalpa
NASA Astrophysics Data System (ADS)
Fuentes Andino, D. C.; Halldin, S.; Lundin, L.; Xu, C.
2012-12-01
Hurricane Mitch in 1998 left devastating floods in Tegucigalpa, the capital city of Honduras. Simulation of elevated water surfaces provides a good way to understand the hydraulic mechanisms of large flood events. In this study, the one-dimensional HEC-RAS model under steady flow conditions, together with the two-dimensional Lisflood-fp model, was used to estimate the water levels of the Mitch event in the river reaches at Tegucigalpa. Parameter uncertainty of the models was investigated using the generalized likelihood uncertainty estimation (GLUE) framework. Because of the extremely large magnitude of the Mitch flood, no hydrometric measurements were taken during the event. However, post-event indirect measurements of discharge and observed water levels were obtained in previous work by JICA and the USGS. To overcome the lack of direct hydrometric measurements, uncertainty in the discharge was estimated. Both models could constrain the value of the channel roughness well, though more dispersion resulted for the floodplain value. Analysis of the data interaction showed a tradeoff between discharge at the outlet and floodplain roughness for the 1D model. The estimated discharge range at the outlet of the study area encompassed the value indirectly estimated by JICA; however, the indirect method used by the USGS overestimated the value. If behavioral parameter sets can reproduce water surface levels well for past events such as Mitch, more reliable predictions for future events can be expected. The results acquired in this research provide guidelines for modeling past floods when no direct data were measured during the event, and for predicting future large events taking uncertainty into account. The obtained range of the uncertain flood extent will be a useful outcome for decision makers.
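A minimal GLUE sketch under stated assumptions: a toy stage-discharge relation stands in for HEC-RAS/Lisflood-fp, the uncertain peak discharge is sampled alongside the roughness parameters, and an RMSE threshold defines the behavioural sets. All functional forms and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def water_level(Q, n_ch, n_fp):
    """Toy stage-discharge model: level vs. discharge and Manning roughness
    (channel, floodplain). Structure and coefficients are illustrative."""
    return (2.0 + 0.8 * (Q / 1000.0) ** 0.6 * (n_ch / 0.035) ** 0.4
                + 0.5 * (n_fp / 0.10) ** 0.3)

obs_levels = np.array([4.1, 4.3, 3.9])      # post-event marks (illustrative)

# GLUE: sample parameters (including the uncertain peak discharge) and keep
# "behavioural" sets whose likelihood measure passes a threshold.
N = 50_000
Q    = rng.uniform(1500, 4500, N)           # uncertain peak discharge, m^3/s
n_ch = rng.uniform(0.02, 0.08, N)
n_fp = rng.uniform(0.05, 0.25, N)

sim = (water_level(Q[:, None], n_ch[:, None], n_fp[:, None])
       + rng.normal(0, 0.05, (N, obs_levels.size)))
rmse = np.sqrt(np.mean((sim - obs_levels) ** 2, axis=1))
behavioural = rmse < 0.25                   # assumed acceptance threshold

print("behavioural fraction:", behavioural.mean())
print("Q 5-95% range:", np.percentile(Q[behavioural], [5, 95]).round(0))
print("n_ch 5-95% range:", np.percentile(n_ch[behavioural], [5, 95]).round(3))
```

The posterior spread of n_fp relative to n_ch reproduces, in caricature, the paper's finding that the channel roughness is better constrained than the floodplain value.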
Stenemo, Fredrik; Jarvis, Nicholas
2007-09-01
A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the concentration simulated with no errors, c(sim), and the 80th percentile concentration for the scenario. The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful.
NASA Astrophysics Data System (ADS)
Strigari, Louis E.; Frenk, Carlos S.; White, Simon D. M.
2018-06-01
We compare the transverse velocity dispersions recently measured within the Sculptor dwarf spheroidal galaxy to the predictions of our previously published dynamical model. This was designed to fit the observed number count and velocity dispersion profiles of both metal-rich and metal-poor stars, both in cored and in cusped potentials. At the projected radius where the proper motions (PMs) were measured, this model (with no change in parameters) predicts transverse dispersions in the range of 6-9.5 km s^-1, with the tangential dispersion about 1 km s^-1 larger than the (projected) radial dispersion. Both dispersions are predicted to be about 1 km s^-1 larger for metal-poor than for metal-rich stars. At this projected radius, cored and cusped potentials predict almost identical transverse dispersions. The measured tangential dispersion (8.5 ± 3.2 km s^-1) agrees remarkably well with these predictions, while the measured radial dispersion (11.5 ± 4.3 km s^-1) differs only at about the 1σ level. Thus, the PM data are in excellent agreement with previous data, but do not help to distinguish between cored and cusped potentials. This will require velocity dispersion data (either from PMs or from radial velocities) with uncertainties well below 1 km s^-1 over a range of projected radii.
Knopman, Debra S.; Voss, Clifford I.
1987-01-01
The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
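Several of these principles can be reproduced with the classical Ogata-Banks analytical solution of the one-dimensional ADE for a continuous source; the sketch below computes scaled sensitivities to velocity and dispersion by central finite differences, illustrating in particular principles (1) and (4). Parameter values are arbitrary.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Analytical solution of the 1-D advection-dispersion equation for a
    continuous source at x = 0 (Ogata-Banks)."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

x, v, D = 10.0, 0.5, 0.1                   # m, m/d, m^2/d (illustrative)
t = np.linspace(1.0, 60.0, 200)

# Scaled sensitivities dC/d(ln p) by central finite differences.
eps = 1e-4
dc_dv = (ogata_banks(x, t, v*(1+eps), D) - ogata_banks(x, t, v*(1-eps), D)) / (2*eps)
dc_dD = (ogata_banks(x, t, v, D*(1+eps)) - ogata_banks(x, t, v, D*(1-eps))) / (2*eps)

i_v, i_D = np.argmax(np.abs(dc_dv)), np.argmax(np.abs(dc_dD))
print(f"peak |sensitivity| to v: {abs(dc_dv[i_v]):.2f} at t = {t[i_v]:.1f} d")
print(f"peak |sensitivity| to D: {abs(dc_dD[i_D]):.2f} at t = {t[i_D]:.1f} d")
print("ratio of v to D sensitivity:", round(abs(dc_dv[i_v]) / abs(dc_dD[i_D]), 1))
```

The velocity sensitivity peaks as the solute front passes the observation point (t near x/v) and exceeds the dispersion sensitivity by roughly an order of magnitude, consistent with principles (1) and (4).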
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, School of Mathematical Science, MOELSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Shu, Ruiwen, E-mail: rshu2@math.wisc.edu
In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker-Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker-Planck operator is accelerated by a spectrally accurate splitting method. Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.
The Relation Between Black Hole Mass and Velocity Dispersion at z ~ 0.37
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treu, T.
2004-10-25
The velocity dispersion of 7 Seyfert 1 galaxies at z ≈ 0.37 is measured using high signal-to-noise Keck spectra. Black hole (BH) mass estimates are obtained via an empirically calibrated photoionization method. We derive the BH mass-velocity dispersion relationship at z ≈ 0.37. We find an offset with respect to the local relationship, in the sense of somewhat lower velocity dispersion at a fixed BH mass at z ≈ 0.37 than today, significant at the 97% level. The offset corresponds to Δ log σ = -0.16 with an rms scatter of 0.13 dex. If confirmed by larger samples and independent checks on systematic uncertainties and selection effects, this result would be consistent with spheroids evolving faster than BHs in the past 4 Gyr and inconsistent with pure luminosity evolution.
Numerical and probabilistic analysis of asteroid and comet impact hazard mitigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plesko, Catherine S; Weaver, Robert P; Huebner, Walter F
2010-09-09
The possibility of asteroid and comet impacts on Earth has received significant recent media and scientific attention. Still, there are many outstanding questions about the correct response once a potentially hazardous object (PHO) is found. Nuclear munitions are often suggested as a deflection mechanism because they have a high internal energy per unit launch mass. However, major uncertainties remain about the use of nuclear munitions for hazard mitigation. There are large uncertainties in a PHO's physical response to a strong deflection or dispersion impulse like that delivered by nuclear munitions. Objects smaller than 100 m may be solid, and objects at all sizes may be 'rubble piles' with large porosities and little strength. Objects with these different properties would respond very differently, so the effects of object properties must be accounted for. Recent ground-based observations and missions to asteroids and comets have improved the planetary science community's understanding of these objects. Computational power and simulation capabilities have improved such that it is possible to numerically model the hazard mitigation problem from first principles. Before we know that explosive yield Y at height h or depth -h from the target surface will produce a momentum change in or dispersion of a PHO, we must quantify energy deposition into the system of particles that make up the PHO. Here we present the initial results of a parameter study in which we model the efficiency of energy deposition from a stand-off nuclear burst onto targets made of PHO constituent materials.
Consensus building for interlaboratory studies, key comparisons, and meta-analysis
NASA Astrophysics Data System (ADS)
Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza
2017-06-01
Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria to determine which measurement results to include, and which to exclude, as opposed to purely statistical considerations, and also how to weigh them. Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian procedure; and the Linear Pool. These three procedures have been implemented and made widely accessible in a Web-based application (NIST Consensus Builder). We illustrate principles, statistical models, and data reduction procedures in four examples: (i) the measurement of the Newtonian constant of gravitation; (ii) the measurement of the half-lives of radioactive isotopes of caesium and strontium; (iii) the comparison of two alternative treatments for carotid artery stenosis; and (iv) a key comparison where the measurand was the calibration factor of a radio-frequency power sensor.
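Of the three procedures, the DerSimonian-Laird estimator is simple enough to state in a few lines; the sketch below implements its method-of-moments estimate of the dark-uncertainty variance tau^2 on invented interlaboratory data (not one of the paper's four examples, and without the NIST Consensus Builder's additional diagnostics).

```python
import numpy as np

def dersimonian_laird(x, u):
    """DerSimonian-Laird consensus value for measured values x with stated
    standard uncertainties u; returns (consensus, std. uncertainty, tau)."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    w = 1.0 / u**2
    xbar = np.sum(w * x) / np.sum(w)
    Q = np.sum(w * (x - xbar) ** 2)
    k = x.size
    # Method-of-moments estimate of the dark-uncertainty variance tau^2.
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wstar = 1.0 / (u**2 + tau2)              # inflated weights
    mu = np.sum(wstar * x) / np.sum(wstar)
    return mu, np.sqrt(1.0 / np.sum(wstar)), np.sqrt(tau2)

# Illustrative over-dispersed interlaboratory data.
x = [10.3, 10.5, 9.8, 10.9, 10.1]
u = [0.10, 0.15, 0.10, 0.20, 0.12]
mu, umu, tau = dersimonian_laird(x, u)
print(f"consensus = {mu:.3f} +/- {umu:.3f}, dark uncertainty tau = {tau:.3f}")
```

This is the "laboratory effects" style of adjustment: tau^2 is added to each stated variance, rather than inflating the uncertainties by a multiplicative factor.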
Implication of Broadband Dispersion Measurements in Constraining Upper Mantle Velocity Structures
NASA Astrophysics Data System (ADS)
Kuponiyi, A.; Kao, H.; Cassidy, J. F.; Darbyshire, F. A.; Dosso, S. E.; Gosselin, J. M.; Spence, G.
2017-12-01
Dispersion measurements from earthquake (EQ) data are traditionally inverted to obtain 1-D shear-wave velocity models, which provide information on deep earth structures. However, in many cases, EQ-derived dispersion measurements lack short-period information, which theoretically should provide details of shallow structures. We show that in at least some cases short-period information, such as can be obtained from ambient seismic noise (ASN) processing, must be combined with EQ dispersion measurements to properly constrain deeper (e.g. upper-mantle) structures. To verify this, synthetic dispersion data are generated using hypothetical velocity models under four scenarios: EQ only (with and without deep low-velocity layers) and combined EQ and ASN data (with and without deep low-velocity layers). The now "broadband" dispersion data are inverted using a trans-dimensional Bayesian framework with the aim of recovering the initial velocity models and assessing uncertainties. Our results show that the deep low-velocity layer could only be recovered from the inversion of the combined ASN-EQ dispersion measurements. Given this result, we proceed to describe a method for obtaining reliable broadband dispersion measurements from both ASN and EQ and show examples for real data. The implication of this study in the characterization of lithospheric and upper mantle structures, such as the Lithosphere-Asthenosphere Boundary (LAB), is also discussed.
Uter, Wolfgang; Hildebrandt, Stephan; Geier, Johannes; Schnuch, Axel; Lessmann, Holger
2007-10-01
Disperse blue (DB) 106 and 124 are important textile dye allergens. However, the dye raw material is impure, leading to uncertainty regarding the actual patch test (PT) concentration. The aims were to examine (i) the allergen content of previously and currently used DB 106 and 124 and of the respective mix, and (ii) the frequency of positive PT reactions to the DB 106/124 mix and to the single compounds in consecutive PT patients. High performance liquid chromatography (HPLC) was used for the analysis and purification of DB 106 and 124, respectively, together with a descriptive analysis of PT data from the Information Network of Departments of Dermatology obtained between January 2003 and December 2005. Retrospectively, 2 batches of the DB 106/124 mix proved to contain an amount of allergen different from the one declared (based on information from suppliers of the raw material). However, since February 2005, DB 106 and 124 have been available at a reliable concentration of 0.3% in petrolatum. In 2005, the prevalence of positive PT reactions to both the mix (0.89%) and the single constituents combined (0.56%) did not qualify them for inclusion in the standard series. Quality control, providing accurate test concentrations of allergens based on technical-grade raw materials, is necessary for a valid diagnosis of contact allergy and comparable epidemiological data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alarcon, J. M.; Weiss, C.
2018-05-08
We study the nucleon electromagnetic form factors (EM FFs) using a recently developed method combining Chiral Effective Field Theory ($\chi$EFT) and dispersion analysis. The spectral functions on the two-pion cut at $t > 4 M_\pi^2$ are constructed using the elastic unitarity relation and an $N/D$ representation. $\chi$EFT is used to calculate the real functions $J_\pm^1(t) = f_\pm^1(t)/F_\pi(t)$ (ratios of the complex $\pi\pi \rightarrow N\bar{N}$ partial-wave amplitudes and the timelike pion FF), which are free of $\pi\pi$ rescattering. Rescattering effects are included through the empirical timelike pion FF $|F_\pi(t)|^2$. The method allows us to compute the isovector EM spectral functions up to $t \sim 1$ GeV$^2$ with controlled accuracy (LO, NLO, and partial N2LO). With the spectral functions we calculate the isovector nucleon EM FFs and their derivatives at $t = 0$ (EM radii, moments) using subtracted dispersion relations. We predict the values of higher FF derivatives with minimal uncertainties and explain their collective behavior. Finally, we estimate the individual proton and neutron FFs by adding an empirical parametrization of the isoscalar sector. Excellent agreement with the present low-$Q^2$ FF data is achieved up to $\sim 0.5$ GeV$^2$ for $G_E$, and up to $\sim 0.2$ GeV$^2$ for $G_M$. Our results can be used to guide the analysis of low-$Q^2$ elastic scattering data and the extraction of the proton charge radius.
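For reference, a subtracted dispersion relation of the kind referred to here has the standard once-subtracted form (a generic statement of the method, not an equation copied from the paper):

```latex
F_i(t) \;=\; F_i(0) \;+\; \frac{t}{\pi} \int_{4 M_\pi^2}^{\infty}
\frac{\operatorname{Im} F_i(t')}{t'\,(t' - t)}\, dt'
```

where Im F_i(t') is the spectral function constructed from the unitarity/χEFT calculation and the subtraction constant F_i(0) is fixed by the nucleon charge or magnetic moment.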
NASA Astrophysics Data System (ADS)
Ziehn, T.; Nickless, A.; Rayner, P. J.; Law, R. M.; Roff, G.; Fraser, P.
2014-03-01
This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly time scale. Prior uncertainties are derived on a weekly time scale for biosphere fluxes and fossil fuel emissions from high resolution BIOS2 model runs and from the Fossil Fuel Data Assimilation System (FFDAS), respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimization scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50% we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.
NASA Astrophysics Data System (ADS)
Ziehn, T.; Nickless, A.; Rayner, P. J.; Law, R. M.; Roff, G.; Fraser, P.
2014-09-01
This paper describes the generation of optimal atmospheric measurement networks for determining carbon dioxide fluxes over Australia using inverse methods. A Lagrangian particle dispersion model is used in reverse mode together with a Bayesian inverse modelling framework to calculate the relationship between weekly surface fluxes, comprising contributions from the biosphere and fossil fuel combustion, and hourly concentration observations for the Australian continent. Meteorological driving fields are provided by the regional version of the Australian Community Climate and Earth System Simulator (ACCESS) at 12 km resolution at an hourly timescale. Prior uncertainties are derived on a weekly timescale for biosphere fluxes and fossil fuel emissions from high-resolution model runs using the Community Atmosphere Biosphere Land Exchange (CABLE) model and the Fossil Fuel Data Assimilation System (FFDAS) respectively. The influence from outside the modelled domain is investigated, but proves to be negligible for the network design. Existing ground-based measurement stations in Australia are assessed in terms of their ability to constrain local flux estimates from the land. We find that the six stations that are currently operational are already able to reduce the uncertainties on surface flux estimates by about 30%. A candidate list of 59 stations is generated based on logistic constraints and an incremental optimisation scheme is used to extend the network of existing stations. In order to achieve an uncertainty reduction of about 50%, we need to double the number of measurement stations in Australia. Assuming equal data uncertainties for all sites, new stations would be mainly located in the northern and eastern part of the continent.
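The incremental network extension described above can be sketched as a greedy loop over candidate sites, scoring each candidate by the posterior flux uncertainty of a linear Gaussian inversion. Everything below (footprints, dimensions, error levels) is synthetic and only illustrates the selection logic, not the paper's transport model or inventories.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_candidates, n_obs_per_site = 50, 12, 80

B = np.eye(n_flux)                            # prior flux error covariance
H_all = [rng.normal(size=(n_obs_per_site, n_flux)) * 0.1
         for _ in range(n_candidates)]        # per-site footprints (toy)
r = 0.5 ** 2                                  # observation error variance

def posterior_var(site_idx):
    """Total posterior flux variance when observing from the given sites."""
    H = np.vstack([H_all[i] for i in site_idx])
    R = np.eye(H.shape[0]) * r
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return np.trace(B - K @ H @ B)

network = []
for _ in range(4):                            # greedily add 4 stations
    scores = {i: posterior_var(network + [i])
              for i in range(n_candidates) if i not in network}
    best = min(scores, key=scores.get)
    network.append(best)
    red = 1 - scores[best] / np.trace(B)
    print(f"added site {best:2d}, uncertainty reduction = {red:.1%}")
```

In the paper the score is the uncertainty reduction on continental flux estimates; the greedy structure of the optimization is the same.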
NASA Astrophysics Data System (ADS)
Li, Bo; Rui, Xiaoting
2018-01-01
Poor dispersion characteristics of rockets due to the vibration of the Multiple Launch Rocket System (MLRS) have restricted MLRS development for several decades. Vibration control is a key technique for improving the dispersion characteristics of rockets. For a mechanical system such as the MLRS, the major difficulty in designing a control strategy that achieves the desired vibration control performance is guaranteeing the robustness and stability of the control system in the presence of uncertainties and nonlinearities. To approach this problem, a computed torque controller integrated with a radial basis function neural network is proposed to achieve high-precision vibration control for the MLRS. In this paper, the vibration response of a computed-torque-controlled MLRS is described. The azimuth and elevation mechanisms of the MLRS are driven by permanent magnet synchronous motors and are assumed to be rigid. First, the dynamic model of the motor-mechanism coupling system is established using the Lagrange method and field-oriented control theory. Then, to deal with the nonlinearities, a computed torque controller is designed to control the vibration of the MLRS while it is firing a salvo of rockets. Furthermore, to compensate for the lumped uncertainty due to parametric variations and unmodeled dynamics in the design of the computed torque controller, a radial basis function neural network estimator is developed to adapt to the uncertainty based on Lyapunov stability theory. Finally, the simulation results demonstrate the effectiveness of the proposed control system and show that the proposed controller is robust with regard to the uncertainty.
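The standard computed-torque law with a neural compensation term can be summarized as follows; the notation is the generic textbook form, not necessarily the paper's:

$$ \boldsymbol{\tau} = \mathbf{M}(\mathbf{q})\left(\ddot{\mathbf{q}}_d + \mathbf{K}_v\,\dot{\mathbf{e}} + \mathbf{K}_p\,\mathbf{e}\right) + \mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\,\dot{\mathbf{q}} + \mathbf{G}(\mathbf{q}) + \hat{\mathbf{W}}^{T}\boldsymbol{\phi}(\mathbf{x}), $$

where $\mathbf{e} = \mathbf{q}_d - \mathbf{q}$ is the tracking error, $\boldsymbol{\phi}$ are Gaussian radial basis functions, and the weights $\hat{\mathbf{W}}$ estimate the lumped uncertainty through a Lyapunov-derived adaptation law.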
Naranjo, Ramon C.
2013-01-01
Biochemical reactions that occur in the hyporheic zone are highly dependent on the time that solutes are in contact with sediments of the riverbed. In this investigation, we developed a 2-D longitudinal flow and solute-transport model to estimate the spatial distribution of mean residence time in the hyporheic zone. The flow model was calibrated using observations of temperature and pressure, and the mean residence times were simulated using the age-mass approach for steady-state flow conditions. The approach used in this investigation includes the mixing of different ages and flow paths of water through advection and dispersion. Uncertainty of flow and transport parameters was evaluated using standard Monte Carlo methods and the generalized likelihood uncertainty estimation (GLUE) method. Results of parameter estimation support the presence of a low-permeability zone in the riffle area that induced horizontal flow at a shallow depth within the riffle area. This establishes shallow and localized flow paths and limits deep vertical exchange. For the optimal model, mean residence times were found to be relatively long (9-40 days). The uncertainty of hydraulic conductivity resulted in a mean interquartile range (IQR) of 13 days across all piezometers and was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in mean IQRs of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity in simulating mean residence times in a riffle-pool sequence.
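A GLUE analysis of this kind reduces to a few steps: sample parameters from priors, weight each set by an informal likelihood against observations, and form weighted quantiles of the derived residence time. The model, observations, and numbers below are toy stand-ins, not the paper's calibrated 2-D flow and transport model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observations: seepage fluxes [m/s] inferred from temperature profiles (toy).
q_obs = np.array([4.2e-6, 5.1e-6, 4.6e-6])
i_hyd, L = 0.05, 5.0                      # hydraulic gradient [-], path length [m]
sigma_q = 5e-7                            # assumed observation error [m/s]

n_mc = 20000
K = 10 ** rng.uniform(-5.0, -3.0, n_mc)   # hydraulic conductivity prior [m/s]
por = rng.uniform(0.2, 0.4, n_mc)         # porosity prior [-]

q_sim = K * i_hyd                         # Darcy flux for each parameter set
sse = ((q_sim[:, None] - q_obs) ** 2).sum(axis=1)
like = np.exp(-sse / (2 * sigma_q ** 2))  # informal GLUE likelihood
beh = like > 0.01 * like.max()            # behavioral parameter sets

tau = L * por / (K * i_hyd) / 86400.0     # mean residence time [days]
w = like[beh] / like[beh].sum()
order = np.argsort(tau[beh])
cdf = np.cumsum(w[order])
q25, q50, q75 = np.interp([0.25, 0.5, 0.75], cdf, tau[beh][order])
print(f"residence time: median {q50:.1f} d, IQR {q75 - q25:.1f} d")
```

The IQR of the weighted residence-time distribution plays the role of the per-piezometer IQRs reported in the abstract.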
Short-time fractional Fourier methods for the time-frequency representation of chirp signals.
Capus, Chris; Brown, Keith
2003-06-01
The fractional Fourier transform (FrFT) provides a valuable tool for the analysis of linear chirp signals. This paper develops two short-time FrFT variants suited to the analysis of multicomponent and nonlinear chirp signals. The outputs have properties similar to those of the short-time Fourier transform (STFT) but show improved time-frequency resolution. The FrFT is a parameterized transform whose parameter a is related to chirp rate. The two short-time implementations differ in how the value of a is chosen. In the first, a global optimization procedure selects one value of a with reference to the entire signal. In the second, a values are selected independently for each windowed section. Comparative variance measures based on the Gaussian function are given and are shown to be consistent with the uncertainty principle in fractional domains. For appropriately chosen FrFT orders, the derived fractional-domain uncertainty relationship is minimized for Gaussian-windowed linear chirp signals. The two short-time FrFT algorithms have complementary strengths, demonstrated by time-frequency representations for a multicomponent bat chirp, a highly nonlinear quadratic chirp, and an output pulse from a finite-difference sonar model with dispersive change. These representations illustrate the improvements obtained using FrFT-based algorithms compared to the STFT.
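The first (global-order) variant can be sketched as an ordinary windowed analysis in which the DFT is replaced by a fractional power of the unitary DFT matrix. This is only one (non-unique) discretization of the FrFT, and the window, hop, and order below are arbitrary illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.linalg import dft, fractional_matrix_power
from scipy.signal.windows import gaussian

def frft_matrix(n, a):
    """One simple (non-unique) discrete FrFT: fractional power of the unitary DFT."""
    return fractional_matrix_power(dft(n, scale='sqrtn'), a)

def stfrft(x, a, win_len=64, hop=16):
    """Short-time FrFT with a single global order a (the paper's first variant)."""
    w = gaussian(win_len, std=win_len / 6)
    F = frft_matrix(win_len, a)
    frames = [x[i:i + win_len] * w
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.stack([F @ f for f in frames], axis=1))

t = np.linspace(0, 1, 512)
chirp = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))   # linear chirp
tf = stfrft(chirp, a=0.8)
print(tf.shape)   # (fractional-domain bins, frames)
```

The second variant would instead score a grid of orders a for each frame (e.g., by energy concentration) and keep the best one per window.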
Stellar Velocity Dispersion: Linking Quiescent Galaxies to Their Dark Matter Halos
NASA Astrophysics Data System (ADS)
Zahid, H. Jabran; Sohn, Jubee; Geller, Margaret J.
2018-06-01
We analyze the Illustris-1 hydrodynamical cosmological simulation to explore the stellar velocity dispersion of quiescent galaxies as an observational probe of dark matter halo velocity dispersion and mass. Stellar velocity dispersion is proportional to dark matter halo velocity dispersion for both central and satellite galaxies. The dark matter halos of central galaxies are in virial equilibrium and thus the stellar velocity dispersion is also proportional to dark matter halo mass. This proportionality holds even when a line-of-sight aperture dispersion is calculated in analogy to observations. In contrast, at a given stellar velocity dispersion, the dark matter halo mass of satellite galaxies is smaller than virial equilibrium expectations. This deviation from virial equilibrium probably results from tidal stripping of the outer dark matter halo. Stellar velocity dispersion appears insensitive to tidal effects and thus reflects the correlation between stellar velocity dispersion and dark matter halo mass prior to infall. There is a tight relation (≲0.2 dex scatter) between line-of-sight aperture stellar velocity dispersion and dark matter halo mass suggesting that the dark matter halo mass may be estimated from the measured stellar velocity dispersion for both central and satellite galaxies. We evaluate the impact of treating all objects as central galaxies if the relation we derive is applied to a statistical ensemble. A large fraction (≳2/3) of massive quiescent galaxies are central galaxies and systematic uncertainty in the inferred dark matter halo mass is ≲0.1 dex thus simplifying application of the simulation results to currently available observations.
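The virial-equilibrium argument behind this proxy can be summarized as follows (proportionality constants depend on the adopted overdensity definition):

$$ M_{\rm halo} \;\propto\; \frac{\sigma_{DM}^{3}}{G\,H(z)}, \qquad \sigma_{\ast} \;\propto\; \sigma_{DM} \;\;\Longrightarrow\;\; M_{\rm halo} \;\propto\; \sigma_{\ast}^{3}, $$

which is why a tight relation between stellar velocity dispersion and halo mass survives for central galaxies, while tidal stripping moves satellites off the virial expectation at fixed $\sigma_{\ast}$.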
Action Learning, Performativity and Negative Capability
ERIC Educational Resources Information Center
Edmonstone, John
2016-01-01
The paper examines the concept of negative capability as a human capacity for containment and contrasts it with well-valued positive capability as expressed through performativity in organisations and society. It identifies the problem of dispersal--the complex ways we behave in order to avoid the emotional challenges of living with uncertainty.…
A review of the generalized uncertainty principle.
Tawfik, Abdel Nasser; Diab, Abdel Magied
2015-12-01
Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.
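The minimal-length statement reviewed here is usually written as a quadratic-in-momentum correction to the Heisenberg relation; with the (dimensionful) GUP parameter $\beta$,

$$ \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left[1 + \beta\,(\Delta p)^2\right] \quad\Longrightarrow\quad \Delta x_{\min} = \hbar\sqrt{\beta}, $$

so bounds on $\beta$ (often quoted through the dimensionless $\beta_0$ with $\beta = \beta_0/(M_{\rm Pl} c)^2$) translate directly into bounds on the minimal measurable length.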
Stochastic Analysis and Design of Heterogeneous Microstructural Materials System
NASA Astrophysics Data System (ADS)
Xu, Hongyi
Advanced materials systems are new materials composed of multiple traditional constituents but with complex microstructure morphologies, which lead to properties superior to those of conventional materials. To accelerate the development of new advanced materials systems, the objective of this dissertation is to develop a computational design framework and the associated techniques for design automation of microstructural materials systems, with an emphasis on addressing the uncertainties associated with the heterogeneity of microstructural materials. Five key research tasks are identified: design representation, design evaluation, design synthesis, materials informatics, and uncertainty quantification. Design representation of microstructure includes statistical characterization and stochastic reconstruction. This dissertation develops a new descriptor-based methodology, which characterizes 2D microstructures using descriptors of composition, dispersion, and geometry. Statistics of 3D descriptors are predicted based on 2D information to enable 2D-to-3D reconstruction. An efficient sequential reconstruction algorithm is developed to reconstruct statistically equivalent random 3D digital microstructures. In design evaluation, a stochastic decomposition and reassembly strategy is developed to deal with the high computational costs and uncertainties induced by material heterogeneity: the properties of Representative Volume Elements (RVEs) are predicted by stochastically reassembling Statistical Volume Elements (SVEs) with stochastic properties into a coarse representation of the RVE. In design synthesis, a new descriptor-based design framework is developed, which integrates computational methods of microstructure characterization and reconstruction, sensitivity analysis, Design of Experiments (DOE), metamodeling, and optimization to enable parametric optimization of the microstructure for achieving the desired material properties. Materials informatics is studied to efficiently reduce the dimension of the microstructure design space; this dissertation develops a machine-learning-based methodology to identify the key microstructure descriptors that strongly impact the properties of interest. In uncertainty quantification, a comparative study of data-driven random process models is conducted to provide guidance for choosing the most accurate model in statistical uncertainty quantification, and two new goodness-of-fit metrics are developed to provide quantitative measurements of random process models' accuracy. The benefits of the proposed methods are demonstrated by the example of designing the microstructure of polymer nanocomposites. This dissertation provides material-generic, intelligent modeling and design methodologies and techniques to accelerate the analysis and design of new microstructural materials systems.
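Composition, dispersion, and geometry descriptors of a binary 2D microstructure can be computed with basic image analysis. The sketch below uses a synthetic random image, and the descriptor definitions are illustrative choices rather than the dissertation's exact set.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
# Synthetic binary 2D microstructure: thresholded smoothed noise.
img = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 4) > 0.05

# Composition descriptor: filler volume (area) fraction.
vf = img.mean()

# Dispersion descriptors: cluster count and nearest-neighbour centroid spacing.
labels, n_clusters = ndimage.label(img)
idx = range(1, n_clusters + 1)
centroids = np.array(ndimage.center_of_mass(img, labels, idx))
d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn_mean = d.min(axis=1).mean() if n_clusters > 1 else np.nan

# Geometry descriptor: mean equivalent radius of clusters.
areas = np.asarray(ndimage.sum(img, labels, idx))
r_eq = np.sqrt(areas / np.pi).mean()

print(f"volume fraction {vf:.3f}, clusters {n_clusters}, "
      f"mean NN spacing {nn_mean:.1f} px, mean radius {r_eq:.1f} px")
```

Descriptor vectors of this kind are what the framework feeds into DOE, metamodeling, and optimization.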
Adaptive change in corporate control practices.
Alexander, J A
1991-03-01
Multidivisional organizations are not concerned with what structure to adopt but with how they should exercise control within the divisional form to achieve economic efficiencies. Using an information-processing framework, I examined control arrangements between the headquarters and operating divisions of such organizations and how managers adapted control practices to accommodate increasing environmental uncertainty. Also considered were the moderating effects of contextual attributes on such adaptive behavior. Analyses of panel data from 97 multihospital systems suggested that organizations generally practice selective decentralization under conditions of increasing uncertainty but that organizational age, dispersion, and initial control arrangements significantly moderate the direction and magnitude of such changes.
Voyager 1 Saturn targeting strategy
NASA Technical Reports Server (NTRS)
Cesarone, R. J.
1980-01-01
A trajectory targeting strategy for the Voyager 1 Saturn encounter has been designed to accommodate predicted uncertainties in Titan's ephemeris while maximizing spacecraft safety and science return. The encounter is characterized by a close Titan flyby 18 hours prior to Saturn periapse. Retargeting of the nominal trajectory to account for late updates in Titan's estimated position can disperse the ascending node location, which is nominally situated at a radius of low expected particle density in Saturn's ring plane. The strategy utilizes a floating Titan impact vector magnitude to minimize this dispersion. Encounter trajectory characteristics and optimal tradeoffs are presented.
Dispersion relations with crossing symmetry for $\pi\pi$ D- and F1-wave amplitudes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaminski, R.
Results of implementing dispersion relations with an imposed crossing-symmetry condition in the description of $\pi\pi$ D- and F1-wave amplitudes are presented. We use relations with only one subtraction, which leads to small uncertainties in the results and to strong constraints on the tested $\pi\pi$ amplitudes. The presented equations are analogous to the once-subtracted GKPY equations and to the twice-subtracted Roy equations for the S and P waves. Numerical calculations are performed with S- and P-wave input amplitudes that have already been tested using the Roy and GKPY equations.
Zhou, Xiuru; Ye, Weili; Zhang, Bing
2016-03-01
Transaction costs and uncertainty are considered to be significant obstacles in the emissions trading market, especially for including nonpoint sources in water quality trading. This study develops a nonlinear programming model to simulate how uncertainty and transaction costs affect the performance of point/nonpoint source (PS/NPS) water quality trading in the Lake Tai watershed, China. The results demonstrate that PS/NPS water quality trading is a highly cost-effective instrument for emissions abatement in the Lake Tai watershed, which can save 89.33% of pollution abatement costs compared to trading only between nonpoint sources. However, uncertainty can significantly reduce cost-effectiveness by reducing trading volume. In addition, transaction costs from bargaining and decision making raise total pollution abatement costs directly and cause the offset system to deviate from the optimal state. Proper investment in the monitoring and measurement of nonpoint emissions, however, can decrease uncertainty and reduce total abatement costs. Finally, we show that the dispersed ownership of China's farmland brings high uncertainty and transaction costs into the PS/NPS offset system, even when the pollution abatement cost is lower than for point sources.
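A minimal version of such a trading model is a cost minimization with a discounted-credit constraint (the trading ratio penalizes uncertain nonpoint abatement) and a per-unit transaction cost. All coefficients below are invented for illustration, not taken from the Lake Tai study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy quadratic abatement costs for a point source (PS) and a nonpoint
# source (NPS); NPS effectiveness is uncertain, so a trading ratio
# t_ratio > 1 discounts NPS credits; tc is a per-unit transaction cost.
c_ps, c_nps = 8.0, 2.0
target, t_ratio, tc = 10.0, 1.3, 0.5

def total_cost(x):
    a_ps, a_nps = x                 # abatement by each source
    return c_ps * a_ps ** 2 + c_nps * a_nps ** 2 + tc * a_nps

cons = {'type': 'ineq',
        'fun': lambda x: x[0] + x[1] / t_ratio - target}  # discounted credits
res = minimize(total_cost, x0=[5.0, 5.0], constraints=cons,
               bounds=[(0, None), (0, None)])
print("PS = %.2f, NPS = %.2f, cost = %.1f" % (*res.x, res.fun))
```

Raising t_ratio (more uncertainty) or tc (more transaction cost) shifts abatement back toward the expensive point source, which is the qualitative effect the abstract reports.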
Remotely Sensed Data for High Resolution Agro-Environmental Policy Analysis
NASA Astrophysics Data System (ADS)
Welle, Paul
Policy analyses of agricultural and environmental systems are often limited by data constraints. Measurement campaigns can be costly, especially when the area of interest includes oceans, forests, agricultural regions, or other dispersed spatial domains. Satellite-based remote sensing offers a way to increase the spatial and temporal resolution of policy analysis concerning these systems. However, there are key limitations to the use of satellite data. Uncertainty in data derived from remote sensing can be significant, and traditional methods of policy analysis for managing uncertainty on large datasets can be computationally expensive. Moreover, while satellite data can increasingly provide estimates of some parameters, such as weather or crop use, demographic or economic data are unlikely to be estimated using these techniques. Managing these limitations in practical policy analysis remains a challenge. In this dissertation, I conduct five case studies that rely heavily on data sourced from orbital sensors. First, I assess the magnitude of climate and anthropogenic stress on coral reef ecosystems. Second, I conduct an impact assessment of soil salinity on California agriculture. Third, I measure the propensity of growers to adapt their cropping practices to soil salinization. Fourth, I analyze whether small-scale desalination units could be applied on farms in California in order to mitigate the effects of drought and salinization as well as prevent agricultural drainage from entering vulnerable ecosystems. And fifth, I assess the feasibility of satellite-based remote sensing for salinity measurement at global scale. Through these case studies, I confront both the challenges and the benefits associated with implementing satellite-based remote sensing for improved policy analysis.
Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
Polyphemus/Polair3D, from which IRSN's operational model ldX derives, was used to simulate the atmospheric dispersion of radionuclides at the Japan scale after the Fukushima disaster. A previous study with the screening method of Morris had shown that (i) the sensitivities depend strongly on the considered output; (ii) only a few of the inputs are non-influential on all considered outputs; and (iii) most influential inputs have either non-linear effects or are interacting. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators for each considered output were built in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaves in very distinct regimes relative to certain thresholds. Complementing the initial sample with wind perturbations set to extreme values allowed for a sensible improvement of some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies. Indeed, our goal is to characterize the model output uncertainty, but too little information is available about input uncertainties. Hence, calibration of the input distributions with observations and a Bayesian approach seems necessary. This would probably involve methods such as MCMC, which would be intractable without emulators.
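Once an emulator is cheap to evaluate, Sobol' indices follow from standard pick-freeze estimators. The sketch below uses an analytic test function as a stand-in for the dispersion-model emulator; the estimators (Saltelli first-order, Jansen total-effect) are standard, the function and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Stand-in for the emulator: Ishigami-like, nonlinear with interactions."""
    return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy(); ABi[:, i] = B[:, i]
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var         # first-order index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-effect index
    print(f"x{i+1}: S1 = {S1:.3f}, ST = {ST:.3f}")
```

A large gap between ST and S1 for an input flags the strong interactions that the Morris screening had only hinted at.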
NASA Astrophysics Data System (ADS)
Yin, X.; Xia, J.; Xu, H.
2016-12-01
Rayleigh and Love waves are two types of surface waves that travel along a free surface. Under the assumption of horizontally layered homogeneous media, Rayleigh-wave phase velocity can be defined as a function of frequency and four groups of earth parameters: P-wave velocity, SV-wave velocity, density, and thickness of each layer. Unlike Rayleigh waves, Love-wave phase velocities of a layered homogeneous earth model can be calculated from frequency and three groups of earth properties: SH-wave velocity, density, and thickness of each layer. Because Love-wave dispersion is independent of P-wave velocity, Love-wave dispersion curves are much simpler than Rayleigh-wave curves, which motivates research on joint inversion of Rayleigh and Love dispersion curves. This dissertation combines theoretical analysis with practical applications. For both laterally homogeneous media and radially anisotropic media, joint inversion approaches for Rayleigh and Love waves are proposed to improve the accuracy of S-wave velocities. Random white noise of 10% and 20% is added to the synthetic dispersion curves to assess the noise robustness of the proposed joint inversion method. Rayleigh and Love waves are insensitive to layers beneath a high-velocity or low-velocity layer and to the high-velocity layer itself; these low sensitivities give rise to a high degree of uncertainty in the inverted S-wave velocities of those layers. Because the sensitivity peaks of Rayleigh and Love waves fall in different frequency ranges, theoretical analyses demonstrate that joint inversion of the two wave types can ameliorate the inverted model. The lack of surface-wave (Rayleigh or Love) dispersion data may lead to inaccurate S-wave velocities in single-wave-type inversion, so this dissertation presents a joint inversion method of Rayleigh and Love waves that improves the accuracy of S-wave velocities. Finally, a real-world example verifies the accuracy and stability of the proposed joint inversion method. Keywords: Rayleigh wave; Love wave; sensitivity analysis; joint inversion method.
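A common way to pose such a joint inversion is a single weighted least-squares objective over both dispersion curves (the notation here is generic, not the dissertation's):

$$ \min_{\mathbf{m}} \;\sum_{i}\frac{\left[c_R^{\rm obs}(f_i)-c_R(f_i;\mathbf{m})\right]^2}{\sigma_{R,i}^2} \;+\; \sum_{j}\frac{\left[c_L^{\rm obs}(f_j)-c_L(f_j;\mathbf{m})\right]^2}{\sigma_{L,j}^2}, $$

where $\mathbf{m}$ collects the layer parameters. The Rayleigh term constrains SV-wave (and weakly P-wave) velocities while the Love term constrains SH-wave velocities, which is what makes the joint problem better posed, particularly in radially anisotropic media.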
Experiments on Nucleation in Different Flow Regimes
NASA Technical Reports Server (NTRS)
Bayuzick, Robert J.
1999-01-01
The vast majority of metallic engineering materials are solidified from the liquid phase. Understanding the solidification process is essential to control microstructure, which in turn, determines the properties of materials. The genesis of solidification is nucleation, where the first stable solid forms from the liquid phase. Nucleation kinetics determine the degree of undercooling and phase selection. As such, it is important to understand nucleation phenomena in order to control solidification or glass formation in metals and alloys. Early experiments in nucleation kinetics were accomplished by droplet dispersion methods [1-6]. Dilatometry was used by Turnbull and others, and more recently differential thermal analysis and differential scanning calorimetry have been used for kinetic studies. These techniques have enjoyed success; however, there are difficulties with these experiments. Since materials are dispersed in a medium, the character of the emulsion/metal interface affects the nucleation behavior. Statistics are derived from the large number of particles observed in a single experiment, but dispersions have a finite size distribution, which adds to the uncertainty of the kinetic determinations. Even though temperature can be controlled quite well before the onset of nucleation, the release of the latent heat of fusion during nucleation of particles complicates the assumption of isothermality during these experiments. Containerless processing has enabled another approach to the study of nucleation kinetics [7]. With levitation techniques it is possible to undercool one sample to nucleation repeatedly in a controlled manner, such that the statistics of the nucleation process can be derived from multiple experiments on a single sample. The authors have fully developed the analysis of nucleation experiments on single samples following the suggestions of Skripov [8]. The advantage of these experiments is that the samples are directly observable. The nucleation temperature can be measured by noncontact optical pyrometry, the mass of the sample is known, and post-processing analysis can be conducted on the sample. The disadvantages are that temperature measurement must have exceptionally high precision, and it is not possible to isolate specific heterogeneous sites as in droplet dispersions.
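With repeated undercooling cycles on a single levitated sample, the nucleation rate can be extracted from the survival statistics of the cycles. The sketch below follows that logic on synthetic nucleation temperatures; the sample volume, cooling rate, and temperature distribution are invented for illustration, not taken from the experiments described here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: nucleation temperatures from repeated undercooling of one sample.
T_nuc = rng.normal(loc=1625.0, scale=4.0, size=200)    # [K]
cooling_rate = 10.0                                    # |dT/dt| [K/s]
volume = 4.0e-8                                        # sample volume [m^3]

T_sorted = np.sort(T_nuc)[::-1]                        # cooling: high T first
survival = 1.0 - np.arange(1, len(T_sorted) + 1) / len(T_sorted)

# For cooling at rate dot_T, dS/dT = (J V / dot_T) S, so
# J(T) = (dot_T / V) * d ln S / dT, estimated on the inner survival curve.
mask = (survival > 0.05) & (survival < 0.95)
lnS = np.log(survival[mask])
J = (cooling_rate / volume) * np.gradient(lnS, T_sorted[mask])
print("J(T) range: %.2e - %.2e events / (m^3 s)" % (J.min(), J.max()))
```

The steepness of J(T) over a few kelvin is what makes the abstract's point about needing exceptionally precise pyrometry.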
NASA Astrophysics Data System (ADS)
Andres Rodriguez, Daniel; Garofolo, Lucas; Lazaro Siqueira Junior, Jose
2013-04-01
Uncertainties in climate change projections arise from irreducible sources: limitations of knowledge, the chaotic nature of the climate system, and the human decision-making process. Such uncertainties affect impact studies, complicating decision-making aimed at mitigation and adaptation. However, they also allow exploratory analyses of a system's vulnerability to different scenarios. Through such analyses it is possible to identify critical issues that must be studied in greater depth. In this study we used several future projections from general circulation models to feed a hydrological model applied to the Amazonian sub-basin of Ji-Paraná. Hydrological model integrations are performed for the historical period (1970-1990) and for the future period (2010-2100). Extreme value analyses are performed on each simulated time series, and the results are compared with extreme events in the present period. A simple approach to identifying potential vulnerabilities consists of evaluating the hydrologic system's response to climate variability and extreme events observed in the past, and comparing them with the conditions projected for the future. It is thus possible to identify critical issues that need attention and more detailed study. For this work, we used socio-economic data from the Brazilian Institute of Geography and Statistics, the Operator of the National Electric System, and the Brazilian National Water Agency, together with published scientific and press information. This information is used to characterize impacts associated with extreme hydrological events in the basin during the historical period and to evaluate potential future impacts under the different hydrological projections. Results show that inter-model variability produces a broad dispersion in projected extreme values. The impact of this dispersion differs across socio-economic and natural systems and must be carefully addressed in decision-making processes.
Estimates of CO2 fluxes over the city of Cape Town, South Africa, through Bayesian inverse modelling
NASA Astrophysics Data System (ADS)
Nickless, Alecia; Rayner, Peter J.; Engelbrecht, Francois; Brunke, Ernst-Günther; Erni, Birgit; Scholes, Robert J.
2018-04-01
We present a city-scale inversion over Cape Town, South Africa. Measurement sites for atmospheric CO2 concentrations were installed at the Robben Island and Hangklip lighthouses, located downwind and upwind of the metropolis. Prior estimates of the fossil fuel fluxes were obtained from a bespoke inventory analysis in which emissions were spatially and temporally disaggregated and uncertainty estimates were determined by means of error propagation techniques. Net ecosystem exchange (NEE) fluxes from biogenic processes were obtained from the land-atmosphere exchange model CABLE (Community Atmosphere Biosphere Land Exchange), with uncertainty estimates based on estimates of net primary productivity. CABLE was dynamically coupled to the regional climate model CCAM (Conformal Cubic Atmospheric Model), which provided the climate inputs required to drive the Lagrangian particle dispersion model. The Bayesian inversion framework included a control vector in which fossil fuel and NEE fluxes were solved for separately. Due to the large prior uncertainty prescribed to the NEE fluxes, the current inversion framework was unable to adequately distinguish between the fossil fuel and NEE fluxes, but the inversion was able to obtain improved estimates of the total fluxes within pixels and across the domain. The median of the uncertainty reductions of the total weekly flux estimates for the inversion domain of Cape Town was 28%, but reductions reached as high as 50%. At the pixel level, uncertainty reductions of the total weekly flux reached up to 98%, but these large uncertainty reductions were for NEE-dominated pixels. Improved corrections to the fossil fuel fluxes would be possible if the uncertainty around the prior NEE fluxes could be reduced. In order for this inversion framework to be operationalised for monitoring, reporting, and verification (MRV) of emissions from Cape Town, the NEE component of the CO2 budget needs to be better understood. Additional Δ14C and δ13C isotope measurements would be a beneficial component of an atmospheric monitoring programme aimed at MRV of CO2 for any city with significant biogenic influence, allowing improved separation of the NEE and fossil fuel contributions to the observed CO2 concentration.
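For a linear Gaussian inversion of this kind, the posterior flux error covariance and the per-element uncertainty reduction quoted above take the standard forms (notation generic):

$$ \mathbf{A} = \left(\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H} + \mathbf{B}^{-1}\right)^{-1}, \qquad \mathrm{UR}_i = 1 - \sqrt{A_{ii}/B_{ii}}, $$

where $\mathbf{H}$ is the transport (footprint) operator, $\mathbf{B}$ the prior error covariance with separate fossil fuel and NEE blocks, and $\mathbf{R}$ the observation error covariance. When the NEE block of $\mathbf{B}$ is very large, the data constrain mainly the sum of the two flux components, which is the aliasing problem the abstract describes.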
Transverse charge and magnetization densities: Improved chiral predictions down to b = 1 fm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alarcon, Jose Manuel; Hiller Blin, Astrid N.; Vicente Vacas, Manuel J.
The transverse charge and magnetization densities provide insight into the nucleon's inner structure. In the periphery, the isovector components are clearly dominant and can be computed in a model-independent way by means of a combination of chiral effective field theory ($\chi$EFT) and dispersion analysis. With a novel $N/D$ method, we incorporate the pion electromagnetic form factor data into the $\chi$EFT calculation, thus taking into account the pion-rescattering effects and the $\rho$-meson pole. As a consequence, we are able to reliably compute the densities down to distances $b \sim 1$ fm, achieving a dramatic improvement over traditional $\chi$EFT calculations while remaining predictive and having controlled uncertainties.
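The transverse densities referred to here are two-dimensional Fourier transforms of the form factors; for a spherically symmetric form factor this reduces to a Fourier-Bessel integral:

$$ \rho(b) \;=\; \int\!\frac{d^2 q}{(2\pi)^2}\, e^{-i\,\mathbf{q}\cdot\mathbf{b}}\, F(-\mathbf{q}^2) \;=\; \int_0^\infty \frac{dQ}{2\pi}\, Q\, J_0(Qb)\, F(-Q^2). $$

At large $b$ the integral is dominated by the lowest singularity of $F$ in the timelike region (the two-pion cut), which is why the periphery is computable from the $\chi$EFT-plus-dispersion spectral functions.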
NASA Astrophysics Data System (ADS)
Park, Seunghoon; Joung, Sungyeop; Park, Jerry
2018-01-01
Assay of nuclear material solutions using L-series X-rays is useful for determining the amount of nuclear material and the ratio of minor actinides in the material. The hybrid system of energy-dispersive X-ray absorption-edge spectrometry (L-edge densitometry) and X-ray fluorescence (XRF) spectrometry is one such analysis method. The hybrid L-edge/XRF densitometer is a promising candidate for portable, compact equipment because it uses low-energy X-ray beams and, unlike the hybrid K-edge/XRF densitometer, requires neither heavy shielding nor liquid-nitrogen cooling. A prototype of the equipment was evaluated for feasibility of nuclear material assay using a surrogate material (lead) to avoid radiation effects from nuclear materials. The uncertainty of the L-edge and XRF characteristics of the sample material and of volume effects is discussed in the article.
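Edge densitometry rests on the Beer-Lambert law applied on either side of the absorption edge; in simplified form (symbols generic), the analyte concentration follows from the transmittance jump:

$$ \rho_A \;=\; \frac{1}{\Delta\mu_A\, d}\,\ln\!\frac{T(E^{-})}{T(E^{+})}, \qquad \Delta\mu_A = \mu_A(E^{+}) - \mu_A(E^{-}), $$

where $T(E^{\mp})$ are the transmittances just below and above the L-edge, $\Delta\mu_A$ is the jump in the analyte's mass attenuation coefficient, and $d$ is the cell path length; the XRF channel then supplies element ratios such as the minor actinide fraction.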
Lott, Casey A; Wiley, Robert L; Fischer, Richard A; Hartfield, Paul D; Scott, J Michael
2013-01-01
Interior Least Terns (Sternula antillarum) (ILT) are colonial, fish-eating birds that breed within active channels of large sand bed rivers of the Great Plains and in the Lower Mississippi Valley. Multipurpose dams, irrigation structures, and engineered navigation systems have been present on these rivers for many decades. Despite severe alteration of channels and flow regimes, regulation era floods have remained effective at maintaining bare sandbar nesting habitat on many river segments and ILT populations have been stable or expanding since they were listed as endangered in 1985. We used ILT breeding colony locations from 2002 to 2012 and dispersal information to identify 16 populations and 48 subpopulations. More than 90% of ILT and >83% of river km with suitable nesting habitat occur within the two largest populations. However, replicate populations remain throughout the entire historical, geophysical, and ecological range of ILT. Rapid colonization of anthropogenic habitats in areas that were not historically occupied suggests metapopulation dynamics. The highest likelihood of demographic connectivity among ILT populations occurs across the Southern Plains and the Lower Mississippi River, which may be demographically connected with Least Tern populations on the Gulf Coast. Paired ecological and bird population models are needed to test whether previously articulated threats limit ILT population growth and to determine if management intervention is necessary and where. Given current knowledge, the largest sources of model uncertainty will be: (1) uncertainty in relationships between high flow events and subsequent sandbar characteristics and (2) uncertainty regarding the frequency of dispersal among population subunits. We recommend research strategies to reduce these uncertainties. PMID:24223295
NASA Astrophysics Data System (ADS)
Debry, Edouard; Mallet, Vivien; Garaud, Damien; Malherbe, Laure; Bessagnet, Bertrand; Rouïl, Laurence
2010-05-01
Prev'Air is the French operational system for air pollution forecasting. It is developed and maintained by INERIS with financial support from the French Ministry for the Environment. On a daily basis it delivers forecasts up to three days ahead for ozone, nitrogen dioxide, and particles over France and Europe. Maps of concentration peaks and daily averages are freely available to the general public. More accurate data can be provided to customers and modelers. Prev'Air forecasts are based on the chemical transport model CHIMERE. French authorities rely more and more on this platform to alert the general public in case of high pollution events and to assess the efficiency of regulation measures when such events occur. For example, the road speed limit may be reduced in given areas when the ozone level exceeds a regulatory threshold. These operational applications require INERIS to assess the quality of its forecasts and to inform end users about the confidence level. Indeed, modeled concentrations always remain an approximation of the true concentrations because of the high uncertainty in input data, such as meteorological fields and emissions, because of incomplete or inaccurate representation of physical processes, and because of deficiencies in numerical integration [1]. We present in this communication the uncertainty analysis of the CHIMERE model conducted in the framework of an INERIS research project aiming, on the one hand, to assess the uncertainty of several deterministic models and, on the other hand, to propose relevant indicators describing air quality forecasts and their uncertainty. There exist several methods to assess the uncertainty of a model. Under given assumptions the model may be differentiated into an adjoint model which directly provides the sensitivity of concentrations to given parameters. But so far Monte Carlo methods seem to be the most widely and often used [2,3], as they are relatively easy to implement. In this framework a probability density function (PDF) is associated with each input parameter, according to its assumed uncertainty. The combined PDFs are then propagated through the model by means of several simulations with randomly perturbed input parameters. One may then obtain an approximation of the PDF of modeled concentrations, provided the Monte Carlo process has reasonably converged. The uncertainty analysis of CHIMERE was conducted with a Monte Carlo method over the French domain for two periods: 13 days during January 2009, with a focus on particles, and 28 days during August 2009, with a focus on ozone. The results show that for the summer period and 500 simulations, the time- and space-averaged standard deviation for ozone is 16 µg/m3, to be compared with an average concentration of 89 µg/m3. It is noteworthy that the space-averaged standard deviation for ozone is relatively constant over time (the standard deviation of the time series itself is 1.6 µg/m3). The spatial variation of the ozone standard deviation seems to indicate that emissions have a significant impact, followed by western boundary conditions. Monte Carlo simulations are then post-processed by both ensemble [4] and Bayesian [5] methods in order to assess the quality of the uncertainty estimation. (1) Rao, K.S. Uncertainty analysis in atmospheric dispersion modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the Paris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R., Lu, Z., Frey, H.C., Wheeler, N., Vukovich, J., Arunachalam, S., Fernau, M., and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Mallet, V. and Sportisse, B. Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: an ensemble approach applied to ozone modeling, Journal of Geophysical Research, 2006, 111, D01302, doi:10.1029/2005JD006149. (5) Romanowicz, R., Higson, H., and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
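The Monte Carlo procedure described here (draw inputs from their PDFs, rerun the model, collect the output PDF) reduces to a few lines when the chemistry-transport model is replaced by a stand-in function. Everything numerical below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def ctm(emis_scale, bc_scale):
    """Stand-in for a CTM run: ozone from emissions and boundary inflow (toy)."""
    return 60.0 * bc_scale + 30.0 * emis_scale ** 0.7   # [ug/m3]

n_runs = 500                                            # as in the study
emis = rng.lognormal(mean=0.0, sigma=0.3, size=n_runs)  # emission-factor PDF
bc = rng.normal(1.0, 0.1, size=n_runs)                  # boundary-condition PDF

o3 = ctm(emis, bc)
print("ozone: mean %.1f ug/m3, std %.1f ug/m3" % (o3.mean(), o3.std()))
```

The sample standard deviation of the output ensemble is the quantity reported above (16 µg/m3 against a mean of 89 µg/m3 for the summer period).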
The mass-sheet degeneracy and time-delay cosmography: analysis of the strong lens RXJ1131-1231
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birrer, Simon; Amara, Adam; Refregier, Alexandre, E-mail: simon.birrer@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch
We present extended modelling of the strong lens system RXJ1131-1231 with archival data in two HST bands, in combination with existing line-of-sight contribution and velocity dispersion estimates. Our focus is on source size and its influence on time-delay cosmography. We therefore examine the impact of the mass-sheet degeneracy, and especially the degeneracy pointed out by Schneider and Sluse (2013) [1], using the source reconstruction scale. We also extend previous work by further exploring the effects of priors on the kinematics of the lens and the external convergence in the environment of the lensing system. Our results for RXJ1131-1231 are given in a simple analytic form so that they can be easily combined with constraints coming from other cosmological probes. We find that the choice of priors on lens model parameters and source size is subdominant in the statistical errors of the H$_0$ measurement for this system. The choice of prior for the source is subdominant at present (2% uncertainty on H$_0$) but may be relevant for future studies. More importantly, we find that the priors on the kinematic anisotropy of the lens galaxy have a significant impact on our cosmological inference. When incorporating all the above modeling uncertainties, we find H$_0$ = 86.6$^{+6.8}_{-6.9}$ km s$^{-1}$ Mpc$^{-1}$ when using kinematic priors similar to other studies. When we use a different kinematic prior motivated by Barnabè et al. (2012) [2] but covering the same anisotropy range, we find H$_0$ = 74.5$^{+8.0}_{-7.8}$ km s$^{-1}$ Mpc$^{-1}$. This means that the choice of kinematic modeling and priors has a significant impact on cosmographic inferences. The way forward is either to obtain better velocity dispersion measurements, which would down-weight the impact of the priors, or to construct physically motivated priors for the velocity dispersion model.
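The mass-sheet degeneracy at issue can be stated compactly: rescaling the convergence leaves the imaging observables invariant while rescaling the predicted time delays, and hence the inferred H$_0$:

$$ \kappa(\boldsymbol{\theta}) \;\to\; \lambda\,\kappa(\boldsymbol{\theta}) + (1-\lambda), \qquad \Delta t \;\to\; \lambda\,\Delta t, \qquad H_0^{\rm inferred} \;\to\; \lambda\, H_0 . $$

Lens kinematics respond to $\lambda$ differently than the imaging data do, which is why the velocity dispersion priors propagate so directly into H$_0$.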
SEVIRI 4D-var assimilation analysing the April 2010 Eyjafjallajökull ash dispersion
NASA Astrophysics Data System (ADS)
Lange, Anne Caroline; Elbern, Hendrik
2016-04-01
We present first results of four-dimensional variational (4D-var) data assimilation analysis applying SEVIRI observations to the Eulerian regional chemistry and aerosol transport model EURAD-IM (European Air Pollution Dispersion - Inverse Model). Optimising the volcanic ash transport predictions of atmospheric dispersion models with observations is essential for the aviation industry and associated interests. Remote sensing satellite observations are instrumental for ash detection and monitoring. We choose volcanic ash column retrievals of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) because, as an infrared instrument on the geostationary satellite Meteosat Second Generation, it delivers measurements with high temporal resolution during day and night. The retrieval method relies on the reverse absorption effect. In the framework of the national initiative ESKP (Earth System Knowledge Platform) and the European ACTRIS-2 (Aerosol, Clouds, and Trace gases Research InfraStructure) project, we developed new modules (forward and adjoint) within EURAD-IM that are able to process SEVIRI ash column data as observational input to the 4D-var system. The focus of the 4D-var analysis is on initial value optimisation of the volcanic ash clouds that were emitted during the explosive Eyjafjallajökull eruption in April 2010. This eruption attracted high public interest because of air traffic closures, and it was particularly well observed by many different observation systems all over Europe. Considering multiple observation periods simultaneously in one assimilation window generates a continuous trajectory in phase space and ensures that past observations are considered within their uncertainties. Results are validated mainly against lidar (LIght Detection And Ranging) observations, both ground and satellite based.
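The initial-value optimisation described here minimizes the standard 4D-var cost function (written generically, not in EURAD-IM's exact notation):

$$ J(\mathbf{x}_0) = \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b) + \tfrac{1}{2}\sum_{k}\left[H_k\!\left(\mathcal{M}_{0\to k}(\mathbf{x}_0)\right) - \mathbf{y}_k\right]^{T}\mathbf{R}_k^{-1}\left[H_k\!\left(\mathcal{M}_{0\to k}(\mathbf{x}_0)\right) - \mathbf{y}_k\right], $$

where $\mathbf{x}_0$ is the initial ash state, $\mathbf{x}_b$ the background, $\mathcal{M}_{0\to k}$ the transport model integrated to observation time $t_k$, and $H_k$ the SEVIRI column observation operator; the adjoint modules supply the gradient of $J$ for the minimization.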
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, Emerson
2018-01-01
One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect on trajectory correction maneuver (TCM) design needed to correct for navigation errors. This approach differs from traditional covariance-analysis-based methods and makes no assumptions with regard to the propagation of the state dynamics, which allows for consideration of non-linearity in the propagation of state uncertainties. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine the necessary TCM design. To implement this analysis approach, a tool set was developed which combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must be disposed into heliocentric space by means of a lunar fly-by maneuver. As with payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit post lunar fly-by, considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance prior to the maneuver to develop and re-optimize a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal-maneuver state, vehicle constraints, and an a priori estimate of the state error covariance, and determines the optimal nominal disposal target. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions, resulting in the maximum dispersion feasible for successful disposal via lunar fly-by. Success is defined based on the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal-maneuver states to determine the success of disposal for provided trajectory achieved states. This is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper will discuss the application of the Delta-Delta-V analysis approach for performance evaluation as well as trajectory re-optimization, demonstrating the system's capability to meet performance constraints. The implementation of the disposal analysis will also be discussed further.
A TIERED APPROACH TO PERFORMING UNCERTAINTY ANALYSIS IN CONDUCTING EXPOSURE ANALYSIS FOR CHEMICALS
The WHO/IPCS draft Guidance Document on Characterizing and Communicating Uncertainty in Exposure Assessment provides guidance on recommended strategies for conducting uncertainty analysis as part of human exposure analysis. Specifically, a tiered approach to uncertainty analysis ...
Soft pomerons and the forward LHC data
NASA Astrophysics Data System (ADS)
Broilo, M.; Luna, E. G. S.; Menon, M. J.
2018-06-01
Recent data from LHC13 by the TOTEM Collaboration on σtot and ρ have indicated disagreement with all the Pomeron model predictions by the COMPETE Collaboration (2002). On the other hand, as recently demonstrated by Martynov and Nicolescu (MN), the new σtot datum and the unexpected decrease in the ρ value are well described by maximal Odderon dominance at the highest energies. Here, we discuss the applicability of Pomeron dominance through fits to the most complete set of forward data from pp and $p\bar{p}$ scattering. We consider an analytic parameterization for σtot(s) consisting of non-degenerate Regge trajectories for the even and odd amplitudes (as in the MN analysis) and two Pomeron components associated with double and triple poles in the complex angular momentum plane. The ρ parameter is analytically determined by means of dispersion relations. We carry out fits to pp and $p\bar{p}$ data on σtot and ρ in the interval 5 GeV-13 TeV (as in the MN analysis). Two novel aspects of our analysis are: (1) the dataset comprises all the accelerator data below 7 TeV, and we consider three independent ensembles by adding either only the TOTEM data (as in the MN analysis), or only the ATLAS data, or both sets; (2) in the data reductions for each ensemble, uncertainty regions are evaluated through error propagation from the fit parameters, with 90% CL. We argue that, within the uncertainties, this analytic model, corresponding to soft Pomeron dominance, does not seem to be excluded by the complete set of experimental data presently available.
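The analytic determination of ρ from σtot typically proceeds through derivative dispersion relations; schematically, for slowly varying amplitudes,

$$ \rho(s) \;\approx\; \frac{1}{\sigma_{\rm tot}(s)}\,\tan\!\left(\frac{\pi}{2}\,\frac{d}{d\ln s}\right)\sigma_{\rm tot}(s) \;\approx\; \frac{\pi}{2\,\sigma_{\rm tot}(s)}\,\frac{d\sigma_{\rm tot}}{d\ln s}, $$

so an assumed ln²s Pomeron growth of σtot largely fixes the high-energy behavior of ρ, which is why the low TOTEM ρ value is so discriminating between Pomeron- and Odderon-dominated fits.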
QuASAR: quantitative allele-specific analysis of reads
Harvey, Chris T.; Moyerbrailean, Gregory A.; Davis, Gordon O.; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger
2015-01-01
Motivation: Expression quantitative trait loci (eQTL) studies have discovered thousands of genetic variants that regulate gene expression, enabling a better understanding of the functional role of non-coding sequences. However, eQTL studies are costly, requiring large sample sizes and genome-wide genotyping of each sample. In contrast, analysis of allele-specific expression (ASE) is becoming a popular approach to detect the effect of genetic variation on gene expression, even within a single individual. This is typically achieved by counting the number of RNA-seq reads matching each allele at heterozygous sites and testing the null hypothesis of a 1:1 allelic ratio. In principle, when genotype information is not readily available, it could be inferred from the RNA-seq reads directly. However, there are currently no existing methods that jointly infer genotypes and conduct ASE inference, while considering uncertainty in the genotype calls. Results: We present QuASAR, quantitative allele-specific analysis of reads, a novel statistical learning method for jointly detecting heterozygous genotypes and inferring ASE. The proposed ASE inference step takes into consideration the uncertainty in the genotype calls, while including parameters that model base-call errors in sequencing and allelic over-dispersion. We validated our method with experimental data for which high-quality genotypes are available. Results for an additional dataset with multiple replicates at different sequencing depths demonstrate that QuASAR is a powerful tool for ASE analysis when genotypes are not available. Availability and implementation: http://github.com/piquelab/QuASAR. Contact: fluca@wayne.edu or rpique@wayne.edu Supplementary information: Supplementary Material is available at Bioinformatics online. PMID:25480375
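The core of an over-dispersed ASE test can be sketched with a beta-binomial likelihood and a likelihood-ratio test of the balanced-allele null. This is only the statistical skeleton: QuASAR additionally models genotype uncertainty and base-call errors, which are omitted here, and the counts below are invented.

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def betabin_loglik(k, n, rho, p):
    """Beta-binomial log-likelihood (up to a constant) with mean p and
    over-dispersion rho = 1 / (a + b + 1)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return np.sum(betaln(k + a, n - k + b) - betaln(a, b))

# Toy ASE data: reference-allele counts k out of n reads at het sites of a gene.
k = np.array([14, 9, 22, 30, 11])
n = np.array([25, 20, 40, 45, 24])

# Crude profile over rho on a grid; the null fixes p = 0.5.
nll = lambda p: -max(betabin_loglik(k, n, r, p)
                     for r in np.linspace(1e-4, 0.5, 200))
ll0 = -nll(0.5)
res = minimize_scalar(nll, bounds=(0.05, 0.95), method='bounded')
lrt = 2 * (-res.fun - ll0)
print(f"p_hat = {res.x:.3f}, LRT p-value = {chi2.sf(lrt, df=1):.3f}")
```

Without the over-dispersion parameter, a plain binomial test would be anti-conservative on replicated RNA-seq data, which is the motivation for the beta-binomial choice.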
Aerodynamic Challenges for the Mars Science Laboratory Entry, Descent and Landing
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark; Dyakonov, Artem; Buning, Pieter; Scallion, William; Norman, John Van
2009-01-01
An overview of several important aerodynamic challenges new to the Mars Science Laboratory (MSL) entry vehicle is presented. The MSL entry capsule is a 70-degree sphere-cone based on the original Mars Viking entry capsule. Due to payload and landing accuracy requirements, MSL will be flying at the highest lift-to-drag ratio of any capsule sent to Mars (L/D = 0.24). The capsule will also be flying a guided entry, performing bank maneuvers, a first for Mars entry. The system's mechanical design and increased performance requirements require an expansion of the MSL flight envelope beyond those of historical missions. In certain areas, the experience gained by Viking and other recent Mars missions can no longer be claimed as heritage information. New analysis and testing are required to ensure the safe flight of the MSL entry vehicle. The challenge topics include: hypersonic gas chemistry and laminar-versus-turbulent flow effects on trim angle, a general risk assessment of flying at greater angles of attack than Viking, quantifying the aerodynamic interactions induced by a new reaction control system, and a risk assessment of recontact with a series of masses jettisoned prior to parachute deploy. An overview of the analysis and tests being conducted to understand and reduce risk in each of these areas is presented. The need for proper modeling and implementation of uncertainties for use in trajectory simulation has resulted in a revision of prior models and additional analysis for the MSL entry vehicle. The six-degree-of-freedom uncertainty model and new analysis to quantify roll torque dispersions are presented.
Frequent long-distance plant colonization in the changing Arctic.
Alsos, Inger Greve; Eidesen, Pernille Bronken; Ehrich, Dorothee; Skrede, Inger; Westergaard, Kristine; Jacobsen, Gro Hilde; Landvik, Jon Y; Taberlet, Pierre; Brochmann, Christian
2007-06-15
The ability of species to track their ecological niche after climate change is a major source of uncertainty in predicting their future distribution. By analyzing DNA fingerprinting (amplified fragment-length polymorphism) of nine plant species, we show that long-distance colonization of a remote arctic archipelago, Svalbard, has occurred repeatedly and from several source regions. Propagules are likely carried by wind and drifting sea ice. The genetic effect of restricted colonization was strongly correlated with the temperature requirements of the species, indicating that establishment limits distribution more than dispersal. Thus, it may be appropriate to assume unlimited dispersal when predicting long-term range shifts in the Arctic.
NASA Astrophysics Data System (ADS)
Tiryaki, Erhan; Coşkun, Emre; Kocahan, Özlem; Özder, Serhat
2017-02-01
In this work, the continuous wavelet transform (CWT) with the Paul wavelet is developed as a tool for determining the refractive index dispersion of a dielectric film from its reflectance spectrum. The reflectance spectrum was generated theoretically over the wavenumber range 0.8333-3.3333 μm⁻¹ and analyzed with the presented method. The refractive indices obtained at various resolutions of the Paul wavelet were compared with the input values, and the importance of the tunable resolution of the Paul wavelet is briefly discussed. The noise immunity and the uncertainty of the method were also studied.
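A CWT with the Paul wavelet is conveniently evaluated in the frequency domain. The sketch below follows the Torrence & Compo convention for the order-m Paul wavelet on a synthetic two-tone signal; the signal, scales, and order are illustrative, not the paper's reflectance data.

```python
import numpy as np
from math import factorial

def cwt_paul(x, scales, dt=1.0, m=4):
    """CWT with the order-m Paul wavelet, evaluated via the FFT
    (Torrence & Compo frequency-domain form, analytic: omega > 0 only)."""
    n = len(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)
    om = np.maximum(omega, 0.0)
    xf = np.fft.fft(x)
    norm = 2 ** m / np.sqrt(m * factorial(2 * m - 1))
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = norm * (s * om) ** m * np.exp(-s * om) * (omega > 0)
        psi_hat = psi_hat * np.sqrt(2 * np.pi * s / dt)   # unit-energy scaling
        out[i] = np.fft.ifft(xf * np.conj(psi_hat))
    return out

t = np.arange(0, 4, 1 / 256)
sig = np.cos(2 * np.pi * 20 * t) + 0.5 * np.cos(2 * np.pi * 45 * t)
W = cwt_paul(sig, scales=np.geomspace(0.005, 0.1, 40), dt=1 / 256)
print(np.abs(W).shape)   # (scales, samples)
```

The wavelet order m is the tunable resolution knob discussed in the abstract: larger m narrows the frequency response of each scale at the cost of time localization.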
NASA Astrophysics Data System (ADS)
Lopez-Coto, Israel; Ghosh, Subhomoy; Prasad, Kuldeep; Whetstone, James
2017-09-01
The North-East Corridor (NEC) Testbed project is the third of three NIST (National Institute of Standards and Technology) greenhouse gas emissions testbeds designed to advance greenhouse gas measurement capabilities. A design approach for a dense observing network combined with atmospheric inversion methodologies is described. The Advanced Research Weather Research and Forecasting Model with the Stochastic Time-Inverted Lagrangian Transport model were used to derive the sensitivity of hypothetical observations to surface greenhouse gas emissions (footprints). Unlike other network design algorithms, an iterative selection algorithm based on a k-means clustering method was applied to minimize the similarities between the temporal responses of the sites and to maximize sensitivity to the urban emissions contribution. Once a network was selected, a synthetic-inversion Bayesian Kalman filter was used to evaluate observing system performance. We present the performance of various measurement network configurations consisting of differing numbers of towers and tower locations. Results show that an overly compact network has reduced spatial coverage, since the spatial information added per site is suboptimal for covering the largest possible area, whereas networks dispersed too broadly lose the ability to constrain flux uncertainties. In addition, we explore the possibility of using a very high density network of lower-cost, lower-performance sensors characterized by larger uncertainties and temporal drift. Analysis convergence is faster with a large number of observing locations, reducing the response time of the filter. Larger uncertainties in the observations imply lower values of uncertainty reduction. On the other hand, the drift is a bias in nature; it is added to the observations and therefore biases the retrieved fluxes.
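A minimal sketch of the clustering-based site selection described above, assuming scikit-learn's KMeans and a synthetic footprint matrix; the cluster count and the rule of keeping the most sensitive member per cluster are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# footprints: (n_candidate_sites, n_hours) modeled sensitivities of each
# hypothetical tower to urban emissions (synthetic stand-in data here).
rng = np.random.default_rng(0)
footprints = rng.gamma(2.0, 1.0, size=(120, 24 * 30))

def select_sites(footprints, n_sites):
    """One representative tower per k-means cluster of the normalized
    temporal responses, keeping the member with the largest total
    sensitivity to emissions."""
    X = footprints / np.linalg.norm(footprints, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_sites, n_init=10, random_state=0).fit_predict(X)
    chosen = []
    for k in range(n_sites):
        members = np.flatnonzero(labels == k)
        chosen.append(members[np.argmax(footprints[members].sum(axis=1))])
    return sorted(chosen)

print(select_sites(footprints, n_sites=8))
```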
Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao
2007-01-01
A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen‐Loève‐based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen‐Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three‐Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two‐dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
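A minimal sketch of the Karhunen-Loève decomposition underlying the KLME approach, for a 1-D Gaussian log-conductivity field with exponential covariance; the grid size, variance, and correlation length are invented, and the paper's perturbative transport expansions are not reproduced here.

```python
import numpy as np

# Karhunen-Loeve step: expand a Gaussian log-conductivity field with
# exponential covariance into eigenmodes and build realizations from a
# few modes (all parameter values are illustrative assumptions).
n, L, var, corr_len = 200, 100.0, 0.5, 10.0
x = np.linspace(0.0, L, n)
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigval, eigvec = np.linalg.eigh(C)               # KL modes of the covariance
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # largest eigenvalues first

k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1  # 95% variance
rng = np.random.default_rng(1)
xi = rng.standard_normal(k)                      # independent N(0,1) coefficients
lnK = eigvec[:, :k] @ (np.sqrt(eigval[:k]) * xi)
print(f"{k} of {n} modes retained; field std ~ {lnK.std():.2f}")
```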
Luo, Y.; Xia, J.; Xu, Y.; Zeng, C.; Liu, J.
2010-01-01
Love-wave propagation has been a topic of interest to crustal, earthquake, and engineering seismologists for many years because it is independent of Poisson's ratio and more sensitive to shear (S)-wave velocity changes and layer thickness changes than are Rayleigh waves. It is well known that Love-wave generation requires the existence of a low S-wave velocity layer in a multilayered earth model. In order to study numerically the propagation of Love waves in a layered earth model and their dispersion characteristics for near-surface applications, we simulate high-frequency (>5 Hz) Love waves by the staggered-grid finite-difference (FD) method. The air-earth boundary (the shear stress above the free surface) is treated using the stress-imaging technique. We use a two-layer model to demonstrate the accuracy of the staggered-grid modeling scheme. We also simulate four-layer models including a low-velocity layer (LVL) or a high-velocity layer (HVL) to analyze dispersive energy characteristics for near-surface applications. Results demonstrate that: (1) the staggered-grid FD code and stress-imaging technique are suitable for treating the free-surface boundary conditions in Love-wave modeling; (2) Love-wave inversion should be treated with extra care when an LVL exists, because the lack of LVL information in the dispersion curves aggravates uncertainties in the inversion procedure; and (3) the energy of higher modes in the low-frequency range is very weak, so it is difficult to estimate the cutoff frequency accurately, and "mode-crossing" occurs between the second higher and third higher modes when an HVL exists. © 2010 Birkhäuser / Springer Basel AG.
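For context, the dispersion behavior the FD simulations probe can be computed in closed form for the textbook case of one layer over a half-space; the sketch below solves that classical fundamental-mode secular equation with illustrative material values, and is not the paper's staggered-grid code.

```python
import numpy as np
from scipy.optimize import brentq

# Fundamental-mode Love-wave dispersion: one low-velocity layer over a
# half-space. A Love wave requires b1 < c < b2; values are invented.
b1, b2 = 200.0, 500.0            # shear velocities, m/s
rho1, rho2 = 1800.0, 2200.0      # densities, kg/m^3
mu1, mu2 = rho1 * b1**2, rho2 * b2**2
h = 10.0                         # layer thickness, m

def secular(c, f):
    """Continuous form of the Love-wave secular function (root => mode)."""
    w = 2.0 * np.pi * f
    p = np.sqrt(1.0 / b1**2 - 1.0 / c**2)   # vertical slowness in the layer
    q = np.sqrt(1.0 / c**2 - 1.0 / b2**2)   # decay constant in the half-space
    return w * h * p - np.arctan2(mu2 * q, mu1 * p)

for f in (5.0, 10.0, 20.0, 40.0):
    c = brentq(secular, b1 * (1.0 + 1e-6), b2 * (1.0 - 1e-6), args=(f,))
    print(f"{f:5.1f} Hz: c = {c:6.1f} m/s")
```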
NASA Astrophysics Data System (ADS)
Chen, Bing; Stein, Ariel F.; Castell, Nuria; de la Rosa, Jesus D.; Sanchez de la Campa, Ana M.; Gonzalez-Castanedo, Yolanda; Draxler, Roland R.
2012-03-01
Arsenic is a toxic element for human health. Consequently, a mean annual target level for arsenic of 6 ng m⁻³ in PM10 was established by European Directive 2004/107/EC, to take effect in January 2013. Cu-smelters can contribute up to one-third of the total emissions of arsenic to the atmosphere. Surface observations taken near a large Cu-smelter in the city of Huelva (Spain) show hourly arsenic concentrations in the range of 0-20 ng m⁻³. The arsenic peaks of 20 ng m⁻³ are higher than the values normally observed in urban areas around Europe by a factor of 10. The Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model has been employed to predict the transport and dispersion of arsenic emitted from the Cu-smelter. The model utilized outputs from different meteorological models and variations in the model physics options to simulate the uncertainty in the dispersion of the arsenic plume. Modeling outputs from the physics ensemble for each meteorological model driving HYSPLIT show the same number of arsenic peaks. HYSPLIT coupled with the Weather Research and Forecasting (WRF-ARW) meteorological output predicted the correct number of peaks for arsenic concentration at the observation site. The best results were obtained when the WRF simulation used both four-dimensional data assimilation and surface analysis nudging. The prediction was good during local sea-breeze circulations or when the flow was dominated by synoptic-scale prevailing winds. However, the predicted peak was delayed when the transport and dispersion were under the influence of an Atlantic cyclone. The calculated concentration map suggests that the plume from the Cu-smelter can cause arsenic pollution events in the city of Huelva as well as in other cities and tourist areas in southwestern Spain.
NASA Astrophysics Data System (ADS)
Dawson, Andria; Paciorek, Christopher J.; McLachlan, Jason S.; Goring, Simon; Williams, John W.; Jackson, Stephen T.
2016-04-01
Mitigation of climate change and adaptation to its effects relies partly on how effectively land-atmosphere interactions can be quantified. Quantifying composition of past forest ecosystems can help understand processes governing forest dynamics in a changing world. Fossil pollen data provide information about past forest composition, but rigorous interpretation requires development of pollen-vegetation models (PVMs) that account for interspecific differences in pollen production and dispersal. Widespread and intensified land-use over the 19th and 20th centuries may have altered pollen-vegetation relationships. Here we use STEPPS, a Bayesian hierarchical spatial PVM, to estimate key process parameters and associated uncertainties in the pollen-vegetation relationship. We apply alternate dispersal kernels, and calibrate STEPPS using a newly developed Euro-American settlement-era calibration data set constructed from Public Land Survey data and fossil pollen samples matched to the settlement-era using expert elicitation. Models based on the inverse power-law dispersal kernel outperformed those based on the Gaussian dispersal kernel, indicating that pollen dispersal kernels are fat tailed. Pine and birch have the highest pollen productivities. Pollen productivity and dispersal estimates are generally consistent with previous understanding from modern data sets, although source area estimates are larger. Tests of model predictions demonstrate the ability of STEPPS to predict regional compositional patterns.
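A small sketch of the kernel contrast the abstract draws, with invented parameters and unnormalized kernels: the inverse power-law kernel places far more pollen in the distant tail than a Gaussian of comparable core width.

```python
import numpy as np

# Gaussian vs. inverse power-law dispersal kernels ("fat tail" contrast).
r = np.linspace(1.0, 200.0, 400)   # distance from source, arbitrary units

def gaussian(r, a=30.0):
    return np.exp(-(r / a) ** 2)

def power_law(r, a=30.0, b=2.5):
    return (1.0 + r / a) ** (-b)

# Fraction of deposition beyond 100 units under each kernel, with the
# factor r providing the 2-D radial area weighting (uniform r spacing).
far = r > 100.0
for name, k in (("gaussian", gaussian(r)), ("power-law", power_law(r))):
    frac = (k[far] * r[far]).sum() / (k * r).sum()
    print(f"{name:10s} tail fraction: {frac:.3f}")
```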
In-situ observations of Eyjafjallajökull ash particles by hot-air balloon
NASA Astrophysics Data System (ADS)
Petäjä, T.; Laakso, L.; Grönholm, T.; Launiainen, S.; Evele-Peltoniemi, I.; Virkkula, A.; Leskinen, A.; Backman, J.; Manninen, H. E.; Sipilä, M.; Haapanala, S.; Hämeri, K.; Vanhala, E.; Tuomi, T.; Paatero, J.; Aurela, M.; Hakola, H.; Makkonen, U.; Hellén, H.; Hillamo, R.; Vira, J.; Prank, M.; Sofiev, M.; Siitari-Kauppi, M.; Laaksonen, A.; lehtinen, K. E. J.; Kulmala, M.; Viisanen, Y.; Kerminen, V.-M.
2012-03-01
The volcanic ash cloud from the Eyjafjallajökull eruption seriously disrupted aviation in Europe. Because of the flight ban, there were only a few in-situ measurements of the properties and dispersion of the ash cloud. In this study we present in-situ observations made onboard a hot-air balloon in Central Finland, together with regional dispersion modelling with the SILAM model, during the eruption. The modeled and measured mass concentrations were in qualitative agreement, but the exact elevation of the layer was slightly distorted. Some of this discrepancy can be attributed to uncertainty in the initial emission height and strength. The observed maximum mass concentration varied between 12 and 18 μg m⁻³ assuming a density of 2 g cm⁻³, whereas the gravimetric analysis of the integrated column showed a maximum of 45 μg m⁻³ during the first two descents through the ash plume. Ion chromatography data indicated that a large fraction of the mass was insoluble in water, which is in qualitative agreement with single-particle X-ray analysis. A majority of the super-micron particles contained Si, Al, Fe, K, Na, Ca, Ti, S, Zn and Cr, which is indicative of basalt-type rock material. The number concentration profiles indicated secondary production of particles, possibly from volcano-emitted sulfur dioxide oxidized to sulfuric acid during transport.
User's Manual for RESRAD-OFFSITE Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Gnanapragasam, E.; Biwer, B. M.
2007-09-05
The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code, which has been widely used for calculating doses and risks from exposure to radioactively contaminated soils. The development of RESRAD-OFFSITE started more than 10 years ago, but new models and methodologies have been developed, tested, and incorporated since then. Some of the new models have been benchmarked against other independently developed (international) models. The databases used have also been expanded to include all the radionuclides (more than 830) contained in the International Commission on Radiological Protection (ICRP) 38 database. This manual provides detailed information on the design and application of the RESRAD-OFFSITE code. It describes in detail the new models used in the code, such as the three-dimensional dispersion groundwater flow and radionuclide transport model, the Gaussian plume model for atmospheric dispersion, and the deposition model used to estimate the accumulation of radionuclides at offsite locations and in foods. Potential exposure pathways and exposure scenarios that can be modeled by the RESRAD-OFFSITE code are also discussed. A user's guide is included in Appendix A of this manual. The default parameter values and parameter distributions are presented in Appendix B, along with a discussion of the statistical distributions for probabilistic analysis. A detailed discussion of how to reduce run time, especially when conducting probabilistic (uncertainty) analysis, is presented in Appendix C of this manual.
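For orientation, a sketch of a textbook Gaussian plume with ground reflection, the generic form of the atmospheric dispersion model named above; the sigma parameterizations and source values are illustrative assumptions, not RESRAD-OFFSITE's.

```python
import numpy as np

def plume(x, y, z, Q=1.0, u=5.0, H=10.0):
    """Concentration (g/m^3) at downwind distance x (m), crosswind y,
    height z, for an elevated point source at height H with ground
    reflection. Sigma curves are crude rural fits, invented here."""
    sy = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)   # sigma_y(x), m
    sz = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)   # sigma_z(x), m
    reflect = (np.exp(-((z - H) ** 2) / (2 * sz**2))
               + np.exp(-((z + H) ** 2) / (2 * sz**2)))
    return Q / (2 * np.pi * u * sy * sz) * np.exp(-(y**2) / (2 * sy**2)) * reflect

print(plume(x=500.0, y=0.0, z=1.5))   # breathing-height value on the centerline
```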
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua
2016-10-19
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilize time series of δ18O, δ2H, and 3H, along with CFC measurements from individual fractures, to investigate the TTD and the uncertainty in estimated mean travel time in several fracture networks of varying length and discharge. We also compare several TTDs, including the dispersion distribution, the exponential distribution, and a newly developed TTD that includes the effects of matrix diffusion. The effect of seasonal recharge is explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identify best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ² residual for each seasonal weighting function. We use this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and to quantify the effect of choosing different TTDs and seasonal recharge functions on the mean age estimate. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. The best fits, as measured by the χ² value, were achieved for the dispersion model with the seasonal input function developed here at two of the three modeled sites; at the third site, equally good fits were achieved with the exponential model and the dispersion model using our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD. These results indicate that understanding the seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
NASA Astrophysics Data System (ADS)
Flores, A. N.; Entekhabi, D.; Bras, R. L.
2007-12-01
Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to the associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty when supplying inputs to a distributed watershed model, we use the restricted pairing (RP) algorithm, an extension of Latin hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach while imposing a target rank correlation among the properties. A previously published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and to compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed-parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
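A minimal sketch of LH sampling with an imposed target rank correlation, in the spirit of restricted pairing; it follows the Iman-Conover idea of reordering columns against correlated Gaussian scores, with invented marginals and target matrix, and is not the authors' RP code.

```python
import numpy as np
from scipy.stats import norm, qmc, rankdata

rng = np.random.default_rng(2)
n, d = 500, 3
target = np.array([[1.0, 0.7, -0.3],      # invented target rank correlation
                   [0.7, 1.0, 0.2],
                   [-0.3, 0.2, 1.0]])

u = qmc.LatinHypercube(d=d, seed=2).random(n)     # LH sample on (0,1)^d
# Correlated Gaussian scores carrying the target correlation structure:
z = rng.standard_normal((n, d)) @ np.linalg.cholesky(target).T
# Reorder each LH column so its ranks follow the correlated scores:
x = np.empty_like(u)
for j in range(d):
    order = rankdata(z[:, j]).astype(int) - 1
    x[:, j] = np.sort(u[:, j])[order]

# Map uniforms to marginals, e.g. a lognormal conductivity-like property:
K = np.exp(norm.ppf(x[:, 0]))
print(np.corrcoef(rankdata(x[:, 0]), rankdata(x[:, 1]))[0, 1].round(2),
      K.mean().round(2))   # rank correlation ~0.7 by construction
```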
Manna, F; Pradel, R; Choquet, R; Fréville, H; Cheptou, P-O
2017-10-01
In plants, the presence of a seed bank challenges the application of classical metapopulation models to aboveground presence surveys; ignoring the seed bank leads to overestimated extinction and colonization rates. In this article, we explore the possibility of detecting a seed bank using hidden Markov models in the analysis of aboveground patch-occupancy surveys of an annual plant with limited dispersal. Patch occupancy data were generated by simulation under two metapopulation sizes (N = 200 and N = 1,000 patches) and different metapopulation scenarios, each scenario being a combination of the presence/absence of a 1-yr seed bank and the presence/absence of limited dispersal in a circular one-dimensional configuration of patches. In addition, because local conditions often vary among patches in natural metapopulations, we simulated patch occupancy data with heterogeneous germination rates and patch disturbance. The seed bank is not observable from aboveground patch-occupancy surveys, hence hidden Markov models were designed to account for uncertainty in patch occupancy. We explored their ability to retrieve the correct scenario. For 10-yr surveys and metapopulation sizes of N = 200 or 1,000 patches, the correct metapopulation scenario was detected at a rate close to 100%, whatever the underlying scenario. For shorter, more realistic survey durations, the length needed for reliable detection of the correct scenario depends on the metapopulation size: 3 yr suffice for N = 1,000 and 6 yr for N = 200. Our method remained powerful for disentangling seed bank from dispersal in the presence of patch heterogeneity affecting either seed germination or patch extinction. Our work shows that seed bank and limited dispersal generate different signatures in aboveground patch-occupancy surveys. Therefore, our method provides a powerful tool to infer metapopulation dynamics in a wide range of species with an undetectable life form. © 2017 by the Ecological Society of America.
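To make the machinery concrete, here is a toy scaled forward recursion for a three-state occupancy HMM in which only the aboveground state is observable; all transition, emission, and initial probabilities are invented, not estimated from the simulations above.

```python
import numpy as np

# Hidden states: (extinct, seed bank only, aboveground); only presence of
# aboveground plants is observed (0 = absent, 1 = present).
P = np.array([[0.80, 0.05, 0.15],   # transitions from "extinct"
              [0.30, 0.20, 0.50],   # from "seed bank only"
              [0.10, 0.30, 0.60]])  # from "aboveground"
E = np.array([[1.0, 0.0],           # extinct -> always observed absent
              [1.0, 0.0],           # seed bank only -> observed absent
              [0.0, 1.0]])          # aboveground -> observed present
pi = np.array([0.5, 0.1, 0.4])

def loglik(obs):
    """Scaled forward algorithm: log-likelihood of a 0/1 presence series."""
    alpha = pi * E[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ P) * E[:, y]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

print(loglik([1, 0, 0, 1, 0, 1]))   # competing models compare by likelihood
```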
Changing spatial patterns of stand-replacing fire in California conifer forests
Jens T. Stevens; Brandon M. Collins; Jay D. Miller; Malcolm P. North; Scott L. Stephens
2017-01-01
Stand-replacing fire has profound ecological impacts in conifer forests, yet there is continued uncertainty over how best to describe the scale of stand-replacing effects within individual fires, and how these effects are changing over time. In forests where regeneration following stand-replacing fire depends on seed dispersal from surviving trees, the size and shape...
Wen J. Wang; Hong S. He; Frank R. Thompson; Martin A. Spetich; Jacob S. Fraser
2018-01-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species' biological...
Heikkinen, Risto K; Bocedi, Greta; Kuussaari, Mikko; Heliölä, Janne; Leikola, Niko; Pöyry, Juha; Travis, Justin M J
2014-01-01
Dynamic models for range expansion provide a promising tool for assessing species' capacity to respond to climate change by shifting their ranges to new areas. However, these models include a number of uncertainties which may affect how successfully they can be applied to climate change oriented conservation planning. We used RangeShifter, a novel dynamic and individual-based modelling platform, to study two potential sources of such uncertainties: the selection of land cover data and the parameterization of key life-history traits. As an example, we modelled the range expansion dynamics of two butterfly species, one habitat specialist (Maniola jurtina) and one generalist (Issoria lathonia). Our results show that projections of total population size, number of occupied grid cells and the mean maximal latitudinal range shift were all clearly dependent on the choice made between using CORINE land cover data vs. using more detailed grassland data from three alternative national databases. Range expansion was also sensitive to the parameterization of the four considered life-history traits (magnitude and probability of long-distance dispersal events, population growth rate and carrying capacity), with carrying capacity and magnitude of long-distance dispersal showing the strongest effect. Our results highlight the sensitivity of dynamic species population models to the selection of existing land cover data and to uncertainty in the model parameters and indicate that these need to be carefully evaluated before the models are applied to conservation planning.
VizieR Online Data Catalog: M33 molecular clouds and young stellar clusters (Corbelli+, 2017)
NASA Astrophysics Data System (ADS)
Corbelli, E.; Braine, J.; Bandiera, R.; Brouillet, N.; Combes, F.; Druard, C.; Gratier, P.; Mata, J.; Schuster, K.; Xilouris, M.; Palla, F.
2017-04-01
Table 5: Physical parameters for the 566 molecular clouds identified through the IRAM 30m CO J=2-1 survey of the star-forming disk of M33. For each cloud, the cloud type and the following properties are listed: celestial coordinates, galactocentric radius, deconvolved effective cloud radius and its uncertainty, CO(2-1) line velocity dispersion from CPROPS and its uncertainty, line velocity dispersion from a Gaussian fit, CO luminous mass and its uncertainty, and virial mass from a Gaussian fit. The last column lists the identification numbers of the young stellar cluster candidates associated with the molecular cloud. Notes: we identify up to four young stellar cluster candidates (YSCCs) associated with each molecular cloud and list them according to the identification numbers of Sharma et al. (2011, Cat. J/A+A/545/A96), given also in Table 6. Table 6: Physical parameters for the 630 young stellar cluster candidates identified via their mid-infrared emission in the star-forming disk of M33. For each YSCC we list the type of source, the identification number of the molecular cloud associated with it (if any), and the corresponding cloud class. In addition, for each YSCC we give the celestial coordinates; the bolometric, total infrared, FUV, and Hα luminosities; the estimated mass and age; the visual extinction; the galactocentric radius; the source size; and its flux at 24 μm. (2 data files).
NASA Astrophysics Data System (ADS)
Zhao, Yi; Bi, Xiao-Jun; Yin, Peng-Fei; Zhang, Xinmin
2018-03-01
Searching for γ rays from dwarf spheroidal galaxies (dSphs) is a promising approach to detecting dark matter (DM), owing to the high DM densities and low baryonic content of dSphs. Fermi-LAT observations of dSphs have set stringent constraints on the velocity-independent annihilation cross section. However, the constraints from dSphs may change in velocity-dependent annihilation scenarios because of the different velocity dispersions in galaxies. In this work, we study how to constrain the velocity-dependent annihilation cross section by combining Fermi-LAT observations of dSphs with kinematic data. In order to calculate the γ-ray flux from a dSph, the correlation between the DM density profile and the velocity dispersion at each position must be taken into account. We study this correlation and the associated uncertainty from kinematic observations by performing a Jeans analysis. Using the observational results for three ultrafaint dSphs with large J-factors, namely Willman 1, Reticulum II, and Triangulum II, we set constraints on the p-wave annihilation cross section in the Galaxy as an example.
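For context, a hedged sketch of the quantity involved, in generic notation that is not necessarily the authors': for a p-wave cross section, the standard J-factor generalizes to a velocity-weighted phase-space integral, which under a Maxwellian approximation with one-dimensional dispersion σ(r) reduces to a ρ²σ² weighting.

```latex
J_{\mathrm{eff}} = \int \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \mathrm{d}\ell
  \int \mathrm{d}^3 v_1\, \mathrm{d}^3 v_2\,
  f(\mathbf{r},\mathbf{v}_1)\, f(\mathbf{r},\mathbf{v}_2)\,
  \frac{|\mathbf{v}_1-\mathbf{v}_2|^2}{c^2}
  \;\approx\;
  \int \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \mathrm{d}\ell\,
  \rho^2(r)\,\frac{6\,\sigma^2(r)}{c^2},
```

where f is the DM phase-space distribution normalized so that its velocity integral gives ρ(r), and the Maxwellian identity ⟨|v₁−v₂|²⟩ = 6σ² for independent velocities has been used.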
Development and testing of meteorology and air dispersion models for Mexico City
NASA Astrophysics Data System (ADS)
Williams, M. D.; Brown, M. J.; Cruz, X.; Sosa, G.; Streit, G.
Los Alamos National Laboratory and Instituto Mexicano del Petróleo are completing a joint study of options for improving air quality in Mexico City. We have modified a three-dimensional, prognostic, higher-order turbulence model for atmospheric circulation (HOTMAC) and a Monte Carlo dispersion and transport model (RAPTAD) to treat domains that include an urbanized area. We used the meteorological model to drive models which describe the photochemistry and air transport and dispersion. The photochemistry modeling is described in a separate paper. We tested the model against routine measurements and those of a major field program. During the field program, measurements included: (1) lidar measurements of aerosol transport and dispersion, (2) aircraft measurements of winds, turbulence, and chemical species aloft, (3) aircraft measurements of skin temperatures, and (4) Tethersonde measurements of winds and ozone. We modified the meteorological model to include provisions for time-varying synoptic-scale winds, adjustments for local wind effects, and detailed surface-coverage descriptions. We developed a new method to define mixing-layer heights based on model outputs. The meteorology and dispersion models were able to provide reasonable representations of the measurements and to define the sources of some of the major uncertainties in the model-measurement comparisons.
NASA Astrophysics Data System (ADS)
Dong, J. T.; Ji, F.; Xia, H. J.; Liu, Z. J.; Zhang, T. D.; Yang, L.
2018-01-01
An angle-resolved spectral Fabry-Pérot interferometer is reported for fast and accurate measurement of the refractive index dispersion of optical materials shaped as parallel plates. The light sheet from a wavelength-tunable laser is incident on the parallel plate over a range of converging angles. The transmitted interference light for each angle is dispersed and captured by a 2D sensor, in which the rows and columns simultaneously record the intensity as a function of wavelength and incident angle, respectively. The interferogram, termed the angle-resolved spectral intensity distribution, is analyzed by fitting the phase information instead of finding fringe-peak locations, which suffer from periodic ambiguity. The refractive index dispersion and the physical thickness can then be retrieved from a single-shot interferogram within 18 s. Experimental results for an optical substrate standard indicate that the uncertainty of the refractive index dispersion is less than 2.5 × 10⁻⁵ and the uncertainty of the thickness is 6 × 10⁻⁵ mm (3σ), owing to the high stability and single-shot operation of the proposed system.
Uncertainty Budget Analysis for Dimensional Inspection Processes (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdez, Lucas M.
2012-07-26
This paper is intended to provide guidance and describe how to prepare an uncertainty analysis of a dimensional inspection process through the use of an uncertainty budget analysis. The uncertainty analysis follows the methodology of the ISO GUM standard for calibration and testing. There is a specific distinction between how Type A and Type B uncertainty analyses are used in general and in a specific process. All theory and applications are utilized to present both a generalized approach to estimating measurement uncertainty and a way to report these estimates for dimensional measurements in a dimensional inspection process. The analysis of this uncertainty budget shows that a well-controlled dimensional inspection process produces a conservative process uncertainty, which can be attributed to the assumptions necessary for best possible results.
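A minimal GUM-style sketch of the Type A / Type B combination the paper describes; the measured values, rectangular half-width, and thermal term are invented.

```python
import numpy as np

# GUM uncertainty budget: combine Type A (statistical) and Type B (assumed
# rectangular) standard uncertainties in quadrature, then expand with k=2.
repeats = np.array([10.002, 10.004, 9.999, 10.001, 10.003])  # mm
u_A = repeats.std(ddof=1) / np.sqrt(len(repeats))    # Type A: std of the mean

half_width_cal = 0.002                                # mm, calibration certificate
u_B_cal = half_width_cal / np.sqrt(3.0)               # rectangular distribution
u_B_temp = 0.0008                                     # mm, thermal expansion guess

u_c = np.sqrt(u_A**2 + u_B_cal**2 + u_B_temp**2)      # combined standard uncertainty
U = 2.0 * u_c                                         # expanded, k = 2 (~95 %)
print(f"result: {repeats.mean():.4f} mm +/- {U:.4f} mm (k=2)")
```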
The NANOGrav Nine-Year Data Set: Measurement and Analysis of Variations in Dispersion Measures
NASA Technical Reports Server (NTRS)
Jones, M. L.; McLaughlin, M. A.; Lam, M. T.; Cordes, J. M.; Levin, L.; Chatterjee, S.; Arzoumanian, Z.; Crowter, K.; Demorest, P. B.; Dolch, T.;
2017-01-01
We analyze dispersion measure (DM) variations of 37 millisecond pulsars in the nine-year North American Nanohertz Observatory for Gravitational Waves (NANOGrav) data release and constrain the sources of these variations. DM variations can result from a changing distance between Earth and the pulsar, inhomogeneities in the interstellar medium, and solar effects. Variations are significant for nearly all pulsars, with characteristic timescales comparable to or even shorter than the average spacing between observations. Five pulsars have periodic annual variations, 14 pulsars have monotonically increasing or decreasing trends, and 14 pulsars show both effects. Of the four pulsars with linear trends that have line-of-sight velocity measurements, three are consistent with a changing distance and require an overdensity of free electrons local to the pulsar. Several pulsars show correlations between DM excesses and lines of sight that pass close to the Sun. Mapping of the DM variations as a function of the pulsar trajectory can identify localized interstellar medium features and, in one case, an upper limit to the size of the dispersing region of 4 au. Four pulsars show roughly Kolmogorov structure functions (SFs), and another four show SFs less steep than Kolmogorov. One pulsar has too large an uncertainty to allow comparisons. We discuss explanations for apparent departures from a Kolmogorov-like spectrum, and we show that the presence of other trends and localized features or gradients in the interstellar medium is the most likely cause.
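For reference, the cold-plasma delay that makes DM measurable scales as DM/f²; a short sketch using the standard dispersion constant (the DM values are illustrative).

```python
import numpy as np

# Dispersive delay relative to infinite frequency:
#   dt = K_DM * DM / f^2, with f in GHz and DM in pc cm^-3.
K_DM = 4.149   # ms GHz^2 per pc cm^-3 (standard value, ~4.148808)

def delay_ms(dm, f_ghz):
    """Extra arrival delay (ms) at frequency f_ghz."""
    return K_DM * dm / f_ghz**2

dm = 30.0      # pc cm^-3, illustrative millisecond-pulsar value
for f in (0.43, 0.82, 1.4, 2.3):
    print(f"{f:4.2f} GHz: {delay_ms(dm, f):8.2f} ms")

# A DM change of 1e-3 pc cm^-3 shifts the 1.4 GHz arrival time by ~2 us,
# which is why small DM variations matter for precision pulsar timing:
print(delay_ms(1e-3, 1.4) * 1e3, "us")
```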
A Kinematic Study of the Andromeda Dwarf Spheroidal System
NASA Astrophysics Data System (ADS)
Collins, Michelle L. M.; Chapman, Scott C.; Rich, R. Michael; Ibata, Rodrigo A.; Martin, Nicolas F.; Irwin, Michael J.; Bate, Nicholas F.; Lewis, Geraint F.; Peñarrubia, Jorge; Arimoto, Nobuo; Casey, Caitlin M.; Ferguson, Annette M. N.; Koch, Andreas; McConnachie, Alan W.; Tanvir, Nial
2013-05-01
We present a homogeneous kinematic analysis of red giant branch stars within 18 of the 28 Andromeda dwarf spheroidal (dSph) galaxies, obtained using the Keck I/LRIS and Keck II/DEIMOS spectrographs. Based on their g - i colors (taken with the CFHT/MegaCam imager), physical positions on the sky, and radial velocities, we assign probabilities of dSph membership to each observed star. Using this information, the velocity dispersions, central masses, and central densities of the dark matter halos are calculated for these objects and compared with the properties of the Milky Way dSph population. We also measure the average metallicity ([Fe/H]) from the co-added spectra of member stars for each M31 dSph and find that they are consistent with the trend of decreasing [Fe/H] with luminosity observed in the Milky Way population. We find that three of the M31 dSphs studied appear as significant outliers in terms of their central velocity dispersion: And XIX, XXI, and XXV, all of which have large half-light radii (≳700 pc) and low velocity dispersions (σ_v < 5 km s⁻¹). In addition, And XXV has a mass-to-light ratio within its half-light radius of just [M/L]_half = 10.3 (+7.0, −6.7), making it consistent with a simple stellar system with no appreciable dark matter component within its 1σ uncertainties. We suggest that the structure of the dark matter halos of these outliers has been significantly altered by tides.
NASA Astrophysics Data System (ADS)
Li, Ming-Hua; Zhu, Weishan; Zhao, Dong
2018-05-01
Gas is the dominant component of baryonic matter in most galaxy groups and clusters. The spatial offset of the gas centre from the halo centre can serve as an indicator of the dynamical state of a cluster, and knowledge of such offsets is important for estimating the uncertainties incurred when using clusters as cosmological probes. In this paper, we study the centre offsets r_off between the gas and all the matter within halo systems in ΛCDM cosmological hydrodynamic simulations. We focus on two kinds of centre offsets: the three-dimensional PB offsets between the gravitational potential minimum of the entire halo and the barycentre of the ICM, and the two-dimensional PX offsets between the potential minimum of the halo and the iterative centroid of the projected synthetic X-ray emission of the halo. Haloes at higher redshifts tend to have larger rescaled offsets r_off/r_200 and larger rescaled gas velocity dispersions σ_v^gas/σ_200. For both types of offsets, we find that the correlation between the rescaled centre offset r_off/r_200 and the rescaled 3D gas velocity dispersion σ_v^gas/σ_200 can be approximately described by the quadratic relation r_off/r_200 ∝ (σ_v^gas/σ_200 − k_2)². A Bayesian analysis with an MCMC method is employed to estimate the model parameters. The dependence of the correlation on redshift and on the gas mass fraction is also investigated.
Nearshore dynamics of artificial sand and oil agglomerates
Dalyander, P. Soupy; Plant, Nathaniel G.; Long, Joseph W.; McLaughlin, Molly R.
2015-01-01
Weathered oil can mix with sediment to form heavier-than-water sand and oil agglomerates (SOAs) that can cause beach re-oiling for years after a spill. Few studies have focused on the physical dynamics of SOAs. In this study, artificial SOAs (aSOAs) were created and deployed in the nearshore, and shear-stress-based mobility formulations were assessed for predicting SOA response. Prediction sensitivity to uncertainty in hydrodynamic conditions and shear stress parameterizations was explored. Critical stress estimates accounting for large-particle exposure in a mixed bed gave the best predictions of mobility under shoaling and breaking waves. In the surf zone, the 10-cm aSOA was immobile and began to bury into the seafloor while smaller size classes dispersed alongshore. aSOAs up to 5 cm in diameter were frequently mobilized in the swash zone. The uncertainty in predicting aSOA dynamics reflects a broader uncertainty in applying mobility and transport formulations to cm-sized particles.
Representation of analysis results involving aleatory and epistemic uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
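A toy double-loop Monte Carlo sketch of the construction: each epistemic draw fixes the poorly known parameters and produces one CDF over the aleatory variability, yielding a family of CDFs; the response function and distributions are invented.

```python
import numpy as np

# Outer loop: epistemic draws (fixed but poorly known parameter).
# Inner loop: aleatory draws (inherent randomness). Each outer draw
# yields one empirical CDF; together they form a family of CDFs.
rng = np.random.default_rng(3)
n_epistemic, n_aleatory = 50, 2000

grid = np.linspace(0.0, 20.0, 200)
cdfs = np.empty((n_epistemic, grid.size))
for i in range(n_epistemic):
    theta = rng.uniform(1.0, 3.0)               # epistemic parameter
    y = theta * rng.weibull(1.5, n_aleatory)    # aleatory response sample
    cdfs[i] = np.searchsorted(np.sort(y), grid) / n_aleatory

# Envelope and pointwise median of the CDF family:
lo, med, hi = np.percentile(cdfs, [5, 50, 95], axis=0)
print(med[100], lo[100], hi[100])
```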
Measurement uncertainty analysis techniques applied to PV performance measurements
NASA Astrophysics Data System (ADS)
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and into tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
Wang, Yi-Ya; Zhan, Xiu-Chun
2014-04-01
The evaluation of the uncertainty of analytical results for 165 geological samples determined by polarized energy-dispersive X-ray fluorescence spectrometry (P-EDXRF) is reported, following internationally accepted guidelines. One hundred sixty-five pressed pellets of geological samples with similar matrices and reliable reference values were analyzed by P-EDXRF. The samples were divided into several concentration sections covering the concentration range of each component, and the relative uncertainties caused by precision and by accuracy were evaluated for 27 components. For a given element and concentration section, the relative uncertainty caused by precision was calculated from the average relative standard deviation over the concentration levels in that section, with n = 6 replicate results per concentration level. The relative uncertainty caused by accuracy in a concentration section was evaluated from the relative standard deviation of the relative deviations over the concentration levels in that section. Following error propagation theory, the precision and accuracy uncertainties were combined into a global uncertainty that serves as the method uncertainty. This model resolves several difficult questions in uncertainty evaluation, such as the uncertainties caused by the complex matrices of geological samples, the calibration procedure, standard samples, unknown samples, matrix correction, overlap correction, sample preparation, instrument condition, and the mathematical model. The uncertainty obtained for this method can serve as the uncertainty for an unknown sample of similar matrix within a given concentration section. This evaluation model is a basic statistical method of practical value, which can provide a strong basis for building subsequent uncertainty-evaluation functions. However, the model requires a large number of samples and therefore cannot simply be transferred to other sample types with different matrices. We will use this study as a basis for establishing a sound mathematical-statistical function model applicable to different types of samples.
Schools Get Katrina Aid, Uncertainty: $645 Million May Not Cover Costs of Displaced Students
ERIC Educational Resources Information Center
Klein, Alyson
2006-01-01
As federal aid for students uprooted by Hurricanes Katrina and Rita begins making its way to cash-strapped school districts, many educators are worried that the money Congress allocated will fall well short of their costs. Since the hurricanes damaged hundreds of schools in the Gulf Coast region and initially dispersed nearly 375,000 students,…
Uncertainty Propagation and the Fano-Based Information Theoretic Method: A Radar Example
2015-02-01
Hogg, "Phase transitions and the search problem", Artificial Intelligence, vol. 81, 1996, pp. 1-15. [39] R... dispersion of the mean mutual information of the estimate is low enough to support the use of the linear approximation.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its ability to provide accurate sensitivity measurements. However, the conventional variance-based method considers only the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric, with each layer capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key processes of the model system (e.g., the groundwater recharge process or the reactive transport process). For testing and demonstration, the developed methodology was applied to a real-world groundwater reactive transport model with various uncertainty sources. The results demonstrate that the new sensitivity analysis method estimates accurate importance measures for any uncertainty source formed from different combinations of uncertainty components, providing useful information for environmental management and for decision-makers formulating policies and strategies.
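A self-contained sketch of the variance-based indices at the core of such a framework, using the Saltelli A/B/AB sampling scheme with standard estimators on the Ishigami test function as a stand-in for a groundwater model; nothing here reproduces the Bayesian-network layering itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 4096, 3

def model(x):
    """Ishigami test function, a common sensitivity-analysis benchmark."""
    a, b = 7.0, 0.1
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # swap column i of A for B's
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var         # first-order (Saltelli 2010)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var   # total-order (Jansen)
    print(f"x{i+1}: S1 = {S1:5.2f}, ST = {ST:5.2f}")
```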
Water isotopic ratios from a continuously melted ice core sample
NASA Astrophysics Data System (ADS)
Gkinis, V.; Popp, T. J.; Blunier, T.; Bigler, M.; Schüpbach, S.; Johnsen, S. J.
2011-06-01
A new technique for on-line high-resolution isotopic analysis of liquid water, tailored for ice core studies, is presented. We build an interface between an infrared cavity ring-down spectrometer (IR-CRDS) and a continuous flow analysis (CFA) system. The system offers the possibility of performing simultaneous water isotopic analysis of δ18O and δD on a continuous stream of liquid water generated from a continuously melted ice rod. Injection of sub-μl amounts of liquid water is achieved by pumping sample through a fused silica capillary and instantaneously vaporizing it with 100% efficiency in a home-made oven at a temperature of 170 °C. A calibration procedure allows for proper reporting of the data on the VSMOW scale. We apply the necessary corrections based on the assessed performance of the system regarding instrumental drifts and dependence on humidity levels. The melt rates are monitored in order to assign a depth scale to the measured isotopic profiles. Application of spectral methods yields a combined uncertainty of the system below 0.1‰ and 0.5‰ for δ18O and δD, respectively. This performance is comparable to that achieved with mass spectrometry. Dispersion of the sample in the transfer lines limits the resolution of the technique; in this work we investigate and assess these dispersion effects. Using an optimal filtering method, we show how the measured profiles can be corrected for the smoothing effects resulting from the sample dispersion. Considering the significant advantages the technique offers (simultaneous measurement of δ18O and δD, potentially in combination with chemical components traditionally measured on CFA systems, and a notable reduction in analysis time and power consumption), we consider it an alternative to traditional isotope ratio mass spectrometry, with the possibility of deployment for field ice core studies. We present data acquired in the framework of the NEEM deep ice core drilling project in Greenland during the 2010 field season.
Water isotopic ratios from a continuously melted ice core sample
NASA Astrophysics Data System (ADS)
Gkinis, V.; Popp, T. J.; Blunier, T.; Bigler, M.; Schüpbach, S.; Kettner, E.; Johnsen, S. J.
2011-11-01
A new technique for on-line high-resolution isotopic analysis of liquid water, tailored for ice core studies, is presented. We built an interface between a wavelength-scanned cavity ring-down spectrometer (WS-CRDS) purchased from Picarro Inc. and a continuous flow analysis (CFA) system. The system offers the possibility of performing simultaneous water isotopic analysis of δ18O and δD on a continuous stream of liquid water generated from a continuously melted ice rod. Injection of sub-μl amounts of liquid water is achieved by pumping sample through a fused silica capillary and instantaneously vaporizing it with 100% efficiency in a home-made oven at a temperature of 170 °C. A calibration procedure allows for proper reporting of the data on the VSMOW-SLAP scale. We apply the necessary corrections based on the assessed performance of the system regarding instrumental drifts and dependence on the water concentration in the optical cavity. The melt rates are monitored in order to assign a depth scale to the measured isotopic profiles. Application of spectral methods yields a combined uncertainty of the system below 0.1‰ and 0.5‰ for δ18O and δD, respectively. This performance is comparable to that achieved with mass spectrometry. Dispersion of the sample in the transfer lines limits the temporal resolution of the technique; in this work we investigate and assess these dispersion effects. Using an optimal filtering method, we show how the measured profiles can be corrected for the smoothing effects resulting from the sample dispersion. Considering the significant advantages the technique offers (simultaneous measurement of δ18O and δD, potentially in combination with chemical components traditionally measured on CFA systems, and a notable reduction in analysis time and power consumption), we consider it an alternative to traditional isotope ratio mass spectrometry, with the possibility of deployment for field ice core studies. We present data acquired in the field during the 2010 season as part of the NEEM deep ice core drilling project in North Greenland.
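A sketch of the optimal-filtering correction in miniature: treat mixing in the sample lines as a known low-pass transfer function and apply a Wiener-style inverse up to an assumed signal-to-noise ratio; the kernel shape, noise level, and toy isotope signal are all invented, and circular convolution is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)
dx, n = 0.005, 4096                          # m of core per sample, series length
x = np.arange(n) * dx
true = np.where((x % 0.4) < 0.2, -38.0, -34.0)   # toy annual d18O layering

tau = 0.02                                   # m, exponential mixing length
h = np.exp(-np.arange(0.0, 0.2, dx) / tau)
h /= h.sum()                                 # unit-area smoothing kernel
H = np.fft.rfft(h, n=n)
meas = np.fft.irfft(np.fft.rfft(true) * H, n=n) + rng.normal(0.0, 0.05, n)

S_over_N = 100.0                             # assumed signal-to-noise power ratio
wiener = np.conj(H) / (np.abs(H)**2 + 1.0 / S_over_N)
restored = np.fft.irfft(np.fft.rfft(meas) * wiener, n=n)
print(f"rms error: measured {np.std(meas - true):.2f}, "
      f"restored {np.std(restored - true):.2f}")
```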
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advance has been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
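A minimal sketch of the PCA-summary idea: given many plausible calibration curves, a few principal components suffice to generate new plausible curves cheaply. The synthetic effective-area curves below are invented, not Chandra products.

```python
import numpy as np

rng = np.random.default_rng(6)
energy = np.linspace(0.3, 8.0, 300)                       # keV grid
base = 600.0 * np.exp(-0.5 * ((energy - 1.5) / 1.2) ** 2)  # fake effective area
samples = base * (1.0 + 0.03 * rng.standard_normal((500, 1))
                  + 0.02 * np.sin(energy * rng.uniform(0.5, 2.0, (500, 1))))

mean = samples.mean(axis=0)
U, s, Vt = np.linalg.svd(samples - mean, full_matrices=False)  # PCA via SVD
k = 3                                                      # leading components kept

def draw_curve():
    """New plausible calibration curve from the k-component summary."""
    coef = rng.standard_normal(k) * s[:k] / np.sqrt(samples.shape[0] - 1)
    return mean + coef @ Vt[:k]

print(draw_curve()[:5])   # draws are cheap once the SVD summary is stored
```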
Impact of uncertainty on modeling and testing
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Brown, Kendall K.
1995-01-01
A thorough understanding of the uncertainties associated with the modeling and testing of the Space Shuttle Main Engine (SSME) Engine will greatly aid decisions concerning hardware performance and future development efforts. This report will describe the determination of the uncertainties in the modeling and testing of the Space Shuttle Main Engine test program at the Technology Test Bed facility at Marshall Space Flight Center. Section 2 will present a summary of the uncertainty analysis methodology used and discuss the specific applications to the TTB SSME test program. Section 3 will discuss the application of the uncertainty analysis to the test program and the results obtained. Section 4 presents the results of the analysis of the SSME modeling effort from an uncertainty analysis point of view. The appendices at the end of the report contain a significant amount of information relative to the analysis, including discussions of venturi flowmeter data reduction and uncertainty propagation, bias uncertainty documentations, technical papers published, the computer code generated to determine the venturi uncertainties, and the venturi data and results used in the analysis.
Melting and Vaporization of the 1223 Phase in the System (Tl-Pb-Ba-Sr-Ca-Cu-O)
Cook, L. P.; Wong-Ng, W.; Paranthaman, P.
1996-01-01
The melting and vaporization of the 1223 [(Tl,Pb):(Ba,Sr):Ca:Cu] oxide phase in the system (Tl-Pb-Ba-Sr-Ca-Cu-O) have been investigated using a combination of dynamic methods (differential thermal analysis, thermogravimetry, effusion) and post-quenching characterization techniques (powder X-ray diffraction, scanning electron microscopy, energy-dispersive X-ray spectrometry). Vaporization rates, thermal events, and melt compositions were followed as a function of thallia loss from a 1223 stoichiometry. Melting and vaporization equilibria of the 1223 phase are complex, with as many as seven phases participating simultaneously. At a total pressure of 0.1 MPa the 1223 phase was found to melt completely at (980 ± 5) °C in oxygen, at a thallia partial pressure p(Tl2O) of (4.6 ± 0.5) kPa, where the quoted uncertainties are standard uncertainties, i.e., 1 estimated standard deviation. The melting reaction involves five other solids and a liquid, nominally as follows: 1223 → 1212 + (Ca,Sr)2CuO3 + (Sr,Ca)CuO2 + BaPbO3 + (Ca,Sr)O + liquid. Stoichiometries of the participating phases have been determined from microchemical analysis, and substantial elemental substitution on the 1212 and 1223 crystallographic sites is indicated. The 1223 phase occurs in equilibrium with liquids from its melting point down to at least 935 °C. The composition of the lowest-melting liquid detected for the bulk compositions of this study has been measured using microchemical analysis. Applications to the processing of superconducting wires and tapes are discussed.
Determination of Uncertainties for the New SSME Model
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Hawk, Clark W.
1996-01-01
This report discusses the uncertainty analysis performed in support of a new test analysis and performance prediction model for the Space Shuttle Main Engine. The new model utilizes uncertainty estimates for experimental data and for the analytical model to obtain the most plausible operating condition for the engine system. This report discusses the development of the data sets and uncertainty estimates to be used in the development of the new model. It also presents the application of uncertainty analysis to analytical models, including the uncertainty analysis for the conservation of mass and energy balance relations. A new methodology for the assessment of the uncertainty associated with linear regressions is also presented.
NASA Astrophysics Data System (ADS)
Diallo, M. S.; Holschneider, M.; Kulesh, M.; Scherbaum, F.; Ohrnberger, M.; Lück, E.
2004-05-01
This contribution is concerned with the estimation of attenuation and dispersion characteristics of surface waves observed on a shallow seismic record. The analysis is based on an initial parameterization of the phase and attenuation functions, which are then estimated by minimizing a properly defined merit function. To minimize the effect of random noise on the estimates of dispersion and attenuation, we use cross-correlations (in the Fourier domain) of preselected traces from some region of interest along the survey line. These cross-correlations are then expressed in terms of the parameterized attenuation and phase functions and the auto-correlation of the so-called source trace or reference trace. Cross-correlations that enter the optimization are selected so as to provide an average estimate of both the attenuation function and the phase (group) velocity of the area under investigation. The advantage of the method over the standard two-station Fourier technique is that uncertainties related to phase unwrapping and to estimating the number of 2π cycle skips in the phase are eliminated. However, when multiple mode arrivals are observed, it becomes nearly impossible to obtain reliable estimates of the dispersion curves for the different modes using the optimization method alone. To circumvent this limitation, we use the presented approach in conjunction with the wavelet propagation operator (Kulesh et al., 2003), which allows the application of band-pass filtering in the (ω-t) domain to select a particular mode for the minimization. Also, by expressing the cost function in the wavelet domain, the optimization can be performed either with respect to the phase, the modulus of the transform, or a combination of both. This flexibility in the design of the cost function provides an additional means of constraining the optimization results. Results from the application of this dispersion and attenuation analysis method are shown for both synthetic and real 2D shallow seismic data sets. M. Kulesh, M. Holschneider, M. S. Diallo, Q. Xie and F. Scherbaum, Modeling of Wave Dispersion Using Wavelet Transform (submitted to Pure and Applied Geophysics).
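As a point of contrast, here is a hedged single-pair sketch of the conventional frequency-domain estimate, whose phase-unwrapping step is precisely the fragile part the parameterized optimization avoids; trace geometry and names are hypothetical.

```python
import numpy as np

def cross_spectral_phase_velocity(trace_a, trace_b, dx, dt):
    """Single-pair sketch of a frequency-domain dispersion estimate: the
    cross-spectrum phase between two traces a distance dx apart gives the
    phase velocity c(f) = 2*pi*f*dx / phi(f). The np.unwrap call is exactly
    the step where 2*pi cycle-skip errors can creep in."""
    A = np.fft.rfft(trace_a)
    B = np.fft.rfft(trace_b)
    freqs = np.fft.rfftfreq(len(trace_a), d=dt)
    phi = np.unwrap(np.angle(A * np.conj(B)))
    with np.errstate(divide="ignore", invalid="ignore"):
        c = 2.0 * np.pi * freqs * dx / phi
    return freqs, c
```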
EDXRF as an alternative method for multielement analysis of tropical soils and sediments.
Fernández, Zahily Herrero; Dos Santos Júnior, José Araújo; Dos Santos Amaral, Romilton; Alvarez, Juan Reinaldo Estevez; da Silva, Edvane Borges; De França, Elvis Joacir; Menezes, Rômulo Simões Cezar; de Farias, Emerson Emiliano Gualberto; do Nascimento Santos, Josineide Marques
2017-08-10
The quality assessment of tropical soils and sediments is still under discussion, with efforts being made on the part of governmental agencies to establish reference values. Energy dispersive X-ray fluorescence (EDXRF) is a potential analytical technique for quantifying diverse chemical elements in geological material without chemical treatment, primarily when it is performed at an appropriate metrological level. In this work, analytical curves were obtained by means of the analysis of geological reference materials (RMs), which allowed the researchers to compare the sources of analytical uncertainty. After establishing the quality assurance of the analytical procedure, the EDXRF method was applied to determine chemical elements in soils from the state of Pernambuco, Brazil. The regression coefficients of the analytical curves used to determine Al, Ca, Fe, K, Mg, Mn, Ni, Pb, Si, Sr, Ti, and Zn were higher than 0.99. The quality of the analytical procedure was demonstrated at a 95% confidence level, in which the estimated analytical uncertainties agreed with those from the RMs' certificates of analysis. The analysis of diverse geological samples from Pernambuco indicated higher concentrations of Ni and Zn in sugarcane cultivation areas (maximum values of 41 mg kg-1 and 118 mg kg-1, respectively) and agricultural areas (41 mg kg-1 and 127 mg kg-1, respectively). The trace element Sr was mainly enriched in urban soils, with values of 400 mg kg-1. According to the results, the EDXRF method was successfully implemented, providing some chemical tracers for the quality assessment of tropical soils and sediments.
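A minimal sketch of the analytical-curve step: fit a least-squares calibration line and read off its residual scatter as one uncertainty source. The standards and responses below are hypothetical, not the paper's data.

```python
import numpy as np

def calibration_curve(conc, signal):
    """Least-squares analytical curve signal = a + b*conc, with the residual
    standard deviation as a simple measure of the curve's contribution to
    the analytical uncertainty."""
    conc, signal = np.asarray(conc), np.asarray(signal)
    b, a = np.polyfit(conc, signal, 1)          # slope, intercept
    resid = signal - (a + b * conc)
    r = np.corrcoef(conc, signal)[0, 1]         # regression coefficient
    s_y = np.sqrt(np.sum(resid**2) / (len(conc) - 2))
    return a, b, r, s_y

conc = [0.0, 5.0, 10.0, 20.0, 40.0]      # hypothetical standards (mg/kg)
signal = [0.02, 1.08, 2.03, 4.05, 8.02]  # hypothetical instrument response
a, b, r, s_y = calibration_curve(conc, signal)
x_unknown = (3.0 - a) / b                # invert the curve for a sample
```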
QuASAR: quantitative allele-specific analysis of reads.
Harvey, Chris T; Moyerbrailean, Gregory A; Davis, Gordon O; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger
2015-04-15
Expression quantitative trait loci (eQTL) studies have discovered thousands of genetic variants that regulate gene expression, enabling a better understanding of the functional role of non-coding sequences. However, eQTL studies are costly, requiring large sample sizes and genome-wide genotyping of each sample. In contrast, analysis of allele-specific expression (ASE) is becoming a popular approach to detect the effect of genetic variation on gene expression, even within a single individual. This is typically achieved by counting the number of RNA-seq reads matching each allele at heterozygous sites and testing the null hypothesis of a 1:1 allelic ratio. In principle, when genotype information is not readily available, it could be inferred from the RNA-seq reads directly. However, there are currently no existing methods that jointly infer genotypes and conduct ASE inference, while considering uncertainty in the genotype calls. We present QuASAR, quantitative allele-specific analysis of reads, a novel statistical learning method for jointly detecting heterozygous genotypes and inferring ASE. The proposed ASE inference step takes into consideration the uncertainty in the genotype calls, while including parameters that model base-call errors in sequencing and allelic over-dispersion. We validated our method with experimental data for which high-quality genotypes are available. Results for an additional dataset with multiple replicates at different sequencing depths demonstrate that QuASAR is a powerful tool for ASE analysis when genotypes are not available. Availability: http://github.com/piquelab/QuASAR. Contact: fluca@wayne.edu or rpique@wayne.edu. Supplementary Material is available at Bioinformatics online.
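A generic stand-in for the allelic-ratio test with overdispersion (not QuASAR's joint genotyping model), assuming scipy's beta-binomial distribution:

```python
from scipy.stats import betabinom

def ase_test(ref_reads, alt_reads, rho=0.05):
    """Two-sided test of a 1:1 allelic ratio at a heterozygous site with
    extra-binomial (beta-binomial) dispersion; rho in (0,1) is the
    overdispersion, mapped to a symmetric Beta(a, a) via rho = 1/(2a+1)."""
    n = ref_reads + alt_reads
    a = (1.0 - rho) / (2.0 * rho)
    dist = betabinom(n, a, a)
    p = 2.0 * min(dist.cdf(ref_reads), dist.sf(ref_reads - 1))
    return min(p, 1.0)

print(ase_test(30, 12))   # a skewed ratio yields a small p-value
```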
Modeling of the dispersion of depleted uranium aerosol.
Mitsakou, C; Eleftheriadis, K; Housiadas, C; Lazaridis, M
2003-04-01
Depleted uranium is a low-cost radioactive material that, in addition to other applications, is used by the military in kinetic energy weapons against armored vehicles. During the Gulf and Balkan conflicts concern has been raised about the potential health hazards arising from the toxic and radioactive material released. The aerosol produced during impact and combustion of depleted uranium munitions can potentially contaminate wide areas around the impact sites or can be inhaled by civilians and military personnel. Attempts to estimate the extent and magnitude of the dispersion were until now performed by complex modeling tools employing unclear assumptions and input parameters of high uncertainty. An analytical puff model accommodating diffusion with simultaneous deposition is developed, which can provide a reasonable estimation of the dispersion of the released depleted uranium aerosol. Furthermore, the period of exposure for a given point downwind from the release can be estimated (as opposed to when using a plume model). The main result is that the depleted uranium mass is deposited very close to the release point. The deposition flux a couple of kilometers from the release point is more than one order of magnitude lower than that a few meters from the release point. The effects due to uncertainties in the key input variables are addressed. The most influential parameters are found to be atmospheric stability, height of release, and wind speed, whereas aerosol size distribution is less significant. The output from the analytical model developed was tested against the numerical model RPM-AERO. Results display satisfactory agreement between the two models.
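A hedged sketch of a Gaussian puff with ground reflection and first-order depletion standing in for deposition; it illustrates the type of analytical model described, not the paper's exact formulation.

```python
import numpy as np

def puff_concentration(x, y, z, t, Q, u, sigma, h, lam=0.0):
    """Generic Gaussian puff: release mass Q, wind speed u along x, release
    height h, isotropic spread sigma(t), depletion rate lam (a crude stand-in
    for deposition losses)."""
    s = sigma(t)
    norm = Q * np.exp(-lam * t) / ((2.0 * np.pi) ** 1.5 * s**3)
    gx = np.exp(-((x - u * t) ** 2) / (2.0 * s**2))
    gy = np.exp(-(y**2) / (2.0 * s**2))
    gz = (np.exp(-((z - h) ** 2) / (2.0 * s**2))
          + np.exp(-((z + h) ** 2) / (2.0 * s**2)))  # image source at -h
    return norm * gx * gy * gz

c = puff_concentration(x=500.0, y=0.0, z=1.5, t=250.0, Q=1.0, u=2.0,
                       sigma=lambda t: 0.3 * t**0.9, h=10.0, lam=1e-4)
```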
NASA Astrophysics Data System (ADS)
Prasad, K.; Thorpe, A. K.; Duren, R. M.; Thompson, D. R.; Whetstone, J. R.
2016-12-01
The National Institute of Standards and Technology (NIST) has supported the development and demonstration of a measurement capability to accurately locate greenhouse gas sources and measure their flux to the atmosphere over urban domains. However, uncertainties in transport models, which form the basis of all top-down approaches, can significantly affect our capability to attribute sources and predict their flux to the atmosphere. Reducing uncertainties between bottom-up and top-down models will require high resolution transport models as well as validation and verification of dispersion models over an urban domain. Tracer experiments involving the release of perfluorocarbon tracers (PFTs) at known flow rates offer the best approach for validating dispersion/transport models. However, tracer experiments are limited by cost, the ability to make continuous measurements, and environmental concerns. Natural tracer experiments, such as the leak from the Aliso Canyon underground storage facility, offer a unique opportunity to improve and validate high resolution transport models, test leak hypotheses, and estimate the amount of methane released. High spatial resolution (10 m) Large Eddy Simulations (LES) coupled with WRF atmospheric transport models were performed to simulate the dynamics of the Aliso Canyon methane plume and to quantify the source. High resolution forward simulation results were combined with aircraft- and tower-based in-situ measurements as well as data from NASA airborne imaging spectrometers. Comparison of simulation results with measurement data demonstrates the capability of the LES models to accurately model transport and dispersion of methane plumes over urban domains.
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows better performance for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
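A back-of-envelope sketch of the scaling idea, with purely illustrative parameters and traffic term in place of the calibrated SEUS parameterisations:

```python
import numpy as np

def street_canyon_concentration(q, width, u_wind, traffic_rate, c_background,
                                alpha=0.1, beta=0.3):
    """Concentration scaling in the spirit of SEUS: c = q/(W*u_d) + c_b,
    with a dispersive velocity scale u_d combining wind- and traffic-induced
    turbulence through two dimensionless empirical parameters. alpha, beta
    and the sqrt traffic term are illustrative assumptions only."""
    u_traffic = beta * np.sqrt(traffic_rate)
    u_d = np.hypot(alpha * u_wind, u_traffic)
    return q / (width * u_d) + c_background
```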
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
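A minimal sketch of a first-order variance-based index and of the model/scenario averaging idea; this is a simple binning estimator, not the authors' exact method.

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """First-order variance-based index S_i = Var(E[Y|X_i]) / Var(Y),
    estimated by binning a plain Monte Carlo sample."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (means - y.mean()) ** 2) / y.var()

def averaged_index(x, y_by_case, case_weights):
    """Model/scenario-averaged importance: a probability-weighted average of
    the per-case indices (a sketch of the averaging idea only)."""
    return sum(w * first_order_index(x, y)
               for w, y in zip(case_weights, y_by_case))
```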
Effects of climate change and seed dispersal on airborne ragweed pollen loads in Europe
NASA Astrophysics Data System (ADS)
Hamaoui-Laguel, Lynda; Vautard, Robert; Liu, Li; Solmon, Fabien; Viovy, Nicolas; Khvorostyanov, Dmitry; Essl, Franz; Chuine, Isabelle; Colette, Augustin; Semenov, Mikhail A.; Schaffhauser, Alice; Storkey, Jonathan; Thibaudon, Michel; Epstein, Michelle M.
2015-08-01
Common ragweed (Ambrosia artemisiifolia) is an invasive alien species in Europe producing pollen that causes severe allergic disease in susceptible individuals. Ragweed plants could further invade European land with climate and land-use changes. However, airborne pollen evolution depends not only on plant invasion, but also on changes in pollen production, release and atmospheric dispersion. To predict the effect of climate and land-use changes on airborne pollen concentrations, we used two comprehensive modelling frameworks accounting for all these factors under high-end and moderate climate and land-use change scenarios. We estimate that by 2050 airborne ragweed pollen concentrations will be about 4 times higher than they are now, with an uncertainty range of a factor of 2 to 12, largely depending on the seed dispersal rate assumptions. About a third of the airborne pollen increase is due to on-going seed dispersal, irrespective of climate change. The remaining two-thirds are related to climate and land-use changes that will extend ragweed habitat suitability in northern and eastern Europe and increase pollen production in established ragweed areas owing to increasing CO2. Therefore, climate change and ragweed seed dispersal in current and future suitable areas will increase airborne pollen concentrations, which may consequently heighten the incidence and prevalence of ragweed allergy.
Roughness, resistance, and dispersion: Relationships in small streams
NASA Astrophysics Data System (ADS)
Noss, Christian; Lorke, Andreas
2016-04-01
Although relationships between roughness, flow, and transport processes in rivers and streams have been investigated for several decades, the prediction of flow resistance and longitudinal dispersion in small streams is still challenging. Major uncertainties in existing approaches for quantifying flow resistance and longitudinal dispersion at the reach scale arise from limitations in the characterization of riverbed roughness. In this study, we characterized the riverbed roughness in small moderate-gradient streams (0.1-0.5% bed slope) and investigated its effects on flow resistance and dispersion. We analyzed high-resolution transect-based measurements of stream depth and width, which resolved the complete roughness spectrum with scales ranging from the micro to the reach scale. Independently measured flow resistance and dispersion coefficients were mainly affected by roughness at spatial scales between the median grain size and the stream width, i.e., by roughness between the micro- and the mesoscale. We also compared our flow resistance measurements with calculations using various flow resistance equations. Flow resistance in our study streams was well approximated by the equations that were developed for high gradient streams (>1%) and it was overestimated by approaches developed for sand-bed streams with a smooth riverbed or ripple bed.
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
NASA Astrophysics Data System (ADS)
Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel
2017-04-01
Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
Estimation and uncertainty analysis of dose response in an inter-laboratory experiment
NASA Astrophysics Data System (ADS)
Toman, Blaza; Rösslein, Matthias; Elliott, John T.; Petersen, Elijah J.
2016-02-01
An inter-laboratory experiment for the evaluation of toxic effects of NH2-polystyrene nanoparticles on living human cancer cells was performed with five participating laboratories. Previously published results from nanocytotoxicity assays are often contradictory, mostly due to challenges related to producing a reliable cytotoxicity assay protocol for use with nanomaterials. Specific challenges include reproducibility in preparing nanoparticle dispersions, biological variability from testing living cell lines, and the potential for nano-related interference effects. In this experiment, such challenges were addressed by developing a detailed experimental protocol and using a specially designed 96-well plate layout which incorporated a range of control measurements to assess multiple factors such as nanomaterial interference, pipetting accuracy, cell seeding density, and instrument performance. Detailed data analysis of these control measurements showed that good control of the experiments was attained by all participants in most cases. The main measurement objective of the study was the estimation of a dose response relationship between the concentration of the nanoparticles and the metabolic activity of the living cells, under several experimental conditions. The dose curve estimation was achieved by embedding a three-parameter logistic curve in a three-level Bayesian hierarchical model, accounting for uncertainty due to all known experimental conditions as well as between-laboratory variability in a top-down manner. Computation was performed using Markov chain Monte Carlo methods. The fit of the model was evaluated using Bayesian posterior predictive probabilities and found to be satisfactory.
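As a non-hierarchical, frequentist stand-in for the Bayesian dose-curve step, here is a three-parameter logistic fit; all doses and responses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(dose, top, ec50, hill):
    """Three-parameter logistic: response decays from `top` toward zero."""
    return top / (1.0 + (dose / ec50) ** hill)

dose = np.array([0.01, 0.1, 1.0, 10.0, 100.0])    # hypothetical ug/mL
resp = np.array([0.98, 0.95, 0.71, 0.22, 0.05])   # relative metabolic activity
popt, pcov = curve_fit(logistic3, dose, resp, p0=[1.0, 1.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # frequentist stand-in for the posterior spread
```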
Weighing the galactic disc using the Jeans equation: lessons from simulations
NASA Astrophysics Data System (ADS)
Candlish, G. N.; Smith, R.; Moni Bidin, C.; Gibson, B. K.
2016-03-01
Using three-dimensional stellar kinematic data from simulated galaxies, we examine the efficacy of a Jeans equation analysis in reconstructing the total disk surface density, including the dark matter, at the 'Solar' radius. Our simulation data set includes galaxies formed in a cosmological context using state-of-the-art high-resolution cosmological zoom simulations, and other idealized models. The cosmologically formed galaxies have been demonstrated to lie on many of the observed scaling relations for late-type spirals, and thus offer an interesting surrogate for real galaxies with the obvious advantage that all the kinematical data are known perfectly. We show that the vertical velocity dispersion is typically the dominant kinematic quantity in the analysis, and that the traditional method of using only the vertical force is reasonably effective at low heights above the disk plane. At higher heights the inclusion of the radial force becomes increasingly important. We also show that the method is sensitive to uncertainties in the measured disk parameters, particularly the scalelengths of the assumed double exponential density distribution, and the scalelength of the radial velocity dispersion. In addition, we show that disk structure and low number statistics can lead to significant errors in the calculated surface densities. Finally, we examine the implications of our results for previous studies of this sort, suggesting that more accurate measurements of the scalelengths may help reconcile conflicting estimates of the local dark matter density in the literature.
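A hedged sketch of the vertical-force-only Jeans estimate that, per the abstract, works at low heights; unit conventions are as commented, and this is not the authors' pipeline.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def surface_density(z, nu, sigma_z):
    """Vertical-only Jeans estimate of the surface density within |z|:
    Sigma(<z) = |K_z| / (2*pi*G), with K_z = (1/nu) d(nu * sigma_z^2)/dz.
    Valid at low heights where the radial ('tilt') term is negligible.
    z in kpc, sigma_z in km/s, nu the tracer density in arbitrary units;
    output in Msun/kpc^2."""
    kz = np.gradient(nu * sigma_z**2, z) / nu
    return np.abs(kz) / (2.0 * np.pi * G)
```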
|Vus| determination from inclusive strange tau decay and lattice HVP
NASA Astrophysics Data System (ADS)
Boyle, Peter; Hudspith, Renwick James; Izubuchi, Taku; Jüttner, Andreas; Lehner, Christoph; Lewis, Randy; Maltman, Kim; Ohki, Hiroshi; Portelli, Antonin; Spraggs, Matthew
2018-03-01
We propose and apply a novel approach to determining |Vus| which uses inclusive strange hadronic tau decay data and hadronic vacuum polarization functions (HVPs) computed on the lattice. The experimental and lattice data are related through dispersion relations which employ a class of weight functions having poles at space-like momentum. Implementing this approach using lattice data generated by the RBC/UKQCD collaboration, we show examples of weight functions which strongly suppress spectral integral contributions from the region where experimental data either have large uncertainties or do not exist while at the same time allowing accurate determinations of relevant lattice HVPs. Our result for |Vus| is in good agreement with determinations from K physics and 3-family CKM unitarity. The advantages of the new approach over the conventional sum rule analysis will be discussed.
Falaggis, Konstantinos; Towers, David P; Towers, Catherine E
2012-09-20
Multiwavelength interferometry (MWI) is a well established technique in the field of optical metrology. Previously, we have reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on the theoretical description and maximize the reliability in the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, analysis of the effects of wavelength uncertainty allows the ultimate performance of an MWI interferometer to be estimated.
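The core of the method of excess fractions can be sketched as a brute-force search; real MWI systems use the reliability-optimized strategies the paper introduces, so this shows only the underlying idea, with hypothetical wavelengths.

```python
import numpy as np

def excess_fractions(fracs, wavelengths, l_max, step=1e-9):
    """Scan candidate lengths and keep the one whose predicted fractional
    fringe orders frac(2L/lambda) best match the measured ones."""
    L = np.arange(0.0, l_max, step)
    err = np.zeros_like(L)
    for f, lam in zip(fracs, wavelengths):
        d = 2.0 * L / lam - f
        err += (d - np.round(d)) ** 2   # distance to the nearest integer order
    return L[np.argmin(err)]

lams = np.array([633e-9, 645e-9, 670e-9])     # hypothetical wavelengths (m)
L_true = 123.456e-6
fracs = (2.0 * L_true / lams) % 1.0           # noise-free measured fractions
print(excess_fractions(fracs, lams, l_max=200e-6))   # recovers ~123.456e-6
```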
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, however, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure of carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results obtained demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
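A minimal sketch of the forward direction; backward propagation runs from a target output distribution back to the parameter subspace and needs inverse/estimation machinery. The model and numbers here are hypothetical.

```python
import numpy as np

def forward_propagation(model, param_draws):
    """Forward uncertainty propagation: push a parameter sample through the
    model and summarize the output distribution."""
    out = np.array([model(p) for p in param_draws])
    return out.mean(axis=0), out.std(axis=0, ddof=1)

# Hypothetical stand-in for an oxidation-ditch output (e.g. effluent COD):
rng = np.random.default_rng(1)
draws = rng.normal(loc=[0.6, 1.2], scale=[0.05, 0.10], size=(5000, 2))
mean, sd = forward_propagation(lambda p: p[0] * np.exp(-p[1]), draws)
```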
Pretest uncertainty analysis for chemical rocket engine tests
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1987-01-01
A parametric pretest uncertainty analysis has been performed for a chemical rocket engine test at a unique 1000:1 area ratio altitude test facility. Results from the parametric study provide the error limits required in order to maintain a maximum uncertainty of 1 percent on specific impulse. Equations used in the uncertainty analysis are presented.
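The budget logic can be sketched in a few lines, assuming Isp = F/(mdot*g0) and independent first-order errors; the 0.7% figures below are illustrative, not the paper's error limits.

```python
import numpy as np

# Specific impulse is Isp = F / (mdot * g0), so to first order its relative
# uncertainty combines thrust and mass-flow errors in root-sum-square. A
# pretest budget asks how small each contributor must be to hold a target.
def isp_relative_uncertainty(u_thrust_rel, u_mdot_rel):
    return np.hypot(u_thrust_rel, u_mdot_rel)

print(isp_relative_uncertainty(0.007, 0.007))  # two 0.7% sources -> ~0.99%
```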
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
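A minimal stand-in for the autoregressive error-model idea, here at order 1; the paper's hierarchical treatment estimates such parameters within the sampling rather than from point estimates.

```python
import numpy as np

def ar1_coefficient(residuals):
    """Yule-Walker estimate of the lag-1 autoregression coefficient."""
    r = residuals - residuals.mean()
    return np.dot(r[1:], r[:-1]) / np.dot(r, r)

def ar1_whiten(residuals, phi):
    """AR(1) whitening: e_t - phi*e_{t-1} should be close to independent
    noise if an AR(1) error model is adequate; higher orders extend this
    in the obvious way."""
    return residuals[1:] - phi * residuals[:-1]
```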
Uncertainties in stormwater runoff data collection from a small urban catchment, Southeast China.
Huang, Jinliang; Tu, Zhenshun; Du, Pengfei; Lin, Jie; Li, Qingsheng
2010-01-01
Monitoring data are often used to identify stormwater runoff characteristics and in stormwater runoff modelling without consideration of their inherent uncertainties. Integrated with discrete sample analysis and error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage and laboratory analysis. The results showed that the uncertainties due to sample collection, storage and laboratory analysis of COD from stormwater runoff are 13.99%, 19.48% and 12.28%. Meanwhile, flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties regarding event flow volume, COD EMC and COD event loads were quantified as 7.03%, 10.26% and 18.47%.
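A two-line check of the propagation arithmetic, assuming the event load is the product of event volume and EMC; the closing remark about correlation is an inference from the numbers, not a claim made in the paper.

```python
import numpy as np

# Event load L = V * EMC, so with independent relative errors the
# first-order propagation is u_L = sqrt(u_V^2 + u_EMC^2):
u_volume, u_emc = 0.0703, 0.1026      # values quoted in the abstract
print(np.hypot(u_volume, u_emc))      # ~0.124 if fully independent
# The reported 18.47% exceeds this independent-error figure, which would be
# consistent with correlated contributions (flow errors enter both terms).
```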
Detailed Uncertainty Analysis of the ZEM-3 Measurement System
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The measurement of the Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. Therefore, it stands that the measurement uncertainty must be well understood to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon, and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor has been found to have an uncertainty of +9/-14% at high temperature and ±9% near room temperature.
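The propagation behind a power-factor figure can be sketched directly, assuming PF = S^2/rho and independent errors; the 4% and 3% inputs are illustrative only.

```python
import numpy as np

def power_factor_uncertainty(u_seebeck_rel, u_resistivity_rel):
    """PF = S^2 / rho, so to first order (independent errors assumed)
    u_PF/PF = sqrt((2*u_S/S)^2 + (u_rho/rho)^2); the factor of 2 is why
    Seebeck errors such as the cold-finger bias dominate the power factor."""
    return np.hypot(2.0 * u_seebeck_rel, u_resistivity_rel)

print(power_factor_uncertainty(0.04, 0.03))  # 4% on S, 3% on rho -> ~8.5%
```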
A KINEMATIC STUDY OF THE ANDROMEDA DWARF SPHEROIDAL SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Michelle L. M.; Martin, Nicolas F.; Chapman, Scott C.
We present a homogeneous kinematic analysis of red giant branch stars within 18 of the 28 Andromeda dwarf spheroidal (dSph) galaxies, obtained using the Keck I/LRIS and Keck II/DEIMOS spectrographs. Based on their g - i colors (taken with the CFHT/MegaCam imager), physical positions on the sky, and radial velocities, we assign probabilities of dSph membership to each observed star. Using this information, the velocity dispersions, central masses, and central densities of the dark matter halos are calculated for these objects, and compared with the properties of the Milky Way dSph population. We also measure the average metallicity ([Fe/H]) from the co-added spectra of member stars for each M31 dSph and find that they are consistent with the trend of decreasing [Fe/H] with luminosity observed in the Milky Way population. We find that three of our studied M31 dSphs appear as significant outliers in terms of their central velocity dispersion: And XIX, XXI, and XXV, all of which have large half-light radii (≳700 pc) and low velocity dispersions (σ_v < 5 km s^-1). In addition, And XXV has a mass-to-light ratio within its half-light radius of just [M/L]_half = 10.3 (+7.0/-6.7), making it consistent with a simple stellar system with no appreciable dark matter component within its 1σ uncertainties. We suggest that the structure of the dark matter halos of these outliers has been significantly altered by tides.
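To see why such systems are consistent with little dark matter, one common dynamical-mass estimator (in the style of Wolf et al. 2010, shown for illustration only; not necessarily the estimator used in the paper):

```python
def mass_within_half_light(sigma_los_kms, r_half_pc):
    """Wolf et al. (2010)-style estimator M_1/2 = 4*sigma^2*R_e/G, i.e.
    roughly 930 * sigma^2 * R_e solar masses with sigma_los in km/s and
    the projected half-light radius R_e in pc. A low sigma_v at large R_e
    implies a modest dynamical mass."""
    return 930.0 * sigma_los_kms**2 * r_half_pc

print(mass_within_half_light(5.0, 700.0))   # ~1.6e7 Msun
```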
Estimate of the uncertainty in measurement for the determination of mercury in seafood by TDA AAS.
Torres, Daiane Placido; Olivares, Igor R B; Queiroz, Helena Müller
2015-01-01
An approach for estimating the uncertainty in measurement is proposed that considers the individual sources related to the different steps of the method under evaluation as well as the uncertainties estimated from the validation data for the determination of mercury in seafood by thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS). The considered method has been fully optimized and validated in an official laboratory of the Ministry of Agriculture, Livestock and Food Supply of Brazil, in order to comply with national and international food regulations and quality assurance. The referred method has been accredited under the ISO/IEC 17025 norm since 2010. The estimate of the uncertainty in measurement was based on six sources of uncertainty for mercury determination in seafood by TDA AAS, following the validation process: linear least-squares regression, repeatability, intermediate precision, correction factor of the analytical curve, sample mass, and standard reference solution. Those that most influenced the uncertainty in measurement were sample mass, repeatability, intermediate precision and the calibration curve. The estimate of uncertainty in measurement obtained in the present work reached a value of 13.39%, which complies with European Regulation EC 836/2011. This figure represents a very realistic estimate of routine conditions, since it fairly encompasses the dispersion obtained from the value attributed to the sample and the value measured by the laboratory analysts. From this outcome, it is possible to infer that the validation data (based on calibration curve, recovery and precision), together with the variation in sample mass, can offer a proper estimate of the uncertainty in measurement.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ketabchi, Hamed
2017-12-01
Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.
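A generic sketch of the GP-emulation workflow using scikit-learn; the simulator, design, and kernel below are all hypothetical stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_swi_model(x):
    """Stand-in for the costly seawater-intrusion simulator."""
    return np.sin(3.0 * x[0]) + 0.5 * x[1] ** 2

rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 1.0, size=(40, 2))    # a small design of runs
y_train = np.array([expensive_swi_model(x) for x in X_train])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

# The emulator is now cheap enough for massive Monte Carlo sampling:
X_mc = rng.uniform(0.0, 1.0, size=(100_000, 2))
y_mc, y_sd = gp.predict(X_mc, return_std=True)
```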
Optimal Tikhonov Regularization in Finite-Frequency Tomography
NASA Astrophysics Data System (ADS)
Fang, Y.; Yao, Z.; Zhou, Y.
2017-12-01
The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
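The SVD machinery can be sketched compactly: the filter-factor form of Tikhonov regularization, plus the resolution-matrix diagonal that comes almost for free once the SVD is in hand. A minimal sketch, not the study's implementation.

```python
import numpy as np

def tikhonov_svd(G, d, lam):
    """Tikhonov-regularized solution via the SVD, with filter factors
    f_i = s_i^2 / (s_i^2 + lam^2); the resolution matrix is
    R = V diag(f) V^T, of which we return only the diagonal."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    m = Vt.T @ (f / s * (U.T @ d))
    resolution_diag = np.einsum("ij,j,ji->i", Vt.T, f, Vt)
    return m, resolution_diag
```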
Measurement system for diffraction efficiency of convex gratings
NASA Astrophysics Data System (ADS)
Liu, Peng; Chen, Xin-hua; Zhou, Jian-kang; Zhao, Zhi-cheng; Liu, Quan; Luo, Chao; Wang, Xiao-feng; Tang, Min-xue; Shen, Wei-min
2017-08-01
A measurement system for the diffraction efficiency of convex gratings is designed. The measurement system mainly includes four components: a light source, a front system, a dispersing system that contains a convex grating, and a detector. Based on the definition and measuring principle of diffraction efficiency, the optical scheme of the measurement system is analyzed and the design result is given. Then, in order to validate the feasibility of the designed system, the measurement system is set up and the diffraction efficiency of a convex grating with an aperture of 35 mm, a curvature radius of 72 mm, a blazed angle of 6.4°, a grating period of 2.5 μm and a working waveband of 400-900 nm is tested. Based on the GUM (Guide to the Expression of Uncertainty in Measurement), the uncertainties in the measuring results are evaluated. The measured diffraction efficiency data are compared to theoretical ones, which are calculated based on the grating groove parameters obtained by an atomic force microscope and Rigorous Coupled Wave Analysis, and the reliability of the measurement system is illustrated. Finally, the measurement performance of the system is analyzed and tested. The results show that the testing accuracy, the testing stability and the testing repeatability are 2.5%, 0.085% and 3.5%, respectively.
Shamp, Donald D.
2001-01-01
Over the past several decades investigators have extensively examined the 238U-234U-230Th systematics of a variety of geologic materials using alpha spectroscopy. Analytical uncertainty for 230Th by alpha spectroscopy has been limited to about 2% (2σ). The advantage of thermal ionization mass spectrometry (TIMS), introduced by Edwards and co-workers in the late 1980s, is the increased detectability of these isotopes by a factor of ~200, and a decrease in the uncertainty for 230Th to about 5‰ (2σ). This report is a procedural manual for using the USGS-Stanford Finnegan-Mat 262 TIMS to collect and isolate uranium and thorium isotopic ratio data. Chemical separation of uranium and thorium from the sample media is accomplished using acid dissolution followed by processing with anion exchange resins. The Finnegan-Mat 262 thermal ionization mass spectrometer utilizes a surface ionization technique in which nitrates of uranium and thorium are placed on a source filament. Upon heating, positive ion emission occurs. The ions are then accelerated and focused into a beam which passes through a curved magnetic field, dispersing the ions by mass. Faraday cups and/or an ion counter capture the ions and allow for quantitative analysis of the various isotopes.
Satpathy, Gouri; Tyagi, Yogesh Kumar; Gupta, Rajinder Kumar
2011-08-01
A rapid, effective and ecofriendly method for the sensitive screening and quantification of 72 pesticide residues in fruits and vegetables, by microwave-assisted extraction (MAE) followed by dispersive solid-phase extraction (d-SPE) and retention time locked (RTL) capillary gas-chromatographic separation with trace-ion-mode mass spectrometric determination, has been validated as per ISO/IEC 17025:2005. Identification and reporting with total and extracted ion chromatograms were greatly facilitated by deconvolution reporting software (DRS). For all compounds, LODs were 0.002-0.02 mg/kg and LOQs were 0.025-0.100 mg/kg. Correlation coefficients of the calibration curves in the range of 0.025-0.50 mg/kg were >0.993. To validate matrix effects, the repeatability, reproducibility, recovery and overall uncertainty were calculated for the 35 matrices at 0.025, 0.050 and 0.100 mg/kg. Recovery ranged between 72% and 114%, with RSDs of <20% for repeatability and intermediate precision. The reproducibility of the method was evaluated through inter-laboratory participation, and Z scores were obtained within ±2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solis, D.
1998-10-16
The DART code is based upon a thermomechanical model that can predict swelling, recrystallization, fuel-meat interdiffusion and other issues related to the behavior of MTR dispersed fuel elements under irradiation. As part of a common effort to develop an optimized version of DART, a comparison between DART predictions and experimental data from CNEA miniplate irradiations was made. The irradiation took place during 1981-82 for U3O8 miniplates and 1985-86 for U3Six at the Oak Ridge Research Reactor (ORR). The microphotographs were studied by means of the IMAWIN 3.0 image analysis code and different fission gas bubble distributions were obtained. It was also possible to find and identify different morphologic zones. In both kinds of fuels, different phases were recognized, like particle peripheral zones with evidence of Al-U reaction, internal recrystallized zones and bubbles. A very good agreement between code predictions and irradiation results was found. The few discrepancies are due to local, fabrication and irradiation uncertainties, such as the presence of the U3Si phase in U3Si2 particles and the effective burnup.
The structure of particle-laden jets and nonevaporating sprays
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Zhang, Q. F.; Faeth, G. M.
1983-01-01
Mean and fluctuating gas velocities, liquid mass fluxes and drop sizes were measured in nonevaporating sprays. These results, as well as existing measurements in solid particle-laden jets, were used to evaluate models of these processes. The following models were considered: (1) a locally homogeneous flow (LHF) model, where slip between the phases was neglected; (2) a deterministic separated flow (DSF) model, where slip was considered but effects of particle dispersion by turbulence were ignored; and (3) a stochastic separated flow (SSF) model, where effects of interphase slip and turbulent dispersion were considered using random-walk computations for particle motion. The LHF and DSF models did not provide very satisfactory predictions over the present data base. In contrast, the SSF model performed reasonably well - including conditions in nonevaporating sprays where enhanced dispersion of particles by turbulence caused the spray to spread more rapidly than single-phase jets for comparable conditions. While these results are encouraging, uncertainties in initial conditions limit the reliability of the evaluation. Current work is seeking to eliminate this deficiency.
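A heavily simplified sketch of one SSF random-walk step; eddy lifetime and transit-time cutoffs, which real SSF models include, are omitted here.

```python
import numpy as np

def ssf_step(xp, vp, u_mean, k_turb, dt, tau_p, rng):
    """One random-walk step of a stochastic separated flow (SSF) model:
    the particle sees the mean gas velocity plus a random turbulent
    fluctuation sampled from the local turbulence kinetic energy k_turb,
    and relaxes toward it over its response time tau_p (linear drag)."""
    u_fluct = rng.normal(0.0, np.sqrt(2.0 * k_turb / 3.0), size=3)
    vp = vp + ((u_mean + u_fluct) - vp) * dt / tau_p
    return xp + vp * dt, vp
```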
NASA Astrophysics Data System (ADS)
Kelson, Daniel D.; Williams, Rik J.; Dressler, Alan; McCarthy, Patrick J.; Shectman, Stephen A.; Mulchaey, John S.; Villanueva, Edward V.; Crane, Jeffrey D.; Quadri, Ryan F.
2014-03-01
We describe the Carnegie-Spitzer-IMACS (CSI) Survey, a wide-field, near-IR selected spectrophotometric redshift survey with the Inamori Magellan Areal Camera and Spectrograph (IMACS) on Magellan-Baade. By defining a flux-limited sample of galaxies in Spitzer Infrared Array Camera 3.6 μm imaging of SWIRE fields, the CSI Survey efficiently traces the stellar mass of average galaxies to z ~ 1.5. This first paper provides an overview of the survey selection, observations, processing of the photometry and spectrophotometry. We also describe the processing of the data: new methods of fitting synthetic templates of spectral energy distributions are used to derive redshifts, stellar masses, emission line luminosities, and coarse information on recent star formation. Our unique methodology for analyzing low-dispersion spectra taken with multilayer prisms in IMACS, combined with panchromatic photometry from the ultraviolet to the IR, has yielded high-quality redshifts for 43,347 galaxies in our first 5.3 deg2 of the SWIRE XMM-LSS field. We use three different approaches to estimate our redshift errors and find robust agreement. Over the full range of 3.6 μm fluxes of our selection, we find typical redshift uncertainties of σ_z/(1 + z) ≲ 0.015. In comparisons with previously published spectroscopic redshifts we find scatters of σ_z/(1 + z) = 0.011 for galaxies at 0.7 <= z <= 0.9, and σ_z/(1 + z) = 0.014 for galaxies at 0.9 <= z <= 1.2. For galaxies brighter and fainter than i = 23 mag, we find σ_z/(1 + z) = 0.008 and σ_z/(1 + z) = 0.022, respectively. Notably, our low-dispersion spectroscopy and analysis yields comparable redshift uncertainties and success rates for both red and blue galaxies, largely eliminating color-based systematics that can seriously bias observed dependencies of galaxy evolution on environment. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
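One plausible reading of the quoted scatter statistic is the normalized median absolute deviation, a standard outlier-resistant estimator for redshift comparisons; the abstract does not spell out its three approaches, so this is only a sketch.

```python
import numpy as np

def nmad_scatter(z_est, z_ref):
    """sigma_z/(1+z) via the normalized median absolute deviation."""
    dz = (z_est - z_ref) / (1.0 + z_ref)
    return 1.4826 * np.median(np.abs(dz - np.median(dz)))
```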
Climate Change and Integrodifference Equations in a Stochastic Environment.
Bouhours, Juliette; Lewis, Mark A
2016-09-01
Climate change impacts population distributions, forcing some species to migrate poleward if they are to survive and keep up with the suitable habitat that is shifting with the temperature isoclines. Previous studies have analysed whether populations have the capacity to keep up with shifting temperature isoclines, and have mathematically determined the combination of growth and dispersal that is needed to achieve this. However, the rate of isocline movement can be highly variable, with much uncertainty associated with yearly shifts. The same is true for population growth rates. Growth rates can be variable and uncertain, even within suitable habitats for growth. In this paper, we reanalyse the question of population persistence in the context of the uncertainty and variability in isocline shifts and rates of growth. Specifically, we employ a stochastic integrodifference equation model on a patch of suitable habitat that shifts poleward at a random rate. We derive a metric describing the asymptotic growth rate of the linearised operator of the stochastic model. This metric yields a threshold criterion for population persistence. We demonstrate that the variability in the yearly shift and in the growth rate has a significant negative effect on the persistence in the sense that it decreases the threshold criterion for population persistence. Mathematically, we show how the persistence metric can be connected to the principal eigenvalue problem for a related integral operator, at least for the case where isocline shifting speed is deterministic. Analysis of dynamics for the case where the dispersal kernel is Gaussian leads to the existence of a critical shifting speed, above which the population will go extinct, and below which the population will persist. This leads to clear bounds on rate of environmental change if the population is to persist. Finally, we illustrate our different results for butterfly population using numerical simulations and demonstrate how increased variances in isocline shifts and growth rates translate into decreased likelihoods of persistence.
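A minimal sketch of one generation of a (discretized) stochastic integrodifference model on a randomly shifting patch; all parameters and the lognormal growth draw are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def ide_generation(n, x, r, patch, kernel_sd):
    """One generation: geometric growth at rate r on the suitable patch,
    then Gaussian dispersal implemented as a convolution."""
    dx = x[1] - x[0]
    lo, hi = patch
    grown = r * n * ((x >= lo) & (x <= hi))
    kx = np.arange(-4.0 * kernel_sd, 4.0 * kernel_sd + dx, dx)
    kernel = np.exp(-kx**2 / (2.0 * kernel_sd**2))
    return np.convolve(grown, kernel / kernel.sum(), mode="same")

x = np.linspace(0.0, 100.0, 2001)
n = np.exp(-(x - 20.0) ** 2)                 # initial population
patch = np.array([15.0, 35.0])
rng = np.random.default_rng(4)
for year in range(50):
    patch += rng.normal(0.5, 0.2)            # random poleward isocline shift
    n = ide_generation(n, x, r=rng.lognormal(0.2, 0.1), patch=patch,
                       kernel_sd=1.0)
```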
NASA Astrophysics Data System (ADS)
Margheri, Luca; Sagaut, Pierre
2016-11-01
To significantly increase the contribution of numerical computational fluid dynamics (CFD) simulation to risk assessment and decision making, it is important to quantitatively measure the impact of uncertainties to assess the reliability and robustness of the results. As unsteady high-fidelity CFD simulations are becoming the standard for industrial applications, reducing the number of samples required to perform sensitivity analysis (SA) and uncertainty quantification (UQ) is a real engineering challenge. The novel approach presented in this paper is based on an efficient hybridization of the anchored-ANOVA and POD/Kriging methods, which have already been used in realistic CFD-UQ applications, and on the definition of best practices to achieve global accuracy. The anchored-ANOVA method is used to efficiently reduce the dimension of the UQ space, while POD/Kriging is used to smooth and interpolate each anchored-ANOVA term. The main advantages of the proposed method are illustrated through four applications of increasing complexity, most of them based on Large-Eddy Simulation as a high-fidelity CFD tool: turbulent channel flow, the flow around an isolated bluff body, a pedestrian wind comfort study in a full scale urban area, and an application to toxic gas dispersion in a full scale city area. The proposed c-APK method (anchored-ANOVA-POD/Kriging) inherits the advantages of each key element: interpolation through POD/Kriging precludes the use of quadrature schemes, therefore allowing for a more flexible sampling strategy, while the ANOVA decomposition allows for a better domain exploration. A comparison of the three methods is given for each application. In addition, the importance of adding flexibility to the control parameters and the choice of the quantity of interest (QoI) are discussed. As a result, global accuracy can be achieved with a reasonable number of samples, allowing computationally expensive CFD-UQ analysis.
Martin, Antony; Yong, Alan K.; Salomone, Larry A.
2014-01-01
Active-source Love waves, recorded by the multi-channel analysis of surface wave (MASLW) technique, were recently analyzed in two site characterization projects. Between 2010 and 2012, the 2009 American Recovery and Reinvestment Act (ARRA) funded GEOVision to conduct geophysical investigations at 191 seismographic stations in California and the Central Eastern U.S. (CEUS). The original project plan was to utilize active and passive Rayleigh wave-based techniques to obtain shear-wave velocity (VS) profiles to a minimum depth of 30 m and the time-averaged VS of the upper 30 meters (VS30). Early in this investigation it became clear that Rayleigh wave techniques, such as multi-channel analysis of surface waves (MASRW), were not suited for characterizing all sites. Shear-wave seismic refraction and MASLW techniques were therefore applied. In 2012, the Electric Power Research Institute funded characterization of 33 CEUS station sites. Based on experience from the ARRA investigation, both MASRW and MASLW data were acquired by GEOVision at 24 CEUS sites. At shallow rock sites, sites with steep velocity gradients, and, sites with a thin, low velocity, surficial soil layer overlying stiffer sediments, Love wave techniques generally were found to be easier to interpret, i.e., Love wave data typically yielded unambiguous fundamental mode dispersion curves and thus, reduce uncertainty in the resultant VS model. These types of velocity structure often excite dominant higher modes in Rayleigh wave data, but not in the Love wave data. It is possible to model Rayleigh wave data using multi- or effective-mode techniques; however, extraction of Rayleigh wave dispersion data was found to be difficult in many cases. These results imply that field procedures should include careful scrutiny of Rayleigh wave-based dispersion data in order to also collect Love wave data when warranted.
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac
1987-01-01
A preliminary uncertainty analysis was performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis is presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.
Thermal niche estimators and the capability of poor dispersal species to cope with climate change
NASA Astrophysics Data System (ADS)
Sánchez-Fernández, David; Rizzo, Valeria; Cieslak, Alexandra; Faille, Arnaud; Fresneda, Javier; Ribera, Ignacio
2016-03-01
For management strategies in the context of global warming, accurate predictions of species response are mandatory. However, to date most predictions are based on niche (bioclimatic) models that usually overlook biotic interactions, behavioral adjustments, or adaptive evolution, and assume that species can disperse freely without constraints. The deep subterranean environment minimises these uncertainties, as it is simple, homogeneous, and has constant environmental conditions. It is thus an ideal model system to study the effect of global change on species with poor dispersal capabilities. We assess the potential fate of a lineage of troglobitic beetles under global change predictions using different approaches to estimate their thermal niche: bioclimatic models, rates of thermal niche change estimated from a molecular phylogeny, and data from physiological studies. Using bioclimatic models, at most 60% of the species were predicted to have suitable conditions in 2080. Considering the rates of thermal niche change did not improve this prediction. However, physiological data suggest that subterranean species have a broad thermal tolerance, allowing them to withstand temperatures never experienced in their evolutionary history. These results stress the need for experimental approaches to assess the capability of poor dispersal species to cope with temperatures outside those they currently experience.
Stellar Velocity Dispersion and Anisotropy of the Milky Way Inner Halo
NASA Astrophysics Data System (ADS)
King, Charles, III; Brown, Warren R.; Geller, Margaret J.; Kenyon, Scott J.
2015-11-01
We measure the three components of velocity dispersion, σR, σθ, σϕ, for stars within 6 < R < 30 kpc of the Milky Way using a new radial velocity sample from the MMT telescope. We combine our measurements with previously published data so that we can more finely sample the stellar halo. We use a maximum likelihood statistical method for estimating mean velocities, dispersions, and covariances assuming only that velocities are normally distributed. The alignment of the velocity ellipsoid is consistent with a spherically symmetric gravitational potential. From the spherical Jeans equation, the mass of the Milky Way is M(R ≤ 12 kpc) = 1.3 × 10^11 M_⊙, with an uncertainty of 40%. We also find a region of discontinuity, 15 ≲ R ≲ 25 kpc, where the estimated velocity dispersions and anisotropies diverge from their anticipated values, confirming the break observed by others. We argue that this break in anisotropy is physically explained by coherent stellar velocity structure in the halo, such as the Sgr stream. To significantly improve our understanding of halo kinematics will require combining radial velocities with future Gaia proper motions.
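The maximum-likelihood step can be illustrated with a short sketch (Python with NumPy/SciPy; the velocities are synthetic, not the MMT sample): assuming normally distributed velocities, the mean and dispersion in one radial bin follow from minimizing the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic line-of-sight velocities (km/s) for stars in one radial bin.
v = rng.normal(loc=5.0, scale=110.0, size=200)

def nll(params):
    # Negative log-likelihood of a normal model (constants dropped).
    mu, sigma = params
    if sigma <= 0.0:
        return np.inf
    return 0.5 * np.sum(((v - mu) / sigma) ** 2) + v.size * np.log(sigma)

res = minimize(nll, x0=[0.0, 100.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(f"mean = {mu_hat:.1f} km/s, dispersion = {sigma_hat:.1f} km/s")
```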
Adaptation of flower and fruit colours to multiple, distinct mutualists.
Renoult, Julien P; Valido, Alfredo; Jordano, Pedro; Schaefer, H Martin
2014-01-01
Communication in plant-animal mutualisms frequently involves multiple perceivers. A fundamental uncertainty is whether and how species adapt to communicate with groups of mutualists having distinct sensory abilities. We quantified the colour conspicuousness of flowers and fruits originating from one European and two South American plant communities, using visual models of pollinators (bee and fly) and seed dispersers (bird, primate and marten). We show that flowers are more conspicuous than fruits to pollinators, and the reverse to seed dispersers. In addition, flowers are more conspicuous to pollinators than to seed dispersers and the reverse for fruits. Thus, despite marked differences in the visual systems of mutualists, flower and fruit colours have evolved to attract multiple, distinct mutualists but not unintended perceivers. We show that this adaptation is facilitated by a limited correlation between flower and fruit colours, and by the fact that colour signals as coded at the photoreceptor level are more similar within than between functional groups (pollinators and seed dispersers). Overall, these results provide the first quantitative demonstration that flower and fruit colours are adaptations allowing plants to communicate simultaneously with distinct groups of mutualists. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
Numerical simulations of atmospheric dispersion of iodine-131 by different models
Leelőssy, Ádám; Mészáros, Róbert; Kovács, Attila; Lagzi, István; Kovács, Tibor
2017-01-01
Nowadays, several dispersion models are available to simulate the transport processes of air pollutants and toxic substances, including radionuclides, in the atmosphere. The reliability of atmospheric transport models has been demonstrated in several recent cases from local to global scale; however, very few actual emission data are available to evaluate model results in real-life cases. In this study, the atmospheric dispersion of 131I emitted to the atmosphere during an industrial process was simulated with different models, namely the WRF-Chem Eulerian online coupled model and the HYSPLIT and RAPTOR Lagrangian models. Although only limited 131I detection data were available, the accuracy of the modeled plume direction could be evaluated in complex late-autumn weather situations. For the studied cases, the general reliability of the models was demonstrated. However, serious uncertainties arise related to low-level inversions, especially in the case of an emission event on 4 November 2011, when strong wind shear caused a significant difference between simulated and actual transport directions. The results underline the importance of prudent interpretation of dispersion model results and of identifying weather conditions with the potential to cause large model errors. PMID:28207853
Hierarchical spatiotemporal matrix models for characterizing invasions
Hooten, M.B.; Wikle, C.K.; Dorazio, R.M.; Royle, J. Andrew
2007-01-01
The growth and dispersal of biotic organisms is an important subject in ecology. Ecologists are able to accurately describe survival and fecundity in plant and animal populations and have developed quantitative approaches to study the dynamics of dispersal and population size. Of particular interest are the dynamics of invasive species. Such nonindigenous animals and plants can exert significant impacts on native biotic communities. Effective models for relative abundance have been developed; however, a better understanding of the dynamics of actual population size (as opposed to relative abundance) in an invasion would be beneficial to all branches of ecology. In this article, we adopt a hierarchical Bayesian framework for modeling the invasion of such species while addressing the discrete nature of the data and the uncertainty associated with the probability of detection. The nonlinear dynamics between discrete time points are intuitively modeled through an embedded deterministic population model with density-dependent growth and dispersal components. Additionally, we illustrate the importance of accommodating spatially varying dispersal rates. The method is applied to the specific case of the Eurasian Collared-Dove, an invasive species at mid-invasion in the United States at the time of this writing.
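The deterministic core of such a model can be sketched in a few lines. The toy below (Python; parameter values are arbitrary, and the full hierarchical Bayesian layers for detection probability and discrete counts are omitted) alternates density-dependent logistic growth with kernel-based dispersal on a one-dimensional chain of sites.

```python
import numpy as np

# One-dimensional chain of sites; the abundance vector evolves by
# density-dependent (logistic) local growth followed by kernel dispersal.
n_sites = 50
r, K = 0.8, 100.0                  # growth rate and carrying capacity
n = np.zeros(n_sites)
n[0] = 5.0                         # invasion starts at one edge

# Dispersal matrix from an exponential kernel; rows are normalized so
# that dispersal redistributes individuals without creating them.
x = np.arange(n_sites)
D = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
D /= D.sum(axis=1, keepdims=True)

for t in range(25):
    n = n + r * n * (1.0 - n / K)  # local growth
    n = D @ n                      # dispersal step

print(f"invasion front has reached site {np.nonzero(n > 1.0)[0].max()}")
```

Allowing the kernel width to vary with location would give the spatially varying dispersal rates the authors emphasize.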
Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S
2018-09-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamics model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m year⁻¹ under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation times, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.
Uncertainty Analysis of NASA Glenn's 8- by 6-Foot Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Stephens, Julia E.; Hubbard, Erin P.; Walter, Joel A.; McElroy, Tyler
2016-01-01
An analysis was performed to determine the measurement uncertainty of the Mach number of the 8- by 6-foot Supersonic Wind Tunnel at the NASA Glenn Research Center. This paper details the analysis process used, including methods for handling limited data and complicated data correlations. Due to the complexity of the equations used, a Monte Carlo method was utilized for this uncertainty analysis. A summary of the findings is presented as it pertains to understanding what the uncertainties are, how they impact various research tests in the facility, and how they might be reduced in the future.
Yoo, Kyung Hee
2007-06-01
This study was conducted to investigate the correlations among uncertainty, mastery, and appraisal of uncertainty in mothers of hospitalized children. Self-report questionnaires were used to measure the variables: uncertainty, mastery, and appraisal of uncertainty. In the data analysis, the SPSSWIN 12.0 program was utilized for descriptive statistics, Pearson's correlation coefficients, and regression analysis. Reliability of the instruments was Cronbach's α = .84-.94. Mastery was negatively correlated with uncertainty (r = -.444, p = .000) and with danger appraisal of uncertainty (r = -.514, p = .000). In the regression on danger appraisal of uncertainty, uncertainty and mastery were significant predictors, explaining 39.9% of the variance. Mastery was a significant mediating factor between uncertainty and danger appraisal of uncertainty in mothers of hospitalized children. Therefore, nursing interventions that improve mastery should be developed for mothers of hospitalized children.
Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler
2016-01-01
This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
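A stripped-down version of such a Monte Carlo propagation can be sketched as follows (Python; the pressure values and uncertainties are illustrative, not facility data): the free-stream Mach number follows from the isentropic total-to-static pressure ratio, and perturbed inputs are simply pushed through the equation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
gamma = 1.4  # ratio of specific heats for air

# Hypothetical nominal readings with one-sigma measurement uncertainties.
p_total = rng.normal(14.7, 0.02, N)   # total pressure, psia
p_static = rng.normal(3.0, 0.01, N)   # static pressure, psia

# Isentropic relation: M = sqrt( 2/(g-1) * ((p0/p)^((g-1)/g) - 1) ).
M = np.sqrt(2.0 / (gamma - 1.0)
            * ((p_total / p_static) ** ((gamma - 1.0) / gamma) - 1.0))

print(f"Mach = {M.mean():.4f} +/- {M.std():.4f} (1-sigma, random component)")
```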
Test of Parameterized Post-Newtonian Gravity with Galaxy-scale Strong Lensing Systems
NASA Astrophysics Data System (ADS)
Cao, Shuo; Li, Xiaolei; Biesiada, Marek; Xu, Tengpeng; Cai, Yongzhi; Zhu, Zong-Hong
2017-01-01
Based on a mass-selected sample of galaxy-scale strong gravitational lenses from the SLACS, BELLS, LSD, and SL2S surveys and using a well-motivated fiducial set of lens-galaxy parameters, we tested the weak-field metric on kiloparsec scales and found a constraint on the post-Newtonian parameter γ = 0.995 (+0.037/−0.047) under the assumption of a flat ΛCDM universe with parameters taken from Planck observations. General relativity (GR) predicts exactly γ = 1. Uncertainties concerning the total mass density profile, anisotropy of the velocity dispersion, and the shape of the light profile combine to systematic uncertainties of ~25%. By applying a cosmological model-independent method to simulated future LSST data, we found a significant degeneracy between the PPN γ parameter and the spatial curvature of the universe. Setting a prior on the cosmic curvature parameter −0.007 < Ωk < 0.006, we obtained the constraint γ = 1.000 (+0.0023/−0.0025). We conclude that strong lensing systems with measured stellar velocity dispersions may serve as another important probe to investigate the validity of GR, if the mass-dynamical structure of the lensing galaxies is accurately constrained in future lens surveys.
Support of gas flowmeter upgrade
NASA Technical Reports Server (NTRS)
Waugaman, Dennis
1996-01-01
A project history review, literature review, and vendor search were conducted to identify a flowmeter that would improve the accuracy of gaseous flow measurements in the White Sands Test Facility (WSTF) Calibration Laboratory and the Hydrogen High Flow Facility. Both facilities currently use sonic flow nozzles to measure flowrates. The flow nozzle pressure drops, combined with the corresponding pressure and temperature measurements, have been estimated to produce uncertainties in flowrate measurements of 2 to 5 percent. This study investigated the state of flowmeter technology to make recommendations that would reduce those uncertainties. Most flowmeters measure velocity or volume; mass flow must therefore be calculated from additional pressure and temperature measurements, which contribute to the error. The two exceptions are thermal dispersion meters and Coriolis mass flowmeters. Thermal dispersion meters are accurate to 1 to 5 percent. Coriolis meters are significantly more accurate, at least for liquids. For gases, there is evidence they may be accurate to within 0.5 percent or better of the flowrate, but there may be limitations due to velocity, pressure, Mach number, and vibration disturbances. In this report, a comparison of flowmeters is presented. Candidate Coriolis meters and a methodology to qualify the meters, with tests both at WSTF and at Southwest Research Institute, are recommended and outlined.
NASA Astrophysics Data System (ADS)
Zarlenga, Antonio; de Barros, Felipe; Fiori, Aldo
2016-04-01
We present a probabilistic framework for assessing human health risk due to groundwater contamination. Our goal is to quantify how physical hydrogeological and biochemical parameters control the magnitude and uncertainty of human health risk. Our methodology captures the whole risk chain, from aquifer contamination to tap water consumption by the human population. The contaminant concentration, the key parameter for risk estimation, is governed by the interplay between large-scale advection, caused by heterogeneity, and degradation processes, which are closely tied to local-scale dispersion. The core of the hazard identification, and of the methodology, is the reactive transport model: the erratic displacement of contaminant in groundwater, due to the spatial variability of hydraulic conductivity (K), is characterized by a first-order Lagrangian stochastic model; different dynamics are considered as possible pathways of biodegradation under aerobic and anaerobic conditions. With the goal of quantifying uncertainty, a Beta distribution is assumed for the concentration probability density function (pdf), while different levels of approximation are explored for the estimation of the one-point concentration moments. The information pertaining to flow and transport is connected with a dose-response assessment, which generally involves the estimation of physiological parameters of the exposed population. Human health response depends on the exposed individual's metabolism and is subject to uncertainty and inter-individual variability; the health parameters are therefore intrinsically stochastic. As a consequence, we provide an integrated, global probabilistic human health risk framework that allows the propagation of uncertainty from multiple sources. The final result, the health risk pdf, is expressed as a function of a few relevant, physically based parameters, such as the size of the injection area, the Péclet number, the K structure metrics and covariance shape, the reaction parameters pertaining to aerobic and anaerobic degradation, and the dose-response parameters. Even though the final result assumes a relatively simple form, a few numerical quadratures are required in order to evaluate the trajectory moments of the solute plume. In order to perform a sensitivity analysis, we apply the methodology to a hypothetical case study. The scenario investigated consists of an aquifer that constitutes a water supply for a population and in which a continuous source of NAPL contaminant feeds a steady plume. The risk analysis is limited to carcinogenic compounds, for which the well-known linear relation for human risk is assumed. The analysis shows several interesting findings: the risk distribution is strongly dependent on the pore-scale dynamics that trigger dilution and mixing, and biodegradation may significantly reduce the risk.
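A heavily simplified sketch of this risk chain (Python; all distributions and parameter values are hypothetical placeholders, and the Lagrangian transport model is replaced by a prescribed Beta-distributed concentration) shows how uncertainty propagates from concentration through exposure to the linear carcinogenic risk.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Tap-water concentration: Beta-distributed, scaled by a maximum value
# (in the paper this pdf comes from the stochastic transport model).
c_max = 0.05                                     # mg/L
conc = c_max * rng.beta(2.0, 8.0, N)

# Hypothetical physiological/exposure parameters of the population.
intake = rng.normal(2.0, 0.3, N).clip(min=0.5)   # L/day tap-water intake
weight = rng.normal(70.0, 12.0, N).clip(min=30)  # kg body weight
slope = rng.lognormal(np.log(0.1), 0.4, N)       # slope factor, (mg/(kg day))^-1

# Linear (low-dose) carcinogenic risk model: risk = slope * chronic dose.
dose = conc * intake / weight                    # mg/(kg day)
risk = slope * dose

print(f"median risk = {np.median(risk):.2e}, "
      f"95th percentile = {np.percentile(risk, 95):.2e}")
```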
NASA Technical Reports Server (NTRS)
Wang, T.; Simon, T. W.
1988-01-01
Development of a recent experimental program to investigate the effects of streamwise curvature on boundary layer transition required making a bendable, heated, and instrumented test wall, a rather nonconventional surface. The present paper describes this surface, the design choices made in its development, and how uncertainty analysis was used, beginning early in the test program, to make such design choices. Published uncertainty analysis techniques were found to be of great value, but it became clear that another step, herein called the pre-test analysis, would aid the program development. Finally, it is shown how the uncertainty analysis was used to determine whether the test surface was qualified for service.
Uncertainty analysis of hydrological modeling in a tropical area using different algorithms
NASA Astrophysics Data System (ADS)
Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh
2018-01-01
Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., errors in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative in order to improve the reliability of modeling results. Uncertainty analysis must also address difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol), and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R²), the Nash-Sutcliffe efficiency (NSE), and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.56, R² > 0.91, NSE > 0.89, and 0.18
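Of the four algorithms, GLUE is the simplest to sketch. The toy below (Python; a linear-reservoir recession replaces SWAT, and the 0.7 NSE acceptance threshold is an arbitrary choice) samples the parameter prior, keeps "behavioural" runs, and reports a likelihood-weighted parameter band.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "hydrological model": linear-reservoir discharge Q(t) = S0*k*exp(-k*t).
t = np.linspace(0.0, 10.0, 50)
q_obs = 20.0 * 0.5 * np.exp(-0.5 * t) + rng.normal(0.0, 0.2, t.size)

def model(k, s0=20.0):
    return s0 * k * np.exp(-k * t)

# GLUE: sample the prior range and keep "behavioural" parameter sets.
k_samples = rng.uniform(0.05, 2.0, 20_000)
nse = np.array([1.0 - np.sum((model(k) - q_obs) ** 2)
                / np.sum((q_obs - q_obs.mean()) ** 2) for k in k_samples])
behavioural = nse > 0.7               # subjective acceptance threshold
weights = nse[behavioural] / nse[behavioural].sum()

# Likelihood-weighted 95 % band for the recession constant k.
k_b = k_samples[behavioural]
order = np.argsort(k_b)
cdf = np.cumsum(weights[order])
lo = k_b[order][np.searchsorted(cdf, 0.025)]
hi = k_b[order][np.searchsorted(cdf, 0.975)]
print(f"{behavioural.sum()} behavioural runs; k in [{lo:.3f}, {hi:.3f}]")
```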
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, for land use planning, and for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely the Analytical Hierarchy Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of the weights using Monte Carlo simulation and global sensitivity analysis. Finally, the results are validated using a landslide inventory database and by applying DST. Comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
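The first two phases can be illustrated with a compact sketch (Python; the three criteria and all judgement values are hypothetical): AHP weights are the principal eigenvector of a pairwise comparison matrix, and a Monte Carlo perturbation of the judgements shows how weight uncertainty feeds into the susceptibility model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical pairwise comparisons for three criteria (e.g., slope,
# lithology, land cover) on Saaty's 1-9 scale.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

def ahp_weights(mat):
    # The normalized principal right eigenvector gives the criteria weights.
    vals, vecs = np.linalg.eig(mat)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

w0 = ahp_weights(A)

# Monte Carlo: perturb upper-triangle judgements, keep reciprocity, and
# record the dispersion of the resulting weights.
samples = []
for _ in range(5_000):
    B = A.copy()
    for i in range(3):
        for j in range(i + 1, 3):
            B[i, j] = A[i, j] * rng.lognormal(0.0, 0.15)
            B[j, i] = 1.0 / B[i, j]
    samples.append(ahp_weights(B))
samples = np.array(samples)
print("weights:", np.round(w0, 3), "+/-", np.round(samples.std(axis=0), 3))
```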
Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David
2015-08-01
Uncertainty analysis is an important component of dietary exposure assessments, needed to correctly understand the strengths and limits of their results. Often, standard screening procedures are applied in a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties to better understand how to best use uncertainty analysis to generate a more realistic comprehension of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters, and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies that there are general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Dispersion analysis of passive surface-wave noise generated during hydraulic-fracturing operations
Forghani-Arani, Farnoush; Willis, Mark; Snieder, Roel; Haines, Seth S.; Behura, Jyoti; Batzle, Mike; Davidson, Michael
2014-01-01
Surface-wave dispersion analysis is useful for estimating near-surface shear-wave velocity models, designing receiver arrays, and suppressing surface waves. Here, we analyze whether passive seismic noise generated during hydraulic-fracturing operations can be used to extract surface-wave dispersion characteristics. Applying seismic interferometry to noise measurements, we extract surface waves by cross-correlating several minutes of passive records; this approach is distinct from previous studies that used hours or days of passive records for cross-correlation. For comparison, we also perform dispersion analysis for an active-source array that has some receivers in common with the passive array. The active and passive data show good agreement in the dispersive character of the fundamental-mode surface waves. For the higher-mode surface waves, however, the active and passive data resolve the dispersive properties at different frequency ranges. To demonstrate an application of dispersion analysis, we invert the observed surface-wave dispersion characteristics to determine a near-surface, one-dimensional shear-wave velocity profile.
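The interferometric core of the workflow is plain cross-correlation. A minimal synthetic sketch (Python; receiver geometry, sampling rate, and the 0.2 s delay are invented) recovers the interstation travel time from noise; repeating this per frequency band is what yields a dispersion curve.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 250.0                      # sampling rate, Hz
n = int(20 * fs)                # 20 s of "pumping" noise

# A common noise source recorded at two receivers; receiver B sees the
# wavefield delayed by the interstation travel time (0.2 s here).
source = rng.normal(0.0, 1.0, n)
delay = int(0.2 * fs)
rec_a = source + 0.1 * rng.normal(0.0, 1.0, n)
rec_b = np.roll(source, delay) + 0.1 * rng.normal(0.0, 1.0, n)

# The cross-correlation peaks at the interstation travel time.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = (np.argmax(xcorr) - (n - 1)) / fs
print(f"estimated travel time = {lag:.3f} s")
```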
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
Durability reliability analysis for corroding concrete structures under uncertainty
NASA Astrophysics Data System (ADS)
Zhang, Hao
2018-02-01
This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties in durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method, and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
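The contrast between the two formulations can be shown in a few lines (Python; the simple load-resistance limit state and all numbers are invented placeholders for the chloride-corrosion model): the imprecise treatment propagates an interval for the poorly known mean resistance and reports bounds on the failure probability, while the purely probabilistic treatment averages over a distribution placed on the same parameter.

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy limit state g = R - S: failure when load S exceeds resistance R.
# Aleatory part: S ~ N(4.0, 0.5), R-scatter ~ N(mu_r, 0.4).
# Epistemic part: mu_r known only to lie in [5.5, 6.5].
def pf(mu_r, n=200_000):
    r = rng.normal(mu_r, 0.4, n)
    s = rng.normal(4.0, 0.5, n)
    return np.mean(r - s < 0.0)

# Imprecise-probability treatment: propagate the interval, report bounds.
print(f"failure probability in [{pf(6.5):.1e}, {pf(5.5):.1e}]")

# Purely probabilistic treatment: model mu_r as uniform and average.
mu_samples = rng.uniform(5.5, 6.5, 200)
print(f"averaged failure probability = "
      f"{np.mean([pf(m, 20_000) for m in mu_samples]):.1e}")
```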
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. This study demonstrates a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform system-wide intervention and research planning: the Morris method (sensitivity analysis), a multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes, which were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into the uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, this mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical, and we advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
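The Morris screening step is easy to sketch from scratch. The toy below (Python; the three-input function merely stands in for the stroke model) builds one-at-a-time trajectories and reports the usual mu* (mean absolute elementary effect, for influence) and sigma (for nonlinearity or interaction).

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    # Hypothetical stand-in for the simulation model, inputs on [0, 1].
    return 2.0 * x[0] + x[1] ** 2 + 0.05 * x[2]

k, n_traj, delta = 3, 30, 0.25
effects = [[] for _ in range(k)]

for _ in range(n_traj):
    x = rng.uniform(0.0, 1.0 - delta, k)   # random base point
    y = model(x)
    for i in rng.permutation(k):           # one-at-a-time moves
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta)
        x, y = x_new, y_new

# mu* ranks influence; sigma flags nonlinearity and interactions.
for i in range(k):
    ee = np.array(effects[i])
    print(f"x{i}: mu* = {np.mean(np.abs(ee)):.3f}, sigma = {ee.std():.3f}")
```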
Determination of the Sun's offset from the Galactic plane using pulsars
NASA Astrophysics Data System (ADS)
Yao, J. M.; Manchester, R. N.; Wang, N.
2017-07-01
We derive the Sun's offset from the local mean Galactic plane (z⊙) using the observed z-distribution of young pulsars. Pulsar distances are obtained from measurements of annual parallax, H I absorption spectra, or associations where available, and otherwise from the observed pulsar dispersion and a model for the distribution of free electrons in the Galaxy. We fit the cumulative distribution function of a sech²(z) distribution, representing an isothermal self-gravitating disc, with uncertainties estimated using the bootstrap method. We take pulsars having characteristic age τc ≲ 10^6.5 yr and located within 4.5 kpc of the Sun, omitting those within the local spiral arm and those significantly affected by the Galactic warp, and solve for z⊙ and the scale height, H, for different cut-offs in τc. We compute these quantities using just the independently determined distances, and these together with dispersion measure (DM)-based distances, separately using the YMW16 and NE2001 Galactic electron density models. We find that an age cut-off at 10^5.75 yr with YMW16 DM distances gives the best results, with a minimum uncertainty in z⊙ and an asymptotically stable value for H, showing that, at this age and below, the observed pulsar z-distribution is dominated by the dispersion in their birth locations. From this sample of 115 pulsars, we obtain z⊙ = 13.4 ± 4.4 pc and H = 56.9 ± 6.5 pc, similar to estimated scale heights for OB stars and open clusters. Consistent results are obtained using the independent-only distances and using the NE2001 model for the DM-based distances.
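The fit has a convenient closed form: the sech²((z − z⊙)/H) density integrates to the cumulative distribution F(z) = [1 + tanh((z − z⊙)/H)]/2. A self-contained sketch (Python with SciPy; the 115 z-offsets are synthetic, drawn from the model itself rather than from the pulsar catalogue) fits this CDF and bootstraps the uncertainties.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

# Synthetic pulsar z-offsets (pc) from a sech^2 disc, drawn by inverting
# the model CDF F(z) = (1 + tanh((z - z0)/H)) / 2.
z0_true, H_true = 13.0, 57.0
u = np.clip(rng.uniform(0.0, 1.0, 115), 1e-9, 1.0 - 1e-9)
z = z0_true + H_true * np.arctanh(2.0 * u - 1.0)

def cdf(zz, z0, H):
    return 0.5 * (1.0 + np.tanh((zz - z0) / H))

# Fit the model CDF to the empirical cumulative distribution.
z_sorted = np.sort(z)
ecdf = (np.arange(z.size) + 0.5) / z.size
popt, _ = curve_fit(cdf, z_sorted, ecdf, p0=[0.0, 50.0])

# Bootstrap: refit resampled datasets to estimate the uncertainties.
boot = []
for _ in range(1000):
    zb = np.sort(rng.choice(z, size=z.size, replace=True))
    pb, _ = curve_fit(cdf, zb, ecdf, p0=popt)
    boot.append(pb)
boot = np.array(boot)
print(f"z_sun = {popt[0]:.1f} +/- {boot[:, 0].std():.1f} pc, "
      f"H = {popt[1]:.1f} +/- {boot[:, 1].std():.1f} pc")
```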
Uncertainty in monitoring E. coli concentrations in streams and stormwater runoff
NASA Astrophysics Data System (ADS)
Harmel, R. D.; Hathaway, J. M.; Wagner, K. L.; Wolfe, J. E.; Karthikeyan, R.; Francesconi, W.; McCarthy, D. T.
2016-03-01
Microbial contamination of surface waters, a substantial public health concern throughout the world, is typically identified by fecal indicator bacteria such as Escherichia coli. Thus, monitoring E. coli concentrations is critical to evaluate current conditions, determine restoration effectiveness, and inform model development and calibration. An often overlooked component of these monitoring and modeling activities is understanding the inherent random and systematic uncertainty present in measured data. In this research, a review and subsequent analysis was performed to identify, document, and analyze measurement uncertainty of E. coli data collected in stream flow and stormwater runoff as individual discrete samples or throughout a single runoff event. Data on the uncertainty contributed by sample collection, sample preservation/storage, and laboratory analysis in measured E. coli concentrations were compiled and analyzed, and differences in sampling method and data quality scenarios were compared. The analysis showed that: (1) manual integrated sampling produced the lowest random and systematic uncertainty in individual samples, but automated sampling typically produced the lowest uncertainty when sampling throughout runoff events; (2) sample collection procedures often contributed the highest amount of uncertainty, although laboratory analysis introduced substantial random uncertainty and preservation/storage introduced substantial systematic uncertainty under some scenarios; and (3) the uncertainty in measured E. coli concentrations was greater than that of sediment and nutrients, but the difference was not as great as may be assumed. This comprehensive analysis of uncertainty in E. coli concentrations measured in streamflow and runoff should provide valuable insight for designing E. coli monitoring projects, reducing uncertainty in quality assurance efforts, regulatory and policy decision making, and fate and transport modeling.
NASA Astrophysics Data System (ADS)
Fu, Libi; Song, Weiguo; Lo, Siuming
2017-01-01
Emergencies in mass events involve a variety of factors and processes. An important factor is the transmission of information on danger, which influences nonlinear crowd dynamics during the process of crowd dispersion. Due to the considerable uncertainty in this process, there is an urgent need for a method to investigate this influence. In this paper, a novel fuzzy-theory-based method is presented to study crowd dynamics under the influence of information transmission. Fuzzy functions and rules are designed for the ambiguous description of human states. Reasonable inference is employed to decide the output values of decision making, such as pedestrian movement speeds and directions. Through simulation of four-way pedestrian situations, good crowd dispersion phenomena are achieved. Simulation results under different conditions demonstrate that information transmission cannot always induce successful crowd dispersion in all situations. This depends on whether decision strategies in response to information on danger are unified and effective, especially in dense crowds. The results also suggest that an increase in drift strength at low density, and in the percentage of pedestrians who choose one of the furthest unoccupied Von Neumann neighbors from the dangerous source as the drift direction at high density, is helpful for crowd dispersion. Compared with previous work, our comprehensive study improves the in-depth understanding of nonlinear crowd dynamics under the effect of information on danger.
Calibrating the Planck Cluster Mass Scale with Cluster Velocity Dispersions
NASA Astrophysics Data System (ADS)
Amodeo, Stefania; Mei, Simona; Stanford, Spencer A.; Bartlett, James G.; Melin, Jean-Baptiste; Lawrence, Charles R.; Chary, Ranga-Ram; Shim, Hyunjin; Marleau, Francine; Stern, Daniel
2017-08-01
We measure the Planck cluster mass bias using dynamical mass measurements based on velocity dispersions of a subsample of 17 Planck-detected clusters. The velocity dispersions were calculated using redshifts determined from spectra that were obtained at the Gemini observatory with the GMOS multi-object spectrograph. We correct our estimates for effects due to finite aperture, Eddington bias, and correlated scatter between velocity dispersion and the Planck mass proxy. The result for the mass bias parameter, (1 − b), depends on the value of the galaxy velocity bias, b_v, adopted from simulations: (1 − b) = (0.51 ± 0.09) b_v³. Using a velocity bias of b_v = 1.08 from Munari et al., we obtain (1 − b) = 0.64 ± 0.11, i.e., an error of 17% on the mass bias measurement with 17 clusters. This mass bias value is consistent with most previous weak-lensing determinations. It lies within 1σ of the value that is needed to reconcile the Planck cluster counts with the Planck primary cosmic microwave background constraints. We emphasize that uncertainty in the velocity bias severely hampers the precision of measurements of the mass bias using velocity dispersions. On the other hand, when we fix the Planck mass bias using the constraints from Penna-Lima et al., based on weak-lensing measurements, we obtain a positive velocity bias of b_v ≳ 0.9 at 3σ.
NASA Astrophysics Data System (ADS)
Enzenhoefer, R.; Rodriguez-Pretelin, A.; Nowak, W.
2012-12-01
"From an engineering standpoint, the quantification of uncertainty is extremely important not only because it allows estimating risk but mostly because it allows taking optimal decisions in an uncertain framework" (Renard, 2007). The most common way to account for uncertainty in the field of subsurface hydrology and wellhead protection is to randomize spatial parameters, e.g. the log-hydraulic conductivity or porosity. This enables water managers to take robust decisions in delineating wellhead protection zones with rationally chosen safety margins in the spirit of probabilistic risk management. Probabilistic wellhead protection zones are commonly based on steady-state flow fields. However, several past studies showed that transient flow conditions may substantially influence the shape and extent of catchments. Therefore, we believe they should be accounted for in the probabilistic assessment and in the delineation process. The aim of our work is to show the significance of flow transients and to investigate the interplay between spatial uncertainty and flow transients in wellhead protection zone delineation. To this end, we advance our concept of probabilistic capture zone delineation (Enzenhoefer et al., 2012) that works with capture probabilities and other probabilistic criteria for delineation. The extended framework is able to evaluate the time fraction that any point on a map falls within a capture zone. In short, we separate capture probabilities into spatial/statistical and time-related frequencies. This will provide water managers additional information on how to manage a well catchment in the light of possible hazard conditions close to the capture boundary under uncertain and time-variable flow conditions. In order to save computational costs, we take advantage of super-positioned flow components with time-variable coefficients. We assume an instantaneous development of steady-state flow conditions after each temporal change in driving forces, following an idea by Festger and Walter, 2002. These quasi steady-state flow fields are cast into a geostatistical Monte Carlo framework to admit and evaluate the influence of parameter uncertainty on the delineation process. Furthermore, this framework enables conditioning on observed data with any conditioning scheme, such as rejection sampling, Ensemble Kalman Filters, etc. To further reduce the computational load, we use the reverse formulation of advective-dispersive transport. We simulate the reverse transport by particle tracking random walk in order to avoid numerical dispersion to account for well arrival times.
An uncertainty analysis of wildfire modeling [Chapter 13
Karin Riley; Matthew Thompson
2017-01-01
Before fire models can be understood, evaluated, and effectively applied to support decision making, model-based uncertainties must be analyzed. In this chapter, we identify and classify sources of uncertainty using an established analytical framework, and summarize results graphically in an uncertainty matrix. Our analysis facilitates characterization of the...
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
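For a linear response the analytic result is classical: if y = aᵀx, then Var(y) = aᵀΣa, so input correlations enter only through the covariance matrix Σ. A short check (Python; the coefficients and the 0.6 correlation are arbitrary) compares the independent and correlated cases against Monte Carlo.

```python
import numpy as np

# Linear response y = a . x with unit-variance inputs.
a = np.array([1.0, 2.0, -1.5])

rho = 0.6                      # correlation between x0 and x1
Sigma_indep = np.eye(3)
Sigma_corr = np.array([[1.0, rho, 0.0],
                       [rho, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])

def var_y(Sigma):
    return a @ Sigma @ a       # Var(a . x) = a^T Sigma a

print(f"independent inputs: Var(y) = {var_y(Sigma_indep):.2f}")
print(f"correlated inputs : Var(y) = {var_y(Sigma_corr):.2f}")

# Monte Carlo check of the analytic value.
rng = np.random.default_rng(10)
x = rng.multivariate_normal(np.zeros(3), Sigma_corr, size=200_000)
print(f"Monte Carlo check : Var(y) = {np.var(x @ a):.2f}")
```

Ignoring the correlation in this toy case would understate the output variance by roughly 25%, which is exactly the kind of misjudgment the authors' method is meant to expose.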
Strange resonance poles from Kπ scattering below 1.8 GeV
NASA Astrophysics Data System (ADS)
Pelaez, J. R.; Rodas, A.; Ruiz de Elvira, J.
2017-02-01
In this work we present a determination of the mass, width, and coupling of the resonances that appear in kaon-pion scattering below 1.8 GeV. These are: the much-debated scalar κ meson, nowadays known as the K*_0(800), the scalar K*_0(1430), the K*(892) and K*(1410) vectors, the spin-two K*_2(1430), and the spin-three K*_3(1780). The parameters are determined from the pole associated with each resonance by means of an analytic continuation of the Kπ scattering amplitudes obtained in a recent and precise data analysis constrained with dispersion relations, which were not well satisfied in previous analyses. This analytic continuation is performed by means of Padé approximants, thus avoiding a particular model for the pole parameterization. We also pay particular attention to the evaluation of uncertainties.
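The pole extraction via Padé approximants can be illustrated on a toy amplitude with a known pole (Python; in the actual analysis the Taylor coefficients come from derivatives of the dispersively constrained amplitude, not from an exact formula). For the [1/1] approximant (a0 + a1 z)/(1 + b1 z) matched to coefficients c0, c1, c2, one finds b1 = −c2/c1 and hence a pole at z = c1/c2.

```python
import numpy as np

# Toy amplitude with a known pole at z0 = 0.5 - 0.1j ("resonance"):
# f(z) = 1/(z0 - z) = sum_n z^n / z0^(n+1), giving Taylor coefficients c_n.
z0 = 0.5 - 0.1j
c = np.array([1.0 / z0 ** (n + 1) for n in range(3)])

# [1/1] Pade approximant matched to c0, c1, c2: pole at z = c1/c2.
pole = c[1] / c[2]
print(f"Pade pole estimate: {pole:.4f}  (true pole: {z0})")
```

Higher-order [N/1] approximants built from numerical derivatives converge to the nearest pole without assuming any particular resonance shape.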
Governance Structures for Open Innovation: A Preliminary Framework
NASA Astrophysics Data System (ADS)
Feller, Joseph; Finnegan, Patrick; Hayes, Jeremy; O'Reilly, Philip
This research-in-progress paper presents a preliminary framework of four open innovation governance structures. The study seeks to describe four distinct ways in which firms utilize hierarchical relationships, organizational intermediaries, and the market system to supply and acquire intellectual property and/or innovation capabilities from sources external to the firm. This paper reports on phase one of the study, which involved an analysis of six open innovation exemplars based on public data. This phase of the study reveals that governance structures for open innovation can be categorized based on whether they (1) are mediated or direct or (2) seek to acquire intellectual property or innovation capability. We analyze the differences in four governance structures along seven dimensions, and reveal the importance of knowledge dispersion and uncertainty to the use of open innovation hierarchies, brokerages, and markets. The paper concludes by examining the implications of the findings and outlining the next phase of the study.
De Meutter, Pieter; Camps, Johan; Delcloo, Andy; Termonia, Piet
2017-08-18
On 6 January 2016, the Democratic People's Republic of Korea announced that it had conducted its fourth nuclear test. Analysis of the corresponding seismic waves from the Punggye-ri nuclear test site showed that an underground man-made explosion had indeed taken place, although the nuclear origin of the explosion needs confirmation. Seven weeks after the announced nuclear test, radioactive xenon was observed in Japan by a noble gas measurement station of the International Monitoring System. In this paper, atmospheric transport modelling is used to show that the measured radioactive xenon is compatible with a delayed release from the Punggye-ri nuclear test site. An uncertainty quantification of the modelling results is given using the ensemble method. The latter is important for policy makers and helps advance data fusion, where different nuclear-Test-Ban-Treaty monitoring techniques are combined.
A review of numerical models to predict the atmospheric dispersion of radionuclides.
Leelőssy, Ádám; Lagzi, István; Kovács, Attila; Mészáros, Róbert
2018-02-01
The field of atmospheric dispersion modeling has evolved together with nuclear risk assessment and emergency response systems. Atmospheric concentration and deposition of radionuclides originating from an unintended release provide the basis of dose estimations and countermeasure strategies. To predict the atmospheric dispersion and deposition of radionuclides, several numerical models are available, coupled with numerical weather prediction (NWP) systems. This work provides a review of the main concepts and different approaches of atmospheric dispersion modeling. Key processes of the atmospheric transport of radionuclides are emission, advection, turbulent diffusion, dry and wet deposition, radioactive decay, and other physical and chemical transformations. A wide range of modeling software is available to simulate these processes, with different physical assumptions, numerical approaches, and implementations. The most appropriate modeling tool for a specific purpose can be selected based on the spatial scale and the complexity of the meteorology, land surface, and physical and chemical transformations, also considering the available data and computational resources. For most regulatory and operational applications, offline coupled NWP-dispersion systems are used, either with a local-scale Gaussian, or a regional- to global-scale Eulerian or Lagrangian approach. The dispersion model results show large sensitivity to the accuracy of the coupled NWP model, especially through the description of planetary boundary layer turbulence, deep convection, and wet deposition. Improvement of dispersion predictions can be achieved by online coupling of mesoscale meteorology and atmospheric transport models. The 2011 Fukushima event was the first large-scale nuclear accident where real-time prognostic dispersion modeling provided decision support. Dozens of dispersion models with different approaches were used for prognostic and retrospective simulations of the Fukushima release. An unknown release rate proved to be the largest factor of uncertainty, underlining the importance of inverse modeling and data assimilation in future developments. Copyright © 2017 Elsevier Ltd. All rights reserved.
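As a reference point for the local-scale Gaussian approach mentioned above, here is a minimal plume sketch (Python; the source strength, wind speed, and dispersion parameters σy, σz are illustrative, whereas in practice σy and σz come from stability-class curves as functions of downwind distance).

```python
import numpy as np

def gaussian_plume(y, z, q, u, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.

    y, z     : crosswind and vertical coordinates (m)
    q        : source strength (Bq/s)
    u        : mean wind speed (m/s)
    h        : effective release height (m)
    sigma_y, sigma_z : dispersion parameters at the downwind distance (m)
    """
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # reflected term
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration; sigma values taken as plausible
# magnitudes for roughly 1 km downwind in a neutral boundary layer.
c = gaussian_plume(y=0.0, z=0.0, q=1.0e9, u=3.0, h=50.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"concentration = {c:.1f} Bq/m^3")
```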
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there is an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainty. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs, and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation of the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations of a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.
A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules
NASA Astrophysics Data System (ADS)
Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.
2012-08-01
Integrating economic and groundwater models for groundwater management can help improve understanding of the trade-offs involved between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and of decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater-extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used, including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling, and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.
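Of the listed methods, block bootstrap time-series sampling is the least standard and worth a sketch (Python; the AR(1) series and block length are invented): resampling contiguous blocks, rather than single values, preserves the short-range dependence that a plain bootstrap would destroy.

```python
import numpy as np

rng = np.random.default_rng(11)

# Autocorrelated toy series (e.g., annual recharge): AR(1) with phi = 0.7.
n, phi = 200, 0.7
eps = rng.normal(0.0, 1.0, n)
series = np.zeros(n)
for t in range(1, n):
    series[t] = phi * series[t - 1] + eps[t]

def block_bootstrap(x, block_len, rng):
    # Resample contiguous blocks to preserve short-range dependence.
    n_blocks = int(np.ceil(x.size / block_len))
    starts = rng.integers(0, x.size - block_len + 1, n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:x.size]

# Uncertainty of the series mean, honouring the autocorrelation.
means = [block_bootstrap(series, 20, rng).mean() for _ in range(2000)]
print(f"mean = {series.mean():.3f} +/- {np.std(means):.3f}")
```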
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, D.W.; Yambert, M.W.; Kocher, D.C.
1994-12-31
A performance assessment of the operating Solid Waste Storage Area 6 (SWSA 6) facility for the disposal of low-level radioactive waste at the Oak Ridge National Laboratory has been prepared to provide the technical basis for demonstrating compliance with the performance objectives of DOE Order 5820.2A, Chapter III. An analysis of the uncertainty incorporated into the assessment was performed which addressed the quantitative uncertainty in the data used by the models, the subjective uncertainty associated with the models used for assessing performance of the disposal facility and site, and the uncertainty in the models used for estimating dose and human exposure. The results of the uncertainty analysis were used to interpret results and to formulate conclusions about the performance assessment. This paper discusses the approach taken in analyzing the uncertainty in the performance assessment and the role of uncertainty in performance assessment.
Conmy, Robyn N; Coble, Paula G; Farr, James; Wood, A Michelle; Lee, Kenneth; Pegau, W Scott; Walsh, Ian D; Koch, Corey R; Abercrombie, Mary I; Miles, M Scott; Lewis, Marlon R; Ryan, Scott A; Robinson, Brian J; King, Thomas L; Kelble, Christopher R; Lacoste, Jordanna
2014-01-01
In situ fluorometers were deployed during the Deepwater Horizon (DWH) Gulf of Mexico oil spill to track the subsea oil plume. Uncertainties regarding instrument specifications and capabilities necessitated performance testing of sensors exposed to simulated, dispersed oil plumes. Dynamic ranges of the Chelsea Technologies Group AQUAtracka, Turner Designs Cyclops, Satlantic SUNA and WET Labs, Inc. ECO, exposed to fresh and artificially weathered crude oil, were determined. Sensors were standardized against known oil volumes and against total petroleum hydrocarbon and benzene-toluene-ethylbenzene-xylene measurements, both collected during spills, providing oil estimates during wave tank dilution experiments. All sensors estimated oil concentrations down to 300 ppb oil, refuting previous reports. Sensor performance results assist in interpreting DWH oil spill data and in formulating future protocols.
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used in radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many shield thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
Plurality of Type A evaluations of uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Pintar, Adam L.
2017-10-01
The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty
NASA Astrophysics Data System (ADS)
Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.
2012-12-01
Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
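As a simplified sketch of such an error model, the function below evaluates the log-likelihood for residuals that are heteroscedastic (standard deviation growing with the simulated value) and lag-1 autocorrelated; a Gaussian kernel stands in for the SEP density, and all parameter names are illustrative.

```python
import numpy as np

def log_likelihood(obs, sim, phi, sigma0, sigma1):
    """Lag-1 autocorrelated, heteroscedastic Gaussian error model
    (a Gaussian stand-in for the SEP kernel described above)."""
    sigma = sigma0 + sigma1 * sim            # heteroscedastic residual sd
    eta = (obs - sim) / sigma                # standardized residuals
    scale = np.sqrt(1.0 - phi**2)            # innovation sd of an AR(1)
    innov = eta[1:] - phi * eta[:-1]
    ll = -0.5 * np.sum((innov / scale) ** 2) - np.sum(np.log(sigma[1:] * scale))
    ll += -0.5 * eta[0] ** 2 - np.log(sigma[0])   # stationary start
    return ll - 0.5 * obs.size * np.log(2.0 * np.pi)

# quick check on synthetic data
rng = np.random.default_rng(1)
sim = np.linspace(1.0, 5.0, 200)
obs = sim + rng.normal(0.0, 0.1 + 0.05 * sim)
print(log_likelihood(obs, sim, phi=0.3, sigma0=0.1, sigma1=0.05))
```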
Reiner, Bruce I
2018-04-01
Uncertainty in text-based medical reports has long been recognized as problematic, frequently resulting in misunderstanding and miscommunication. One strategy for addressing the negative clinical ramifications of report uncertainty would be the creation of a standardized methodology for characterizing and quantifying uncertainty language, which could provide both the report author and reader with context related to the perceived level of diagnostic confidence and accuracy. A number of computerized strategies could be employed in the creation of this analysis including string search, natural language processing and understanding, histogram analysis, topic modeling, and machine learning. The derived uncertainty data offers the potential to objectively analyze report uncertainty in real time and correlate with outcomes analysis for the purpose of context and user-specific decision support at the point of care, where intervention would have the greatest clinical impact.
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Myhre, Cathrine Lund; Platt, Stephen Matthew; Eckhardt, Sabine; Hermansen, Ove; Schmidbauer, Norbert; Mienert, Jurgen; Vadakkepuliyambatta, Sunil; Bauguitte, Stephane; Pitt, Joseph; Allen, Grant; Bower, Keith; O'Shea, Sebastian; Gallagher, Martin; Percival, Carl; Pyle, John; Cain, Michelle; Stohl, Andreas
2017-04-01
Methane stored in seabed reservoirs such as methane hydrates can reach the atmosphere in the form of bubbles or dissolved in water. Hydrates could destabilize with rising temperature, further increasing greenhouse gas emissions in a warming climate. To assess the impact of oceanic emissions from the area west of Svalbard, where methane hydrates are abundant, we used measurements collected with a research aircraft (FAAM) and a ship (Helmer Hansen) during summer 2014, and at the Zeppelin Observatory for the full year. We present a model-supported analysis of the atmospheric CH4 mixing ratios measured by the different platforms. To address uncertainty about where CH4 emissions actually occur, we explored three scenarios: areas with known seeps, a hydrate stability model and an ocean depth criterion. We then used a budget analysis and a Lagrangian particle dispersion model to compare measurements taken upwind and downwind of the potential CH4 emission areas. We found small differences between the CH4 mixing ratios measured upwind and downwind of the potential emission areas during the campaign. By taking into account measurement and sampling uncertainties and by determining the sensitivity of the measured mixing ratios to potential oceanic emissions, we provide upper limits for the CH4 fluxes. The CH4 flux during the campaign was small, with an upper limit of 2.5 nmol m^-2 s^-1 in the stability model scenario. The Zeppelin Observatory data for 2014 suggest CH4 fluxes from the Svalbard continental platform below 0.2 Tg/yr. All estimates are in the lower range of values previously reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laitinen, T.; Dalla, S.; Huttunen-Heikinmaa, K.
2015-06-10
To understand the origin of Solar Energetic Particles (SEPs), we must study their injection time relative to other solar eruption manifestations. Traditionally the injection time is determined using the Velocity Dispersion Analysis (VDA) where a linear fit of the observed event onset times at 1 AU to the inverse velocities of SEPs is used to derive the injection time and path length of the first-arriving particles. VDA does not, however, take into account that the particles that produce a statistically observable onset at 1 AU have scattered in the interplanetary space. We use Monte Carlo test particle simulations of energetic protons to study the effect of particle scattering on the observable SEP event onset above pre-event background, and consequently on VDA results. We find that the VDA results are sensitive to the properties of the pre-event and event particle spectra as well as SEP injection and scattering parameters. In particular, a VDA-obtained path length that is close to the nominal Parker spiral length does not imply that the VDA injection time is correct. We study the delay to the observed onset caused by scattering of the particles and derive a simple estimate for the delay time by using the rate of intensity increase at the SEP onset as a parameter. We apply the correction to a magnetically well-connected SEP event of 2000 June 10, and show it to improve both the path length and injection time estimates, while also increasing the error limits to better reflect the inherent uncertainties of VDA.
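The core VDA step is a straight-line fit of onset time against inverse particle speed; the intercept gives the injection time and the slope the apparent path length. A minimal sketch with invented onset data:

```python
import numpy as np

c = 299_792_458.0        # speed of light, m/s
au = 1.495978707e11      # astronomical unit, m

# Invented onsets (s after an arbitrary reference) for protons of
# several speeds, expressed as beta = v/c.
beta = np.array([0.35, 0.45, 0.55, 0.65])
t_onset = np.array([4200.0, 3400.0, 2900.0, 2550.0])

# VDA model: t_onset = t_inj + (L / c) * (1 / beta); fit a line in 1/beta.
slope, t_inj = np.polyfit(1.0 / beta, t_onset, 1)
print(f"injection time = {t_inj:.0f} s, path length = {slope * c / au:.2f} AU")
```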
RR Lyrae Stars as High-Precision Standard Candles in the Mid-Infrared
NASA Astrophysics Data System (ADS)
Neeley, Jillian Rose
In this work, we provide the theoretical and empirical framework to establish RR Lyrae stars (RRL) as the anchor of a Population II distance scale. We present new theoretical period-luminosity-metallicity (PLZ) relations for RRL at Spitzer and WISE wavelengths. The PLZ relations were derived using nonlinear, time-dependent convective hydrodynamical models for a broad range in metal abundances (Z = 0.0001 to 0.0198). We also compare our theoretical relations to empirical relations derived from RRL in the field. Our theoretical PLZ relations were combined with multi-wavelength observations to simultaneously fit the distance modulus and extinction of each individual Galactic RRL in our sample. The results are consistent with trigonometric parallax measurements from the Gaia mission's first data release. This analysis has shown that when considering a sample covering a typical range of iron abundances for RRL, the metallicity spread introduces a dispersion in the PL relation on the order of 0.13 mag. However, if this metallicity component is accounted for in a PLZ relation, the dispersion is reduced to 0.02 mag at MIR wavelengths. On the empirical side, we present the analysis of five clusters from the Carnegie RR Lyrae Program (CRRP) sample (M4, NGC 3201, M5, M15, and M14). M4, the nearest and one of the most well-studied clusters, was used as a test case to develop a new data analysis pipeline for CRRP. Following the analysis of the five clusters, the resulting calibration PL relations are M[3.6] = (-2.424 +/- 0.079) log P - (1.205 +/- 0.057) and M[4.5] = (-2.245 +/- 0.076) log P - (1.225 +/- 0.057). The slope of the PL relations was determined from the weighted average of the cluster results, and the zero point was fixed using five Galactic RRL with geometric parallaxes measured by the Hubble Space Telescope. The dispersion of the RRL around the PL relations ranges from 0.05 mag in M4 to 0.3 mag in M14. The resulting band-averaged distance moduli for the five clusters agree well with results in the literature. The systematic uncertainty will be greatly reduced when parallaxes of more stars become available from the Gaia mission, and we are able to use the full CRRP sample of 55 Galactic RRL to calibrate the relation.
Measuring Cosmological Parameters with Photometrically Classified Pan-STARRS Supernovae
NASA Astrophysics Data System (ADS)
Jones, David; Scolnic, Daniel; Riess, Adam; Rest, Armin; Kirshner, Robert; Berger, Edo; Kessler, Rick; Pan, Yen-Chen; Foley, Ryan; Chornock, Ryan; Ortega, Carolyn; Challis, Peter; Burgett, William; Chambers, Kenneth; Draper, Peter; Flewelling, Heather; Huber, Mark; Kaiser, Nick; Kudritzki, Rolf; Metcalfe, Nigel; Tonry, John; Wainscoat, Richard J.; Waters, Chris; Gall, E. E. E.; Kotak, Rubina; McCrum, Matt; Smartt, Stephen; Smith, Ken
2018-01-01
We use nearly 1,200 supernovae (SNe) from Pan-STARRS and ~200 low-z (z < 0.1) SNe Ia to measure cosmological parameters. Though most of these SNe lack spectroscopic classifications, in a previous paper we demonstrated that photometrically classified SNe can still be used to infer unbiased cosmological parameters by using a Bayesian methodology that marginalizes over core-collapse (CC) SN contamination. Our sample contains nearly twice as many SNe as the largest previous compilation of SNe Ia. Combining SNe with Cosmic Microwave Background (CMB) constraints from the Planck satellite, we measure the dark energy equation of state parameter w to be -0.986±0.058 (stat+sys). If we allow w to evolve with redshift as w(a) = w0 + wa(1-a), we find w0 = -0.923±0.148 and wa = -0.404±0.797. These results are consistent with measurements of cosmological parameters from the JLA and from a new analysis of 1049 spectroscopically confirmed SNe Ia (Scolnic et al. 2017). We try four different photometric classification priors for Pan-STARRS SNe and two alternate ways of modeling the CC SN contamination, finding that none of these variants gives a w that differs by more than 1% from the baseline measurement. The systematic uncertainty on w due to marginalizing over the CC SN contamination, σ_w,CC = 0.019, is approximately equal to the photometric calibration uncertainty and is lower than the systematic uncertainty in the SN Ia dispersion model (σ_w,disp = 0.024). Our data provide one of the best current constraints on w, demonstrating that samples with ~5% CC SN contamination can give competitive cosmological constraints when the contaminating distribution is marginalized over in a Bayesian framework.
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
NASA Astrophysics Data System (ADS)
Selva, Jacopo; Costa, Antonio; Sandri, Laura; Rouwet, Dmitri; Tonini, Roberto; Macedonio, Giovanni; Marzocchi, Warner
2015-04-01
Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at mitigating the risk posed by volcanic activity at different time scales. The definition of the space-time window for PVHA is related to the kind of risk mitigation actions that are under consideration. Short temporal intervals (days to weeks) are important for short-term risk mitigation actions like the evacuation of a volcanic area. During volcanic unrest episodes or eruptions, it is of primary importance to produce short-term tephra fallout forecasts, and to update them frequently to account for the rapidly evolving situation. This information is obviously crucial for crisis management, since tephra may heavily affect building stability, public health, transportation and evacuation routes (airports, trains, road traffic) and lifelines (electric power supply). In this study, we propose a methodology named BET_VHst (Selva et al. 2014) for short-term PVHA of volcanic tephra dispersal based on automatic interpretation of measures from the monitoring system and physical models of tephra dispersal from all possible vent positions and eruptive sizes based on frequently updated meteorological forecasts. The large uncertainty at all the steps required for the analysis, both aleatory and epistemic, is treated by means of Bayesian inference and statistical mixing of long- and short-term analyses. The BET_VHst model is here presented through its implementation during two exercises organized for volcanoes in the Neapolitan area: MESIMEX for Mt. Vesuvius, and VUELCO for Campi Flegrei. References: Selva J., Costa A., Sandri L., Macedonio G., Marzocchi W. (2014) Probabilistic short-term volcanic hazard in phases of unrest: a case study for tephra fallout, J. Geophys. Res., 119, doi: 10.1002/2014JB011252
Hunt, Randall J.
2012-01-01
Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy; Verdel, Thierry
2017-04-01
Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF: if SF is below some specified threshold, failure is considered possible. The objective of the stability analysis is then to estimate the failure probability P that SF falls below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced. However, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by e.g., increasing the number of tests (laboratory or in-situ surveys), improving the measurement methods or evaluating calculation procedures with model tests, and confronting more information sources (expert opinions, data from literature, etc.). Uncertainty treatment in stability analysis is usually restricted to the probabilistic framework to represent both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach can be debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations of a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool is then investigated, namely possibility distributions (e.g., Baudrit et al., 2007) for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.
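One minimal way to see the effect of the aleatoric/epistemic split: treat the variable quantity probabilistically but carry the imprecise one only as an interval, so the failure probability itself becomes an interval. The pillar model, distributions, and numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pillar stability: SF = strength / stress. Pillar strength is
# aleatoric (lognormal variability); the extraction ratio r is epistemic,
# known only as an interval.
strength = rng.lognormal(mean=np.log(12.0), sigma=0.2, size=200_000)  # MPa

def failure_prob(r):
    stress = 5.0 / (1.0 - r)          # tributary-area pillar stress, MPa
    return np.mean(strength / stress < 1.0)

r_low, r_high = 0.45, 0.60            # imprecise extraction ratio
print(f"P(SF < 1) lies in [{failure_prob(r_low):.4f}, {failure_prob(r_high):.4f}]")
```

Reporting the interval rather than a single P keeps the reducible (epistemic) part of the uncertainty visible, which is the spirit of the possibilistic treatment discussed above.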
Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator
NASA Astrophysics Data System (ADS)
Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.
2012-09-01
This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
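A forward Monte Carlo sketch of this kind of propagation is shown below: each sampled input is pushed through a Newton-Raphson inversion of a saturation-vapour-pressure curve. The Magnus formula, the isothermal-saturator relation, and all pressure values are illustrative stand-ins, not the generator's actual measurement model.

```python
import numpy as np

def e_sat(t):
    """Magnus approximation to saturation vapour pressure (hPa), t in deg C."""
    return 6.112 * np.exp(17.62 * t / (243.12 + t))

def dew_point(e_target, t0=10.0, tol=1e-10, max_iter=50):
    """Invert e_sat by Newton-Raphson (finite-difference derivative)."""
    t = t0
    for _ in range(max_iter):
        step = (e_sat(t) - e_target) / ((e_sat(t + 1e-6) - e_sat(t)) / 1e-6)
        t -= step
        if abs(step) < tol:
            break
    return t

rng = np.random.default_rng(7)
n = 5000
p_sat = rng.normal(2000.0, 2.0, n)   # saturator pressure, hPa (made-up values)
p_ch = rng.normal(1000.0, 1.0, n)    # chamber pressure, hPa (made-up values)
e = e_sat(20.0) * p_ch / p_sat       # idealized two-pressure relation, 20 deg C saturator
td = np.array([dew_point(ei) for ei in e])
print(f"dew point = {td.mean():.3f} deg C, standard uncertainty = {td.std(ddof=1):.4f} deg C")
```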
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.
2012-01-01
There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method uses state-of-the-art uncertainty analysis, applying different turbulence models, and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward-facing step.
A Bayesian method to rank different model forecasts of the same volcanic ash cloud: Chapter 24
Denlinger, Roger P.; Webley, P.; Mastin, Larry G.; Schwaiger, Hans F.
2012-01-01
Volcanic eruptions often spew fine ash high into the atmosphere, where it is carried downwind, forming long ash clouds that disrupt air traffic and pose a hazard to air travel. To mitigate such hazards, the community studying ash hazards must assess the risk of ash ingestion for any flight path and provide robust and accurate forecasts of volcanic ash dispersal. We provide a quantitative and objective method to evaluate the efficacy of ash dispersal estimates from different models, using Bayes theorem to assess the predictions that each model makes about ash dispersal. We incorporate model and measurement uncertainty and produce a posterior probability for model input parameters. The integral of the posterior over all possible combinations of model inputs determines the evidence for each model and is used to compare models. We compare two different types of transport models, an Eulerian model (Ash3d) and a Lagrangian model (PUFF), as applied to the 2010 eruptions of Eyjafjallajökull volcano in Iceland. The evidence for each model benefits from common physical characteristics of ash dispersal from an eruption column and provides a measure of how well each model forecasts cloud transport. Given the complexity of the wind fields, we find that the differences between these models depend upon the differences in the way the models disperse ash into the wind from the source plume. With continued observation, the accuracy of the estimates made by each model increases, increasing the efficacy of each model's ability to simulate ash dispersal.
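The evidence computation at the heart of this comparison is simply the likelihood integrated against the prior over the model inputs. A toy one-parameter sketch for two hypothetical forecast models (synthetic data, Gaussian observation error):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
obs = rng.normal(2.0, 1.0, 20)      # synthetic "measured ash" values

def evidence(model, theta, prior_pdf):
    """Marginal likelihood: trapezoidal integral of likelihood x prior."""
    loglike = np.array([stats.norm.logpdf(obs, model(t), 1.0).sum() for t in theta])
    integrand = np.exp(loglike) * prior_pdf(theta)
    d = theta[1] - theta[0]
    return 0.5 * d * (integrand[:-1] + integrand[1:]).sum()

theta = np.linspace(-5.0, 8.0, 1301)
prior_pdf = stats.norm(0.0, 3.0).pdf
ev_a = evidence(lambda t: t, theta, prior_pdf)        # model A: mean = theta
ev_b = evidence(lambda t: 0.5 * t, theta, prior_pdf)  # model B: mean = theta / 2
print("Bayes factor A over B:", ev_a / ev_b)
```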
Din, Ghiyas Ud; Chughtai, Imran Rafiq; Inayat, Mansoor Hameed; Khan, Iqbal Hussain
2008-12-01
Axial dispersion, holdup and slip velocity of the dispersed phase have been investigated for a range of dispersed and continuous phase superficial velocities in a pulsed sieve plate extraction column using radiotracer residence time distribution (RTD) analysis. The axial dispersion model (ADM) was used to simulate the hydrodynamics of the system. It was observed that an increase in dispersed phase superficial velocity decreases its axial dispersion and increases its slip velocity, while its holdup increases until a maximum asymptotic value is reached. An increase in the superficial velocity of the continuous phase increases the axial dispersion and holdup of the dispersed phase until a maximum value is obtained, while the slip velocity of the dispersed phase first decreases and then increases with increasing superficial velocity of the continuous phase.
Facility Measurement Uncertainty Analysis at NASA GRC
NASA Technical Reports Server (NTRS)
Stephens, Julia; Hubbard, Erin
2016-01-01
This presentation provides an overview of the measurement uncertainty analysis currently being implemented in various facilities at NASA GRC. This presentation includes examples pertinent to the turbine engine community (mass flow and fan efficiency calculation uncertainties).
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.
2008-01-01
This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
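A random walk Metropolis sampler of the kind used here is short to write down. The sketch below fits an invented two-parameter linear stand-in for a phenology model; the posterior sample is exactly what the parameter-driven share of prediction uncertainty is read from.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic data: days to heading vs mean temperature, with a toy
# linear model standing in for a phenology model (sd of 4 days assumed).
temp = np.linspace(12.0, 24.0, 30)
days = 120.0 - 3.0 * temp + rng.normal(0.0, 4.0, temp.size)

def log_post(theta):
    a, b = theta
    resid = days - (a + b * temp)
    return -0.5 * np.sum(resid**2) / 16.0 - (a**2 + b**2) / (2.0 * 100.0**2)

def metropolis(logp, x0, step, n):
    """Random walk Metropolis with a Gaussian proposal."""
    x, lp = np.asarray(x0, float), logp(x0)
    chain = np.empty((n, x.size))
    for i in range(n):
        prop = x + rng.normal(0.0, step, x.size)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis(log_post, [100.0, -2.0], [2.0, 0.1], 20_000)[5000:]  # drop burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior sds:  ", chain.std(axis=0))
```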
A multiphysical ensemble system of numerical snow modelling
NASA Astrophysics Data System (ADS)
Lafaysse, Matthieu; Cluzet, Bertrand; Dumont, Marie; Lejeune, Yves; Vionnet, Vincent; Morin, Samuel
2017-05-01
Physically based multilayer snowpack models suffer from various modelling errors. To represent these errors, we built the new multiphysical ensemble system ESCROC (Ensemble System Crocus) by implementing new representations of different physical processes in the deterministic coupled multilayer ground/snowpack model SURFEX/ISBA/Crocus. This ensemble was driven and evaluated at Col de Porte (1325 m a.s.l., French Alps) over 18 years with a high-quality meteorological and snow data set. A total of 7776 simulations were evaluated separately, accounting for the uncertainties of the evaluation data. The ability of the ensemble to capture the uncertainty associated with modelling errors is assessed for snow depth, snow water equivalent, bulk density, albedo and surface temperature. Different sub-ensembles of the ESCROC system were studied with probabilistic tools to compare their performance. Results show that optimal members of the ESCROC system are able to explain more than half of the total simulation errors. Integrating members with biases exceeding the range corresponding to observational uncertainty is necessary to obtain an optimal dispersion, but this issue can also be a consequence of the fact that meteorological forcing uncertainties were not accounted for. The ESCROC system promises the integration of numerical snow-modelling errors in ensemble forecasting and ensemble assimilation systems in support of avalanche hazard forecasting and other snowpack-modelling applications.
Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y
2014-09-15
Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term which greatly influence the accuracy of the atmospheric dispersion models are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
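The analysis step of such a scheme can be sketched compactly. Below, a stochastic (perturbed-observation) ensemble Kalman filter updates an invented two-parameter augmented state (release rate and plume rise height) from a single monitor; the observation operator H is a made-up linear stand-in for the dispersion model.

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(ens, obs, obs_sd, H):
    """Stochastic EnKF analysis step.
    ens: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n = ens.shape[0]
    X = ens - ens.mean(axis=0)                    # state anomalies
    Y = X @ H.T                                   # observation-space anomalies
    R = np.eye(len(obs)) * obs_sd**2
    K = (X.T @ Y / (n - 1)) @ np.linalg.inv(Y.T @ Y / (n - 1) + R)
    perturbed = obs + rng.normal(0.0, obs_sd, (n, len(obs)))
    return ens + (perturbed - ens @ H.T) @ K.T

# Invented prior ensemble: [release rate (Bq/s), plume rise height (m)].
ens = np.column_stack([rng.normal(1e10, 5e9, 100), rng.normal(500.0, 200.0, 100)])
H = np.array([[1e-9, 0.0]])                       # toy linear "dispersion model"
ens_a = enkf_update(ens, np.array([12.0]), 0.5, H)
print("prior mean:    ", ens.mean(axis=0))
print("posterior mean:", ens_a.mean(axis=0))
```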
Thermal niche estimators and the capability of poor dispersal species to cope with climate change
Sánchez-Fernández, David; Rizzo, Valeria; Cieslak, Alexandra; Faille, Arnaud; Fresneda, Javier; Ribera, Ignacio
2016-01-01
For management strategies in the context of global warming, accurate predictions of species response are mandatory. However, to date most predictions are based on niche (bioclimatic) models that usually overlook biotic interactions, behavioral adjustments or adaptive evolution, and assume that species can disperse freely without constraints. The deep subterranean environment minimises these uncertainties, as it is simple, homogeneous and has constant environmental conditions. It is thus an ideal model system to study the effect of global change on species with poor dispersal capabilities. We assess the potential fate of a lineage of troglobitic beetles under global change predictions using different approaches to estimate their thermal niche: bioclimatic models, rates of thermal niche change estimated from a molecular phylogeny, and data from physiological studies. Using bioclimatic models, at most 60% of the species were predicted to have suitable conditions in 2080. Considering the rates of thermal niche change did not improve this prediction. However, physiological data suggest that subterranean species have a broad thermal tolerance, allowing them to withstand temperatures never experienced through their evolutionary history. These results stress the need for experimental approaches to assess the capability of poor dispersal species to cope with temperatures outside those they currently experience.
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charge-coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
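The quoted improvement from image averaging is the standard error of the mean: with N independent images, precision uncertainty shrinks by a factor of sqrt(N), consistent with the numbers above:

```latex
\sigma_{\bar{v}} = \frac{\sigma_v}{\sqrt{N}} = \frac{5~\mathrm{m/s}}{\sqrt{100}} = 0.5~\mathrm{m/s}
```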
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
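For reference, the classical first-order (Sobol') indices that DSA generalizes can be estimated with a pick-freeze scheme. The toy model below is invented for illustration and is unrelated to the jetliner application:

```python
import numpy as np

rng = np.random.default_rng(9)

def model(x):
    """Toy response with unequal input importances and one interaction."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A, B = rng.normal(size=(n, d)), rng.normal(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order indices via the pick-freeze (Saltelli-type) estimator:
# copy column i of A into B, so only input i is shared between runs.
for i in range(d):
    Bi = B.copy()
    Bi[:, i] = A[:, i]
    S_i = np.mean(fA * (model(Bi) - fB)) / var
    print(f"S_{i + 1} = {S_i:.3f}")
```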
NASA Astrophysics Data System (ADS)
Newbury, Dale E.; Ritchie, Nicholas W. M.
2014-09-01
Quantitative electron-excited x-ray microanalysis by scanning electron microscopy/silicon drift detector energy dispersive x-ray spectrometry (SEM/SDD-EDS) is capable of achieving high accuracy and high precision equivalent to that of the high spectral resolution wavelength dispersive x-ray spectrometer, even when severe peak interference occurs. The throughput of the SDD-EDS enables high-count spectra to be measured that are stable in calibration and resolution (peak shape) across the full deadtime range. With this high spectral stability, multiple linear least squares peak fitting is successful for separating overlapping peaks and spectral background. Careful specimen preparation is necessary to remove topography on unknowns and standards. The standards-based matrix correction procedure embedded in the NIST DTSA-II software engine returns quantitative results supported by a complete error budget, including estimates of the uncertainties from measurement statistics and from the physical basis of the matrix corrections. NIST DTSA-II is available free for Java platforms at http://www.cstl.nist.gov/div837/837.02/epq/dtsa2/index.html.
de Oliveira, Gabriel Barros; de Castro Gomes Vieira, Carolyne Menezes; Orlando, Ricardo Mathias; Faria, Adriana Ferreira
2017-10-15
This work involved the optimization and validation of a method, according to Directive 2002/657/EC and the Analytical Quality Assurance Manual of the Ministério da Agricultura, Pecuária e Abastecimento, Brazil, for simultaneous extraction and determination of fumonisins B1 and B2 in maize. The extraction procedure was based on a matrix solid phase dispersion approach, the optimization of which employed a sequence of different factorial designs. A liquid chromatography-tandem mass spectrometry method was developed for determining these analytes using the selected reaction monitoring mode. The optimized method employed only 1 g of silica gel for dispersion and elution with 70% ammonium formate aqueous buffer (50 mmol L^-1, pH 9), representing a simple, cheap and chemically friendly sample preparation method. Trueness (recoveries: 86-106%), precision (RSD ≤19%), decision limits, detection capabilities and measurement uncertainties were calculated for the validated method. The method scope was expanded to popcorn kernels, white maize kernels and yellow maize grits. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
Bern, A.M.; Lowers, H.A.; Meeker, G.P.; Rosati, J.A.
2009-01-01
The collapse of the World Trade Center Towers on September 11, 2001, sent dust and debris across much of Manhattan and the surrounding areas. Indoor and outdoor dust samples were collected and characterized by U.S. Geological Survey (USGS) scientists using scanning electron microscopy with energy-dispersive spectrometry (SEM/EDS). From this characterization, the U.S. Environmental Protection Agency and USGS developed a particulate screening method to determine the presence of residual World Trade Center dust in the indoor environment using slag wool as a primary "signature". The method describes a procedure that includes splitting, ashing, and sieving of collected dust. From one split, a 10 mg/mL dust/isopropanol suspension was prepared and 10-30 µL aliquots of the suspension placed on an SEM substrate. Analyses were performed using SEM/EDS manual point counting for slag wool fibers. Poisson regression was used to identify some of the sources of uncertainty, which are directly related to the small number of fibers present on each sample stub. Preliminary results indicate that the procedure is promising for screening urban background dust for the presence of WTC dust. Consistent sample preparation of reference materials and samples must be performed by each laboratory wishing to use this method to obtain meaningful and accurate results. © 2009 American Chemical Society.
NASA Astrophysics Data System (ADS)
Zhu, Q.; Xu, Y. P.; Gu, H.
2014-12-01
Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. Besides, uncertainty in hydrological frequency analysis is persistent. This study aims to investigate the impact of one of the most important uncertainty sources, parameter uncertainty, together with non-stationarity, on design rainfall depth in the Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments and estimated at the at-site scale. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze non-stationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions estimated by the regional spatial bootstrap can reach 15.07% and 12.22% with GEV and PE3, respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth on both regional and at-site scales. The non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
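The at-site half of such a bootstrap analysis is compact to sketch: refit the extreme-value distribution to resampled annual maxima and read off the spread of the 100-year quantile. The GEV fit uses scipy; the series below is synthetic, not the Qu River Basin record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

# Synthetic 60-year annual-maximum rainfall series (mm), illustrative only.
annmax = stats.genextreme.rvs(c=-0.1, loc=80.0, scale=25.0, size=60,
                              random_state=rng)

def design_depth(sample, T=100):
    """Fit a GEV by maximum likelihood and return the T-year quantile."""
    c, loc, scale = stats.genextreme.fit(sample)
    return stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

boot = np.array([design_depth(rng.choice(annmax, annmax.size, replace=True))
                 for _ in range(500)])
lo, hi = np.percentile(boot, [5, 95])
print(f"100-yr depth = {design_depth(annmax):.1f} mm, 90% CI [{lo:.1f}, {hi:.1f}] mm")
```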
CASMO5/TSUNAMI-3D spent nuclear fuel reactivity uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrer, R.; Rhodes, J.; Smith, K.
2012-07-01
The CASMO5 lattice physics code is used in conjunction with the TSUNAMI-3D sequence in ORNL's SCALE 6 code system to estimate the uncertainties in hot-to-cold reactivity changes due to cross-section uncertainty for PWR assemblies at various burnup points. The goal of the analysis is to establish the multiplication factor uncertainty similarity between various fuel assemblies at different conditions in a quantifiable manner and to obtain a bound on the hot-to-cold reactivity uncertainty over the various assembly types and burnup attributed to fundamental cross-section data uncertainty.
2011-09-30
community use for ROMS is biogeochemistry: chemical cycles, water quality, blooms, micro-nutrients, larval dispersal, biome transitions, and coupling to... J.C. McWilliams, X. Capet, and J. Kurian, 2010: Heat balance and eddies in the Peru-Chile Current System. Climate Dynamics, 37, in press. doi:10.1007
Femoral anatomical frame: assessment of various definitions.
Della Croce, U; Camomilla, V; Leardini, A; Cappozzo, A
2003-06-01
The reliability of the estimate of joint kinematic variables and the relevant functional interpretation are affected by the uncertainty with which bony anatomical landmarks and the underlying bony segment anatomical frames are determined. When a stereo-photogrammetric system is used for in vivo studies, minimising and compensating for this uncertainty is crucial. This paper deals with the propagation of the errors associated with the location of both internal and palpable femoral anatomical landmarks to the estimation of the orientation of the femoral anatomical frame and to the knee joint angles during movement. Given eight anatomical landmarks, and the precision with which they can be identified experimentally, 12 different rules were defined for the construction of the anatomical frame and submitted to comparative assessment. Results showed that using more than three landmarks allows for more repeatable anatomical frame orientation and knee joint kinematics estimation. Novel rules are proposed that use optimization algorithms. On average, the femoral frame orientation dispersion had a standard deviation of 2, 2.5 and 1.5 degrees for the frontal, transverse, and sagittal planes, respectively. However, a proper choice of the relevant construction rule allowed for a reduction of these inaccuracies in selected planes to 1 degree rms. The dispersion of the knee adduction-abduction and internal-external rotation angles could also be limited to 1 degree rms irrespective of the flexion angle value.
Exchange across the sediment-water interface quantified from porewater radon profiles
NASA Astrophysics Data System (ADS)
Cook, Peter G.; Rodellas, Valentí; Andrisoa, Aladin; Stieglitz, Thomas C.
2018-04-01
Water recirculation through permeable sediments induced by wave action, tidal pumping and currents enhances the exchange of solutes and fine particles between sediments and overlying waters, and can be an important hydro-biogeochemical process. In shallow water, most of the recirculation is likely to be driven by the interaction of wave-driven oscillatory flows with bottom topography which can induce pressure fluctuations at the sediment-water interface on very short timescales. Tracer-based methods provide the most reliable means for characterizing this short-timescale exchange. However, the commonly applied approaches only provide a direct measure of the tracer flux. Estimating water fluxes requires characterizing the tracer concentration in discharging porewater; this implies collecting porewater samples at shallow depths (usually a few mm, depending on the hydrodynamic dispersivity), which is very difficult with commonly used techniques. In this study, we simulate observed vertical profiles of radon concentration beneath shallow coastal lagoons using a simple water recirculation model that allows us to estimate water exchange fluxes as a function of depth below the sediment-water interface. Estimated water fluxes at the sediment water interface at our site were 0.18-0.25 m/day, with fluxes decreasing exponentially with depth. Uncertainty in dispersivity is the greatest source of error in exchange flux, and results in an uncertainty of approximately a factor-of-five.
GEMINI/GMOS SPECTROSCOPY OF 26 STRONG-LENSING-SELECTED GALAXY CLUSTER CORES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayliss, Matthew B.; Gladders, Michael D.; Koester, Benjamin P.
2011-03-15
We present results from a spectroscopic program targeting 26 strong-lensing cluster cores that were visually identified in the Sloan Digital Sky Survey (SDSS) and the Second Red-Sequence Cluster Survey (RCS-2). The 26 galaxy cluster lenses span a redshift range of 0.2 < z < 0.65, and our spectroscopy reveals 69 unique background sources with redshifts as high as z = 5.200. We also identify redshifts for 262 cluster member galaxies and measure the velocity dispersions and dynamical masses for 18 clusters where we have redshifts for N ≥ 10 cluster member galaxies. We account for the expected biases in dynamical masses of strong-lensing-selected clusters as predicted by results from numerical simulations and discuss possible sources of bias in our observations. The median dynamical mass of the 18 clusters with N ≥ 10 spectroscopic cluster members is M_Vir = 7.84 × 10^14 h_0.7^-1 M_⊙, which is somewhat higher than predictions for strong-lensing-selected clusters in simulations. The disagreement is not significant considering the large uncertainty in our dynamical data, systematic uncertainties in the velocity dispersion calibration, and limitations of the theoretical modeling. Nevertheless our study represents an important first step toward characterizing large samples of clusters that are identified in a systematic way as systems exhibiting dramatic strong-lensing features.
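As a hedged illustration of the dynamical measurement described above: the line-of-sight velocity dispersion follows from member redshifts, and a virial M ∝ σ³ scaling then gives a rough mass. The mock member redshifts and the normalization of the scaling below are assumed placeholders, not the authors' calibration.

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def velocity_dispersion(z_members, z_cluster):
    """Rest-frame line-of-sight velocity dispersion from member redshifts."""
    v = C_KMS * (z_members - z_cluster) / (1.0 + z_cluster)  # peculiar velocities
    return np.std(v, ddof=1)

rng = np.random.default_rng(1)
z_cl = 0.35
z_gal = z_cl + (1.0 + z_cl) * rng.normal(0.0, 900.0, 25) / C_KMS  # 25 mock members, sigma ~ 900 km/s

sigma = velocity_dispersion(z_gal, z_cl)
# Virial scaling M ~ sigma^3; the normalization is an assumed placeholder,
# not the calibration used by the authors.
m_vir = 1.0e15 * (sigma / 1080.0) ** 3  # M_sun / h
print(f"sigma = {sigma:.0f} km/s  ->  M_vir ~ {m_vir:.2e} M_sun/h")
```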
Newbury, Dale E; Ritchie, Nicholas W M
2015-10-01
A scanning electron microscope with a silicon drift detector energy-dispersive X-ray spectrometer (SEM/SDD-EDS) was used to analyze materials containing the low atomic number elements B, C, N, O, and F, achieving a high degree of accuracy. Nearly all results fell well within an uncertainty envelope of ±5% relative (where relative uncertainty (%)=[(measured-ideal)/ideal]×100%). Quantification was performed with the standards-based "k-ratio" method with matrix corrections calculated based on the Pouchou and Pichoir expression for the ionization depth distribution function, as implemented in the NIST DTSA-II EDS software platform. The analytical strategy involved collection of high-count (>2.5 million counts from 100 eV to the incident beam energy) spectra measured with a conservative input count rate that restricted the deadtime to ~10% to minimize coincidence effects. Standards employed included pure elements and simple compounds. A 10 keV beam was employed to excite the K- and L-shell X-rays of intermediate and high atomic number elements with excitation energies above 3 keV, e.g., the Fe K-family, while a 5 keV beam was used for analyses of elements with excitation energies below 3 keV, e.g., the Mo L-family.
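A minimal sketch of the two quantities quoted above, the k-ratio and the relative-uncertainty metric; the intensities and mass fractions are illustrative, not measured values.

```python
def k_ratio(intensity_unknown, intensity_standard):
    """k-ratio: background-corrected peak intensity of the unknown over the standard."""
    return intensity_unknown / intensity_standard

def relative_uncertainty_pct(measured, ideal):
    """Relative uncertainty (%) = [(measured - ideal)/ideal] x 100, as defined above."""
    return (measured - ideal) / ideal * 100.0

# Illustrative check against the +/-5 % envelope for an oxygen mass fraction:
print(relative_uncertainty_pct(measured=0.472, ideal=0.460))  # ~2.6 %, inside the envelope
```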
Operational evaluation of the RLINE dispersion model for studies of traffic-related air pollutants
NASA Astrophysics Data System (ADS)
Milando, Chad W.; Batterman, Stuart A.
2018-06-01
Exposure to traffic-related air pollutants (TRAP) remains a key public health issue, and improved exposure measures are needed to support health impact and epidemiologic studies and inform regulatory responses. The recently developed Research LINE source model (RLINE), a Gaussian line source dispersion model, has been used in several epidemiologic studies of TRAP exposure, but evaluations of RLINE's performance in such applications have been limited. This study provides an operational evaluation of RLINE in which predictions of NOx, CO and PM2.5 are compared to observations at air quality monitoring stations located near high traffic roads in Detroit, MI. For CO and NOx, model performance was best at sites close to major roads, during downwind conditions, during weekdays, and during certain seasons. For PM2.5, the ability to discern local and particularly the traffic-related portion was limited, a result of high background levels, the sparseness of the monitoring network, and large uncertainties for certain processes (e.g., formation of secondary aerosols) and non-mobile sources (e.g., area, fugitive). Overall, RLINE's performance in near-road environments suggests its usefulness for estimating spatially- and temporally-resolved exposures. The study highlights considerations relevant to health impact and epidemiologic applications, including the importance of selecting appropriate pollutants, using appropriate monitoring approaches, considering prevailing wind directions during study design, and accounting for uncertainty.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, and hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, Keith
The measurement of photovoltaic (PV) performance with respect to reference conditions requires measuring current versus voltage for a given tabular reference spectrum, junction temperature, and total irradiance. This report presents the procedures implemented by the PV Cell and Module Performance Characterization Group at the National Renewable Energy Laboratory (NREL) to achieve the lowest practical uncertainty. A rigorous uncertainty analysis of these procedures is presented, which follows the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement. This uncertainty analysis is required for the team’s laboratory accreditation under ISO standard 17025, “General Requirements for the Competence of Testing and Calibration Laboratories.” The report also discusses additional areas where the uncertainty can be reduced.
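A hedged, GUM-style sketch of how such a budget combines: sensitivity-weighted component uncertainties added in quadrature to a combined standard uncertainty, then expanded with a coverage factor. The component names and magnitudes below are assumed for illustration, not NREL's actual budget.

```python
import numpy as np

components = {            # (sensitivity c_i, standard uncertainty u_i) -- illustrative
    "irradiance":  (1.0, 0.25),   # %
    "temperature": (0.05, 2.0),   # %/K times K
    "spectral":    (1.0, 0.30),   # %
    "electrical":  (1.0, 0.10),   # %
}
u_c = np.sqrt(sum((c * u) ** 2 for c, u in components.values()))  # combined standard uncertainty
U = 2.0 * u_c                                                     # expanded, k=2 (~95 % coverage)
print(f"u_c = {u_c:.2f} %, U(k=2) = {U:.2f} %")
```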
Hansen, Andreas; Bannwarth, Christoph; Grimme, Stefan; Petrović, Predrag; Werlé, Christophe; Djukic, Jean-Pierre
2014-10-01
Reliable thermochemical measurements and theoretical predictions for reactions involving large transition metal complexes, in which long-range intramolecular London dispersion interactions contribute significantly to their stabilization, are still a challenge, particularly for reactions in solution. As an illustrative and chemically important example, two reactions are investigated in which a large dipalladium complex is quenched by bulky phosphane ligands (triphenylphosphane and tricyclohexylphosphane). Reaction enthalpies and Gibbs free energies were measured by isothermal titration calorimetry (ITC) and theoretically 'back-corrected' to yield 0 K gas-phase reaction energies (ΔE). It is shown that the Gibbs free solvation energy calculated with continuum models represents the largest source of error in theoretical thermochemistry protocols. The ('back-corrected') experimental reaction energies were used to benchmark (dispersion-corrected) density functional and wave function theory methods. In particular, we investigated whether the atom-pairwise D3 dispersion correction is also accurate for transition metal chemistry, and how accurately recently developed local coupled-cluster methods describe the important long-range electron correlation contributions. Both modern dispersion-corrected density functionals (e.g., PW6B95-D3(BJ) or B3LYP-NL) and the now feasible DLPNO-CCSD(T) calculations lie within the uncertainty of the 'experimental' gas-phase reference values. The remaining uncertainties of 2-3 kcal mol(-1) can be essentially attributed to the solvation models. Hence, the future for accurate theoretical thermochemistry of large transition metal reactions in solution is very promising.
Validating data analysis of broadband laser ranging
NASA Astrophysics Data System (ADS)
Rhodes, M.; Catenacci, J.; Howard, M.; La Lone, B.; Kostinski, N.; Perry, D.; Bennett, C.; Patterson, J.
2018-03-01
Broadband laser ranging combines spectral interferometry and a dispersive Fourier transform to achieve high-repetition-rate measurements of the position of a moving surface. Telecommunications fiber is a convenient tool for generating the large linear dispersions required for a dispersive Fourier transform, but standard fiber also has higher-order dispersion that distorts the Fourier transform. Imperfections in the dispersive Fourier transform significantly complicate the ranging signal and must be dealt with to make high-precision measurements. We describe in detail an analysis process for interpreting ranging data when standard telecommunications fiber is used to perform an imperfect dispersive Fourier transform. This analysis process is experimentally validated over a 27-cm scan of static positions, showing an accuracy of 50 μm and a root-mean-square precision of 4.7 μm.
Ascent trajectory dispersion analysis for WTR heads-up space shuttle trajectory
NASA Technical Reports Server (NTRS)
1986-01-01
The results of a Space Transportation System ascent trajectory dispersion analysis are discussed. The purpose is to provide critical trajectory parameter values for assessing the Space Shuttle in a heads-up configuration launched from the Western Test Range (WTR). This analysis was conducted using a trajectory profile based on a launch from the WTR in December. The analysis consisted of the following steps: (1) nominal trajectories were simulated under the conditions specified by baseline reference mission guidelines; (2) dispersion trajectories were simulated using predetermined parametric variations; (3) requirements for a system-related composite trajectory were determined by a root-sum-square (RSS) analysis of the positive deviations between values of the aerodynamic heating indicator (AHI) generated by the dispersion and nominal trajectories; (4) using the RSS assessment as a guideline, the system-related composite trajectory was simulated by combinations of dispersion parameters which represented major contributors; (5) an assessment of environmental perturbations via an RSS analysis was made by the combination of plus or minus 2 sigma atmospheric density variation and 95% directional design wind dispersions; (6) maximum aerodynamic heating trajectories were simulated by variation of dispersion parameters which would emulate the summation of the system-related RSS and environmental RSS values of AHI. The maximum aerodynamic heating trajectories were simulated consistent with the directional winds used in the environmental analysis.
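Step (3) reduces to simple arithmetic; a hedged sketch with illustrative AHI values (not actual trajectory output):

```python
import numpy as np

ahi_nominal = 100.0                                    # nominal-trajectory AHI (illustrative)
ahi_dispersed = np.array([104.0, 101.5, 99.0, 107.2, 102.3])  # dispersed-trajectory AHIs

positive_dev = np.clip(ahi_dispersed - ahi_nominal, 0.0, None)  # keep only positive deviations
rss = np.sqrt(np.sum(positive_dev ** 2))                        # root-sum-square combination
print(f"system-related RSS of AHI deviations: {rss:.2f}")
```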
Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak
2011-01-01
This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined, and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes the C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s-era shock-tube and constricted-arc experimental cases. It is shown that the experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength-dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength-dependent and wavelength-integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations.
Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
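As a consistency check on the totals quoted in Part I, the structural and parametric components combine by straightforward linear addition on each side of the interval; a sketch of that arithmetic (not the authors' code):

```python
# Upper and lower interval bounds, in percent, from the abstract above.
structural = (+34.0, -24.0)   # structural uncertainty bounds (Part I)
parametric = (+47.3, -28.3)   # parametric uncertainty bounds

total = (structural[0] + parametric[0], structural[1] + parametric[1])
print(total)  # (+81.3, -52.3) %, matching the quoted Mars-return totals
```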
Scattered light in the IUE spectra of Epsilon Aurigae
NASA Technical Reports Server (NTRS)
Aitner, B.; Chapman, R. D.; Kondo, Y.; Stencel, R. E.
1985-01-01
As a result of this work it was found that light scattered from the longer wavelengths constitutes a small but non-negligible, wavelength- and time-dependent fraction of the measured flux in the far UV. The reality of the UV excess has not been unambiguously ruled out. However, it is noted that there are still uncertainties in the assumed scattering profile. New measurements of the scattering properties of the cross-disperser grating are planned in order to verify the results of Mount and Fastie and extend the wavelength coverage into the far wings of the profile. The results of these measurements will no doubt reduce some of these uncertainties. For the present, it is felt that the BCH approach is a significant improvement over the methods heretofore available for the treatment of scattered light in IUE spectra.
Forward Compton scattering with weak neutral current: Constraints from sum rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorchtein, Mikhail; Zhang, Xilin
2015-06-09
We generalize the forward real Compton amplitude to the case of the interference of the electromagnetic and weak neutral currents, formulate a low-energy theorem, relate the new amplitudes to the interference structure functions and obtain a new set of sum rules. Furthermore, we address a possible new sum rule that relates the product of the axial charge and magnetic moment of the nucleon to the 0th moment of the structure function g5(ν, 0). For the dispersive γZ-box correction to the proton's weak charge, the application of the GDH sum rule allows us to reduce the uncertainty due to resonance contributions by a factor of two. Finally, the finite energy sum rule helps address the uncertainty in that calculation due to possible duality violations.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
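A hedged sketch of the SRC computation mentioned above: regress the standardized output on standardized inputs over a Monte Carlo sample and read the coefficients as sensitivity measures. The toy parameters below merely stand in for the nucleation- and growth-order constants; they are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Monte Carlo sample of three uncertain parameters (means and spreads assumed)
X = rng.normal([2.0, 1.5, 0.8], [0.2, 0.15, 0.08], size=(n, 3))
# Toy scalar output with known dominant dependence on the first parameter
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

Xs = (X - X.mean(0)) / X.std(0)          # standardize inputs
ys = (y - y.mean()) / y.std()            # standardize output
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardized regression coefficients
print(dict(zip(["growth order", "nucleation order", "other"], np.round(src, 3))))
```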
Uncertainty in Operational Atmospheric Analyses and Re-Analyses
NASA Astrophysics Data System (ADS)
Langland, R.; Maue, R. N.
2016-12-01
This talk will describe uncertainty in atmospheric analyses of wind and temperature produced by operational forecast models and in re-analysis products. Because the "true" atmospheric state cannot be precisely quantified, there is necessarily error in every atmospheric analysis, and this error can be estimated by computing differences ( variance and bias) between analysis products produced at various centers (e.g., ECMWF, NCEP, U.S Navy, etc.) that use independent data assimilation procedures, somewhat different sets of atmospheric observations and forecast models with different resolutions, dynamical equations, and physical parameterizations. These estimates of analysis uncertainty provide a useful proxy to actual analysis error. For this study, we use a unique multi-year and multi-model data archive developed at NRL-Monterey. It will be shown that current uncertainty in atmospheric analyses is closely correlated with the geographic distribution of assimilated in-situ atmospheric observations, especially those provided by high-accuracy radiosonde and commercial aircraft observations. The lowest atmospheric analysis uncertainty is found over North America, Europe and Eastern Asia, which have the largest numbers of radiosonde and commercial aircraft observations. Analysis uncertainty is substantially larger (by factors of two to three times) in most of the Southern hemisphere, the North Pacific ocean, and under-developed nations of Africa and South America where there are few radiosonde or commercial aircraft data. It appears that in regions where atmospheric analyses depend primarily on satellite radiance observations, analysis uncertainty of both temperature and wind remains relatively high compared to values found over North America and Europe.
Framing of Uncertainty in Scientific Publications: Towards Recommendations for Decision Support
NASA Astrophysics Data System (ADS)
Guillaume, J. H. A.; Helgeson, C.; Elsawah, S.; Jakeman, A. J.; Kummu, M.
2016-12-01
Uncertainty is recognised as an essential issue in environmental decision making and decision support. As modellers, we notably use a variety of tools and techniques within an analysis, for example related to uncertainty quantification and model validation. We also address uncertainty by how we present results. For example, experienced modellers are careful to distinguish robust conclusions from those that need further work, and the precision of quantitative results is tailored to their accuracy. In doing so, the modeller frames how uncertainty should be interpreted by their audience. This is an area which extends beyond modelling to fields such as philosophy of science, semantics, discourse analysis, intercultural communication and rhetoric. We propose that framing of uncertainty deserves greater attention in the context of decision support, and that there are opportunities in this area for fundamental research, synthesis and knowledge transfer, development of teaching curricula, and significant advances in managing uncertainty in decision making. This presentation reports preliminary results of a study of framing practices. Specifically, we analyse the framing of uncertainty that is visible in the abstracts from a corpus of scientific articles. We do this through textual analysis of the content and structure of those abstracts. Each finding that appears in an abstract is classified according to the uncertainty framing approach used, using a classification scheme that was iteratively revised based on reflection and comparison amongst three coders. This analysis indicates how frequently the different framing approaches are used, and provides initial insights into relationships between frames, how the frames relate to interpretation of uncertainty, and how rhetorical devices are used by modellers to communicate uncertainty in their work. We propose initial hypotheses for how the resulting insights might influence decision support, and help advance decision making to better address uncertainty.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also substantially reduced with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
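For reference, the behavioural-selection step at the heart of GLUE can be sketched as below; the likelihood measure, threshold and quantile bounds are assumed placeholders, not the settings used in the study.

```python
import numpy as np

def glue_limits(simulations, likelihoods, threshold=0.7, bounds=(0.05, 0.95)):
    """GLUE prediction limits: simulations (n_sets, n_times), likelihoods (n_sets,)."""
    keep = likelihoods >= threshold                 # behavioural parameter sets only
    sims, w = simulations[keep], likelihoods[keep]
    w = w / w.sum()                                 # normalised likelihood weights
    lower, upper = [], []
    for t in range(sims.shape[1]):
        order = np.argsort(sims[:, t])
        cdf = np.cumsum(w[order])                   # weighted empirical CDF at time t
        lower.append(np.interp(bounds[0], cdf, sims[order, t]))
        upper.append(np.interp(bounds[1], cdf, sims[order, t]))
    return np.array(lower), np.array(upper)
```

A sampler (LHS or ɛ-NSGAII) supplies the candidate parameter sets and their simulated hydrographs; GLUE then weights only the behavioural subset.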
Local and regional smoke impacts from prescribed fires
NASA Astrophysics Data System (ADS)
Price, Owen F.; Horsey, Bronwyn; Jiang, Ningbo
2016-10-01
Smoke from wildfires poses a significant threat to affected communities. Prescribed burning is conducted to reduce the extent and potential damage of wildfires, but produces its own smoke threat. Planners of prescribed fires model the likely dispersion of smoke to help manage the impacts on local communities. Significant uncertainty remains about the actual smoke impact from prescribed fires, especially near the fire, and the accuracy of smoke dispersal models. To address this uncertainty, a detailed study of smoke dispersal was conducted for one small (52 ha) and one large (700 ha) prescribed fire near Appin in New South Wales, Australia, through the use of stationary and handheld pollution monitors, visual observations and rain radar data, and by comparing observations to predictions from an atmospheric dispersion model. The 52 ha fire produced a smoke plume about 800 m high and 9 km long. Particle concentrations (PM2.5) reached very high peak values (> 400 µg m-3) and high 24 h average values (> 100 µg m-3) at several locations next to or within ˜ 500 m downwind from the fire, but low levels elsewhere. The 700 ha fire produced a much larger plume, peaking at ˜ 2000 m altitude and affecting downwind areas up to 14 km away. Both peak and 24 h average PM2.5 values near the fire were lower than for the 52 ha fire, but this may be because the monitoring locations were further away from the fire. Some lofted smoke spread north against the ground-level wind direction. Smoke from this fire collapsed to the ground during the night at different times in different locations. Although it is hard to attribute particle concentrations definitively to smoke, it seems that the collapsed plume affected a huge area including the towns of Wollongong, Bargo, Oakdale, Camden and Campbelltown (˜ 1200 km2). PM2.5 concentrations up to 169 µg m-3 were recorded on the morning following the fire. The atmospheric dispersion model accurately predicted the general behaviour of both plumes in the early phases of the fires, but was poor at predicting fine-scale variation in particulate concentrations (e.g. places 500 m from the fire). The correlation between predicted and observed varied between 0 and 0.87 depending on location. The model also completely failed to predict the night-time collapse of the plume from the 700 ha fire. This study provides a preliminary insight into the potential for large impacts from prescribed fire smoke to NSW communities and the need for increased accuracy in smoke dispersion modelling. More research is needed to better understand when and why such impacts might occur and provide better predictions of pollution risk.
A pitfall of muting and removing bad traces in surface-wave analysis
NASA Astrophysics Data System (ADS)
Hu, Yue; Xia, Jianghai; Mi, Binbin; Cheng, Feng; Shen, Chao
2018-06-01
Multi-channel analysis of surface/Love waves (MASW/MALW) has been widely used to construct shallow shear (S)-wave velocity profiles. The key step in surface-wave analysis is to generate accurate dispersion energy and pick the dispersion curves for inversion along the peaks of the dispersion energy at different frequencies. In near-surface surface-wave acquisition, bad traces are common and inevitable due to imperfections in the recording instruments or other causes. The existence of bad traces will cause artifacts in the dispersion energy image. To avoid the interference of bad traces in surface-wave analysis, the bad traces should be either muted (zeroed) or removed (deleted) from the raw surface-wave data before dispersion measurement. Most geophysicists and civil engineers, however, are not aware of the differences between muting and removing bad traces in surface-wave analysis and their implications. A synthetic test and a real-world example demonstrate the potential pitfalls of muting versus removing bad traces when using different dispersion-imaging methods. We implement muting and removing of bad traces respectively before dispersion measurement, and compare the influence of the two operations on three dispersion-imaging methods: high-resolution linear Radon transform (HRLRT), f-k transformation, and the phase shift method. Results indicate that when using the HRLRT to generate the dispersion energy, muting bad traces will cause an even more complicated and discontinuous dispersive energy. When the f-k transformation is used for dispersion analysis, bad traces should be muted instead of removed to generate an accurate dispersion image and avoid the uneven sampling problem in the Fourier transform. As for the phase shift method, the difference between the two operations is slight, but we suggest removal because the integral for the phase-shift operator over zeroed traces would introduce sloped aliasing. This study provides pre-processing guidance for real-world surface-wave data processing when the recorded shot gather contains inevitable bad traces.
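The difference between the two operations can be made concrete; a minimal sketch (gather layout and channel indices assumed for illustration):

```python
import numpy as np

def mute_traces(gather, bad):
    """Zero the bad channels but keep the uniform spatial sampling."""
    out = gather.copy()
    out[bad, :] = 0.0
    return out

def remove_traces(gather, offsets, bad):
    """Delete the bad channels, leaving unevenly sampled offsets."""
    keep = np.setdiff1d(np.arange(gather.shape[0]), bad)
    return gather[keep], offsets[keep]

# The 2-D Fourier transform behind f-k imaging assumes uniform spatial sampling,
# which is why muting is preferred there, while the phase-shift integral over
# zeroed traces is what introduces the sloped aliasing noted above.
```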
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...
AN IMPROVEMENT TO THE MOUSE COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM
The original MOUSE (Modular Oriented Uncertainty System) system was designed to deal with the problem of uncertainties in environmental engineering calculations, such as a set of engineering cost or risk analysis equations. It was especially intended for use by individuals with l...
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
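A hedged sketch of the usual two-component model implied above: elemental bias limits combined in quadrature, plus the precision of the mean, combined into a total uncertainty. The numerical values are illustrative, not ZEM-3 data.

```python
import numpy as np

def total_uncertainty(bias_limits, sample_std, n, t=2.0):
    """Combine systematic and random components; t ~ 2 for 95% large-sample coverage."""
    B = np.sqrt(np.sum(np.square(bias_limits)))   # combined systematic (bias) component
    P = t * sample_std / np.sqrt(n)               # precision of the mean (random component)
    return np.hypot(B, P)                         # quadrature combination

# e.g. probe placement, sample geometry and cold-finger bias terms (illustrative)
print(total_uncertainty([0.02, 0.015, 0.01], sample_std=0.012, n=10))
```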
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated for both calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
Uncertainty analysis of diffuse-gray radiation enclosure problems: A hypersensitive case study
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio; Hodge, B. K.; Steele, W. Glenn
1993-01-01
An uncertainty analysis of diffuse-gray enclosure problems is presented. The genesis was a diffuse-gray enclosure problem which proved to be hypersensitive to the specification of view factors. This genesis is discussed in some detail. The uncertainty analysis is presented for the general diffuse-gray enclosure problem and applied to the hypersensitive case study. It was found that the hypersensitivity could be greatly reduced by enforcing both closure and reciprocity for the view factors. The effects of uncertainties in the surface emissivities and temperatures are also investigated.
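One simple way to enforce the two constraints named above is an iterative projection: symmetrize A_i F_ij to impose reciprocity, then renormalize rows for closure. A hedged sketch (the view-factor matrix and areas are illustrative, and this is one possible scheme, not necessarily the authors'):

```python
import numpy as np

def enforce_reciprocity_closure(F, A, n_iter=50):
    """Alternately impose reciprocity (A_i F_ij = A_j F_ji) and closure (rows sum to 1)."""
    F = F.copy()
    for _ in range(n_iter):
        AF = (A[:, None] * F + (A[:, None] * F).T) / 2.0  # symmetrize A_i F_ij
        F = AF / A[:, None]                                # back to view factors
        F /= F.sum(axis=1, keepdims=True)                  # closure: each row sums to one
    return F

A = np.array([1.0, 1.0, 2.0])                  # surface areas (illustrative)
F = np.array([[0.00, 0.45, 0.52],              # estimated view factors, slightly inconsistent
              [0.48, 0.00, 0.55],
              [0.27, 0.26, 0.49]])
print(enforce_reciprocity_closure(F, A).round(3))
```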
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BABA,T.; ISHIGURO,K.; ISHIHARA,Y.
1999-08-30
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a FEatures, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference case and many alternative cases representing various groups of FEPs were defined and individual numerical simulations performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters as well as a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
Methods for Estimating the Uncertainty in Emergy Table-Form Models
Emergy studies have suffered criticism due to the lack of uncertainty analysis and this shortcoming may have directly hindered the wider application and acceptance of this methodology. Recently, to fill this gap, the sources of uncertainty in emergy analysis were described and an...
NASA Astrophysics Data System (ADS)
Vervatis, Vassilios; De Mey, Pierre; Ayoub, Nadia; Kailas, Marios; Sofianos, Sarantis
2017-04-01
The project entitled Stochastic Coastal/Regional Uncertainty Modelling (SCRUM) aims at strengthening CMEMS in the areas of ocean uncertainty quantification, ensemble consistency verification and ensemble data assimilation. The project has been initiated by the University of Athens and LEGOS/CNRS research teams, in the framework of CMEMS Service Evolution. The work is based on stochastic modelling of ocean physics and biogeochemistry in the Bay of Biscay, on an identical sub-grid configuration of the IBI-MFC system in its latest CMEMS operational version V2. In a first step, we use a perturbed-tendencies scheme to generate ensembles describing uncertainties in the open ocean and on the shelf, focusing on upper-ocean processes. In a second step, we introduce two methodologies (i.e. rank histograms and array modes) aimed at checking the consistency of the above ensembles with respect to TAC data and arrays. Preliminary results highlight that wind uncertainties dominate all other atmosphere-ocean sources of model error. The ensemble spread in medium-range ensembles is approximately 0.01 m for SSH and 0.15 °C for SST, though these values vary depending on season and cross-shelf region. Ecosystem model uncertainties emerging from perturbations in physics appear to be moderately larger than those from perturbing the concentrations of the biogeochemical compartments, resulting in a total chlorophyll spread of about 0.01 mg.m-3. First consistency results show that the model ensemble and the pseudo-ensemble of OSTIA (L4) observed SSTs exhibit nonzero joint probabilities with each other, since their error vicinities overlap. Rank histograms show that the model ensemble is initially under-dispersive, though results improve in the context of seasonal-range ensembles.
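A minimal sketch of the rank-histogram diagnostic mentioned above: rank each observation within the sorted ensemble and histogram the ranks; under-dispersion shows up as a U-shape. The synthetic ensemble below is assumed for illustration.

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """ensemble: (n_members, n_obs); obs: (n_obs,). Returns counts over ranks 0..n_members."""
    ranks = np.sum(ensemble < obs[None, :], axis=0)
    return np.bincount(ranks, minlength=ensemble.shape[0] + 1)

rng = np.random.default_rng(3)
ens = rng.normal(0.0, 0.5, size=(20, 1000))   # under-dispersive ensemble (spread 0.5)
obs = rng.normal(0.0, 1.0, size=1000)         # "truth" with spread 1.0
print(rank_histogram(ens, obs))               # counts pile up in the extreme ranks
```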
Irreducible Uncertainty in Terrestrial Carbon Projections
NASA Astrophysics Data System (ADS)
Lovenduski, N. S.; Bonan, G. B.
2016-12-01
We quantify and isolate the sources of uncertainty in projections of carbon accumulation by the ocean and terrestrial biosphere over 2006-2100 using output from Earth System Models participating in the 5th Coupled Model Intercomparison Project. We consider three independent sources of uncertainty in our analysis of variance: (1) internal variability, driven by random, internal variations in the climate system, (2) emission scenario, driven by uncertainty in future radiative forcing, and (3) model structure, wherein different models produce different projections given the same emission scenario. Whereas uncertainty in projections of ocean carbon accumulation by 2100 is 100 Pg C and driven primarily by emission scenario, uncertainty in projections of terrestrial carbon accumulation by 2100 is 50% larger than that of the ocean, and driven primarily by model structure. This structural uncertainty is correlated with emission scenario: the variance associated with model structure is an order of magnitude larger under a business-as-usual scenario (RCP8.5) than a mitigation scenario (RCP2.6). In an effort to reduce this structural uncertainty, we apply various model weighting schemes to our analysis of variance in terrestrial carbon accumulation projections. The largest reductions in uncertainty are achieved when giving all the weight to a single model; here the uncertainty is of a similar magnitude to the ocean projections. Such an analysis suggests that this structural uncertainty is irreducible given current terrestrial model development efforts.
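A hedged sketch of the variance partition described above, on a toy scenario-by-model grid of projections (numbers assumed, not CMIP5 output):

```python
import numpy as np

rng = np.random.default_rng(4)
proj = {}  # proj[(scenario, model)] = ensemble-member projections (toy)
for s, s_eff in [("RCP2.6", 20.0), ("RCP8.5", 60.0)]:
    for m, m_eff in [("A", -15.0), ("B", 0.0), ("C", 25.0)]:
        proj[(s, m)] = 50.0 + s_eff + m_eff + rng.normal(0, 5.0, 10)

means = {k: v.mean() for k, v in proj.items()}
scen = {s: np.mean([means[(s, m)] for m in "ABC"]) for s in ("RCP2.6", "RCP8.5")}
var_scenario = np.var(list(scen.values()))                                  # scenario spread
var_structure = np.mean([np.var([means[(s, m)] for m in "ABC"]) for s in scen])  # model spread
var_internal = np.mean([v.var(ddof=1) for v in proj.values()])              # internal variability
print(var_scenario, var_structure, var_internal)
```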
NASA Astrophysics Data System (ADS)
Veale, Melanie; Ma, Chung-Pei; Greene, Jenny E.; Thomas, Jens; Blakeslee, John P.; Walsh, Jonelle L.; Ito, Jennifer
2018-02-01
We measure the radial profiles of the stellar velocity dispersions, σ(R), for 90 early-type galaxies (ETGs) in the MASSIVE survey, a volume-limited integral-field spectroscopic (IFS) galaxy survey targeting all northern-sky ETGs with absolute K-band magnitude MK < -25.3 mag, or stellar mass M* ≳ 4 × 1011M⊙, within 108 Mpc. Our wide-field 107 arcsec × 107 arcsec IFS data cover radii as large as 40 kpc, for which we quantify separately the inner (2 kpc) and outer (20 kpc) logarithmic slopes γinner and γouter of σ(R). While γinner is mostly negative, of the 56 galaxies with sufficient radial coverage to determine γouter we find 36 per cent to have rising outer dispersion profiles, 30 per cent to be flat within the uncertainties and 34 per cent to be falling. The fraction of galaxies with rising outer profiles increases with M* and in denser galaxy environment, with 10 of the 11 most massive galaxies in our sample having flat or rising dispersion profiles. The strongest environmental correlations are with local density and halo mass, but a weaker correlation with large-scale density also exists. The average γouter is similar for brightest group galaxies, satellites and isolated galaxies in our sample. We find a clear positive correlation between the gradients of the outer dispersion profile and the gradients of the velocity kurtosis h4. Altogether, our kinematic results suggest that the increasing fraction of rising dispersion profiles in the most massive ETGs are caused (at least in part) by variations in the total mass profiles rather than in the velocity anisotropy alone.
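The inner and outer slopes reduce to power-law fits of σ(R) over chosen radial windows; a minimal sketch with an illustrative rising profile (not MASSIVE data), where the fit windows are assumed:

```python
import numpy as np

def log_slope(R_kpc, sigma, rmin, rmax):
    """Logarithmic slope gamma = dlog(sigma)/dlog(R) over a radial window."""
    m = (R_kpc >= rmin) & (R_kpc <= rmax)
    gamma, _ = np.polyfit(np.log10(R_kpc[m]), np.log10(sigma[m]), 1)
    return gamma

R = np.array([1, 2, 4, 8, 15, 25, 40.0])                # kpc
sig = np.array([300, 285, 275, 272, 278, 288, 300.0])   # km/s, rising outward
print(log_slope(R, sig, 5, 40))   # gamma_outer > 0 -> rising outer profile
```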
NASA Astrophysics Data System (ADS)
Ciurean, R. L.; Glade, T.
2012-04-01
Decision under uncertainty is a constant of everyday life and an important component of risk management and governance. Recently, experts have emphasized the importance of quantifying uncertainty in all phases of landslide risk analysis. Due to its multi-dimensional and dynamic nature, (physical) vulnerability is inherently complex, and the "degree of loss" estimates are imprecise and to some extent even subjective. Uncertainty analysis introduces quantitative modeling approaches that allow for a more explicitly objective output, improving the risk management process as well as enhancing communication between various stakeholders for better risk governance. This study presents a review of concepts for uncertainty analysis in the vulnerability of elements at risk to landslides. Different semi-quantitative and quantitative methods are compared based on their feasibility in real-world situations, hazard dependency, process stage in vulnerability assessment (i.e. input data, model, output), and applicability within an integrated landslide hazard and risk framework. The resulting observations will help to identify current gaps and future needs in vulnerability assessment, including estimation of uncertainty propagation, transferability of the methods, and development of visualization tools, but also address basic questions such as what uncertainty is and how it can be quantified or treated in a reliable and reproducible way.
Uncertainty Analysis of Consequence Management (CM) Data Products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Brian D.; Eckert-Gallup, Aubrey Celia; Cochran, Lainy Dromgoole
The goal of this project is to develop and execute methods for characterizing uncertainty in data products that are developed and distributed by the DOE Consequence Management (CM) Program. A global approach to this problem is necessary because multiple sources of error and uncertainty from across the CM skill sets contribute to the ultimate production of CM data products. This report presents the methods used to develop a probabilistic framework to characterize this uncertainty and provides results for an uncertainty analysis for a study scenario analyzed using this framework.
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular in this paper, Wishart random matrix theory is applied on a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. Examples of the effects of different levels of uncertainties are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
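A minimal sketch of the Wishart perturbation idea described above: draw random positive-definite variations of a nominal property matrix whose mean is the nominal matrix. The matrix and the dispersion parameter are assumed for illustration.

```python
import numpy as np
from scipy.stats import wishart

# Nominal positive-definite property matrix, e.g. a small stiffness block (illustrative)
K_nominal = np.array([[4.0, 1.0],
                      [1.0, 3.0]])
df = 50  # degrees of freedom: larger df means tighter dispersion about the nominal (assumed)

# Wishart(df, scale) has mean df*scale, so scale = K/df centers the samples on K_nominal
samples = wishart(df=df, scale=K_nominal / df).rvs(size=1000)
print(samples.mean(axis=0).round(2))   # ~ K_nominal
```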
Time-Frequency Analysis of the Dispersion of Lamb Modes
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Seale, Michael D.; Smith, Barry T.
1999-01-01
Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo-Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied to the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along and perpendicular to the fiber direction. In this case, the signals contained only the lowest order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.
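A hedged sketch of the ridge-extraction idea, using an ordinary spectrogram as a simpler stand-in for the pseudo-Wigner-Ville distribution and a toy chirp in place of a dispersed Lamb-mode arrival; the sampling rate, path length and frequency band are all assumed.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs, distance = 1.0e6, 0.5                       # 1 MHz sampling, 0.5 m path (assumed)
t = np.arange(0, 2e-3, 1 / fs)
sig = chirp(t, f0=50e3, f1=300e3, t1=t[-1])     # toy dispersive-looking arrival

f, tt, S = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)
band = (f > 60e3) & (f < 280e3)
arrival = tt[np.argmax(S[band], axis=1)]        # ridge: energy-peak time per frequency
v_group = distance / arrival                    # group velocity = distance / arrival time
print(np.c_[f[band][:5] / 1e3, v_group[:5]])    # kHz vs m/s
```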
Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.
Meyer, Veronika R
2003-09-01
Ishikawa, or cause-and-effect diagrams, help to visualize the parameters that influence a chromatographic analysis. Therefore, they facilitate the set up of the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as a base for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This advantage is given by the loss of information about the parameters that influence the measurement uncertainty.
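The simplified budget described in the last sentences reduces to two terms combined in quadrature; a minimal sketch with illustrative values:

```python
import numpy as np

def combined_uncertainty(rsd_intermediate, u_purity_rel):
    """Intermediate precision and reference-purity terms, both as relative
    standard uncertainties (e.g. 0.012 = 1.2 %), combined in quadrature."""
    return np.hypot(rsd_intermediate, u_purity_rel)

u = combined_uncertainty(rsd_intermediate=0.012, u_purity_rel=0.003)
print(f"u = {100 * u:.2f} %, U(k=2) = {200 * u:.2f} %")
```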
NASA Astrophysics Data System (ADS)
Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.
2014-01-01
The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Until now, no detailed assessment of the uncertainties in the estimates of emissions had been done. We used Monte Carlo simulation to perform such an analysis. We collated information on the uncertainties of each of the model inputs. The uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, but that in Wales and Northern Ireland, the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is), correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
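A hedged sketch of the Monte Carlo propagation for one inventory term, direct N2O from N inputs via EF1; the distributions below are assumed for illustration (in the full inventory, other uncertain inputs damp the effect of halving the EF1 spread to the quoted 10%).

```python
import numpy as np

rng = np.random.default_rng(5)
n_input = 1.0e6                                   # kg N applied (illustrative)

# EF1 sampled lognormally about the IPCC default of ~1% (spread assumed)
ef1 = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=100_000)
emissions = n_input * ef1                         # kg N2O-N, propagated uncertainty

# Same term with the EF1 spread halved, mimicking the 50%-reduction experiment
ef1_halved = rng.lognormal(mean=np.log(0.01), sigma=0.25, size=100_000)
emissions_halved = n_input * ef1_halved

print(np.std(emissions) / np.mean(emissions),
      np.std(emissions_halved) / np.mean(emissions_halved))  # relative spreads
```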
NASA Technical Reports Server (NTRS)
Carpenter, Paul; Curreri, Peter A. (Technical Monitor)
2002-01-01
This course will cover practical applications of the energy-dispersive spectrometer (EDS) to x-ray microanalysis. Topics covered will include detector technology, advances in pulse processing, resolution and performance monitoring, detector modeling, peak deconvolution and fitting, qualitative and quantitative analysis, compositional mapping, and standards. An emphasis will be placed on use of the EDS for quantitative analysis, with discussion of typical problems encountered in the analysis of a wide range of materials and sample geometries.
Uncertainties in internal gas counting
NASA Astrophysics Data System (ADS)
Unterweger, M.; Johansson, L.; Karam, L.; Rodrigues, M.; Yunoki, A.
2015-06-01
The uncertainties in internal gas counting will be broken down into counting uncertainties and gas handling uncertainties. Counting statistics, spectrum analysis, and electronic uncertainties will be discussed with respect to the actual counting of the activity. The effects of the gas handling and quantities of counting and sample gases on the uncertainty in the determination of the activity will be included when describing the uncertainties arising in the sample preparation.
Taylor Dispersion Analysis as a promising tool for assessment of peptide-peptide interactions.
Høgstedt, Ulrich B; Schwach, Grégoire; van de Weert, Marco; Østergaard, Jesper
2016-10-10
Protein-protein and peptide-peptide (self-)interactions are of key importance in understanding the physicochemical behavior of proteins and peptides in solution. However, due to the small size of peptide molecules, characterization of these interactions is more challenging than for proteins. In this work, we show that protein-protein and peptide-peptide interactions can advantageously be investigated by measurement of the diffusion coefficient using Taylor Dispersion Analysis. Through comparison to Dynamic Light Scattering, it was shown that Taylor Dispersion Analysis is well suited for the characterization of protein-protein interactions of solutions of α-lactalbumin and human serum albumin. The peptide-peptide interactions of three selected peptides were then investigated in a concentration range spanning from 0.5 mg/ml up to 80 mg/ml using Taylor Dispersion Analysis. The determination of peptide-peptide interactions indicated that multibody interactions significantly affect these interactions at concentration levels above 25 mg/ml for the two charged peptides. Relative viscosity measurements, performed using the capillary-based setup applied for Taylor Dispersion Analysis, showed that the viscosity of the peptide solutions increased with concentration. Our results indicate that a viscosity difference between run buffer and sample in Taylor Dispersion Analysis may result in overestimation of the measured diffusion coefficient. Thus, Taylor Dispersion Analysis provides a practical, but as yet primarily qualitative, approach to assessment of the colloidal stability of both peptide and protein formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
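For reference, the standard TDA working relation extracts the diffusion coefficient from the taylorgram's mean residence time and temporal variance, D = R_c^2 t_R / (24 sigma_t^2); a minimal sketch with illustrative capillary and peak parameters (not the paper's data):

```python
def tda_diffusion(r_capillary, t_residence, sigma_t):
    """Taylor Dispersion Analysis: D = R_c^2 * t_R / (24 * sigma_t^2), SI units."""
    return r_capillary**2 * t_residence / (24.0 * sigma_t**2)

# 75 um i.d. capillary (radius 37.5 um), 300 s residence time, 6 s peak SD (illustrative)
D = tda_diffusion(r_capillary=37.5e-6, t_residence=300.0, sigma_t=6.0)
print(f"D = {D:.2e} m^2/s")   # a hydrodynamic radius would follow via Stokes-Einstein
```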
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
10 CFR 436.24 - Uncertainty analyses.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle cost...
Environmental engineering calculations involving uncertainties; either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...
Estimation Of TMDLs And Margin Of Safety Under Conditions Of Uncertainty
In TMDL development, an adequate margin of safety (MOS) is required in the calculation process to provide a cushion needed because of uncertainties in the data and analysis. Current practices, however, rarely factor analysis' uncertainty in TMDL development and the MOS is largel...
To address uncertainty associated with the evaluation of vapor intrusion problems, we are working on a three-part strategy that includes: evaluation of uncertainty in model-based assessments; collection of field data; and assessment of sites using EPA and state protocols.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eltoweissy, Mohamed Y.; Du, David H.C.; Gerla, Mario
Mission-Critical Networking (MCN) refers to networking for application domains where life or livelihood may be at risk. Typical application domains for MCN include critical infrastructure protection and operation, emergency and crisis intervention, healthcare services, and military operations. Such networking is essential for safety, security and economic vitality in our complex world, characterized by uncertainty, heterogeneity, emergent behaviors, and the need for reliable and timely response. MCN comprises networking technology, infrastructures and services that may alleviate the risk and directly enable and enhance connectivity for mission-critical information exchange among diverse, widely dispersed, mobile users.
Measurement of H/D ratio and ion temperature on a HT-6M Tokamak
NASA Astrophysics Data System (ADS)
Wei, Lehan; Lin, Xiaodong
1997-01-01
By combining optical fibers with a piezoelectric scanning Fabry-Perot interferometer, the profiles of Hα and Dα have been determined simultaneously in a single Tokamak discharge. Consequently, the hydrogen-to-deuterium ratio and the ion temperature are obtained. Not only is shot-to-shot uncertainty avoided; the results of the experiment also indicate that this instrumentation has the advantages of rapid wavelength scanning, large dispersion, high resolution, and good adaptability to adverse working environments such as a Tokamak site.
NASA Astrophysics Data System (ADS)
Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.
2016-12-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation analysis and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we demonstrate that the 'spup' package is an effective and easy-to-use tool even in a very complex case study, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse the propagation of uncertainties associated with spatial variability of the model driving forces such as rainfall, nitrogen deposition and fertilizer inputs. The uncertainty propagation is analysed for the prediction of emissions of N2O and CO2 for a low-mountain, agriculturally developed catchment in Germany. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could serve as guidance for developing best management practices and model improvement strategies.
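The 'spup' package itself is an R library; the sketch below is a Python analogue of the core idea it implements: draw realizations of a spatially correlated input, run them through a model, and inspect the spread of a spatially aggregated output. The covariance model, correlation lengths and the toy flux model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_sim = 100, 2000

# Exponential covariance on a 1-D transect; 'range_par' is an assumed
# correlation length, not a value from the study.
x = np.linspace(0, 10, n_cells)
def cov(range_par):
    return np.exp(-np.abs(x[:, None] - x[None, :]) / range_par)

def propagate(range_par):
    L = np.linalg.cholesky(cov(range_par) + 1e-10 * np.eye(n_cells))
    rain = 5.0 + L @ rng.standard_normal((n_cells, n_sim))  # mm/day realizations
    flux = 0.3 * rain ** 1.2        # toy nonlinear per-cell emission model
    return flux.sum(axis=0)         # spatially aggregated output

for rp in (0.1, 5.0):
    print(f"range {rp:4.1f}: aggregated flux sd = {propagate(rp).std():.1f}")
```

Increasing the correlation length leaves per-cell uncertainty unchanged but inflates the uncertainty of the aggregated output, which is the correlation effect the study examines.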
NASA Astrophysics Data System (ADS)
Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang
2014-08-01
Flexible air-breathing hypersonic vehicles feature significant uncertainties which pose huge challenges to robust controller designs. In this paper, four major categories of uncertainties are analyzed: uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three, which lumps all uncertainties together and is consequently beneficial for controller synthesis. The fourth uncertainty is additionally considered in stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. In particular, the stability of the nonlinear ESO is also discussed from a Liénard system perspective. Finally, simulations demonstrate the control performance and the uncertainty rejection ability of the robust scheme.
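As a hedged illustration of the uncertainty-compensation idea (not the paper's full TLC/ESO design for the vehicle dynamics), the sketch below implements a linear extended state observer for a scalar plant: the lumped uncertainty is estimated as an extra state z2 and cancelled through the control input. Gains, bandwidth and the "unknown" dynamics are illustrative assumptions.

```python
import numpy as np

# Minimal linear ESO for a first-order plant  ydot = f(t, y) + b0 * u,
# where f lumps unmodeled dynamics and disturbance. Observer states:
# z1 -> y and z2 -> f; gains use the common bandwidth parameterization.
dt, wo, b0 = 1e-3, 50.0, 1.0
beta1, beta2 = 2 * wo, wo**2

def simulate(T=2.0):
    n = int(T / dt)
    y, z1, z2 = 0.0, 0.0, 0.0
    log = np.zeros((n, 2))
    for k in range(n):
        t = k * dt
        f = np.sin(2 * np.pi * t) - 0.5 * y   # "unknown" dynamics
        u = -z2 / b0                           # cancel the estimated uncertainty
        y += dt * (f + b0 * u)                 # plant step (Euler)
        e = y - z1
        z1 += dt * (z2 + b0 * u + beta1 * e)   # observer update
        z2 += dt * (beta2 * e)
        log[k] = (f, z2)
    return log

log = simulate()
print("final estimation error:", abs(log[-1, 0] - log[-1, 1]))
```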
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
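Propagation of individual measurement uncertainties through a defining functional expression, including correlated errors, is commonly written to first order as u_r² = gᵀ C g, with g the gradient of the expression and C the covariance matrix of the measurements. A small sketch for a dynamic-pressure example; the function and uncertainty values are assumptions, not the paper's calibration data.

```python
import numpy as np

def propagate(grad, cov):
    """First-order (delta-method) uncertainty of r = f(x):
    u_r^2 = g^T C g, where off-diagonal terms of C capture
    correlated calibration errors."""
    g = np.asarray(grad)
    return float(np.sqrt(g @ cov @ g))

# Example: dynamic pressure q = 0.5 * rho * v**2 at rho = 1.2 kg/m^3, v = 30 m/s
rho, v = 1.2, 30.0
grad = [0.5 * v**2, rho * v]            # [dq/drho, dq/dv]
cov = np.array([[0.01**2, 0.0],         # u_rho = 0.01 kg/m^3
                [0.0,     0.5**2]])     # u_v = 0.5 m/s, uncorrelated here
print(f"u_q = {propagate(grad, cov):.1f} Pa")  # ~95% interval: +/- 2*u_q
```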
Simulation of time-dispersion spectral device with sample spectra accumulation
NASA Astrophysics Data System (ADS)
Zhdanov, Arseny; Khansuvarov, Ruslan; Korol, Georgy
2014-09-01
This research is conducted in order to design a spectral device for analysis of the power spectrum of light sources. The spectral device should process radiation from sources with which direct contact is either impossible or undesirable, such as the jet blast of an aircraft or optical radiation in the metallurgical and textile industries. In the proposed spectral device, optical radiation is guided out of the unfavorable environment via a piece of optical fiber with high dispersion. For the analysis, the radiation must be sampled as short pulses. The dispersion properties of such optical fiber cause spectral decomposition of the input optical pulses: the faster the group delay varies with frequency, the stronger the spectral decomposition effect. This effect allows using an optical fiber with high dispersion as the major element of the proposed spectral device. The sample duration must be much shorter than the group delay difference of the dispersive system, and in the given frequency range the group delay characteristic has to be linear. The frequency range is 400-500 THz for typical optical fiber; using photonic-crystal fiber (PCF) gives a much wider spectral range for analysis. In this paper we simulate single-pulse transmission through a dispersive system with a linear dispersion characteristic, with accumulation of the quadratically detected output responses. The simulation studies the influence of the slope of the fiber's dispersion characteristic on the spectral measurement results, and considers the impact of pulse duration and group delay difference on the output pulse shape and duration. The results identify the most suitable dispersion characteristic, which allows choosing the structure of the PCF, the major element of the time-dispersion spectral analysis method, and the required number of samples for a reliable assessment of the measured spectrum.
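The wavelength-to-time mapping that underlies such a device can be sketched with the linear group-delay relation Δt = D L (λ - λ0). The fiber parameters below are assumptions of realistic magnitude, not the paper's design values.

```python
# Dispersive time-stretch mapping: in a fiber of length L with an (assumed
# linear) dispersion parameter D, the group delay of wavelength lam relative
# to lam0 is  dt = D * L * (lam - lam0), so detector arrival time maps
# linearly back to wavelength.
D = 100e-12 / (1e-9 * 1e3)     # 100 ps/(nm km) high-dispersion fiber, in SI (s/m/m)
L = 5e3                         # 5 km of fiber
lam0 = 600e-9                   # reference wavelength (m)

def wavelength_from_arrival(dt):
    return lam0 + dt / (D * L)

# A detected feature arriving 25 ns after the lam0 arrival:
print(f"{wavelength_from_arrival(25e-9) * 1e9:.0f} nm")   # -> 650 nm
```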
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_{*}), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction ({F}_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σdyn), disk stellar mass-to-light ratio (Υ^disk_{*}), and disk maximality ({F}_{*,max}^disk≡ V^disk_{*,max}/ V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
Pisso, I; Myhre, C Lund; Platt, S M; Eckhardt, S; Hermansen, O; Schmidbauer, N; Mienert, J; Vadakkepuliyambatta, S; Bauguitte, S; Pitt, J; Allen, G; Bower, K N; O'Shea, S; Gallagher, M W; Percival, C J; Pyle, J; Cain, M; Stohl, A
2016-12-16
Methane stored in seabed reservoirs such as methane hydrates can reach the atmosphere in the form of bubbles or dissolved in water. Hydrates could destabilize with rising temperature, further increasing greenhouse gas emissions in a warming climate. To assess the impact of oceanic emissions from the area west of Svalbard, where methane hydrates are abundant, we used measurements collected with a research aircraft (Facility for Airborne Atmospheric Measurements) and a ship (Helmer Hansen) during the summer 2014, and from the Zeppelin Observatory for the full year. We present a model-supported analysis of the atmospheric CH4 mixing ratios measured by the different platforms. To address uncertainty about where CH4 emissions actually occur, we explored three scenarios: areas with known seeps, a hydrate stability model, and an ocean depth criterion. We then used a budget analysis and a Lagrangian particle dispersion model to compare measurements taken upwind and downwind of the potential CH4 emission areas. We found small differences between the CH4 mixing ratios measured upwind and downwind of the potential emission areas during the campaign. By taking into account measurement and sampling uncertainties and by determining the sensitivity of the measured mixing ratios to potential oceanic emissions, we provide upper limits for the CH4 fluxes. The CH4 flux during the campaign was small, with an upper limit of 2.5 nmol m-2 s-1 in the stability model scenario. The Zeppelin Observatory data for 2014 suggest CH4 fluxes from the Svalbard continental platform below 0.2 Tg yr-1. All estimates are in the lower range of values previously reported.
NASA Astrophysics Data System (ADS)
Rhome, J. R.; Niyogi, D. D. S.; Raman, S.
There is increasing interest regarding the fate of nitrogenous compounds emitted from agricultural activities in the southeastern United States. Varying climate, topography, and proximity to the Atlantic Ocean particularly complicate the problem. An increased understanding of the interaction of synoptic-scale flow with mesoscale circulations would constitute a significant improvement in the assessment of regional-scale transport and deposition potential. This knowledge is necessary to facilitate current and future modeling attempts in the region, as well as for planning future monitoring sites, in order to develop a cohesive regional policy for abatement strategies. The eastern portion of North Carolina is used as a case example due to its high, localized emission of nitrogen compounds from agricultural waste. Three periods corresponding to three different seasons were studied: July 2-7, 1998; October 5-11, 1998; and December 12-19, 1998. Surface wind and thermodynamic patterns were analyzed using surface observing stations and archived model analysis results centered over eastern North Carolina. Diurnal and seasonal patterns were identified for dispersion and concentration values obtained using an air pollution transport and dispersion model. This mesoscale information was used to draw qualitative conclusions regarding the possible trends and deviations in the dynamic trajectories as well as the resulting near-surface concentrations and deposition potential in eastern North Carolina. Results show that highly variable seasonal and diurnal atmospheric circulations characterize the study domain. These variations can significantly impact the transport and fate of pollutants released in this region. Generally, summer provides the highest potential for localized deposition, while fall can provide opportunity for long-range transport. The results also suggest that mean climatological or seasonally averaged flow patterns may not be sufficient for analyzing the fate of the agricultural releases in this region. At the very least, a mean- and variance-based analysis is required to capture the climatology of the dispersion and deposition patterns. These patterns in eastern North Carolina appear to be sensitive to the strength and location of air mass boundaries along the coastal plain, indicating diverse scale interactions affecting the variability and uncertainty in the regional pollutant transport.
QUANTIFYING OBSERVATIONAL PROJECTION EFFECTS USING MOLECULAR CLOUD SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, Christopher N.; Offner, Stella S.R.; Shetty, Rahul
2013-11-10
The physical properties of molecular clouds are often measured using spectral-line observations, which provide the only probes of the clouds' velocity structure. It is hard, though, to assess whether and to what extent intensity features in position-position-velocity (PPV) space correspond to 'real' density structures in position-position-position (PPP) space. In this paper, we create synthetic molecular cloud spectral-line maps of simulated molecular clouds, and present a new technique for measuring the reality of individual PPV structures. Using a dendrogram algorithm, we identify hierarchical structures in both PPP and PPV space. Our procedure projects density structures identified in PPP space into corresponding intensity structures in PPV space and then measures the geometric overlap of the projected structures with structures identified from the synthetic observation. The fractional overlap between a PPP and PPV structure quantifies how well the synthetic observation recovers information about the three-dimensional structure. Applying this machinery to a set of synthetic observations of CO isotopes, we measure how well spectral-line measurements recover mass, size, velocity dispersion, and virial parameter for a simulated star-forming region. By disabling various steps of our analysis, we investigate how much opacity, chemistry, and gravity affect measurements of physical properties extracted from PPV cubes. For the simulations used here, which offer a decent, but not perfect, match to the properties of a star-forming region like Perseus, our results suggest that superposition induces a ∼40% uncertainty in masses, sizes, and velocity dispersions derived from 13CO (J = 1-0). As would be expected, superposition and confusion are worst in regions where the filling factor of emitting material is large. The virial parameter is most affected by superposition, such that estimates of the virial parameter derived from PPV and PPP information typically disagree by a factor of ∼2. This uncertainty makes it particularly difficult to judge whether gravitational or kinetic energy dominates a given region, since the majority of virial parameter measurements fall within a factor of two of the equipartition level α ∼ 2.
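The fractional-overlap statistic can be sketched as the shared fraction between two boolean structure masks on a common PPV grid; the dendrogram extraction itself is omitted here, and the paper's actual metric may weight by intensity rather than counting voxels.

```python
import numpy as np

def fractional_overlap(projected_ppp, ppv):
    """Fraction of a PPV structure's voxels shared with a PPP structure
    projected into PPV space. Both inputs are boolean masks on the same
    grid; an intensity-weighted variant would replace the counts with sums."""
    shared = np.logical_and(projected_ppp, ppv).sum()
    return shared / ppv.sum()

# Two toy cubic structures that half-overlap along the velocity axis:
a = np.zeros((10, 10, 10), bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), bool); b[4:8, 2:6, 2:6] = True
print(f"overlap = {fractional_overlap(a, b):.2f}")   # 0.50 here
```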
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations in one segment of the lower Hudson River over the period 1977-1991. The term 'robust sensitivity studies' refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically those parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
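Of the techniques listed, rank correlation is the simplest to sketch: the Spearman correlation between Monte Carlo samples of each input and the model output ranks the inputs by influence. The stand-in model and parameter names below are hypothetical, not PCHEPM quantities.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 500

# Hypothetical stand-in for a contaminant fate model: the output depends
# strongly on a partition coefficient, weakly on a settling velocity.
k_oc = rng.lognormal(0.0, 0.5, n)       # hypothetical partition coefficient
v_s = rng.normal(1.0, 0.2, n)           # hypothetical settling velocity
y = 10.0 * k_oc + 0.5 * v_s + rng.normal(0, 0.5, n)

for name, x in [("k_oc", k_oc), ("v_s", v_s)]:
    rho, _ = spearmanr(x, y)
    print(f"{name}: Spearman rho = {rho:+.2f}")   # larger |rho| = more dominant
```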
A stochastic approach to uncertainty quantification in residual moveout analysis
NASA Astrophysics Data System (ADS)
Johng-Ay, T.; Landa, E.; Dossou-Gbété, S.; Bordes, L.
2015-06-01
Oil and gas exploration and production usually rely on the interpretation of a single seismic image, which is obtained from observed data. However, the statistical nature of seismic data and the various approximations and assumptions are sources of uncertainties which may corrupt the evaluation of parameters. The quantification of these uncertainties is a major issue, intended to support decisions that have important social and commercial implications. Residual moveout analysis, an important step in seismic data processing, is usually performed by a deterministic approach. In this paper we discuss a Bayesian approach to the uncertainty analysis.
Social Mating System and Sex-Biased Dispersal in Mammals and Birds: A Phylogenetic Analysis
Mabry, Karen E.; Shelley, Erin L.; Davis, Katie E.; Blumstein, Daniel T.; Van Vuren, Dirk H.
2013-01-01
The hypothesis that patterns of sex-biased dispersal are related to social mating system in mammals and birds has gained widespread acceptance over the past 30 years. However, two major complications have obscured the relationship between these two behaviors: 1) dispersal frequency and dispersal distance, which measure different aspects of the dispersal process, have often been confounded, and 2) the relationship between mating system and sex-biased dispersal in these vertebrate groups has not been examined using modern phylogenetic comparative methods. Here, we present a phylogenetic analysis of the relationship between mating system and sex-biased dispersal in mammals and birds. Results indicate that the evolution of female-biased dispersal in mammals may be more likely on monogamous branches of the phylogeny, and that females may disperse farther than males in socially monogamous mammalian species. However, we found no support for a relationship between social mating system and sex-biased dispersal in birds when the effects of phylogeny are taken into consideration. We caution that although there are larger-scale behavioral differences in mating system and sex-biased dispersal between mammals and birds, mating system and sex-biased dispersal are far from perfectly associated within these taxa.
NASA Astrophysics Data System (ADS)
Schichtel, Bret A.; Barna, Michael G.; Gebhart, Kristi A.; Malm, William C.
The Big Bend Regional Aerosol and Visibility Observational (BRAVO) study was designed to determine the sources of haze at Big Bend National Park, Texas, using a combination of source and receptor models. BRAVO included an intensive monitoring campaign from July to October 1999 that included the release of perfluorocarbon tracers from four locations at distances of 230-750 km from Big Bend, measured at 24 sites. The tracer measurements near Big Bend were used to evaluate the dispersion mechanisms in the REMSAD Eulerian model and the CAPITA Monte Carlo (CMC) Lagrangian model used in BRAVO. Both models used 36 km MM5 wind fields as input. The CMC model also used a combination of routinely available 80 and 190 km wind fields from the National Weather Service's National Centers for Environmental Prediction (NCEP) as input. A model's performance is limited by inherent uncertainties due to errors in the tracer concentrations and by a model's inability to simulate sub-resolution variability. A range for this inherent uncertainty was estimated by comparing tracer data at nearby monitoring sites. It was found that the REMSAD and CMC models, using the MM5 wind field, produced performance statistics generally within this inherent uncertainty. The CMC simulation using the NCEP wind fields could reproduce the timing of tracer impacts at Big Bend, but not the concentration values, due to a systematic underestimation. It appears that the underestimation was partly due to excessive vertical dilution from high mixing depths. The model simulations were more sensitive to the input wind fields than to the models' different dispersion mechanisms. Comparisons of REMSAD to CMC tracer simulations using the MM5 wind fields had correlations between 0.75 and 0.82, depending on the tracer, but the tracer simulations using the two wind fields in the CMC model had correlations between 0.37 and 0.5.
Atmospheric CO2 observations and models suggest strong carbon uptake by forests in New Zealand
NASA Astrophysics Data System (ADS)
Steinkamp, Kay; Mikaloff Fletcher, Sara E.; Brailsford, Gordon; Smale, Dan; Moore, Stuart; Keller, Elizabeth D.; Baisden, W. Troy; Mukai, Hitoshi; Stephens, Britton B.
2017-01-01
A regional atmospheric inversion method has been developed to determine the spatial and temporal distribution of CO2 sinks and sources across New Zealand for 2011-2013. This approach infers net air-sea and air-land CO2 fluxes from measurement records, using back-trajectory simulations from the Numerical Atmospheric dispersion Modelling Environment (NAME) Lagrangian dispersion model, driven by meteorology from the New Zealand Limited Area Model (NZLAM) weather prediction model. The inversion uses in situ measurements from two fixed sites, Baring Head on the southern tip of New Zealand's North Island (41.408° S, 174.871° E) and Lauder in the central South Island (45.038° S, 169.684° E), and shipboard data from monthly cruises between Japan, New Zealand, and Australia. A range of scenarios is used to assess the sensitivity of the inversion method to underlying assumptions and to ensure robustness of the results. The results indicate a strong seasonal cycle in terrestrial land fluxes from the South Island of New Zealand, especially in western regions covered by indigenous forest, suggesting higher photosynthetic and respiratory activity than is evident in the current a priori land process model. On the annual scale, the terrestrial biosphere in New Zealand is estimated to be a net CO2 sink, removing 98 (±37) Tg CO2 yr-1 from the atmosphere on average during 2011-2013. This sink is much larger than the reported 27 Tg CO2 yr-1 from the national inventory for the same time period. The difference can be partially reconciled when factors related to forest and agricultural management and exports, fossil fuel emission estimates, hydrologic fluxes, and soil carbon change are considered, but some differences are likely to remain. Baseline uncertainty, model transport uncertainty, and limited sensitivity to the northern half of the North Island are the main contributors to flux uncertainty.
NASA Astrophysics Data System (ADS)
Gilbert, Karoline M.; Tollerud, Erik; Beaton, Rachael L.; Guhathakurta, Puragra; Bullock, James S.; Chiba, Masashi; Kalirai, Jason S.; Kirby, Evan N.; Majewski, Steven R.; Tanaka, Mikito
2018-01-01
We present the velocity dispersion of red giant branch stars in M31’s halo, derived by modeling the line-of-sight velocity distribution of over 5000 stars in 50 fields spread throughout M31’s stellar halo. The data set was obtained as part of the Spectroscopic and Photometric Landscape of Andromeda’s Stellar Halo (SPLASH) Survey, and covers projected radii of 9 to 175 kpc from M31’s center. All major structural components along the line of sight in both the Milky Way (MW) and M31 are incorporated in a Gaussian Mixture Model, including all previously identified M31 tidal debris features in the observed fields. The probability that an individual star is a constituent of M31 or the MW, based on a set of empirical photometric and spectroscopic diagnostics, is included as a prior probability in the mixture model. The velocity dispersion of stars in M31’s halo is found to decrease only mildly with projected radius, from 108 km s‑1 in the innermost radial bin (8.2 to 14.1 kpc) to ∼80 to 90 km s‑1 at projected radii of ∼40–130 kpc, and can be parameterized with a power law of slope ‑0.12 ± 0.05. The quoted uncertainty on the power-law slope reflects only the precision of the method, although other sources of uncertainty we consider contribute negligibly to the overall error budget. The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallimore, David L.
2012-06-13
The measurement uncertainty estimation associated with trace element analysis of impurities in U and Pu was evaluated using the Guide to the Expression of Uncertainty in Measurement (GUM). In this evaluation the uncertainty sources were identified and the standard uncertainties for the components were categorized as either Type A or Type B. The combined standard uncertainty was calculated and a coverage factor k = 2 was applied to obtain the expanded uncertainty, U. The ICP-AES and ICP-MS methods used were developed for the multi-element analysis of U and Pu samples. A typical analytical run consists of standards, process blanks, samples, matrix spiked samples, post-digestion spiked samples and independent calibration verification standards. The uncertainty estimation was performed on U and Pu samples that had been analyzed previously as part of the U and Pu Sample Exchange Programs. Control chart results and data from the U and Pu metal exchange programs were combined with the GUM into a concentration-dependent estimate of the expanded uncertainty. Trace element uncertainties obtained using this model were compared to those obtained for trace element results as part of the Exchange programs. This process was completed for all trace elements determined to be above the detection limit for the U and Pu samples.
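The GUM recipe the abstract describes reduces to combining Type A and Type B standard uncertainties in quadrature and expanding with k = 2 (roughly 95% coverage). A minimal sketch with illustrative component values, not the actual exchange-program data:

```python
import numpy as np

# Type A: standard uncertainty of the mean of repeat measurements.
replicates = np.array([10.2, 10.5, 10.1, 10.4, 10.3])    # ug/g, illustrative
u_A = replicates.std(ddof=1) / np.sqrt(len(replicates))

# Type B: components taken from certificates, balance specs, dilution, etc.
u_B = np.array([0.05, 0.03, 0.08])                       # ug/g, illustrative

u_c = np.sqrt(u_A**2 + np.sum(u_B**2))   # combined standard uncertainty
U = 2.0 * u_c                            # expanded uncertainty, k = 2
print(f"result = {replicates.mean():.2f} +/- {U:.2f} ug/g (k=2)")
```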
Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach
NASA Astrophysics Data System (ADS)
Rodrigues, D. B. B.
2015-12-01
Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of the Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multi-model framework and provided by each model uncertainty estimation approach. The method is general and can be easily extended, forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
Genomic evidence for an African expansion of anatomically modern humans by a Southern route.
Ghirotto, Silvia; Penso-Dolfin, Luca; Barbujani, Guido
2011-08-01
There is general agreement among scientists about a recent (less than 200,000 yrs ago) African origin of anatomically modern humans, whereas there is still uncertainty about whether, and to what extent, they admixed with archaic populations, which thus may have contributed to the modern populations' gene pools. Data on cranial morphology have been interpreted as suggesting that, before the main expansion from Africa through the Near East, anatomically modern humans may also have taken a Southern route from the Horn of Africa through the Arabian peninsula to India, Melanesia and Australia, about 100,000 yrs ago. This view was recently supported by archaeological findings demonstrating human presence in Eastern Arabia >90,000 yrs ago. In this study we analyzed genetic variation at 111,197 nuclear SNPs in nine populations (Kurumba, Chenchu, Kamsali, Madiga, Mala, Irula, Dalit, Chinese, Japanese), chosen because their genealogical relationships are expected to differ under the alternative models of expansion (single vs. multiple dispersals). We calculated correlations between genomic distances, and geographic distances estimated under the alternative assumptions of a single dispersal, or multiple dispersals, and found a significantly stronger association for the multiple dispersal model. If confirmed, this result would cast doubts on the possibility that some non-African populations (i.e., those whose ancestors expanded through the Southern route) may have had any contacts with Neandertals.
Assessing uncertainties in surface water security: An empirical multimodel approach
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km2 agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
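The block bootstrap step can be sketched as resampling the model residual series in contiguous blocks, so that autocorrelation in the errors is preserved when constructing confidence bands. The block length, residual model and flow series below are illustrative assumptions; the paper's two-component residual treatment is not reproduced.

```python
import numpy as np

def block_bootstrap(residuals, block_len, rng):
    """Resample a residual series in contiguous blocks, preserving
    autocorrelation (simple non-overlapping variant; the paper's exact
    scheme may differ)."""
    n = len(residuals)
    starts = rng.integers(0, n - block_len, size=n // block_len + 1)
    blocks = [residuals[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(7)
q_sim = np.sin(np.linspace(0, 20, 1000)) + 2.0          # toy simulated flow
resid = 0.1 * rng.standard_normal(1000).cumsum() / 10   # autocorrelated errors

# 95% uncertainty band on the simulated series from 500 bootstrap replicates
reps = np.array([q_sim + block_bootstrap(resid, 30, rng) for _ in range(500)])
lo, hi = np.percentile(reps, [2.5, 97.5], axis=0)
print(f"mean band width: {np.mean(hi - lo):.3f}")
```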
The neutron transmission of natFe, 197Au and natW
NASA Astrophysics Data System (ADS)
Beyer, Roland; Junghans, Arnd R.; Schillebeeckx, Peter; Sirakov, Ivan; Song, Tae-Yung; Bemmerer, Daniel; Capote, Roberto; Ferrari, Anna; Hartmann, Andreas; Hannaske, Ronald; Heyse, Jan; Il Kim, Hyeon; Woon Kim, Jong; Kögler, Toni; Woo Lee, Cheol; Lee, Young-Ouk; Massarczyk, Ralph; Müller, Stefan E.; Reinhardt, Tobias P.; Röder, Marko; Schmidt, Konrad; Schwengner, Ronald; Szücs, Tamás; Takács, Marcell P.; Wagner, Andreas; Wagner, Louis; Yang, Sung-Chul
2018-05-01
Neutron total cross sections of natFe, 197Au and natW have been measured at the nELBE neutron time-of-flight facility in the energy range 0.15-8 MeV, with an uncertainty due to counting statistics of up to 2% and a total uncertainty due to systematic effects of 1%. The neutrons are produced with the superconducting electron accelerator ELBE using a liquid lead circuit as photo-neutron target. By periodic sample-in/sample-out measurements, the transmission of the sample materials has been determined using a low-threshold plastic scintillation detector. The resulting effective total cross sections show good agreement with previously measured data that cover only part of the energy range available at nELBE. The results have also been compared to evaluated library files and recent calculations based on a dispersive coupled-channel optical model potential.
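A transmission measurement converts to an effective total cross section through T = exp(-n σ), with n the areal number density of the sample. A small sketch with an assumed sample thickness and transmission value, not the measured nELBE data:

```python
import numpy as np

def total_cross_section(T, n_areal):
    """Effective total cross section from a transmission measurement:
    T = exp(-n * sigma)  =>  sigma = -ln(T) / n,
    with n the areal number density in atoms per barn."""
    return -np.log(T) / n_areal

# Hypothetical natFe sample: 2 cm of iron at ~8.49e22 atoms/cm^3
n_areal = 8.49e22 * 2.0 * 1e-24     # atoms per barn (1 b = 1e-24 cm^2)
print(f"sigma = {total_cross_section(T=0.55, n_areal=n_areal):.2f} b")
```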
Effect of Refractive Index Variation on Two-Wavelength Interferometry for Fluid Measurements
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.
1998-01-01
Two-wavelength interferometry can in principle be used to measure changes in both temperature and concentration in a fluid, but measurement errors may be large if the fluid dispersion is small. This paper quantifies the effects of uncertainties in dn/dT and dn/dC on the measured temperature and concentration when using the simple expression dn = (dn/dT)dT + (dn/dC)dC. For the data analyzed here, ammonium chloride in water from -5 to 10 °C over a concentration range of 2-14% and for wavelengths 514.5 and 633 nm, it is shown that the gradients must be known to within 0.015% to produce a modest 10% uncertainty in the measured temperature and concentration. These results show that great care must be taken to ensure the accuracy of refractive index gradients when using two-wavelength interferometry for the simultaneous measurement of temperature and concentration.
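Recovering dT and dC from two measured index changes amounts to solving a 2×2 linear system whose rows differ only through the fluid's dispersion; when dispersion is small the matrix is nearly singular and gradient errors are strongly amplified. A sketch with placeholder gradient values of realistic magnitude, not the measured NH4Cl gradients:

```python
import numpy as np

# Solving  dn_i = (dn/dT)_i dT + (dn/dC)_i dC  at two wavelengths.
A = np.array([[-1.0e-4, 1.8e-3],    # [dn/dT, dn/dC] at 514.5 nm (placeholder)
              [-1.1e-4, 1.7e-3]])   # [dn/dT, dn/dC] at 633 nm (placeholder)
dn = np.array([2.0e-5, 1.5e-5])     # measured refractive index changes

dT, dC = np.linalg.solve(A, dn)
print(f"dT = {dT:.3f} K, dC = {dC:.5f}")

# Small dispersion makes the rows nearly parallel; the condition number
# shows how errors in the gradients amplify into dT, dC errors.
print(f"cond(A) = {np.linalg.cond(A):.0f}")
```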
Sugimoto, Tomohiro
2016-10-01
This paper presents a nondestructive and non-exact-index-matching method for measuring the refractive index distribution of a glass molded lens with high refractivity. The method measures two-wavelength wavefronts of a test lens immersed in a liquid with a refractive index dispersion different from that of the test lens and calculates the refractive index distribution by eliminating the refractive index distribution error caused by the shape error of the test lens. The estimated uncertainties of the refractive index distributions of test lenses with nd ≈ 1.77 and nd ≈ 1.85 were 1.9×10⁻⁵ RMS and 2.4×10⁻⁵ RMS, respectively. I validated the proposed method by evaluating the agreement between the estimated uncertainties and experimental values.
NASA Astrophysics Data System (ADS)
Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em
2017-11-01
The fractional advection-dispersion equation (FADE) can describe accurately the solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.
Precise CCD positions of Triton in 2014-2016 from the Gaia DR1
NASA Astrophysics Data System (ADS)
Wang, N.; Peng, Q. Y.; Peng, H. W.; Zhang, Q. F.
2018-04-01
755 CCD observations have been reduced to derive precise positions of Triton, the first satellite of Neptune. The observations were made with the 1 m telescope at Yunnan Observatory over 15 nights during the years 2014-2016. The theoretical position of Triton was retrieved from the Jet Propulsion Laboratory Horizons system. Our results show that when the newest Gaia catalogue (Gaia DR1) is used as the reference, the mean O-C (observed minus computed) residuals are about 0.042 and -0.006 arcsec, with dispersions of 0.012 and 0.012 arcsec, in right ascension and declination, respectively. The dispersions improve very significantly when Gaia DR1 is used. However, the agreement in right ascension is not as good as that in declination; the reason might be the uncertainty of the planetary ephemeris. More observations are needed to confirm this.
The magnetic field and turbulence of the cosmic web measured using a brilliant fast radio burst.
Ravi, V; Shannon, R M; Bailes, M; Bannister, K; Bhandari, S; Bhat, N D R; Burke-Spolaor, S; Caleb, M; Flynn, C; Jameson, A; Johnston, S; Keane, E F; Kerr, M; Tiburzi, C; Tuntsov, A V; Vedantham, H K
2016-12-09
Fast radio bursts (FRBs) are millisecond-duration events thought to originate beyond the Milky Way galaxy. Uncertainty surrounding the burst sources, and their propagation through intervening plasma, has limited their use as cosmological probes. We report on a mildly dispersed (dispersion measure 266.5 ± 0.1 parsecs per cubic centimeter), exceptionally intense (120 ± 30 janskys), linearly polarized, scintillating burst (FRB 150807) that we directly localize to 9 square arc minutes. On the basis of a low Faraday rotation (12.0 ± 0.7 radians per square meter), we infer negligible magnetization in the circum-burst plasma and constrain the net magnetization of the cosmic web along this sightline to <21 nanogauss, parallel to the line-of-sight. The burst scintillation suggests weak turbulence in the ionized intergalactic medium.
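The dispersion measure quoted above sets the frequency-dependent arrival delay used to de-disperse such bursts, via the standard cold-plasma relation t ≈ 4.149 ms × DM / ν²(GHz). A one-line sketch:

```python
# Cold-plasma dispersion delay: a burst at frequency nu (GHz) arrives later
# than at infinite frequency by  t = 4.149 ms * DM / nu^2,  DM in pc cm^-3.
def dispersion_delay_ms(dm, nu_ghz):
    return 4.149 * dm / nu_ghz**2

dm = 266.5  # dispersion measure of FRB 150807
for nu in (1.4, 0.8):
    print(f"{nu} GHz: {dispersion_delay_ms(dm, nu):.0f} ms")
```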
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Holistic uncertainty analysis in river basin modeling for climate vulnerability assessment
NASA Astrophysics Data System (ADS)
Taner, M. U.; Wi, S.; Brown, C.
2017-12-01
The challenges posed by an uncertain future climate are a prominent concern for water resources managers. A number of frameworks exist for assessing the impacts of climate-related uncertainty, including internal climate variability and anthropogenic climate change, such as scenario-based and vulnerability-based approaches. While in many cases climate uncertainty may be dominant, other factors such as the future evolution of the river basin, the hydrologic response, and reservoir operations are potentially significant sources of uncertainty. While uncertainty associated with modeling the hydrologic response has received attention, very little attention has focused on the range of uncertainty and possible effects of the water resources infrastructure and management. This work presents a holistic framework that allows analysis of climate, hydrologic and water management uncertainty in water resources systems analysis, with the aid of a water system model designed to integrate component models for hydrologic processes and water management activities. The uncertainties explored include those associated with climate variability and change, hydrologic model parameters, and water system operation rules. A Bayesian framework is used to quantify and model the uncertainties at each modeling step in an integrated fashion, including prior and likelihood information about model parameters. The framework is demonstrated in a case study for the St. Croix Basin, located at the border of the United States and Canada.
Kramer, Karen L; Schacht, Ryan; Bell, Adrian
2017-09-19
Small populations are susceptible to high genetic loads and random fluctuations in birth and death rates. While these selective forces can adversely affect their viability, small populations persist across taxa. Here, we investigate the resilience of small groups to demographic uncertainty, and specifically to fluctuations in adult sex ratio (ASR), partner availability and dispersal patterns. Using 25 years of demographic data for two Savannah Pumé groups of South American hunter-gatherers, we show that in small human populations: (i) ASRs fluctuate substantially from year to year, but do not consistently trend in a sex-biased direction; (ii) the primary driver of local variation in partner availability is stochasticity in the sex ratio at maturity; and (iii) dispersal outside of the group is an important behavioural means to mediate locally constrained mating options. To then simulate conditions under which dispersal outside of the local group may have evolved, we develop two mathematical models. Model results predict that if the ASR is biased, the globally rarer sex should disperse. The model's utility is then evaluated by applying our empirical data to this central prediction. The results are consistent with the observed hunter-gatherer pattern of variation in the sex that disperses. Together, these findings offer an alternative explanation to resource provisioning for the evolution of traits central to human sociality (e.g. flexible dispersal, bilocal post-marital residence and cooperation across local groups). We argue that in small populations, looking outside of one's local group is necessary to find a mate and that, motivated by ASR imbalance, the alliances formed to facilitate the movement of partners are an important foundation for the human-typical pattern of network formation across local groups. This article is part of the themed issue 'Adult sex ratios and reproductive decisions: a critical re-examination of sex differences in human and animal societies'.
Time-Frequency Analysis of the Dispersion of Lamb Modes
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Seale, Michael D.; Smith, Barry T.
1999-01-01
Accurate knowledge of the velocity dispersion of Lamb modes is important for ultrasonic nondestructive evaluation methods used in detecting and locating flaws in thin plates and in determining their elastic stiffness coefficients. Lamb mode dispersion is also important in the acoustic emission technique for accurately triangulating the location of emissions in thin plates. In this research, the ability to characterize Lamb mode dispersion through a time-frequency analysis (the pseudo Wigner-Ville distribution) was demonstrated. A major advantage of time-frequency methods is the ability to analyze acoustic signals containing multiple propagation modes, which overlap and superimpose in the time domain signal. By combining time-frequency analysis with a broadband acoustic excitation source, the dispersion of multiple Lamb modes over a wide frequency range can be determined from as little as a single measurement. In addition, the technique provides a direct measurement of the group velocity dispersion. The technique was first demonstrated in the analysis of a simulated waveform in an aluminum plate in which the Lamb mode dispersion was well known. Portions of the dispersion curves of the A0, A1, S0, and S2 Lamb modes were obtained from this one waveform. The technique was also applied to the analysis of experimental waveforms from a unidirectional graphite/epoxy composite plate. Measurements were made both along and perpendicular to the fiber direction. In this case, the signals contained only the lowest-order symmetric and antisymmetric modes. A least squares fit of the results from several source-to-detector distances was used. Theoretical dispersion curves were calculated and are shown to be in good agreement with experimental results.
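The paper uses the pseudo Wigner-Ville distribution; the sketch below substitutes a plain spectrogram to show the same extraction idea: locate the arrival time of peak energy at each frequency and convert it to group velocity with a known propagation distance. The chirp merely stands in for a dispersive arrival, and all parameters are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram, chirp

fs, d = 1e6, 0.5                           # sample rate (Hz), propagation distance (m)
t = np.arange(0, 2e-3, 1 / fs)
x = chirp(t, f0=200e3, t1=2e-3, f1=50e3)   # synthetic dispersive-looking arrival

# Time-frequency map, then the energy ridge: arrival time of peak energy per frequency
f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=240)
ridge = tt[np.argmax(S, axis=1)]

band = (f > 50e3) & (f < 200e3)            # frequencies actually excited
v_g = d / ridge[band]                      # group velocity vs frequency
print(f"v_g range: {v_g.min():.0f} - {v_g.max():.0f} m/s")
```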
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
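A minimal sketch of the hybrid propagation idea, reduced to a two-event OR gate with top-event probability P = 1 - (1 - p1)(1 - p2): one basic event carries probabilistic (lognormal) epistemic uncertainty and the other a triangular possibility distribution processed by alpha-cuts. All distribution parameters are assumptions for illustration, not values from the article.

```python
# Hybrid probabilistic-possibilistic propagation through a two-event OR gate.
import numpy as np

rng = np.random.default_rng(0)
n_mc = 1000
alphas = np.linspace(0.0, 1.0, 21)

# Possibilistic event: triangular possibility on [a, c] with core b (assumed)
a, b, c = 1e-4, 1e-3, 1e-2
def alpha_cut(alpha):
    # interval of values whose possibility is >= alpha
    return a + alpha * (b - a), c - alpha * (c - b)

p1 = rng.lognormal(mean=np.log(5e-3), sigma=0.5, size=n_mc)  # probabilistic event

# For each Monte Carlo draw, push every alpha-cut through the OR gate,
# giving a fuzzy interval for the top-event probability (monotone model,
# so interval endpoints map to endpoints).
lo = np.empty((n_mc, alphas.size)); hi = np.empty_like(lo)
for j, al in enumerate(alphas):
    p2_lo, p2_hi = alpha_cut(al)
    lo[:, j] = 1 - (1 - p1) * (1 - p2_lo)
    hi[:, j] = 1 - (1 - p1) * (1 - p2_hi)

# Averaged alpha=0 bounds give belief/plausibility-like limits on P(top).
print("support-level bounds:", lo[:, 0].mean(), hi[:, 0].mean())
```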
Robustness analysis of non-ordinary Petri nets for flexible assembly systems
NASA Astrophysics Data System (ADS)
Hsieh, Fu-Shiung
2010-05-01
Non-ordinary controlled Petri nets (NCPNs) have the advantages to model flexible assembly systems in which multiple identical resources may be required to perform an operation. However, existing studies on NCPNs are still limited. For example, the robustness properties of NCPNs have not been studied. This motivates us to develop an analysis method for NCPNs. Robustness analysis concerns the ability for a system to maintain operation in the presence of uncertainties. It provides an alternative way to analyse a perturbed system without reanalysis. In our previous research, we have analysed the robustness properties of several subclasses of ordinary controlled Petri nets. To study the robustness properties of NCPNs, we augment NCPNs with an uncertainty model, which specifies an upper bound on the uncertainties for each reachable marking. The resulting PN models are called non-ordinary controlled Petri nets with uncertainties (NCPNU). Based on NCPNU, the problem is to characterise the maximal tolerable uncertainties for each reachable marking. The computational complexities to characterise maximal tolerable uncertainties for each reachable marking grow exponentially with the size of the nets. Instead of considering general NCPNU, we limit our scope to a subclass of PN models called non-ordinary controlled flexible assembly Petri net with uncertainties (NCFAPNU) for assembly systems and study its robustness. We will extend the robustness analysis to NCFAPNU. We identify two types of uncertainties under which the liveness of NCFAPNU can be maintained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty sources for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed parameters.
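The variance decomposition at the core of such a method can be illustrated with a plain Monte Carlo estimator of first-order Sobol indices (the Saltelli pick-freeze scheme), shown below on a toy three-parameter model standing in for the groundwater simulator; the model, sample size, and uniform parameter ranges are all assumptions.

```python
# First-order Sobol indices via the Saltelli (2010) pick-freeze estimator.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # toy stand-in for the flow/transport model
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # "freeze" column i of A at B's values
    # E[yB*(y(ABi)-yA)] estimates Var(E[Y|X_i])
    S1 = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"first-order Sobol index S{i} ~ {S1:.3f}")
```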
NASA Astrophysics Data System (ADS)
Ruiz, Rafael O.; Meruane, Viviana
2017-06-01
The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the parameters most relevant to the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
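The prior propagation step can be sketched as a Monte Carlo sweep of uncertain parameters through an FRF. The toy model below is an uncoupled base-driven single-degree-of-freedom oscillator whose receptance |X/F| stands in for the harvester FRF (a real PEH model is electromechanically coupled), and the parameter distributions are assumed for illustration.

```python
# Monte Carlo propagation of parameter uncertainty to an FRF.
import numpy as np

rng = np.random.default_rng(2)
w = 2 * np.pi * np.linspace(10, 200, 400)     # angular frequency grid, rad/s
n = 2000
m = rng.normal(1.0, 0.02, n)                  # mass, kg (assumed PDF)
k = rng.normal(4.0e4, 2.0e3, n)               # stiffness, N/m (assumed PDF)
c = rng.normal(8.0, 1.0, n)                   # damping, N*s/m (assumed PDF)

# Receptance |X/F| for every sample, broadcast over the frequency grid
H = np.abs(1.0 / (k[:, None] - m[:, None] * w**2 + 1j * c[:, None] * w))
p5, p50, p95 = np.percentile(H, [5, 50, 95], axis=0)
print("median peak |H|:", p50.max(), "near", w[p50.argmax()] / (2 * np.pi), "Hz")
```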
Designing optimal greenhouse gas monitoring networks for Australia
NASA Astrophysics Data System (ADS)
Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.
2016-01-01
Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
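In this Bayesian setting, the quantity a network design loop optimizes is the posterior flux uncertainty. A minimal sketch, with random stand-ins for the transport operator and both covariance matrices (none taken from the study):

```python
# Posterior flux covariance P = (H^T R^-1 H + B^-1)^-1 and the resulting
# uncertainty reduction relative to the prior B.
import numpy as np

rng = np.random.default_rng(3)
n_flux, n_obs = 50, 12
H = rng.normal(size=(n_obs, n_flux))   # transport (footprint) operator, assumed
B = np.eye(n_flux)                     # prior flux covariance, assumed
R = 0.25 * np.eye(n_obs)               # observation-error covariance, assumed

P = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))
reduction = 1 - np.sqrt(np.diag(P)) / np.sqrt(np.diag(B))
print("mean flux-uncertainty reduction:", reduction.mean())
# A design loop would score candidate station sets by this reduction and
# keep the set minimising total posterior flux uncertainty.
```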
Estimation of the dispersal distances of an aphid-borne virus in a patchy landscape
Soubeyrand, Samuel; Dallot, Sylvie; Labonne, Gérard; Chadœuf, Joël; Jacquot, Emmanuel
2018-01-01
Characterising the spatio-temporal dynamics of pathogens in natura is key to ensuring their efficient prevention and control. However, it is notoriously difficult to estimate dispersal parameters at scales that are relevant to real epidemics. Epidemiological surveys can provide informative data, but parameter estimation can be hampered when the timing of the epidemiological events is uncertain, and in the presence of interactions between disease spread, surveillance, and control. Further complications arise from imperfect detection of disease and from the huge number of data on individual hosts arising from landscape-level surveys. Here, we present a Bayesian framework that overcomes these barriers by integrating over associated uncertainties in a model explicitly combining the processes of disease dispersal, surveillance and control. Using a novel computationally efficient approach to account for patch geometry, we demonstrate that disease dispersal distances can be estimated accurately in a patchy (i.e. fragmented) landscape when disease control is ongoing. Applying this model to data for an aphid-borne virus (Plum pox virus) surveyed for 15 years in 605 orchards, we obtain the first estimate of the distribution of flight distances of infectious aphids at the landscape scale. About 50% of aphid flights terminate beyond 90 m, which implies that most infectious aphids leaving a tree land outside the bounds of a 1-ha orchard. Moreover, long-distance flights are not rare: 10% of flights exceed 1 km. By their impact on our quantitative understanding of winged aphid dispersal, these results can inform the design of management strategies for plant viruses, which are mainly aphid-borne. PMID:29708968
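Once samples of the dispersal kernel are available, tail summaries like those quoted above follow directly. The sketch below uses a heavy-tailed lognormal stand-in, with parameters chosen only so the toy numbers resemble the quoted quantiles, not the paper's fitted kernel:

```python
# Tail summaries of a dispersal-distance distribution from samples.
import numpy as np

rng = np.random.default_rng(4)
# lognormal stand-in: median 90 m, shape parameter assumed
d = rng.lognormal(mean=np.log(90.0), sigma=1.9, size=100_000)  # metres
print("median distance    :", np.median(d))
print("P(distance > 90 m) :", (d > 90).mean())
print("P(distance > 1 km) :", (d > 1000).mean())
```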
Jeans that fit: weighing the mass of the Milky Way analogues in the ΛCDM universe
NASA Astrophysics Data System (ADS)
Kafle, Prajwal R.; Sharma, Sanjib; Robotham, Aaron S. G.; Elahi, Pascal J.; Driver, Simon P.
2018-04-01
The spherical Jeans equation is a widely used tool for dynamical study of gravitating systems in astronomy. Here, we test its efficacy in robustly weighing the mass of Milky Way analogues, given they need not be in equilibrium or even spherical. Utilizing Milky Way stellar haloes simulated in accordance with Λ cold dark matter (ΛCDM) cosmology by Bullock and Johnston and analysing them under the Jeans formalism, we recover the underlying mass distribution of the parent galaxy, within distance r/kpc ∈ [10, 100], with a bias of ~12 per cent and a dispersion of ~14 per cent. Additionally, the mass profiles of triaxial dark matter haloes taken from the SURFS simulation, within scaled radius 0.2 < r/rmax < 3, are measured with a bias of ~-2.4 per cent and a dispersion of ~10 per cent. The obtained dispersion is not due to Poisson noise from small particle numbers, as it is twice the latter. We interpret the dispersion to be due to the inherent nature of the ΛCDM haloes, for example being aspherical and out of equilibrium. Hence, the dispersion obtained for stellar haloes sets a limit of about 12 per cent (after adjusting for random uncertainty) on the accuracy with which the mass profiles of the Milky Way-like galaxies can be reconstructed using the spherical Jeans equation. This limit is independent of the quantity and quality of the observational data. The reason for a non-zero bias is not clear, hence its interpretation is not obvious at this stage.
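For reference, the mass estimator being tested is the standard spherical Jeans relation (written here in its usual textbook form, not transcribed from the paper):

$$ M(<r) = -\frac{r\,\sigma_r^2}{G}\left(\frac{\mathrm{d}\ln\nu}{\mathrm{d}\ln r} + \frac{\mathrm{d}\ln\sigma_r^2}{\mathrm{d}\ln r} + 2\beta\right), \qquad \beta = 1 - \frac{\sigma_t^2}{\sigma_r^2}, $$

where ν is the tracer number density, σ_r the radial velocity dispersion, and β the velocity anisotropy (σ_t tangential). Asphericity and disequilibrium violate the assumptions behind this relation, which is the proposed origin of the ~10-14 per cent dispersion floor.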
New analysis strategies for micro aspheric lens metrology
NASA Astrophysics Data System (ADS)
Gugsa, Solomon Abebe
Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
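The best-fit conic step can be sketched as a nonlinear least-squares fit of the conic sag equation z(r) = cr²/(1 + √(1 − (1+k)c²r²)) with c = 1/R, followed by a Monte Carlo re-fit under assumed measurement noise; the lens prescription, aperture, and 5 nm noise level below are illustrative assumptions, not values from the study.

```python
# Least-squares conic fit plus Monte Carlo uncertainty on the conic constant.
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(r, R, k):
    c = 1.0 / R
    # clip guards against negative sqrt arguments during optimisation
    return c * r**2 / (1 + np.sqrt(np.clip(1 - (1 + k) * (c * r) ** 2, 1e-12, None)))

rng = np.random.default_rng(5)
r = np.linspace(0, 0.4e-3, 200)                  # lens aperture, m (assumed)
z_true = conic_sag(r, R=1.0e-3, k=-1.2)          # nominal micro lens (assumed)
z_meas = z_true + rng.normal(0, 5e-9, r.size)    # 5 nm RMS noise (assumed)

k_fits = []
for _ in range(500):
    # re-perturb the measurement within its assumed noise, then re-fit
    z_i = z_meas + rng.normal(0, 5e-9, r.size)
    (R_i, k_i), _ = curve_fit(conic_sag, r, z_i, p0=(1.1e-3, -1.0))
    k_fits.append(k_i)
print(f"k = {np.mean(k_fits):.3f} +/- {np.std(k_fits):.3f}")
```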
Estimating Uncertainty in N2O Emissions from US Cropland Soils
USDA-ARS?s Scientific Manuscript database
A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Between Domain Cognitive Dispersion and Functional Abilities in Older Adults
Fellows, Robert P.; Schmitter-Edgecombe, Maureen
2016-01-01
Objective: Within-person variability in cognitive performance is related to neurological integrity, but the association with functional abilities is less clear. The primary aim of this study was to examine the association between cognitive dispersion, or within-person variability, and everyday multitasking and the way in which these variables may influence performance on a naturalistic assessment of functional abilities. Method: Participants were 156 community-dwelling adults, age 50 or older. Cognitive dispersion was calculated by measuring within-person variability in cognitive domains, established through principal components analysis. Path analysis was used to determine the independent contribution of cognitive dispersion to functional ability, mediated by multitasking. Results: Results of the path analysis revealed that the number of subtasks interweaved (i.e., multitasked) mediated the association between cognitive dispersion and task sequencing and accuracy. Although increased multitasking was associated with worse task performance in the path model, secondary analyses revealed that for individuals with low cognitive dispersion, increased multitasking was associated with better task performance, whereas for those with higher levels of dispersion multitasking was negatively correlated with task performance. Conclusion: These results suggest that cognitive dispersion between domains may be a useful indicator of multitasking and daily living skills among older adults. PMID:26300441
Automated Dispersion and Orientation Analysis for Carbon Nanotube Reinforced Polymer Composites
Gao, Yi; Li, Zhuo; Lin, Ziyin; Zhu, Liangjia; Tannenbaum, Allen; Bouix, Sylvain; Wong, C.P.
2012-01-01
The properties of carbon nanotube (CNT)/polymer composites are strongly dependent on the dispersion and orientation of CNTs in the host matrix. Quantification of the dispersion and orientation of CNTs by microstructure observation and image analysis has been demonstrated as a useful way to understand the structure-property relationship of CNT/polymer composites. However, due to the various morphologies and large number of CNTs in one image, automatic and accurate identification of CNTs has become the bottleneck for dispersion/orientation analysis. To solve this problem, shape identification is performed for each pixel in the filler identification step, so that individual CNTs can be extracted from images automatically. The improved filler identification enables more accurate analysis of CNT dispersion and orientation. The obtained dispersion index and orientation index of both synthetic and real images from model compounds correspond well with the observations. Moreover, these indices help to explain the electrical properties of the CNT/silicone composite, which is used as a model compound. This method can also be extended to other polymer composites with high aspect ratio fillers. PMID:23060008
Initial sediment transport model of the mining-affected Aries River Basin, Romania
Friedel, Michael J.; Linard, Joshua I.
2008-01-01
The Romanian government is interested in understanding the effects of existing and future mining activities on long-term dispersal, storage, and remobilization of sediment-associated metals. An initial Soil and Water Assessment Tool (SWAT) model was prepared using available data to evaluate hypothetical failure of the Valea Sesei tailings dam at the Rosia Poieni mine in the Aries River basin. Using the available data, the initial Aries River Basin SWAT model could not be manually calibrated to accurately reproduce monthly streamflow values observed at the Turda gage station. The poor simulation of the monthly streamflow is attributed to spatially limited soil and precipitation data, limited constraint information due to spatially and temporally limited streamflow measurements, and inability to obtain optimal parameter values when using a manual calibration process. Suggestions to improve the Aries River basin sediment transport model include accounting for heterogeneity in model input, a two-tier nonlinear calibration strategy, and analysis of uncertainty in predictions.
OSO 8 X-ray spectra of clusters of galaxies. II - Discussion
NASA Technical Reports Server (NTRS)
Smith, B. W.; Mushotzky, R. F.; Serlemitsos, P. J.
1979-01-01
An observational description of X-ray clusters of galaxies is given based on OSO 8 X-ray results for spatially integrated spectra of 20 such clusters and various correlations obtained from these results. It is found from a correlation between temperature and velocity dispersion that the X-ray core radius should be less than the galaxy core radius or, alternatively, that the polytropic index is about 1.1 for most of the 20 clusters. Analysis of a correlation between temperature and emission integral yields evidence that more massive clusters accumulate a larger fraction of their mass as intracluster gas. Galaxy densities and optical morphology, as they correlate with X-ray properties, are reexamined for indications as to how mass injection by galaxies affects the density structure of the gas. The physical arguments used to derive iron abundances from observed equivalent widths of iron line features in X-ray spectra are critically evaluated, and the associated uncertainties in abundances derived in this manner are estimated to be quite large.
Van Eaton, Alexa R.; Behnke, Sonja Ann; Amigo, Alvaro; ...
2016-04-12
Soon after the onset of an eruption, model forecasts of ash dispersal are used to mitigate the hazards to aircraft, infrastructure, and communities downwind. However, it is a significant challenge to constrain the model inputs during an evolving eruption. Here we demonstrate that volcanic lightning may be used in tandem with satellite detection to recognize and quantify changes in eruption style and intensity. Using the eruption of Calbuco volcano in southern Chile on 22 and 23 April 2015, we investigate rates of umbrella cloud expansion from satellite observations, occurrence of lightning, and mapped characteristics of the fall deposits. Our remote sensing analysis gives a total erupted volume that is within uncertainty of the mapped volume (0.56 ± 0.28 km3 bulk). Furthermore, observations and volcanic plume modeling suggest that electrical activity was enhanced both by ice formation in the ash clouds >10 km above sea level and development of a low-level charge layer from ground-hugging currents.
Van Eaton, Alexa; Amigo, Álvaro; Bertin, Daniel; Mastin, Larry G.; Giacosa, Raúl E; González, Jerónimo; Valderrama, Oscar; Fontijn, Karen; Behnke, Sonja A
2016-01-01
Soon after the onset of an eruption, model forecasts of ash dispersal are used to mitigate the hazards to aircraft, infrastructure and communities downwind. However, it is a significant challenge to constrain the model inputs during an evolving eruption. Here we demonstrate that volcanic lightning may be used in tandem with satellite detection to recognize and quantify changes in eruption style and intensity. Using the eruption of Calbuco volcano in southern Chile on 22-23 April 2015, we investigate rates of umbrella cloud expansion from satellite observations, occurrence of lightning, and mapped characteristics of the fall deposits. Our remote-sensing analysis gives a total erupted volume that is within uncertainty of the mapped volume (0.56 ±0.28 km3 bulk). Observations and volcanic plume modeling further suggest that electrical activity was enhanced both by ice formation in the ash clouds >10 km asl and development of a low-level charge layer from ground-hugging currents.
Laboratory simulations of atmospheric entry of micrometeoroids: ablation of magnesium
NASA Astrophysics Data System (ADS)
Bones, David; Gomez Martin, Juan Carlos; Diego Carrillo Sanchez, Juan; Dobson, Alexander; Plane, John
2017-04-01
We address the uncertainty in the cosmic dust input into the Earth's atmosphere by simulating the atmospheric entry of micrometeoroids in a custom-built chamber, capable of heating particles to 3000 K in 2 s and able to precisely reproduce representative heating profiles. In lieu of interplanetary cosmic dust, we use a range of ground-up recovered meteorites and mineral analogues. We measure the ablation of two metals simultaneously with laser-induced fluorescence (LIF). The resulting ablation profiles can be compared with the composition of the remaining, unablated particle, as determined from scanning electron microscopy-energy dispersive X-ray (SEM-EDX) analysis. Building on earlier studies of Na, Fe and Ca, here we present Mg profiles and compare them with results from our chemical ablation model (CABMOD). In general, Mg behaves as predicted, beginning to ablate steadily as one broad ablation peak once temperatures reach 2000 K. In contrast, Fe, which should behave similarly to Mg, typically has two ablation peaks because it is present in two distinct phases.
NASA Astrophysics Data System (ADS)
Guillaume, Joseph H. A.; Helgeson, Casey; Elsawah, Sondoss; Jakeman, Anthony J.; Kummu, Matti
2017-08-01
Uncertainty is recognized as a key issue in water resources research, among other sciences. Discussions of uncertainty typically focus on tools and techniques applied within an analysis, e.g., uncertainty quantification and model validation. But uncertainty is also addressed outside the analysis, in writing scientific publications. The language that authors use conveys their perspective of the role of uncertainty when interpreting a claim, what we call here "framing" the uncertainty. This article promotes awareness of uncertainty framing in four ways. (1) It proposes a typology of eighteen uncertainty frames, addressing five questions about uncertainty. (2) It describes the context in which uncertainty framing occurs. This is an interdisciplinary topic, involving philosophy of science, science studies, linguistics, rhetoric, and argumentation. (3) We analyze the use of uncertainty frames in a sample of 177 abstracts from the Water Resources Research journal in 2015. This helped develop and tentatively verify the typology, and provides a snapshot of current practice. (4) We make provocative recommendations to achieve a more influential, dynamic science. Current practice in uncertainty framing might be described as carefully considered incremental science. In addition to uncertainty quantification and degree of belief (present in ~5% of abstracts), uncertainty is addressed by a combination of limiting scope, deferring to further work (~25%) and indicating evidence is sufficient (~40%), or uncertainty is completely ignored (~8%). There is a need for public debate within our discipline to decide in what context different uncertainty frames are appropriate. Uncertainty framing cannot remain a hidden practice evaluated only by lone reviewers.
Disk mass and disk heating in the spiral galaxy NGC 3223
NASA Astrophysics Data System (ADS)
Gentile, G.; Tydtgat, C.; Baes, M.; De Geyter, G.; Koleva, M.; Angus, G. W.; de Blok, W. J. G.; Saftly, W.; Viaene, S.
2015-04-01
We present the stellar and gaseous kinematics of an Sb galaxy, NGC 3223, with the aim of determining the vertical and radial stellar velocity dispersion as a function of radius, which can help to constrain disk heating theories. Together with the observed NIR photometry, the vertical velocity dispersion is also used to determine the stellar mass-to-light (M/L) ratio, typically one of the largest uncertainties when deriving the dark matter distribution from the observed rotation curve. We find a vertical-to-radial velocity dispersion ratio of σz/σR = 1.21 ± 0.14, significantly higher than expectations from known correlations, and a weakly-constrained Ks-band stellar M/L ratio in the range 0.5-1.7, which is at the high end of (but consistent with) the predictions of stellar population synthesis models. Such a weak constraint on the stellar M/L ratio, however, does not allow us to securely determine the dark matter density distribution. To achieve this, either a statistical approach or additional data (e.g. integral-field unit) are needed. Based on observations collected at the European Southern Observatory, Chile, under proposal 68.B-0588.
1981-07-01
[Front-matter fragment from a scanned report; only acknowledgements and table captions are recoverable: thanks to Dennis M. Lavoie of NORDA for chemical analysis of clay minerals with the X-ray energy dispersive spectrometer, and to Fred Bowles and Peter Fleischer; a figure shows a diffractogram of Nuculana acuta fecal pellet residue (illite experiment); Table 1 gives X-ray energy dispersive spectrometer chemical analyses for the montmorillonite experiments (element counts with background removed); Table 2 gives the corresponding X-ray energy dispersive spectrometer chemical analyses (caption truncated).]
NASA Astrophysics Data System (ADS)
Devendran, A. A.; Lakshmanan, G.
2014-11-01
Data quality for GIS processing and analysis is becoming an increased concern due to the accelerated application of GIS technology for problem solving and decision making roles. Uncertainty in the geographic representation of the real world arises because these representations are incomplete. Identifying the sources of these uncertainties and the ways in which they operate in GIS-based representations becomes crucial in any spatial data representation and in geospatial analysis applied to any field of application. This paper reviews the articles on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application: urban simulation and hydrological modelling. Urban growth is a complicated process involving the spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one such simulation model; it randomly selects potential cells for urbanisation, and transition rules evaluate the properties of each cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years, and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model for evaluating various impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties in SWAT are addressed primarily with the Generalized Likelihood Uncertainty Estimation (GLUE) method, as sketched below.
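A minimal GLUE sketch, assuming a toy recession model and a Nash-Sutcliffe behavioural threshold of 0.7 (both assumptions; real SWAT applications use the full watershed model and case-specific likelihood measures):

```python
# GLUE: sample parameters, keep "behavioural" runs above a likelihood
# threshold, and form likelihood-weighted prediction bounds.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 50)
obs = np.exp(-0.3 * t) + rng.normal(0.0, 0.02, t.size)    # synthetic "observations"

def model(theta):
    # toy exponential recession model standing in for SWAT
    return np.exp(-theta * t)

thetas = rng.uniform(0.05, 1.0, 5000)                     # sampled parameter sets
sims = np.array([model(th) for th in thetas])
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

keep = nse > 0.7                                          # behavioural threshold (assumed)
w = nse[keep] - 0.7                                       # GLUE-style likelihood weights
w = w / w.sum()

def weighted_quantile(x, q, w):
    i = np.argsort(x)
    return np.interp(q, np.cumsum(w[i]), x[i])

band = np.array([[weighted_quantile(sims[keep][:, j], q, w) for j in range(t.size)]
                 for q in (0.05, 0.95)])
print("behavioural runs:", keep.sum())
print("5-95% prediction band at t=10:", band[0, -1], band[1, -1])
```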
NASA Astrophysics Data System (ADS)
Brown, Roderick W.; Beucher, Romain; Roper, Steven; Persano, Cristina; Stuart, Fin; Fitzgerald, Paul
2013-12-01
Over the last decade major progress has been made in developing both the theoretical and practical aspects of apatite (U-Th)/He thermochronometry, and it is now standard practice, and generally seen as best practice, to analyse single grain aliquots. These individual prismatic crystals are often fragments of larger crystals that broke during mineral separation along the weak basal cleavage in apatite. This is clearly indicated by the common occurrence of only one or no clear crystal terminations on separated apatite grains, and by evidence of freshly broken ends when grains are viewed using a scanning electron microscope. This matters because if the 4He distribution within the whole grain is not homogeneous, because of partial loss due to thermal diffusion for example, then the fragments will all yield ages different from each other and from the whole grain age. Here we use a numerical model with a finite cylinder geometry to approximate 4He ingrowth and thermal diffusion within hexagonal prismatic apatite crystals. This is used to quantify the amount and patterns of inherent, natural age dispersion that arises from analysing broken crystals. A series of systematic numerical experiments was conducted to explore and quantify the pattern and behaviour of this source of dispersion using a set of 5 simple thermal histories that represent a range of plausible geological scenarios. In addition, some more complex numerical experiments were run to investigate the pattern and behaviour of grain dispersion seen in several real data sets. The results indicate that natural dispersion of a set of single fragment ages (defined as the range divided by the mean) arising from fragmentation alone varies from c. 7% even for rapid (c. 10 °C/Ma), monotonic cooling to over 50% for protracted, complex histories that cause significant diffusional loss of 4He. The magnitude of dispersion arising from fragmentation scales with the grain cylindrical radius, and is of a similar magnitude to dispersion expected from differences in absolute grain size alone (spherical equivalent radii of 40-150 μm). This source of dispersion is significant compared with typical analytical uncertainties on individual grain analyses (c. 6%) and standard deviations on multiple grain analyses from a single sample (c. 10-20%). Where there is a significant difference in the U and Th concentration of individual grains (eU), the effect of radiation damage accumulation on 4He diffusivity (assessed using the RDAAM model of Flowers et al. (2009)) is the primary cause of dispersion for samples that have experienced a protracted thermal history, and can cause dispersion in excess of 100% for realistic ranges of eU concentration (i.e. 5-100 ppm). Expected natural dispersion arising from the combined effects of reasonable variations in grain size (radii 40-125 μm), eU concentration (5-150 ppm) and fragmentation would typically exceed 100% for complex thermal histories. In addition to adding a significant component of natural dispersion to analyses, the effect of fragmentation also acts to decouple and corrupt expected correlations between grain ages and absolute grain size, and to a lesser extent between grain age and effective uranium concentration (eU). Considering fragmentation explicitly as a source of dispersion and analysing how the different sources of natural dispersion interact with each other provides a quantitative framework for understanding patterns of dispersion that otherwise appear chaotic.
An important outcome of these numerical experiments is that they demonstrate that the pattern of age dispersion arising from fragmentation mimics the pattern of 4He distribution within the whole grains, thus providing an important source of information about the thermal history of the sample. We suggest that if the primary focus of a study is to extract the thermal history information from (U-Th)/He analyses then sampling and analytical strategies should aim to maximise the natural dispersion of grain ages, not minimise it, and should aim to analyse circa 20-30 grains from each sample. The key observations and conclusions drawn here are directly applicable to other thermochronometers, such as the apatite, rutile and titanite U-Pb systems, where the diffusion domain is approximated by the physical grain size.
NASA Astrophysics Data System (ADS)
Arnbjerg-Nielsen, Karsten; Zhou, Qianqian
2014-05-01
There has been a significant increase in climatic extremes in many regions. In Central and Northern Europe, this has led to more frequent and more severe floods. Along with improved flood modelling technologies, this has enabled economic assessment of climate change adaptation to increasing urban flood risk. Assessment of adaptation strategies often requires a comprehensive risk-based economic analysis of current risk, drivers of change of risk over time, and measures to reduce the risk. However, such studies are often associated with large uncertainties. The uncertainties arise from basic assumptions in the economic analysis and the hydrological model, but also from the projection of future societies, local climate change impacts and suitable adaptation options. This presents a challenge to decision makers when trying to identify robust measures. We present an integrated uncertainty analysis, which can assess and quantify the overall uncertainty in relation to climate change adaptation to urban flash floods. The analysis is based on an uncertainty cascade that, by means of Monte Carlo simulations of flood risk assessments, incorporates climate change impacts as a key driver of risk changes over time. The overall uncertainty is then attributed to six bulk processes: climate change impact, urban rainfall-runoff processes, stage-depth functions, unit cost of repair, cost of adaptation measures, and discount rate. We apply the approach to an urban hydrological catchment in Odense, Denmark, and find that the uncertainty on the climate change impact appears to have the least influence on the net present value of the studied adaptation measures. This does not imply that the climate change impact is not important, but that its uncertainties are not dominant when deciding on action or inaction. We then consider the uncertainty related to choosing between adaptation options given that a decision to act has been taken. In this case, the major part of the uncertainty on the estimated net present values is identical for all adaptation options and will therefore not affect a comparison between adaptation measures. This makes the choice among the options easier. Furthermore, the explicit attribution of uncertainty also enables a reduction of the overall uncertainty by identifying the processes which contribute the most. This knowledge can then be used to further reduce the uncertainty related to decision making, as a substantial part of the remaining uncertainty is epistemic.
Traceable Coulomb blockade thermometry
NASA Astrophysics Data System (ADS)
Hahtela, O.; Mykkänen, E.; Kemppinen, A.; Meschke, M.; Prunnila, M.; Gunnarsson, D.; Roschier, L.; Penttilä, J.; Pekola, J.
2017-02-01
We present a measurement and analysis scheme for determining traceable thermodynamic temperature at cryogenic temperatures using Coulomb blockade thermometry. The uncertainty of the electrical measurement is improved by utilizing two sampling digital voltmeters instead of the traditional lock-in technique. The remaining uncertainty is dominated by that of the numerical analysis of the measurement data. Two analysis methods are demonstrated: numerical fitting of the full conductance curve and measuring the height of the conductance dip. The complete uncertainty analysis shows that, using either analysis method, the relative combined standard uncertainty (k = 1) in determining the thermodynamic temperature in the temperature range from 20 mK to 200 mK is below 0.5%. In this temperature range, both analysis methods produced temperature estimates that deviated by 0.39% to 0.67% from the reference temperatures provided by a superconducting reference point device calibrated against the Provisional Low Temperature Scale of 2000.
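The dip-based analysis has a closed-form core: in Coulomb blockade thermometry the full width of the conductance dip at half its depth satisfies V_1/2 ≈ 5.439 N k_B T / e for an array of N junctions in series, so temperature follows directly from the measured width. A minimal sketch with assumed values (junction count and measured width are placeholders, not data from the paper):

```python
# Primary CBT thermometry: T from the conductance-dip half-width.
from scipy.constants import k as kB, e

N = 33                  # junctions per array (assumed)
V_half = 1.55e-3        # measured full width at half minimum, volts (assumed)
T = e * V_half / (5.439 * N * kB)
print(f"thermodynamic temperature ~ {T * 1e3:.1f} mK")
```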
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long late-time tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale, upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times, to be matched with the MRMT model (see the sketch below). By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
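The moment-matching step can be sketched for a single block: for one-dimensional advection-dispersion the breakthrough moments obey m1 = L/v and σ_t² = 2DL/v³, so the first two temporal moments of arrival times give the effective block-scale velocity and dispersion coefficient. The arrival times below are a synthetic stand-in for fine-scale particle-tracking output:

```python
# Effective block parameters from temporal moments of arrival times.
import numpy as np

rng = np.random.default_rng(7)
arrivals = rng.gamma(shape=2.0, scale=50.0, size=10_000)   # days (toy data)
L = 10.0                                                   # block length, m (assumed)

m1 = arrivals.mean()                      # first temporal moment (mean arrival)
mu2 = arrivals.var()                      # second central moment
v_eff = L / m1                            # effective velocity from m1 = L/v
D_eff = 0.5 * mu2 * L**2 / m1**3          # from sigma_t^2 = 2 D L / v^3
print(f"v_eff = {v_eff:.4f} m/d, D_eff = {D_eff:.4f} m^2/d")
```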
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Mehrdad
Characterizing the near-surface shear-wave velocity structure using Rayleigh-wave phase velocity dispersion curves is widespread in the context of reservoir characterization, exploration seismology, earthquake engineering, and geotechnical engineering. This surface seismic approach provides a feasible and low-cost alternative to borehole measurements. Phase velocity dispersion curves from Rayleigh surface waves are inverted to yield the vertical shear-wave velocity profile. A significant problem with surface wave inversion is its intrinsic non-uniqueness, and although this problem is widely recognized, there have not been systematic efforts to develop approaches to reduce the pervasive uncertainty that affects the velocity profiles determined by the inversion. Non-uniqueness cannot be easily studied in a nonlinear inverse problem such as Rayleigh-wave inversion, and the only way to understand its nature is by numerical investigation, which can be computationally expensive and time consuming. Given the variety of parameters affecting surface wave inversion and the non-uniqueness they can induce, a technique is needed that is not itself controlled by this non-uniqueness. An efficient and repeatable technique is proposed and tested to overcome the non-uniqueness problem; multiple inverted shear-wave velocity profiles are used in a wavenumber integration technique to generate synthetic time series resembling the geophone recordings. The similarity between synthetic and observed time series is used as an additional tool along with the similarity between the theoretical and experimental dispersion curves. The proposed method is shown to be effective through synthetic and real-world examples. In these examples, the nature of the non-uniqueness is discussed and its existence is shown. Using the proposed technique, inverted velocity profiles are estimated and the effectiveness of the technique is evaluated; in the synthetic example, the final inverted velocity profile is compared with the initial target velocity model, and in the real-world example, the final inverted shear-wave velocity profile is compared with the velocity model from independent measurements in a nearby borehole. The real-world example shows that it is possible to overcome the non-uniqueness and distinguish a representative velocity profile for the site that also matches well with the borehole measurements.
Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series
Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe
2017-01-01
Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using the pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
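For intuition about what k describes, the sketch below fits a negative binomial offspring distribution (mean R0, dispersion k) to a toy list of secondary-case counts by maximum likelihood; the data are invented, and the paper's joint particle-MCMC inference over phylogeny and incidence is not attempted here.

```python
# Negative binomial MLE for the offspring-distribution dispersion parameter k.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

# invented secondary-case counts per infected individual
offspring = np.array([0, 0, 0, 1, 0, 2, 0, 0, 7, 1, 0, 3, 0, 0, 12, 1, 0, 0, 2, 0])

def negloglik(params):
    k, R0 = np.exp(params)             # log scale enforces positivity
    p = k / (k + R0)                   # scipy's (n, p) parameterisation, mean = R0
    return -nbinom.logpmf(offspring, k, p).sum()

res = minimize(negloglik, x0=np.log([0.5, 1.5]), method="Nelder-Mead")
k_hat, R0_hat = np.exp(res.x)
print(f"k ~ {k_hat:.2f}, R0 ~ {R0_hat:.2f}  (small k = high heterogeneity)")
```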
Development of a Prototype Model-Form Uncertainty Knowledge Base
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2016-01-01
Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, each of which gives different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB and possible workarounds are explained.
NASA Astrophysics Data System (ADS)
Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje
2016-04-01
Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emissions from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a-priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples from the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past, and (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainties include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, and (iv) inaccuracies of the inverse problem solution. Using examples of the fire seasons 2010 in Russia, 2012 in Eurasia, 2007 in Australia, etc., it is pointed out that top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors in application to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the 2006 season and the fire description is simplified. A longer calibration period and more sophisticated parameterization (including the smoke injection model and distinguishing all relevant vegetation types) can improve the predictions. The other significant parameter, so far weakly addressed in fire emission inventories, is the size spectrum of the emitted aerosols. Direct size-resolving measurements showed, for instance, that smoke from smouldering fires has smaller particles compared with smoke from flaming fires. Due to the dependence of the smoke optical thickness on the size distribution, such variability can lead to significant changes in the top-down calibration step. Experiments with the IS4FIRES-SILAM system showed up to a factor of two difference in AOD, depending on the assumption on the particle spectrum.
UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E
A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied by a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data is unavailable.
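The core of such an estimate is a small-sample Student-t interval over the set of perturbed CFD runs. A minimal sketch with notional heat-transfer coefficients (the values are placeholders, not results from the paper):

```python
# Student-t uncertainty estimate from a handful of perturbed CFD runs.
import numpy as np
from scipy.stats import t as student_t

h_runs = np.array([102.3, 98.7, 105.1, 99.8, 101.4])  # h per run, W/m^2-K (notional)
n = h_runs.size
s = h_runs.std(ddof=1)                                # sample standard deviation
t_factor = student_t.ppf(0.975, df=n - 1)             # widens the interval for small n
u95 = t_factor * s / np.sqrt(n)                       # 95% uncertainty of the mean
print(f"h = {h_runs.mean():.1f} +/- {u95:.1f} W/m^2-K (95%)")
```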
Moore, Jennifer A.; Draheim, Hope M.; Etter, Dwayne; Winterstein, Scott; Scribner, Kim T.
2014-01-01
Understanding the factors that affect dispersal is a fundamental question in ecology and conservation biology, particularly as populations are faced with increasing anthropogenic impacts. Here we collected georeferenced genetic samples (n = 2,540) from three generations of black bears (Ursus americanus) harvested in a large (47,739 km2), geographically isolated population and used parentage analysis to identify mother-offspring dyads (n = 337). We quantified the effects of sex, age, habitat type and suitability, and local harvest density at the natal and settlement sites on the probability of natal dispersal, and on dispersal distances. Dispersal was male-biased (76% of males dispersed) but a small proportion (21%) of females also dispersed, and female dispersal distances (mean ± SE = 48.9±7.7 km) were comparable to male dispersal distances (59.0±3.2 km). Dispersal probabilities and dispersal distances were greatest for bears in areas with high habitat suitability and low harvest density. The inverse relationship between dispersal and harvest density in black bears suggests that 1) intensive harvest promotes restricted dispersal, or 2) high black bear population density decreases the propensity to disperse. Multigenerational genetic data collected over large landscape scales can be a powerful means of characterizing dispersal patterns and causal associations with demographic and landscape features in wild populations of elusive and wide-ranging species. PMID:24621593
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
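The band construction can be sketched as an envelope over recomputed projections with perturbed driving factors; the growth rates, emission-factor trends, and base-year value below are all placeholders, not figures from the Spanish inventory:

```python
# Nonstatistical uncertainty bands as an envelope over perturbed projections.
import numpy as np

years = np.arange(2010, 2021)
e0 = 100.0                                  # base-year emissions, kt (assumed)

def projection(growth, ef_trend):
    # activity growth compounded with an emission-factor trend
    yrs = years - years[0]
    return e0 * (1 + growth) ** yrs * (1 + ef_trend) ** yrs

scenarios = [projection(g, ef)
             for g in (0.00, 0.02, 0.04)     # driving-factor variations (assumed)
             for ef in (-0.01, 0.0, 0.01)]   # emission-factor trends (assumed)
band_lo = np.min(scenarios, axis=0)
band_hi = np.max(scenarios, axis=0)
print(f"2020 band: {band_lo[-1]:.1f} - {band_hi[-1]:.1f} kt")
```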
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In biogeochemistry, numerical models have been widely used for investigating carbon dynamics under global change from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). SCE and FME performed comparably well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on the variable and season. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
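The SCE algorithm itself is not part of SciPy; as a rough stand-in, the sketch below calibrates a toy two-parameter model against hypothetical flux observations with SciPy's differential evolution, another population-based global optimizer, to show the cost-minimization step:

    import numpy as np
    from scipy.optimize import differential_evolution

    x = np.arange(1.0, 6.0)
    obs = np.array([2.1, 2.9, 4.2, 4.8, 6.1])      # hypothetical observations

    def cost(p):
        a, b = p                                    # toy linear model
        return np.sum((a * x + b - obs) ** 2)       # sum-of-squares cost

    result = differential_evolution(cost, bounds=[(0, 5), (-5, 5)], seed=1)
    print(result.x, result.fun)                     # optimal parameters, cost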
Analysis of uncertainties in turbine metal temperature predictions
NASA Technical Reports Server (NTRS)
Stepka, F. S.
1980-01-01
An analysis was conducted to examine the extent to which various factors influence the accuracy of analytically predicting turbine blade metal temperatures and to determine the uncertainties in these predictions for several accuracies of the influence factors. The advanced turbofan engine gas conditions of 1700 K and 40 atmospheres were considered along with those of a highly instrumented high temperature turbine test rig and a low temperature turbine rig that simulated the engine conditions. The analysis showed that the uncertainty in analytically predicting local blade temperature was as much as 98 K, or 7.6 percent of the metal absolute temperature, with current knowledge of the influence factors. The expected reductions in uncertainties in the influence factors with additional knowledge and tests should reduce the uncertainty in predicting blade metal temperature to 28 K, or 2.1 percent of the metal absolute temperature.
1984-01-01
Recent investigations suggest that dispersion in aquifers is scale dependent and a function of the heterogeneity of aquifer materials. Theoretical stochastic studies indicate that determining hydraulic-conductivity variability in three dimensions is important in analyzing the dispersion process. Even though field methods are available to approximate hydraulic conductivity in three dimensions, the methods are not generally used because of high cost of field equipment and because measurement and analysis techniques are cumbersome and time consuming. The hypothesis of this study is that field-determined values of dispersivity are scale dependent and that they may be described as a function of hydraulic conductivity in three dimensions. The objectives of the study at the Bemidji research site are to (1) determine hydraulic conductivity of the porous media in three dimensions, (2) determine field values of dispersivity and its scale dependence on hydraulic conductivity, and (3) develop and apply a computerized data-collection, storage, and analysis system for field use in comprehensive determination of hydraulic conductivity and dispersivity. Plans for this investigation involve a variety of methods of analysis. Hydraulic conductivity will be determined separately in the horizontal and vertical planes of the hydraulic-conductivity ellipsoid. Field values of dispersivity will be determined by single-well and doublet-well injection or withdrawal tests with tracers. A computerized data-collection, storage, and analysis system to measure pressure, flow rate, tracer concentrations, and temperature will be designed for field testing. Real-time computer programs will be used to analyze field data. The initial methods of analysis will be utilized to meet the objectives of the study. Preliminary field data indicate the aquifer underlying the Bemidji site is vertically heterogeneous, cross-bedded outwash. Preliminary analysis of the flow field around a hypothetical doublet-well tracer test indicates that the location of the wells can affect the field value of dispersivity. Preliminary analysis also indicates that different values of dispersivity may result from anisotropic conditions in tests in which observation wells are located at equal radial distances from either the injection or withdrawal well.
Bennett, Ryan C; Brough, Chris; Miller, Dave A; O'Donnell, Kevin P; Keen, Justin M; Hughey, Justin R; Williams, Robert O; McGinity, James W
2015-03-01
Acetyl-11-keto-β-boswellic acid (AKBA), a gum resin extract, possesses poor water solubility that limits bioavailability, and a high melting point that makes it difficult to process into solid dispersions by fusion methods. The purpose of this study was to investigate solvent and thermal processing techniques for the preparation of amorphous solid dispersions (ASDs) exhibiting enhanced solubility, dissolution rate, and bioavailability. Solid dispersions were successfully produced by rotary evaporation (RE) and KinetiSol® Dispersing (KSD). Solid-state and chemical characterization revealed that ASDs with good potency and purity were produced by both RE and KSD. Results of the RE studies demonstrated that AQOAT®-LF, AQOAT®-MF, Eudragit® L100-55, and Soluplus with the incorporation of dioctyl sulfosuccinate sodium provided substantial solubility enhancement. Non-sink dissolution analysis showed enhanced dissolution properties for KSD-processed solid dispersions in comparison to RE-processed solid dispersions. Variances in release performance were identified when different particle size fractions of the KSD samples were analyzed. Selected RE samples varying in particle surface morphology were placed under storage and, in solid-state stability analysis at 12 months, exhibited crystalline growth relative to stored KSD samples, confirming amorphous instability of the RE products. In vivo analysis of KSD-processed solid dispersions revealed significantly enhanced AKBA absorption in comparison to the neat active substance.
Relating Data and Models to Characterize Parameter and Prediction Uncertainty
Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, Dennis; de Bruijn, Karin; Bouwer, Laurens; de Moel, Hans
2015-04-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates, and this explanation is used to quantify the uncertainty with a Monte Carlo analysis. The Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models, and results in uncertainties on the order of a factor of 2 to 5. This uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economically optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investment.
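A minimal sketch of this kind of Monte Carlo analysis, with a tiny invented stand-in for the 272-function library (the depth-damage shapes and the event are illustrative only):

    import numpy as np

    rng = np.random.default_rng(42)
    # Stand-in damage-function library: water depth (m) -> damage fraction.
    library = [lambda d, k=k: min(1.0, k * d) for k in (0.2, 0.3, 0.5, 0.8)]

    depth, max_damage = 1.5, 1e6                   # hypothetical event, euros
    draws = rng.integers(0, len(library), size=10_000)
    damages = np.array([library[i](depth) * max_damage for i in draws])

    print(damages.mean(), np.percentile(damages, [5, 95]))
    # The 5th-95th percentile spread illustrates the factor-level uncertainty.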
Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract
NASA Astrophysics Data System (ADS)
Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang
2017-01-01
This study presented a new strategy for overall uncertainty measurement in near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed using validation data from precision, trueness, and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An I × J × K (number of series I, number of repetitions J, and number of concentration levels K) full factorial design was used to calculate uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix with four influence factors identified by failure mode and effects analysis (FMEA) was adopted for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method is reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.
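For the precision part, a minimal sketch of estimating an intermediate-precision uncertainty from one concentration level of an I × J design via one-way variance components (all predicted values are invented):

    import numpy as np

    # Hypothetical NIR predictions: I = 3 series (rows), J = 4 repetitions.
    y = np.array([[4.9, 5.1, 5.0, 5.2],
                  [5.3, 5.2, 5.4, 5.3],
                  [4.8, 5.0, 4.9, 5.1]])

    J = y.shape[1]
    s_r2 = y.var(axis=1, ddof=1).mean()            # repeatability variance
    s_b2 = max(y.mean(axis=1).var(ddof=1) - s_r2 / J, 0.0)  # between-series
    u_ip = np.sqrt(s_r2 + s_b2)                    # intermediate precision

    print(f"u(intermediate precision) = {u_ip:.3f}")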
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, the computational cost, and the large number of uncertain variables. In this study, a sparse-collocation non-intrusive polynomial chaos approach, together with global nonlinear sensitivity analysis, was first used to identify the most significant uncertain variables and reduce the dimension of the stochastic problem. A total-order stochastic expansion was then constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, arising from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
Kriston, Levente; Meister, Ramona
2014-03-01
Judging applicability (relevance) of meta-analytical findings to particular clinical decision-making situations remains challenging. We aimed to describe an evidence synthesis method that accounts for possible uncertainty regarding applicability of the evidence. We conceptualized uncertainty regarding applicability of the meta-analytical estimates to a decision-making situation as the result of uncertainty regarding applicability of the findings of the trials that were included in the meta-analysis. This trial-level applicability uncertainty can be directly assessed by the decision maker and allows for the definition of trial inclusion probabilities, which can be used to perform a probabilistic meta-analysis with unequal probability resampling of trials (adaptive meta-analysis). A case study with several fictitious decision-making scenarios was performed to demonstrate the method in practice. We present options to elicit trial inclusion probabilities and perform the calculations. The result of an adaptive meta-analysis is a frequency distribution of the estimated parameters from traditional meta-analysis that provides individually tailored information according to the specific needs and uncertainty of the decision maker. The proposed method offers a direct and formalized combination of research evidence with individual clinical expertise and may aid clinicians in specific decision-making situations. Copyright © 2014 Elsevier Inc. All rights reserved.
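A minimal sketch of the resampling idea (trial effects, variances, and inclusion probabilities are invented): trials enter each replicate with their applicability probabilities, and the distribution of pooled estimates is reported rather than a single number.

    import numpy as np

    rng = np.random.default_rng(0)
    effect = np.array([0.30, 0.10, 0.45, 0.20])    # hypothetical trial effects
    var = np.array([0.010, 0.020, 0.015, 0.008])   # their variances
    p_incl = np.array([0.9, 0.4, 0.7, 1.0])        # applicability probabilities

    pooled = []
    for _ in range(10_000):
        keep = rng.random(len(effect)) < p_incl    # unequal-probability resample
        if keep.any():
            w = 1.0 / var[keep]                    # inverse-variance weights
            pooled.append(np.sum(w * effect[keep]) / np.sum(w))

    print(np.mean(pooled), np.percentile(pooled, [2.5, 97.5]))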
DOE R&D Accomplishments Database
Salam, A.
1956-04-01
Lectures with mathematical analysis are given on Dispersion Theory and Causality and Dispersion Relations for Pion-nucleon Scattering. The appendix includes the S-matrix in terms of Heisenberg Operators. (F. S.)
Uncertainty in BRCA1 cancer susceptibility testing.
Baty, Bonnie J; Dudley, William N; Musters, Adrian; Kinney, Anita Y
2006-11-15
This study investigated uncertainty in individuals undergoing genetic counseling/testing for breast/ovarian cancer susceptibility. Sixty-three individuals from a single kindred with a known BRCA1 mutation rated uncertainty about 12 items on a five-point Likert scale before and 1 month after genetic counseling/testing. Factor analysis identified a five-item total uncertainty scale that was sensitive to changes before and after testing. The items in the scale were related to uncertainty about obtaining health care, positive changes after testing, and coping well with results. The majority of participants (76%) rated reducing uncertainty as an important reason for genetic testing. The importance of reducing uncertainty was stable across time and unrelated to anxiety or demographics. Yet, at baseline, total uncertainty was low and decreased after genetic counseling/testing (P = 0.004). Analysis of individual items showed that after genetic counseling/testing, there was less uncertainty about the participant detecting cancer early (P = 0.005) and coping well with their result (P < 0.001). Our findings support the importance to clients of genetic counseling/testing as a means of reducing uncertainty. Testing may help clients to reduce the uncertainty about items they can control, and it may be important to differentiate the sources of uncertainty that are more or less controllable. Genetic counselors can help clients by providing anticipatory guidance about the role of uncertainty in genetic testing. (c) 2006 Wiley-Liss, Inc.
Rayleigh-wave dispersive energy imaging using a high-resolution linear radon transform
Luo, Y.; Xia, J.; Miller, R.D.; Xu, Y.; Liu, J.; Liu, Q.
2008-01-01
Multichannel Analysis of Surface Waves (MASW) is an efficient tool for obtaining the vertical shear-wave velocity profile. One of the key steps in the MASW method is to generate an image of dispersive energy in the frequency-velocity domain, so that dispersion curves can be determined by picking peaks of the dispersive energy. In this paper, we propose to image Rayleigh-wave dispersive energy by high-resolution linear Radon transform (LRT). The shot gather is first transformed along the time direction to the frequency domain, and the Rayleigh-wave dispersive energy is then imaged by high-resolution LRT using a weighted preconditioned conjugate gradient algorithm. Synthetic data with a set of linear events are presented to show the process of generating dispersive energy. Results of synthetic and real-world examples demonstrate that, compared with the slant stacking algorithm, high-resolution LRT can improve the resolution of dispersive energy images by more than 50%. © Birkhäuser 2008.
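High-resolution LRT requires an iteratively reweighted inversion; as a simpler stand-in that shows what a dispersive energy image is, the sketch below implements the basic phase-shift (frequency-domain slant-stack) imaging step for an arbitrary shot gather:

    import numpy as np

    def dispersion_image(data, dt, offsets, freqs, vels):
        # Phase-shift dispersive-energy imaging (simple stand-in for LRT).
        # data: (ntrace, nsample) shot gather; offsets: source-receiver
        # distances (m). Returns an energy image over (freqs, vels); the
        # dispersion curve follows the peaks along the velocity axis.
        spec = np.fft.rfft(data, axis=1)
        f_axis = np.fft.rfftfreq(data.shape[1], dt)
        img = np.zeros((len(freqs), len(vels)))
        for i, f in enumerate(freqs):
            k = np.argmin(np.abs(f_axis - f))
            u = spec[:, k] / (np.abs(spec[:, k]) + 1e-12)    # unit amplitude
            for j, v in enumerate(vels):
                steer = np.exp(2j * np.pi * f * offsets / v)  # phase shift
                img[i, j] = np.abs(np.sum(u * steer))
        return img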
NASA Astrophysics Data System (ADS)
Qiao, Min; Ran, Qianping; Wu, Shishan
2018-03-01
A novel surfactant with a star-like molecular structure and sulfonate end groups was synthesized and used as a dispersant for multi-walled carbon nanotubes (CNTs) in aqueous suspensions, in comparison with a traditional single-chain surfactant. The star-like surfactant showed good dispersing ability for multi-walled CNTs in aqueous suspensions. Surface tension analysis, total organic carbon analysis, X-ray photoelectron spectroscopy, zeta potential, dynamic light scattering, and transmission electron microscopy were used to investigate the effect of the star-like surfactant on the dispersion of multi-walled CNTs in aqueous suspensions. With the assistance of the star-like surfactant, the CNTs dispersed well in aqueous suspension at a high concentration of 50 g/L for more than 30 days, whereas the CNTs precipitated completely after 1 day without any dispersant, or after 10 days with sodium 4-dodecylbenzenesulfonate as the dispersant.
NASA Astrophysics Data System (ADS)
Salha, A. A.; Stevens, D. K.
2013-12-01
This study presents the numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations caused by five permitted point-source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agricultural uses) from the Bear River and back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and the Great Salt Lake as impaired. Stream water quality modeling requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of the investigation. The current model is i) one-dimensional (1D), ii) numerical, iii) unsteady, iv) mechanistic, v) dynamic, and vi) spatially distributed. The basic principle of the study is the use of mass-balance equations and numerical methods (a Fickian advection-dispersion approach) to solve the related partial differential equations. Model error decreases and sensitivity increases as a model becomes more complex; accordingly, both i) uncertainty (in parameters, input data, and model structure) and ii) model complexity are investigated. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained mainly using HydroDesktop, a free and open-source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System. Processing, validity assessment, and distribution of the time-series data were carried out in the GNU R language (a statistical computing and graphics environment). Equations for the physical, chemical, and biological processes were written in Fortran (High Performance Fortran) to solve their hyperbolic and parabolic complexities, and post-analysis of the results was conducted in R. High-performance computing (HPC) will be introduced to expedite complex computations using parallel programming. It is expected that the model will assess nonpoint sources and specific point-source data to understand the causes, transfer, dispersion, and concentration of pollutants at different locations along the Bear River. The impact of reducing or removing nonpoint nutrient loading on Bear River water quality management could also be investigated. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high-performance computing; water quality.
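As a concrete illustration of the Fickian advection-dispersion building block, here is a minimal explicit finite-difference sketch of the 1D equation ∂c/∂t = −U ∂c/∂x + D ∂²c/∂x² (the grid, velocity, and dispersion values are hypothetical, not Bear River calibrations):

    import numpy as np

    nx, dx, dt = 200, 50.0, 10.0     # cells, cell size (m), time step (s)
    U, D = 0.5, 5.0                  # velocity (m/s), dispersion (m^2/s)
    c = np.zeros(nx)
    c[0] = 1.0                       # continuous upstream source, c = 1

    for _ in range(2000):            # Courant number U*dt/dx = 0.1 (stable)
        adv = -U * (c[1:-1] - c[:-2]) / dx                 # upwind advection
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central dispersion
        c[1:-1] += dt * (adv + dif)
        c[0], c[-1] = 1.0, c[-2]     # fixed inflow, open outflow boundary

    print(c[::20].round(3))          # concentration profile along the reach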
Uncertainties in Forecasting Streamflow using Entropy Theory
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2017-12-01
Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, uncertainties always accompany a forecast, and they may affect the forecasting results and lead to large variations. Uncertainties must therefore be considered and properly assessed when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide reliable streamflow forecasts. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns, so streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure, and assessing uncertainties. This study applies entropy theory to forecast streamflow and to measure uncertainties during the forecasting process. To apply entropy theory to streamflow forecasting, spectral analysis is combined with time series analysis, since spectral analysis can characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and its prediction. Application of entropy theory to streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties introduced by the precipitation input, the forecasting model, and the forecasted results are measured separately using entropy, and information theory is used to describe how these uncertainties are transported and aggregated through these processes.
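A minimal sketch of the spectral step, identifying the dominant period of a synthetic monthly streamflow series (the series itself is invented for illustration):

    import numpy as np

    rng = np.random.default_rng(7)
    n = 240                                   # 20 years of monthly flows
    t = np.arange(n)
    q = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, n)

    spec = np.abs(np.fft.rfft(q - q.mean())) ** 2    # periodogram
    freqs = np.fft.rfftfreq(n, d=1.0)                # cycles per month
    peak = freqs[np.argmax(spec[1:]) + 1]            # skip the zero frequency
    print(f"dominant period ~ {1 / peak:.0f} months")  # expect ~12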
Etkind, Simon Noah; Bristowe, Katherine; Bailey, Katharine; Selman, Lucy Ellen; Murtagh, Fliss Em
2017-02-01
Uncertainty is common in advanced illness but is infrequently studied in this context. If poorly addressed, uncertainty can lead to adverse patient outcomes. We aimed to understand patient experiences of uncertainty in advanced illness and to develop a typology of patients' responses and preferences to inform practice. We performed a secondary analysis of qualitative interview transcripts: studies were assessed for inclusion, interviews were sampled using maximum-variation sampling, and the analysis used a thematic approach with 10% of coding cross-checked to enhance reliability. The qualitative interviews came from six studies including patients with heart failure, chronic obstructive pulmonary disease, renal disease, cancer, and liver failure. A total of 30 transcripts were analysed; the median age was 75 (range 43-95), and 12 patients were women. The impact of uncertainty was frequently discussed: the main related themes were engagement with illness, information needs, patient priorities, and the period of time on which patients mainly focused their attention (temporal focus). A typology of patient responses to uncertainty was developed from these themes. Uncertainty influences patient experience in advanced illness by affecting patients' information needs, preferences, and future priorities for care. Our typology aids understanding of how patients with advanced illness respond to uncertainty. Assessment of these three factors may be a useful starting point to guide clinical assessment and shared decision making.
The Uncertainties on the GIS Based Land Suitability Assessment for Urban and Rural Planning
NASA Astrophysics Data System (ADS)
Liu, H.; Zhan, Q.; Zhan, M.
2017-09-01
The majority of research on the uncertainties of spatial data and spatial analysis focuses on a specific data feature or analysis tool. Few studies have addressed the uncertainties of the whole process of an application such as planning, leaving uncertainty research detached from practical applications. This paper discusses the uncertainties of geographical information systems (GIS) based land suitability assessment in planning on the basis of a literature review. The uncertainties considered range from establishment of the index system to classification of the final result. Methods to reduce the uncertainties arising from the discretization of continuous raster data and from index weight determination are summarized. The paper analyzes the merits and demerits of the "Natural Breaks" method, which is broadly used by planners. It also explores other factors that impact the accuracy of the final classification, such as the selection of the number of classes and their intervals, and the autocorrelation of the spatial data. In conclusion, the paper indicates that the adoption of machine learning methods should be adapted to the complexity of land suitability assessment. The work contributes to uncertainty research for spatial data and spatial analysis applied to land suitability assessment, and promotes the scientific level of subsequent planning and decision-making.
Luo, Y.; Xia, J.; Miller, R.D.; Liu, J.; Xu, Y.; Liu, Q.
2008-01-01
Multichannel Analysis of Surface Waves (MASW) is an efficient tool for obtaining the vertical shear-wave velocity profile. One of the key steps in the MASW method is to generate an image of dispersive energy in the frequency-velocity domain, so that dispersion curves can be determined by picking peaks of the dispersive energy. In this paper, we image Rayleigh-wave dispersive energy and separate multiple modes from a multichannel record by high-resolution linear Radon transform (LRT). We first introduce Rayleigh-wave dispersive energy imaging by high-resolution LRT and then show the process of Rayleigh-wave mode separation. Results of synthetic and real-world examples demonstrate that (1) compared with the slant stacking algorithm, high-resolution LRT can improve the resolution of dispersive energy images by more than 50%; (2) high-resolution LRT can successfully separate multimode dispersive energy of Rayleigh waves with high resolution; and (3) multimode separation and reconstruction expand the frequency ranges of higher-mode dispersive energy, which not only increases the investigation depth but also provides a means to accurately determine cut-off frequencies.
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of the uncertainty associated with groundwater quality models is often of critical importance, for example where environmental models are employed in risk assessment. Insufficient data, inherent variability, and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants such as Latin Hypercube Sampling (LHS), treats variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, so that vagueness is conflated with randomness. Also, when models are used as purely predictive tools, uncertainty and variability lead to the need to assess the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals through α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is demonstrated by assessing uncertainty propagation of parameter values in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
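A minimal sketch of the LHS backbone using SciPy's quasi-Monte Carlo module (the two input distributions are hypothetical; the fuzzy layer would repeat this sampling over each α-cut interval of a fuzzy parameter):

    import numpy as np
    from scipy.stats import qmc, norm

    lhs = qmc.LatinHypercube(d=2, seed=1)
    u = lhs.random(100)                      # stratified samples in [0, 1)^2

    # Map to assumed marginals: lognormal conductivity, uniform porosity.
    K = np.exp(norm.ppf(u[:, 0], loc=np.log(1e-5), scale=0.5))   # m/s
    porosity = 0.25 + 0.15 * u[:, 1]

    print(K.mean(), porosity.mean())         # feed these into the model runs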
DRAINMOD-GIS: a lumped parameter watershed scale drainage and water quality model
G.P. Fernandez; G.M. Chescheir; R.W. Skaggs; D.M. Amatya
2006-01-01
A watershed scale lumped parameter hydrology and water quality model that includes an uncertainty analysis component was developed and tested on a lower coastal plain watershed in North Carolina. Uncertainty analysis was used to determine the impacts of uncertainty in field and network parameters of the model on the predicted outflows and nitrate-nitrogen loads at the...
Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Ely, Jeffry W.
2012-01-01
A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
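A minimal sketch of assembling such a budget and combining components by root-sum-of-squares, GUM-style (the component names, sensitivity coefficients, and magnitudes are invented placeholders, not the facility's actual budget):

    import numpy as np

    # (component, sensitivity coefficient c_i, standard uncertainty u_i in dB)
    budget = [("microphone calibration",    1.0, 0.20),
              ("loudspeaker repeatability", 1.0, 0.15),
              ("door pressure transients",  1.0, 0.10),
              ("analyzer resolution",       1.0, 0.05)]

    u_c = np.sqrt(sum((c * u) ** 2 for _, c, u in budget))
    print(f"combined standard uncertainty = {u_c:.2f} dB")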
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques, applied to both single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as standard deviation and bias, are calculated, and the numerical representations are also used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of the single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented; with the aid of this model and Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. Applying the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
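A minimal sketch of the Monte Carlo stage for a single-channel algorithm, propagating assumed fit-parameter and netOD uncertainties through a commonly used calibration form D = a·x + b·x^n (all numbers are invented, not the paper's fits):

    import numpy as np

    rng = np.random.default_rng(3)
    a, b, n = 10.0, 45.0, 2.5          # hypothetical calibration parameters
    u_a, u_b = 0.3, 2.0                # their standard uncertainties
    x, u_x = 0.35, 0.005               # measured netOD and its uncertainty

    N = 100_000
    xs = x + rng.normal(0, u_x, N)
    dose = ((a + rng.normal(0, u_a, N)) * xs
            + (b + rng.normal(0, u_b, N)) * xs ** n)

    print(dose.mean(), dose.std())     # numerical dose PDF: mean and spread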
NASA Astrophysics Data System (ADS)
Holmquist, J. R.; Crooks, S.; Windham-Myers, L.; Megonigal, P.; Weller, D.; Lu, M.; Bernal, B.; Byrd, K. B.; Morris, J. T.; Troxler, T.; McCombs, J.; Herold, N.
2017-12-01
Stable coastal wetlands can store substantial amounts of carbon (C) that can be released when they are degraded or eroded. The EPA recently incorporated coastal wetland net storage and emissions within the Agriculture, Forestry, and Other Land Use category of the U.S. National Greenhouse Gas Inventory (NGGI). This was a seminal analysis, but its quantification of uncertainty needs improvement. We provide a value-added analysis by estimating that uncertainty, focusing initially on the most basic assumption, the area of coastal wetlands. We considered three sources: uncertainty in the areas of vegetation and salinity subclasses, uncertainty in the areas of changing or stable wetlands, and uncertainty in the inland extent of coastal wetlands. The areas of vegetation and salinity subtypes, as well as of stable or changing wetlands, were estimated from 2006 and 2010 maps derived from Landsat imagery by the Coastal Change Analysis Program (C-CAP). We generated unbiased area estimates and confidence intervals for C-CAP, taking into account the mapped area, the proportional areas of commission and omission errors, and the number of observations. We defined the inland extent of wetlands as all land below the current elevation of twice-monthly highest tides. We generated probabilistic inundation maps integrating wetland-specific bias and random error in light detection and ranging (lidar) elevation maps with the spatially explicit random error in tidal surfaces generated from tide gauges. This initial uncertainty analysis will be extended to calculate the total propagated uncertainty in the NGGI by including the uncertainties in the amount of C lost from eroded and degraded wetlands, stored annually in stable wetlands, and emitted as methane by tidal freshwater wetlands.
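A minimal sketch of an unbiased (stratified) area estimate from a map error matrix, in the spirit described above (the matrix and mapped areas are invented for two classes):

    import numpy as np

    # Rows = mapped class, columns = reference class (sample counts).
    cm = np.array([[440., 60.],
                   [ 35., 65.]])
    mapped_area = np.array([9000.0, 1000.0])       # km^2, hypothetical

    W = mapped_area / mapped_area.sum()            # stratum weights
    q = cm / cm.sum(axis=1, keepdims=True)         # within-stratum proportions
    p = W[:, None] * q                             # cell proportions
    area = p.sum(axis=0) * mapped_area.sum()       # bias-adjusted class areas

    n_i = cm.sum(axis=1)[:, None]
    se = np.sqrt((W[:, None]**2 * q * (1 - q) / (n_i - 1)).sum(axis=0))
    print(area, 1.96 * se * mapped_area.sum())     # areas with 95% CIs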
A comparison of some spectrograms obtained with a Reticon and by coaddition of photographic plates
NASA Technical Reports Server (NTRS)
Adelman, Saul J.
1989-01-01
High-dispersion 2.4 Å/mm spectra with signal-to-noise ratios of order 80 were obtained for three stars by using a Reticon detector and by coadding photographic spectrograms at the Dominion Astrophysical Observatory. Metal lines with equivalent widths of 5 to 75 mÅ in Alpha Dra and Iota CrB show systematic differences of order 4 percent with an uncertainty of order 3 percent and an rms scatter of 2.0 to 3.7 mÅ about the mean equivalent-width differences.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
The potential for meta-analysis to support decision analysis in ecology.
Mengersen, Kerrie; MacNeil, M Aaron; Caley, M Julian
2015-06-01
Meta-analysis and decision analysis are underpinned by well-developed methods that are commonly applied to a variety of problems and disciplines. While these two fields have been closely linked in some disciplines such as medicine, comparatively little attention has been paid to the potential benefits of linking them in ecology, despite reasonable expectations that benefits would be derived from doing so. Meta-analysis combines information from multiple studies to provide more accurate parameter estimates and to reduce the uncertainty surrounding them. Decision analysis involves selecting among alternative choices using statistical information that helps to shed light on the uncertainties involved. By linking meta-analysis to decision analysis, improved decisions can be made, with quantification of the costs and benefits of alternate decisions supported by a greater density of information. Here, we briefly review concepts of both meta-analysis and decision analysis, illustrating the natural linkage between them and the benefits from explicitly linking one to the other. We discuss some examples in which this linkage has been exploited in the medical arena and how improvements in precision and reduction of structural uncertainty inherent in a meta-analysis can provide substantive improvements to decision analysis outcomes by reducing uncertainty in expected loss and maximising information from across studies. We then argue that these significant benefits could be translated to ecology, in particular to the problem of making optimal ecological decisions in the face of uncertainty. Copyright © 2013 John Wiley & Sons, Ltd.
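A minimal sketch of the linkage (all effects, variances, and losses are invented): an inverse-variance pooled estimate and its standard error from the meta-analysis feed directly into an expected-loss comparison between two actions:

    import numpy as np

    effect = np.array([0.12, 0.25, 0.18])        # hypothetical study effects
    var = np.array([0.004, 0.009, 0.006])
    w = 1.0 / var                                # inverse-variance weights
    mu, se = np.sum(w * effect) / w.sum(), np.sqrt(1.0 / w.sum())

    rng = np.random.default_rng(5)
    theta = rng.normal(mu, se, 100_000)          # pooled-effect uncertainty
    loss_act = 100 - 400 * theta                 # toy loss if we intervene
    loss_wait = np.zeros_like(theta)             # toy loss if we do nothing
    print("act" if loss_act.mean() < loss_wait.mean() else "wait")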
A multi-model assessment of terrestrial biosphere model data needs
NASA Astrophysics Data System (ADS)
Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.
2017-12-01
Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using the inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
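For reference, the exact solution used for the comparison is plane Poiseuille flow, u(y) = (dp/dx)(y² − h²)/(2μ) between stationary plates at y = ±h; a short sketch with hypothetical values:

    import numpy as np

    mu, dpdx, h = 1.0e-3, -10.0, 0.01       # Pa*s, Pa/m, half-gap (m)
    y = np.linspace(-h, h, 11)
    u = dpdx * (y**2 - h**2) / (2 * mu)     # exact laminar profile

    print(u.round(4))                       # parabolic, zero at the walls
    print(-dpdx * h**2 / (2 * mu))          # centerline maximum velocity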
Removal of Asperger's syndrome from the DSM V: community response to uncertainty.
Parsloe, Sarah M; Babrow, Austin S
2016-01-01
The May 2013 release of the new version of the Diagnostic and Statistical Manual of Mental Disorders (DSM V) subsumed Asperger's syndrome under the wider diagnostic label of autism spectrum disorder (ASD). The revision has created much uncertainty in the community affected by this condition. This study uses problematic integration theory and thematic analysis to investigate how participants in Wrong Planet, a large online community associated with autism and Asperger's syndrome, have constructed these uncertainties. The analysis illuminates uncertainties concerning both the likelihood of diagnosis and value of diagnosis, and it details specific issues within these two general areas of uncertainty. The article concludes with both conceptual and practical implications.
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless, high concentrations are still measured, mainly in specific regions and cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management to complement existing EU-wide policies. The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to support regional and local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose: both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module, predicting air quality improvement linked to emission reduction scenarios, was evaluated by means of (1) uncertainty analysis (UA) to quantify the uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential sources of this uncertainty. The results of this study provide relevant information about the key variables driving the uncertainty of the SHERPA output, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
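A minimal sketch of the UA/SA pairing on a toy response (inputs, coefficients, and noise are invented): Monte Carlo sampling quantifies the output spread, and standardized regression coefficients rank the input contributions:

    import numpy as np

    rng = np.random.default_rng(11)
    N = 5000
    X = rng.normal(size=(N, 3))                  # three uncertain inputs
    Y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, N)

    print("output std (UA):", round(float(Y.std()), 2))
    beta, *_ = np.linalg.lstsq(X, Y - Y.mean(), rcond=None)
    src = beta * X.std(axis=0) / Y.std()         # standardized reg. coefficients
    print("input ranking (SA):", np.argsort(-np.abs(src)))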
Proton and neutron electromagnetic form factors and uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Zhihong; Arrington, John; Hill, Richard J.
2017-12-06
We determine the nucleon electromagnetic form factors and their uncertainties from world electron scattering data. The analysis incorporates two-photon exchange corrections, constraints on the low-Q² and high-Q² behavior, and additional uncertainties to account for tensions between different data sets and uncertainties in radiative corrections.