Pretest uncertainty analysis for chemical rocket engine tests
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1987-01-01
A parametric pretest uncertainty analysis has been performed for a chemical rocket engine test at a unique 1000:1 area ratio altitude test facility. Results from the parametric study provide the error limits required in order to maintain a maximum uncertainty of 1 percent on specific impulse. Equations used in the uncertainty analysis are presented.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
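A minimal sketch of the two-loop structure described above, using hypothetical vital rates and sampling errors rather than the piping plover estimates: parametric uncertainty is drawn once per replicate in the outer (replication) loop, while temporal variance is redrawn every year in the inner (time-step) loop.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical point estimates, sampling SEs (parametric uncertainty), and temporal SD
SURV_MEAN, SURV_SE = 0.75, 0.05      # adult survival
FEC_MEAN, FEC_SE = 0.60, 0.10        # female fledglings per female
TEMPORAL_SD = 0.05                   # year-to-year environmental variation
N0, YEARS, REPLICATES, QUASI_EXT = 100, 50, 5000, 10

extinct = 0
for _ in range(REPLICATES):                              # replication loop: draw parameters once
    surv_r = np.clip(rng.normal(SURV_MEAN, SURV_SE), 0, 1)
    fec_r = max(rng.normal(FEC_MEAN, FEC_SE), 0)
    n = N0
    for _ in range(YEARS):                               # time-step loop: temporal variance only
        surv_t = np.clip(rng.normal(surv_r, TEMPORAL_SD), 0, 1)
        fec_t = max(rng.normal(fec_r, TEMPORAL_SD), 0)
        n = rng.binomial(n, surv_t) + rng.poisson(n * fec_t * surv_t)
        if n < QUASI_EXT:
            extinct += 1
            break

print("quasi-extinction probability:", extinct / REPLICATES)
```

Setting SURV_SE and FEC_SE to zero recovers a projection that ignores parametric uncertainty, so the difference between the two runs isolates its contribution to the estimated extinction risk.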
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. Specifically, we propose a deterministic computational interpolation scheme that identifies the most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
Parametric robust control and system identification: Unified approach
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1994-01-01
Despite significant advancement in the area of robust parametric control, the problem of synthesizing such a controller is still a wide open problem. Thus, we attempt to give a solution to this important problem. Our approach captures the parametric uncertainty as an H∞ unstructured uncertainty so that H∞ synthesis techniques are applicable. Although the techniques cannot cope with the exact parametric uncertainty, they give a reasonable guideline to model the unstructured uncertainty that contains the parametric uncertainty. An additional loop shaping technique is also introduced to relax its conservatism.
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
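A minimal sketch of the first (bootstrap) approach, using synthetic Weibull time-to-event data in place of the patient-level data analyzed in the study: each resample is refit, so the resulting set of (shape, scale) pairs carries both the correlation between the parameters and the sample-size dependence of their uncertainty.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(1)
times = stats.weibull_min.rvs(c=1.4, scale=20.0, size=100, random_state=rng)  # synthetic data

B = 1000
params = np.empty((B, 2))                                # (shape, scale) per bootstrap replicate
for b in range(B):
    resample = rng.choice(times, size=times.size, replace=True)
    c_hat, _, scale_hat = stats.weibull_min.fit(resample, floc=0)  # refit each resample
    params[b] = c_hat, scale_hat

# Each (shape, scale) pair would be fed into the patient-level model; here we just
# summarize the induced uncertainty in the mean time-to-event, scale * Gamma(1 + 1/shape).
mean_tte = params[:, 1] * gamma(1.0 + 1.0 / params[:, 0])
print("95% interval for mean time-to-event:", np.percentile(mean_tte, [2.5, 97.5]))
```

The alternative approach would instead sample (shape, scale) from a multivariate normal centered on the maximum-likelihood estimate; as the abstract notes, that is cheaper but can yield infeasible parameter values when the sample is small.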
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by the first-order Taylor series, while SFIPFE/SEA improves the accuracy by considering the second-order terms of the Taylor series, with all the mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?
NASA Astrophysics Data System (ADS)
Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.
2013-12-01
From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appears to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate impacts of data on uncertainty quantification and reduction using an example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include fractions of the total surface site density of the two sites and surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that it is because the experiments cannot distinguish the relative adsorption affinity of the strong and weak sites on uranium adsorption. Therefore, the experiments with high initial concentration of U(VI) are needed, because in the experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov Chain Monte Carlo simulation using the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
Parametric uncertainties in global model simulations of black carbon column mass concentration
NASA Astrophysics Data System (ADS)
Pearce, Hana; Lee, Lindsay; Reddington, Carly; Carslaw, Ken; Mann, Graham
2016-04-01
Previous studies have deduced that the annual mean direct radiative forcing from black carbon (BC) aerosol may regionally be up to 5 W m-2 larger than expected due to underestimation of global atmospheric BC absorption in models. We have identified the magnitude and important sources of parametric uncertainty in simulations of BC column mass concentration from a global aerosol microphysics model (GLOMAP-Mode). A variance-based uncertainty analysis of 28 parameters has been performed, based on statistical emulators trained on model output from GLOMAP-Mode. This is the largest number of uncertain model parameters to be considered in a BC uncertainty analysis to date and covers primary aerosol emissions, microphysical processes and structural parameters related to the aerosol size distribution. We will present several recommendations for further research to improve the fidelity of simulated BC. In brief, we find that the standard deviation around the simulated mean annual BC column mass concentration varies globally between 2.5 x 10-9 g cm-2 in remote marine regions and 1.25 x 10-6 g cm-2 near emission sources due to parameter uncertainty. Between 60 and 90% of the variance over source regions is due to uncertainty associated with primary BC emission fluxes, including biomass burning, fossil fuel and biofuel emissions. While the contributions to BC column uncertainty from microphysical processes, for example those related to dry and wet deposition, are increased over remote regions, we find that emissions still make an important contribution in these areas. It is likely, however, that the importance of structural model error, i.e. differences between models, is greater than parametric uncertainty. We have extended our analysis to emulate vertical BC profiles at several locations in the mid-Pacific Ocean and identify the parameters contributing to uncertainty in the vertical distribution of black carbon at these locations. We will present preliminary comparisons of emulated BC vertical profiles from the AeroCom multi-model ensemble and HIAPER Pole-to-Pole Observations (HIPPO).
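The analysis above rests on statistical emulators trained on a limited number of model runs. A minimal sketch of that building block, using a cheap toy function in place of GLOMAP-Mode output and scikit-learn's Gaussian process regressor as the emulator:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def model(x):                                  # stand-in for BC column mass in one grid box
    return x[:, 0] * np.exp(x[:, 1]) + 0.3 * x[:, 2] ** 2

X_train = rng.uniform(0, 1, size=(80, 3))      # e.g. scaled emission/deposition parameters
emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
emulator.fit(X_train, model(X_train))

# Dense Monte Carlo on the cheap emulator instead of the expensive aerosol model
X_mc = rng.uniform(0, 1, size=(100_000, 3))
y_mc = emulator.predict(X_mc)
print("emulated output variance:", y_mc.var())

# Crude first-order contribution of parameter 0: variance of the conditional mean E[Y | x0]
bins = np.digitize(X_mc[:, 0], np.linspace(0, 1, 21))
cond_means = np.array([y_mc[bins == k].mean() for k in range(1, 21)])
print("approximate first-order index, parameter 0:", cond_means.var() / y_mc.var())
```

The names and ranges of the three inputs are placeholders; in the study the design covers 28 emission, microphysical and size-distribution parameters, and the variance decomposition is computed on the trained emulators.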
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with the proposed estimators; thus, the computational cost is independent of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
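A brute-force illustration of the variance ratio idea on a simple analytic model (not the paper's single-sample unbiased estimators): the output variance is recomputed after shrinking one input's variance, and the ratio shows how much output variance that intervention would remove.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
base_sd = np.array([1.0, 0.5, 1.5])              # input standard deviations

def g(x):                                        # hypothetical nonlinear model
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

def output_variance(sd):
    return g(rng.normal(0.0, sd, size=(N, 3))).var()

v_base = output_variance(base_sd)
for i in range(3):
    sd = base_sd.copy()
    sd[i] *= 0.5                                 # halve the standard deviation of input i
    print(f"halving sd of x{i + 1}: variance ratio = {output_variance(sd) / v_base:.3f}")
```

The estimators derived in the article reach the same quantity by reweighting a single fixed sample set, so the cost of scanning many candidate variance reductions does not grow with the input dimension.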
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
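A small numerical illustration of the effect described above, under simple stand-in assumptions: an energy detector whose received power is exponentially distributed, with lognormal parametric uncertainty in the mean signal and noise powers. Averaging the ROC over parameter draws thickens the tails relative to the fixed-parameter curve, which matters most at low false-alarm probabilities.

```python
import numpy as np

rng = np.random.default_rng(3)
thresholds = np.logspace(-2, 2, 400)

def roc(noise_power, signal_power):
    """P_fa and P_d for an exponentially distributed power detection statistic."""
    p_fa = np.exp(-thresholds / noise_power)
    p_d = np.exp(-thresholds / (noise_power + signal_power))
    return p_fa, p_d

pfa0, pd0 = roc(1.0, 10.0)                      # fixed-parameter ROC

M = 2000                                        # parametric uncertainty: lognormal mean powers
noise = rng.lognormal(mean=0.0, sigma=0.5, size=M)
signal = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=M)
curves = np.array([roc(n, s) for n, s in zip(noise, signal)])   # shape (M, 2, len(thresholds))
pfa_u, pd_u = curves.mean(axis=0)

i0 = np.argmin(np.abs(pfa0 - 1e-3))             # compare at a low false-alarm operating point
iu = np.argmin(np.abs(pfa_u - 1e-3))
print("P_d at P_fa ~ 1e-3, fixed parameters:    ", round(pd0[i0], 3))
print("P_d at P_fa ~ 1e-3, uncertain parameters:", round(pd_u[iu], 3))
```

Increasing M here mimics collecting more independent environmental samples; with only a handful of draws the averaged curve is itself a noisy estimate, which is the sampling issue the paper quantifies.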
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
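A one-parameter sketch of stochastic collocation versus Monte Carlo, assuming a single Gaussian uncertain input (say, a scaled contrast-agent concentration) and a deliberately nonlinear response: a handful of Gauss-Hermite collocation points reproduces the moments that Monte Carlo needs many samples to pin down.

```python
import numpy as np

def response(c):                                 # stand-in nonlinear dose-response map
    return np.exp(1.5 * c) / (1.0 + np.exp(1.5 * c))

mu, sd = 0.0, 0.4                                # uncertain input ~ N(mu, sd**2)

# Stochastic collocation: 7-point Gauss-Hermite quadrature (physicists' convention)
nodes, weights = np.polynomial.hermite.hermgauss(7)
x = mu + np.sqrt(2.0) * sd * nodes
w = weights / np.sqrt(np.pi)
sc_mean = np.sum(w * response(x))
sc_var = np.sum(w * response(x) ** 2) - sc_mean ** 2

# Monte Carlo reference
rng = np.random.default_rng(8)
samples = response(rng.normal(mu, sd, 1_000_000))
print(f"mean: SC {sc_mean:.5f}  vs  MC {samples.mean():.5f}")
print(f"var:  SC {sc_var:.6f}  vs  MC {samples.var():.6f}")
```

With more than one uncertain parameter the collocation grid is a tensor (or sparse) product of such one-dimensional rules, which is why the method's relative advantage over Monte Carlo shrinks as the number of parameters grows from one to three.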
Non-Parametric Collision Probability for Low-Velocity Encounters
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2007-01-01
An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
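A minimal Monte Carlo sketch of the point about de-correlation, using assumed toy dynamics: the relative state is propagated with and without unmodeled process noise, and the fraction of trajectories that ever pass inside the combined hard-body radius during the encounter window is counted.

```python
import numpy as np

rng = np.random.default_rng(11)
N, STEPS, DT = 20_000, 200, 1.0                  # trials, time steps, step size [s]
HARD_BODY = 20.0                                 # combined hard-body radius [m]

def collision_probability(accel_noise_sd):
    # initial relative state: slow fly-by with uncertain position and velocity
    pos = rng.normal([500.0, 0.0, 0.0], 200.0, size=(N, 3))
    vel = rng.normal([-5.0, 0.0, 0.0], 0.05, size=(N, 3))
    hit = np.zeros(N, dtype=bool)
    for _ in range(STEPS):
        vel += rng.normal(0.0, accel_noise_sd, size=(N, 3)) * DT   # unmodeled dynamics
        pos += vel * DT
        hit |= np.linalg.norm(pos, axis=1) < HARD_BODY
    return hit.mean()

print("Pc, perfectly correlated errors (no process noise):", collision_probability(0.0))
print("Pc, with process noise:", collision_probability(1e-3))
```

All of the numbers above are placeholders; the point is only that the same trial-counting framework accepts whatever process-noise model is believed to describe the mis-modeled dynamics, which is the paper's second suggestion.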
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
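A minimal sketch of the censored-data step, under an assumed lognormal inter-unit variability model: nondetects contribute the CDF at their detection limits to the likelihood, and an empirical bootstrap over the mixed detect/nondetect sample propagates the uncertainty to the mean emission factor.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)

# Hypothetical emission-factor data: detected values and detection limits of nondetects
detects = np.array([0.8, 1.2, 2.5, 0.6, 3.1, 1.7, 0.9, 4.2])
det_limits = np.array([0.5, 0.5, 0.7])

def neg_loglik(theta, x, dl):
    mu, sigma = theta[0], abs(theta[1])
    ll = stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu)).sum()
    ll += stats.lognorm.logcdf(dl, s=sigma, scale=np.exp(mu)).sum()   # censored contributions
    return -ll

def fit_mean(x, dl):
    res = optimize.minimize(neg_loglik, x0=[0.0, 1.0], args=(x, dl), method="Nelder-Mead")
    mu, sigma = res.x[0], abs(res.x[1])
    return np.exp(mu + 0.5 * sigma ** 2)           # mean of the fitted lognormal

B = 2000
boot_means = np.array([fit_mean(rng.choice(detects, detects.size, replace=True),
                                rng.choice(det_limits, det_limits.size, replace=True))
                       for _ in range(B)])

print("mean emission factor:", round(fit_mean(detects, det_limits), 3))
print("95% uncertainty range:", np.percentile(boot_means, [2.5, 97.5]).round(3))
```

Multiplying such emission-factor distributions by the corresponding activity-factor distributions, category by category, gives the probabilistic inventory; the sensitivity analysis then asks which categories drive the inventory-level uncertainty ranges quoted above.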
Robust stability of fractional order polynomials with complicated uncertainty structure
Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryukhin, V. V., E-mail: bryuhin@yandex.ru; Kurakin, K. Yu.; Uvakin, M. A.
The article covers the uncertainty analysis of the physical calculations of the VVER reactor core for different meshes of the reference values of the feedback parameters (FBP). Various numbers of nodes of the parametric axes of FBPs and different ranges between them are investigated. The uncertainties of the dynamic calculations are analyzed using RTS RCCA ejection as an example within the framework of the model with the boundary conditions at the core inlet and outlet.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate social and economic damage coming from water-related disasters such as flood and drought. Sequential data assimilation (DA) may facilitate improved streamflow prediction using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. However, if parametric uncertainty related with routing and runoff components is not incorporated properly, predictive uncertainty by model ensemble may be insufficient to capture dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using a same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) could effectively represent and control uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate impacts of streamflow data assimilation over European river basins. Especially, a multi-parametric ensemble approach is tested to consider the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach could be more stable with limited ensemble members and have potential for operational uses. To consider the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will be focused on gains and limitations of streamflow data assimilation and multi-parametric ensemble method over large-scale basins.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
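A simplified sketch of why the likelihood choice matters, on synthetic residuals: the formal generalized likelihood whitens lag-1 autocorrelated, heteroscedastic errors before evaluating their density (shown here only in its Gaussian special case, not the full skew-exponential-power form of Schoups and Vrugt), whereas the standard likelihood treats the raw residuals as independent and homoscedastic.

```python
import numpy as np
from scipy import stats

def iid_gaussian_loglik(obs, sim, sigma):
    return stats.norm.logpdf(obs - sim, scale=sigma).sum()

def generalized_loglik(obs, sim, sigma0, sigma1, phi):
    """Lag-1 autocorrelated, heteroscedastic errors; Gaussian special case of the SEP model."""
    resid = obs - sim
    eta = resid.copy()
    eta[1:] = resid[1:] - phi * resid[:-1]          # remove AR(1) autocorrelation
    sigma_t = sigma0 + sigma1 * np.abs(sim)         # error sd grows with the simulated value
    return stats.norm.logpdf(eta, scale=sigma_t).sum()

# Synthetic check with errors that really are AR(1) and heteroscedastic
rng = np.random.default_rng(9)
sim = 10 + 5 * np.sin(np.linspace(0, 20, 500))
e = np.zeros(sim.size)
for t in range(1, sim.size):
    e[t] = 0.7 * e[t - 1] + rng.normal(0, 0.1 + 0.05 * abs(sim[t]))
obs = sim + e

print("i.i.d. Gaussian log-likelihood:", round(iid_gaussian_loglik(obs, sim, e.std()), 1))
print("generalized log-likelihood:    ", round(generalized_loglik(obs, sim, 0.1, 0.05, 0.7), 1))
```

In the study the error-model parameters are inferred jointly with the surface complexation parameters by DREAM(ZS); the sketch only shows how the two likelihoods score the same residual series differently.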
Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak
2011-01-01
This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T >13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations.
Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn require knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
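A small Monte Carlo check of the kind the article describes, using a hypothetical model whose amplitude is written as e^alpha so that a linear problem becomes nonlinear: the asymptotic SE reported by scipy's nonlinear least squares is compared with the spread of the estimator over many simulated datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def model(x, alpha, b):                     # nonlinear in alpha: amplitude written as e**alpha
    return np.exp(alpha) + b * x

x = np.linspace(0, 5, 30)
alpha_true, b_true, noise_sd = 1.0, 0.5, 0.2

# Single-fit parametric SE from the covariance matrix
y = model(x, alpha_true, b_true) + rng.normal(0, noise_sd, x.size)
popt, pcov = curve_fit(model, x, y, p0=[0.5, 0.0])
se_alpha = np.sqrt(pcov[0, 0])

# Monte Carlo distribution of the same estimator
alphas = np.empty(2000)
for i in range(alphas.size):
    yi = model(x, alpha_true, b_true) + rng.normal(0, noise_sd, x.size)
    alphas[i] = curve_fit(model, x, yi, p0=[0.5, 0.0])[0][0]
print(f"parametric SE: {se_alpha:.4f}   Monte Carlo SD: {alphas.std():.4f}")
```

Because the relative SE of alpha here is well under 10%, the two numbers agree closely, consistent with the article's proviso; inflating noise_sd, or neglecting weights when the data are heteroscedastic, is what drives them apart.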
NASA Astrophysics Data System (ADS)
Yu, Miao; Huang, Deqing; Yang, Wanqiu
2018-06-01
In this paper, we address the problem of unknown periodicity for a class of discrete-time nonlinear parametric systems without assuming any growth conditions on the nonlinearities. The unknown periodicity is hidden in the parametric uncertainties and is difficult to estimate with existing techniques. By incorporating a logic-based switching mechanism, we identify the period and bound of the unknown parameter simultaneously. Lyapunov-based analysis is given to demonstrate that a finite number of switchings can guarantee asymptotic tracking for the nonlinear parametric systems. The simulation result also shows the efficacy of the proposed switching periodic adaptive control approach.
A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty
NASA Astrophysics Data System (ADS)
Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl
2012-05-01
The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
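A minimal non-intrusive polynomial chaos sketch with a single standard-normal uncertain parameter (say, a scaled terrain coefficient) and a scalar response: Hermite coefficients are obtained by regression on a modest number of model runs, and the expansion's mean and variance are checked against plain Monte Carlo.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(4)

def response(xi):                              # stand-in for an expensive vehicle-dynamics output
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

DEG = 4
xi_train = rng.standard_normal(50)                      # 50 "model runs"
V = He.hermevander(xi_train, DEG)                       # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, response(xi_train), rcond=None)

# Moments from the expansion: E[He_j(X) He_k(X)] = k! delta_jk for X ~ N(0, 1)
pce_mean = coef[0]
pce_var = sum(coef[k] ** 2 * factorial(k) for k in range(1, DEG + 1))

xi_mc = rng.standard_normal(1_000_000)                  # brute-force reference
print("mean:    ", pce_mean, "vs MC", response(xi_mc).mean())
print("variance:", pce_var, "vs MC", response(xi_mc).var())
```

The fifty training runs stand in for the simulator evaluations; the efficiency claim in the abstract comes from the fact that once the coefficients are known, statistics of the response follow algebraically rather than from repeated simulation.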
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
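The two-stage workflow above is tied to CREST, LH-OAT and MARS surrogates, but the variance-based step can be sketched on a cheap stand-in function with the SALib package (the call names below follow SALib's documented interface, though they should be treated as an assumption rather than part of the study):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["k_soil", "k_channel", "impervious_frac"],   # hypothetical CREST-like parameters
    "bounds": [[0.1, 1.0], [0.5, 3.0], [0.0, 0.5]],
}

X = saltelli.sample(problem, 1024)                  # Saltelli design for Sobol' estimation

def cheap_model(x):                                 # stand-in for the surrogate of a skill score
    return x[:, 0] * np.exp(-x[:, 1]) + 2.0 * x[:, 2] ** 2

Si = sobol.analyze(problem, cheap_model(X))
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:16s}  first-order {s1:5.2f}   total-order {st:5.2f}")
```

In the study the expensive streamflow model is replaced by an adaptively refitted MARS surrogate before this step, and parameters whose indices are negligible are frozen, which is how the calibration dimension drops from twelve to seven.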
Quantifying parametric uncertainty in the Rothermel model
S. Goodrick
2008-01-01
The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model (implemented in software such as fire spread models in the United States. This model consists of a non-linear system of equations that relates environmental variables (input parameter groups...
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
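One plausible numerical reading of the process sensitivity index, with toy stand-ins for the recharge and geology process models: variance attributable to a process is accumulated over both its alternative models and their random parameters, then divided by the total output variance.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200_000
precip = 800.0

# Process "recharge": two alternative models (precipitation -> recharge) with a random parameter
rm = rng.integers(0, 2, N)                                 # which recharge model is drawn
a = rng.uniform(0.1, 0.4, N)
recharge = np.where(rm == 0, a * precip, precip * (1.0 - np.exp(-a)))

# Process "geology": two alternative hydraulic-conductivity parameterizations
gm = rng.integers(0, 2, N)
k = rng.uniform(0.5, 1.5, N)
conductivity = np.where(gm == 0, 10.0 ** k, 50.0 + 40.0 * k)

y = recharge / conductivity                                # toy model output
total_var = y.var()

# Process sensitivity of recharge: variance, over the recharge process (models and parameters
# together), of the conditional mean over everything else. Since recharge enters the output
# only through its flux value, conditioning on binned flux values gives a cheap estimate.
edges = np.quantile(recharge, np.linspace(0, 1, 41))
idx = np.clip(np.digitize(recharge, edges), 1, 40)
cond_mean = np.array([y[idx == g].mean() for g in range(1, 41)])
print("process sensitivity index, recharge:", round(cond_mean.var() / total_var, 3))
```

The binned-conditional-mean estimator and the equal prior model weights are simplifications for illustration; the paper derives the index within a formal model-averaging framework rather than by brute-force conditioning.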
NASA Astrophysics Data System (ADS)
Sun, Yun-Hsiang; Sun, Yuming; Wu, Christine Qiong; Sepehri, Nariman
2018-04-01
Parameters of a friction model identified for a specific control system development are not constant. They vary over time and have a significant effect on the control system stability. Although much research has been devoted to the stability analysis under parametric uncertainty, less attention has been paid to incorporating a realistic friction model into the analysis. After reviewing the common friction models for controller design, a modified LuGre friction model is selected to carry out the stability analysis in this study. Two parameters of the LuGre model, namely σ0 and σ1, are critical to the demonstration of dynamic friction features, yet their identification is difficult to carry out, resulting in a high level of uncertainty in their values. Aiming at uncovering the effect of the σ0 and σ1 variations on the control system stability, a servomechanism with a modified LuGre friction model is investigated. Two set-point position controllers are synthesised based on the servomechanism model to form two case studies. Through Lyapunov exponents, it is clear that the variation of σ0 and σ1 has an obvious effect on the stability of the studied systems and should not be overlooked in the design phase.
Uncertainties related to the representation of momentum transport in shallow convection
NASA Astrophysics Data System (ADS)
Schlemmer, Linda; Bechtold, Peter; Sandu, Irina; Ahlgrimm, Maike
2017-04-01
The vertical transport of horizontal momentum by convection has an important impact on the general circulation of the atmosphere as well as on the life cycle and track of cyclones. So far convective momentum transport (CMT) has mostly been studied for deep convection, whereas little is known about its characteristics and importance in shallow convection. In this study CMT by shallow convection is investigated by analyzing both data from large-eddy simulations (LES) and simulations performed with the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). In addition, the central terms underlying the bulk mass-flux parametrization of CMT are evaluated offline. Further, the uncertainties related to the representation of CMT are explored by running the stochastically perturbed parametrizations (SPP) approach of the IFS. The analyzed cases exhibit shallow convective clouds developing within considerable low-level wind shear. Analysis of the momentum fluxes in the LES data reveals significant momentum transport by the convection in both cases, which is directed down-gradient despite substantial organization of the cloud field. A detailed inspection of the convection parametrization reveals a very good representation of the entrainment and detrainment rates and an appropriate representation of the convective mass and momentum fluxes. Determining the correct values of mass flux and in-cloud momentum at the cloud base in the parametrization nevertheless remains challenging. The spread in convection-related quantities generated by the SPP is reasonable and addresses many of the identified uncertainties.
NASA Technical Reports Server (NTRS)
Sanchez Pena, Ricardo S.; Sideris, Athanasios
1988-01-01
A computer program implementing an algorithm for computing the multivariable stability margin to check the robust stability of feedback systems with real parametric uncertainty is proposed. The authors present in some detail important aspects of the program. An example is presented using a lateral-directional control system.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM) which made a process-based simulation with a high spatiotemporal resolution applicable on a large spatial scale. Despite of increasing information on heterogeneous property of a catchment, DHM is still subject to uncertainties inherently coming from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is, however, often ignored mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related with routing and runoff components is not incorporated properly, predictive uncertainty by DHM may be insufficient to capture dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using a same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) could effectively represent and control uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control uncertainty of high-dimensional parameters with limited number of ensemble, MPR method is incorporated with DA. Lagged particle filtering is utilized to consider the response times and non-Gaussian characteristics of internal hydrologic processes. The hindcasting experiments are implemented to evaluate impacts of the proposed DA method on streamflow predictions in multiple European river basins having different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach could be stable with limited ensemble members and viable for practical uses.
Boithias, Laurie; Terrado, Marta; Corominas, Lluís; Ziv, Guy; Kumar, Vikas; Marqués, Montse; Schuhmacher, Marta; Acuña, Vicenç
2016-02-01
Ecosystem services provide multiple benefits to human wellbeing and are increasingly considered by policy-makers in environmental management. However, the uncertainty related with the monetary valuation of these benefits is not yet adequately defined or integrated by policy-makers. Given this background, our aim was to quantify different sources of uncertainty when performing monetary valuation of ecosystem services, in order to provide a series of guidelines to reduce them. With an example of 4 ecosystem services (i.e., water provisioning, waste treatment, erosion protection, and habitat for species) provided at the river basin scale, we quantified the uncertainty associated with the following sources: (1) the number of services considered, (2) the number of benefits considered for each service, (3) the valuation metrics (i.e. valuation methods) used to value benefits, and (4) the uncertainty of the parameters included in the valuation metrics. Results indicate that the highest uncertainty was caused by the number of services considered, as well as by the number of benefits considered for each service, whereas the parametric uncertainty was similar to the one related to the selection of valuation metric, thus suggesting that the parametric uncertainty, which is the only uncertainty type commonly considered, was less critical than the structural uncertainty, which is in turn mainly dependent on the decision-making context. Given the uncertainty associated to the valuation structure, special attention should be given to the selection of services, benefits and metrics according to a given context. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pun, Betty Kong-Ling
1998-12-01
Uncertainty is endemic in modeling. This thesis is a two-phase program to understand the uncertainties in urban air pollution model predictions and in the field data used to validate them. Part I demonstrates how to improve atmospheric models by analyzing the uncertainties in these models and using the results to guide new experimentation endeavors. Part II presents an experiment designed to characterize atmospheric fluctuations, which have significant implications for the model validation process. A systematic study was undertaken to investigate the effects of uncertainties in the SAPRC mechanism for gas-phase chemistry in polluted atmospheres. The uncertainties of more than 500 parameters were compiled, including reaction rate constants, product coefficients, organic composition, and initial conditions. Uncertainty propagation using the Deterministic Equivalent Modeling Method (DEMM) revealed that the uncertainties in ozone predictions can be up to 45% based on these parametric uncertainties. The key parameters found to dominate the uncertainties of the predictions include photolysis rates of NO2, O3, and formaldehyde; the rate constant for nitric acid formation; and initial amounts of NOx and VOC. Similar uncertainty analysis procedures applied to two other mechanisms used in regional air quality models led to the conclusion that, in the presence of parametric uncertainties, the mechanisms cannot be discriminated. Research efforts should focus on reducing parametric uncertainties in photolysis rates, reaction rate constants, and source terms. A new tunable diode laser (TDL) infrared spectrometer was designed and constructed to measure multiple pollutants simultaneously in the same ambient air parcels. The sensitivities of the one-hertz measurements were 2 ppb for ozone, 1 ppb for NO, and 0.5 ppb for NO2. Meteorological data were also collected for wind, temperature, and UV intensity. The field data showed clear correlations between ozone, NO, and NO2 on the one-second time scale. Fluctuations in pollutant concentrations were found to be strongly dependent on meteorological conditions. Deposition fluxes calculated using the eddy correlation technique were found to be small on concrete surfaces. These high time-resolution measurements were used to develop an understanding of the variability in atmospheric measurements, which would be useful in determining the acceptable discrepancy between models and observations. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
NASA Astrophysics Data System (ADS)
Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene
2014-11-01
Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.
Multi-parametric variational data assimilation for hydrological forecasting
NASA Astrophysics Data System (ADS)
Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.
2017-12-01
Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and consequently to support better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims to represent meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states, or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes into account the parametric model uncertainty in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each of them with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
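To make the reported skill measure concrete, the sketch below computes the sample-based continuous ranked probability score (CRPS) for ensemble forecasts and the percentage improvement of one forecast system over a reference, in the spirit of the comparison quoted above. The synthetic forecast and observation arrays are illustrative only and do not reproduce the paper's experiments.

```python
import numpy as np

def crps_ensemble(ensemble, obs):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| for one forecast/observation pair."""
    ensemble = np.asarray(ensemble, dtype=float)
    term1 = np.abs(ensemble - obs).mean()
    term2 = np.abs(ensemble[:, None] - ensemble[None, :]).mean()
    return term1 - 0.5 * term2

def crps_improvement(ensemble_fcsts, obs, ref_fcsts):
    """Percentage CRPS improvement of one forecast system relative to a reference."""
    crps_sys = np.mean([crps_ensemble(f, o) for f, o in zip(ensemble_fcsts, obs)])
    crps_ref = np.mean([crps_ensemble(f, o) for f, o in zip(ref_fcsts, obs)])
    return 100.0 * (1.0 - crps_sys / crps_ref)

# Toy usage with synthetic data: a sharper ensemble versus a wider reference ensemble.
rng = np.random.default_rng(2)
obs = rng.normal(10, 2, size=200)
multi_param = obs[:, None] + rng.normal(0, 1.0, size=(200, 25))
reference   = obs[:, None] + rng.normal(0, 1.5, size=(200, 25))
print(f"CRPS improvement: {crps_improvement(multi_param, obs, reference):.1f}%")
```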
Identification and robust control of an experimental servo motor.
Adam, E J; Guestrin, E D
2002-04-01
In this work, the design of a robust controller for an experimental laboratory-scale position control system based on a dc motor drive as well as the corresponding identification and robust stability analysis are presented. In order to carry out the robust design procedure, first, a classic closed-loop identification technique is applied and then, the parametrization by internal model control is used. The model uncertainty is evaluated under both parametric and global representation. For the latter case, an interesting discussion about the conservativeness of this description is presented by means of a comparison between the uncertainty disk and the critical perturbation radius approaches. Finally, conclusions about the performance of the experimental system with the robust controller are discussed using comparative graphics of the controlled variable and the Nyquist stability margin as a robustness measurement.
Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.
Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael
2015-03-03
Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ⁻¹ of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.
2012-01-01
UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for stability and tracking control of flight systems in the presence of parametric uncertainties. This paper offers a design technique of adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the finite-time convergence idea, the recommended control method guarantees that the states of the quad-rotor UAV converge to the origin in finite time. Furthermore, an adaptive tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV online. Finally, simulation results are presented to demonstrate the effectiveness of the offered technique compared with previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Fixed-Order Mixed Norm Designs for Building Vibration Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
2000-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
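The information-criterion branch of the model-averaging scheme described above can be sketched in a few lines: criterion values for the alternative conceptual models are converted to weights, and the predictive variance is split into a within-model (parametric) part and a between-model part. The numbers below are hypothetical and only illustrate how model uncertainty can dominate when the alternative models disagree.

```python
import numpy as np

def ic_weights(ic_values):
    """Information-criterion weights: exp(-0.5 * delta), normalised over models."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_average(pred_means, pred_vars, weights):
    """Model-averaged mean; total variance = within-model + between-model parts."""
    mean = np.sum(weights * pred_means)
    within = np.sum(weights * pred_vars)                  # parametric uncertainty
    between = np.sum(weights * (pred_means - mean) ** 2)  # conceptual (model) uncertainty
    return mean, within, between

# Hypothetical heads (m) predicted by three alternative conceptual models, each with
# a Monte Carlo variance reflecting its own parametric uncertainty.
weights = ic_weights([312.4, 315.1, 320.8])
mean, within, between = model_average(np.array([731.2, 729.5, 734.0]),
                                      np.array([0.8, 1.1, 0.6]), weights)
print(mean, within, between)  # between > within mirrors the paper's finding when models disagree
```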
Robust Control Design for Uncertain Nonlinear Dynamic Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis G.; Andrews, Lindsey; Giesy, Daniel P.
2012-01-01
Robustness to parametric uncertainty is fundamental to successful control system design and as such it has been at the core of many design methods developed over the decades. Despite its prominence, most of the work on robust control design has focused on linear models and uncertainties that are non-probabilistic in nature. Recently, researchers have acknowledged this disparity and have been developing theory to address a broader class of uncertainties. This paper presents an experimental application of robust control design for a hybrid class of probabilistic and non-probabilistic parametric uncertainties. The experimental apparatus is based upon the classic inverted pendulum on a cart. The physical uncertainty is realized by a known additional lumped mass at an unknown location on the pendulum. This unknown location has the effect of substantially altering the nominal frequency and controllability of the nonlinear system, and in the limit has the capability to make the system neutrally stable and uncontrollable. Another uncertainty to be considered is a direct current motor parameter. The control design objective is to design a controller that satisfies stability, tracking error, control power, and transient behavior requirements for the largest range of parametric uncertainties. This paper presents an overview of the theory behind the robust control design methodology and the experimental results.
Quantitative estimation of source complexity in tsunami-source inversion
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.
2016-04-01
This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parameterizations. The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.
Benchmark dose analysis via nonparametric regression modeling
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
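A minimal sketch of the isotonic-regression BMD idea, assuming quantal dose-response data and a 10% extra-risk benchmark, is given below; the data, the dose grid, and the use of a parametric-bootstrap resample of binomial counts are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)

# Hypothetical quantal dose-response data: dose, animals per group, responders.
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n    = np.array([50,  50,  50,  50,  50,  50])
k    = np.array([ 2,   3,   6,  10,  22,  40])

def bmd_isotonic(dose, n, k, bmr=0.10):
    """BMD for extra risk `bmr`, from a monotone nondecreasing (isotonic) fit."""
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    iso.fit(dose, k / n, sample_weight=n)
    risk = iso.predict(dose)
    target = risk[0] + bmr * (1.0 - risk[0])          # background + extra risk
    grid = np.linspace(dose[0], dose[-1], 2001)
    above = grid[iso.predict(grid) >= target]
    return above[0] if above.size else np.nan

bmd = bmd_isotonic(dose, n, k)
# Bootstrap of binomial counts at each dose; a lower percentile serves as a BMDL-type limit.
boot = [bmd_isotonic(dose, n, rng.binomial(n, k / n)) for _ in range(1000)]
bmdl = np.nanpercentile(boot, 5)
print(f"BMD ~ {bmd:.2f}, BMDL (5th percentile) ~ {bmdl:.2f}")
```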
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. Spectral estimation (h = P_uy / P_uu) is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty delta_m = p - p̂ is then estimated by the cross-spectral estimate delta = P_ue / P_uu, where e = y - ŷ is the output error and ŷ = p̂ u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve-fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate delta of the additive uncertainty delta_m are subsequently available for optimization of robust controller performance and stability.
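The two spectral estimates quoted in the abstract, h = P_uy / P_uu for the plant and delta = P_ue / P_uu for the additive uncertainty, can be reproduced directly with Welch-type estimators. The sketch below uses a hypothetical fourth-order "true" plant and a deliberately reduced-order parametric model p̂ in place of the curve-fitted model.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs, N = 100.0, 20000
u = rng.standard_normal(N)                        # stochastic excitation

# "True" plant (hypothetical) and a reduced-order parametric model p_hat.
b_true, a_true = signal.butter(4, 0.2)
b_hat,  a_hat  = signal.butter(2, 0.2)
y = signal.lfilter(b_true, a_true, u) + 0.01 * rng.standard_normal(N)

# Nonparametric plant estimate h = P_uy / P_uu (spectral estimation).
f, Puu = signal.welch(u, fs=fs, nperseg=1024)
_, Puy = signal.csd(u, y, fs=fs, nperseg=1024)
h = Puy / Puu

# Additive uncertainty estimate delta = P_ue / P_uu, with e = y - y_hat and y_hat = p_hat * u.
y_hat = signal.lfilter(b_hat, a_hat, u)
e = y - y_hat
_, Pue = signal.csd(u, e, fs=fs, nperseg=1024)
delta = Pue / Puu                                 # frequency-wise additive error estimate
print(np.abs(delta).max())
```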
NASA Astrophysics Data System (ADS)
Shi, X.; Zhang, G.
2013-12-01
Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models is due not only to the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also to the fact that the GCS optimization estimation problem has multiple local minima arising from the complex nonlinear multi-phase (gas and aqueous), multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model takes about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response surface is first used in a global optimization algorithm to calibrate the model parameters; the prediction uncertainty of the CO2 plume position propagated from the parametric uncertainty is then quantified in the numerical experiments and compared to the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification with computationally expensive simulation models. Both our inverse methodology and findings can be broadly applicable to GCS in heterogeneous storage formations.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability to provide accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty sources can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in grouping uncertainty components. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport process). For test and demonstration purposes, the developed methodology was implemented in a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measures for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
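One way to read the grouped variance-based measure described above is as a first-order index for a group of inputs, Var(E[Y | group]) / Var(Y), estimated by nested Monte Carlo. The sketch below applies this to a hypothetical response that mixes a scenario variable, a discrete model-structure choice, and a two-parameter parametric group; it is not the Bayesian-network implementation of the paper, only an illustration of the grouping idea.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(scenario, structure, params):
    # Hypothetical response combining a scenario variable, a model-structure choice,
    # and two continuous parameters (e.g. a recharge rate and a reaction rate).
    base = {0: 1.0, 1: 1.6}[structure]
    return base * scenario + params[0] ** 2 + 0.5 * np.sin(params[1])

def sample_group(which, size):
    if which == "scenario":
        return rng.uniform(0.5, 1.5, size)
    if which == "structure":
        return rng.integers(0, 2, size)
    return rng.uniform(-1, 1, (size, 2))                  # parametric group

def first_order_index(group, n_outer=300, n_inner=300):
    """S_group = Var( E[Y | group] ) / Var(Y), via nested Monte Carlo."""
    cond_means = []
    for g in sample_group(group, n_outer):
        sc = sample_group("scenario", n_inner)  if group != "scenario"  else np.full(n_inner, g)
        st = sample_group("structure", n_inner) if group != "structure" else np.full(n_inner, g, dtype=int)
        pa = sample_group("params", n_inner)    if group != "params"    else np.tile(g, (n_inner, 1))
        cond_means.append(np.mean([model(s, t, p) for s, t, p in zip(sc, st, pa)]))
    y_all = [model(sample_group("scenario", 1)[0], sample_group("structure", 1)[0],
                   sample_group("params", 1)[0]) for _ in range(n_outer * 4)]
    return np.var(cond_means) / np.var(y_all)

for g in ("scenario", "structure", "params"):
    print(g, round(first_order_index(g), 2))
```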
USDA-ARS?s Scientific Manuscript database
Hydrologic models are used to simulate the responses of agricultural systems to different inputs and management strategies to identify alternative management practices to cope with future climate and/or geophysical changes. The Agricultural Policy/Environmental eXtender (APEX) is a model develope...
Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins
NASA Technical Reports Server (NTRS)
Brenner, Marty; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
Minimum Uncertainty Coherent States Attached to Nondegenerate Parametric Amplifiers
NASA Astrophysics Data System (ADS)
Dehghani, A.; Mojaveri, B.
2015-06-01
Exact analytical solutions for the two-mode nondegenerate parametric amplifier have been obtained by using the transformation from the two-dimensional harmonic oscillator Hamiltonian. Some important physical properties such as quantum statistics and quadrature squeezing of the corresponding states are investigated. In addition, these states carry classical features such as Poissonian statistics and minimize the Heisenberg uncertainty relation of a pair of the coordinate and the momentum operators.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
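A much-simplified view of the parametric-versus-inherent split discussed above is the law-of-total-variance decomposition below, in which an outer Monte Carlo loop samples uncertain kinetic parameters and an inner loop replicates the intrinsic stochasticity of a birth-death process simulated with the Gillespie algorithm. The priors and sample sizes are arbitrary, and the full Sobol' decomposition over parameters and reaction channels used in the paper is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(6)

def ssa_birth_death(k_birth, k_death, x0=0, t_end=10.0):
    """Gillespie simulation of a birth-death process; returns the state at t_end."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        if a0 == 0:
            return x
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() < a1 / a0 else -1

# Outer loop: sample uncertain kinetic parameters; inner loop: replicate inherent stochasticity.
n_outer, n_inner = 100, 50
cond_mean, cond_var = [], []
for _ in range(n_outer):
    k1 = rng.lognormal(np.log(10.0), 0.3)      # uncertain birth rate (hypothetical prior)
    k2 = rng.lognormal(np.log(1.0), 0.3)       # uncertain death rate
    xs = [ssa_birth_death(k1, k2) for _ in range(n_inner)]
    cond_mean.append(np.mean(xs))
    cond_var.append(np.var(xs))

var_parametric = np.var(cond_mean)             # variance explained by kinetic parameters
var_inherent = np.mean(cond_var)               # average intrinsic variability
total = var_parametric + var_inherent
print(f"parametric share: {var_parametric/total:.2f}, inherent share: {var_inherent/total:.2f}")
```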
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence/absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence/absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
Uncertainties in atmospheric muon-neutrino fluxes arising from cosmic-ray primaries
NASA Astrophysics Data System (ADS)
Evans, Justin; Garcia Gamez, Diego; Porzio, Salvatore Davide; Söldner-Rembold, Stefan; Wren, Steven
2017-01-01
We present an updated calculation of the uncertainties on the atmospheric muon-neutrino flux arising from cosmic-ray primaries. For the first time, we include recent measurements of the cosmic-ray primaries collected since 2005. We apply a statistical technique that allows the determination of correlations between the parameters of the Gaisser, Stanev, Honda, and Lipari primary-flux parametrization and the incorporation of these correlations into the uncertainty on the muon-neutrino flux. We obtain an uncertainty related to the primary cosmic rays of around (5-15)%, depending on energy, which is about a factor of 2 smaller than the previously determined uncertainty. The hadron production uncertainty is added in quadrature to obtain the total uncertainty on the neutrino flux, which is reduced by ≈5 % . To take into account an unexpected hardening of the spectrum of primaries above energies of 100 GeV observed in recent measurements, we propose an alternative parametrization and discuss its impact on the neutrino flux uncertainties.
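The role of parameter correlations in the propagated flux uncertainty can be illustrated with a deliberately simplified two-parameter power-law stand-in for the primary-flux parametrization. The central values, uncertainties, and correlation coefficient below are invented for illustration; a strong anti-correlation between normalisation and spectral index is the kind of structure that reduces the propagated uncertainty relative to treating the parameters as independent.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simplified stand-in for a primary-flux parametrization: phi(E) = a * E**(-gamma).
mean = np.array([1.0, 2.7])                  # [a, gamma], illustrative values
sigma = np.array([0.05, 0.03])
rho = -0.8                                   # assumed anti-correlation from a joint fit
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

E = np.logspace(0, 3, 50)                    # GeV
samples = rng.multivariate_normal(mean, cov, size=5000)
flux = samples[:, [0]] * E[None, :] ** (-samples[:, [1]])

central = mean[0] * E ** (-mean[1])
frac_unc = flux.std(axis=0) / central        # energy-dependent fractional flux uncertainty
print(frac_unc[[0, 25, 49]])                 # grows with energy when the index is uncertain
```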
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
Synthesis and Control of Flexible Systems with Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Lim, Kyong B.
2009-01-01
An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.
Direct adaptive robust tracking control for 6 DOF industrial robot with enhanced accuracy.
Yin, Xiuxing; Pan, Li
2018-01-01
A direct adaptive robust tracking control is proposed for trajectory tracking of 6 DOF industrial robot in the presence of parametric uncertainties, external disturbances and uncertain nonlinearities. The controller is designed based on the dynamic characteristics in the working space of the end-effector of the 6 DOF robot. The controller includes robust control term and model compensation term that is developed directly based on the input reference or desired motion trajectory. A projection-type parametric adaptation law is also designed to compensate for parametric estimation errors for the adaptive robust control. The feasibility and effectiveness of the proposed direct adaptive robust control law and the associated projection-type parametric adaptation law have been comparatively evaluated based on two 6 DOF industrial robots. The test results demonstrate that the proposed control can be employed to better maintain the desired trajectory tracking even in the presence of large parametric uncertainties and external disturbances as compared with PD controller and nonlinear controller. The parametric estimates also eventually converge to the real values along with the convergence of tracking errors, which further validate the effectiveness of the proposed parametric adaption law. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.
Constraints on spin-dependent parton distributions at large x from global QCD analysis
Jimenez-Delgado, P.; Avakian, H.; Melnitchouk, W.
2014-09-28
This study investigates the behavior of spin-dependent parton distribution functions (PDFs) at large parton momentum fractions x in the context of global QCD analysis. We explore the constraints from existing deep-inelastic scattering data, and from theoretical expectations for the leading x → 1 behavior based on hard gluon exchange in perturbative QCD. Systematic uncertainties from the dependence of the PDFs on the choice of parametrization are studied by considering functional forms motivated by orbital angular momentum arguments. Finally, we quantify the reduction in the PDF uncertainties that may be expected from future high-x data from Jefferson Lab at 12 GeV.
Study of DNA binding sites using the Rényi parametric entropy measure.
Krishnamachari, A; moy Mandal, Vijnan; Karmeshu
2004-04-07
Shannon's definition of uncertainty or surprisal has been applied extensively to measure the information content of aligned DNA sequences and to characterize DNA binding sites. In contrast to Shannon's uncertainty, this study investigates the applicability and suitability of the parametric uncertainty measure due to Rényi. It is observed that this measure also provides results in agreement with Shannon's measure, pointing to its utility in analysing DNA binding site regions. To facilitate the comparison between these uncertainty measures, a dimensionless quantity called "redundancy" has been employed. It is found that Rényi's measure at low parameter values delineates binding site regions better than Shannon's measure. The critical value of the parameter is chosen with an outlier criterion.
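The comparison between the two measures can be sketched as follows: per-column base frequencies from an aligned set of sites are converted to Shannon and Rényi entropies and then to the dimensionless redundancy R = 1 - H / H_max used above. The toy alignment and the order parameter alpha = 0.3 are illustrative choices only.

```python
import numpy as np
from collections import Counter

def column_probs(column):
    counts = Counter(column)
    total = sum(counts.values())
    return np.array([counts.get(b, 0) / total for b in "ACGT"])

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi(p, alpha):
    # Rényi entropy of order alpha; reduces to Shannon entropy as alpha -> 1.
    if abs(alpha - 1.0) < 1e-9:
        return shannon(p)
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def redundancy(entropy, n_symbols=4):
    """Dimensionless redundancy R = 1 - H / H_max used to compare the two measures."""
    return 1.0 - entropy / np.log2(n_symbols)

# Toy alignment of binding-site sequences (hypothetical); rows are sites, columns are positions.
sites = ["TATAAT", "TATGAT", "TACAAT", "TATACT", "CATAAT"]
for j in range(len(sites[0])):
    p = column_probs([s[j] for s in sites])
    print(j, round(redundancy(shannon(p)), 2), round(redundancy(renyi(p, 0.3)), 2))
```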
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Peter J.; Cheung, Jessica Y.; Chunnilall, Christopher J.
2010-04-10
We present a method for using the Hong-Ou-Mandel (HOM) interference technique to quantify photon indistinguishability within an associated uncertainty. The method allows the relative importance of various experimental factors affecting the HOM visibility to be identified, and enables the actual indistinguishability, with an associated uncertainty, to be estimated from experimentally measured quantities. A measurement equation has been derived that accounts for the non-ideal performance of the interferometer. The origin of each term of the equation is explained, along with procedures for their experimental evaluation and uncertainty estimation. These uncertainties are combined to give an overall uncertainty for the derived photon indistinguishability. The analysis was applied to measurements from an interferometer sourced with photon pairs from a parametric downconversion process. The measured photon indistinguishability was found to be 0.954 ± 0.036 using the prescribed method.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
NASA Astrophysics Data System (ADS)
Scradeanu, D.; Pagnejer, M.
2012-04-01
The purpose of this work is to evaluate the uncertainty of the hydrodynamic model for a multilayered geological structure, a potential trap for carbon dioxide storage. The hydrodynamic model is based on a conceptual model of the multilayered hydrostructure with three components: 1) a spatial model; 2) a parametric model; and 3) an energy model. The data needed to build the three components of the conceptual model are obtained from 240 boreholes explored by geophysical logging and seismic investigation, for the first two components, and from an experimental water injection test for the last one. The hydrodynamic model is a finite-difference numerical model based on a 3D stratigraphic model with nine stratigraphic units (Badenian and Oligocene) and a 3D multiparameter model (porosity, permeability, hydraulic conductivity, storage coefficient, leakage, etc.). The uncertainty of the two 3D models was evaluated using multivariate geostatistical tools: (a) cross-semivariograms for structural analysis, especially the study of anisotropy, and (b) cokriging to reduce estimation variances where there is a cross-correlation between a variable and one or more undersampled variables. Important differences between univariate and bivariate anisotropy were identified. The minimised uncertainty of the parametric model (by cokriging) was transferred to the hydrodynamic model. The uncertainty distribution of the pressures generated by the water injection test has been additionally filtered by the sensitivity of the numerical model. The obtained relative errors of the pressure distribution in the hydrodynamic model are 15-20%. The scientific research was performed within the framework of the European FP7 project "A multiple space and time scale approach for the quantification of deep saline formation for CO2 storage (MUSTANG)".
Strong-lensing analysis of A2744 with MUSE and Hubble Frontier Fields images
NASA Astrophysics Data System (ADS)
Mahler, G.; Richard, J.; Clément, B.; Lagattuta, D.; Schmidt, K.; Patrício, V.; Soucail, G.; Bacon, R.; Pello, R.; Bouwens, R.; Maseda, M.; Martinez, J.; Carollo, M.; Inami, H.; Leclercq, F.; Wisotzki, L.
2018-01-01
We present an analysis of Multi Unit Spectroscopic Explorer (MUSE) observations obtained on the massive Frontier Fields (FFs) cluster A2744. This new data set covers the entire multiply imaged region around the cluster core. The combined catalogue consists of 514 spectroscopic redshifts (with 414 new identifications). We use this redshift information to perform a strong-lensing analysis revising multiple images previously found in the deep FF images, and add three new MUSE-detected multiply imaged systems with no obvious Hubble Space Telescope counterpart. The combined strong-lensing constraints include a total of 60 systems producing 188 images altogether, out of which 29 systems and 83 images are spectroscopically confirmed, making A2744 one of the most well-constrained clusters to date. Thanks to the large amount of spectroscopic redshifts, we model the influence of substructures at larger radii, using a parametrization including two cluster-scale components in the cluster core and several group scale in the outskirts. The resulting model accurately reproduces all the spectroscopic multiple systems, reaching an rms of 0.67 arcsec in the image plane. The large number of MUSE spectroscopic redshifts gives us a robust model, which we estimate reduces the systematic uncertainty on the 2D mass distribution by up to ∼2.5 times the statistical uncertainty in the cluster core. In addition, from a combination of the parametrization and the set of constraints, we estimate the relative systematic uncertainty to be up to 9 per cent at 200 kpc.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
Tutsoy, Onder; Barkana, Duygun Erol; Tugal, Harun
2018-05-01
In this paper, an adaptive controller is developed for discrete-time linear systems that takes into account parametric uncertainty, internal and external non-parametric random uncertainties, and time-varying control signal delay. Additionally, the proposed adaptive control is designed to be entirely model-free. Even though these properties have been studied separately in the literature, they have not been taken into account together in the adaptive control literature. The Q-function is used to estimate the long-term performance of the proposed adaptive controller. The control policy is generated based on the long-term predicted value, and this policy searches for an optimal stabilizing control signal for uncertain and unstable systems. The derived control law does not require an initial stabilizing control assumption, as is required in the recent literature. Learning error, control signal convergence, the minimized Q-function, and instantaneous reward are analyzed to demonstrate the stability and effectiveness of the proposed adaptive controller in a simulation environment. Finally, key insights into the convergence of the learning and control signals are provided. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
A Statistician's View of Upcoming Grand Challenges
NASA Astrophysics Data System (ADS)
Meng, Xiao Li
2010-01-01
In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input, due to uncertainties of any kind, will map to changes in the output. All of these connect to challenges in data complexity and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. There are also cautionary tales of running automated analyses on real data, where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
Trends and associated uncertainty in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, A. N.; Moyer, E. J.; Stein, M.
2016-12-01
Physical models suggest that the Earth's mean temperature warms in response to changing CO2 concentrations (and hence increased radiative forcing); given physical uncertainties in this relationship, the historical temperature record is a source of empirical information about global warming. A persistent thread in many analyses of the historical temperature record, however, is the reliance on methods that appear to deemphasize both physical and statistical assumptions. Examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for natural variability in nonparametric rather than parametric ways. We show here that methods that deemphasize assumptions can limit the scope of analysis and can lead to misleading inferences, particularly in the setting considered where the data record is relatively short and the scale of temporal correlation is relatively long. A proposed model that is simple but physically informed provides a more reliable estimate of trends and allows a broader array of questions to be addressed. In accounting for uncertainty, we also illustrate how parametric statistical models that are attuned to the important characteristics of natural variability can be more reliable than ostensibly more flexible approaches.
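A minimal version of the parametric alternative described above regresses temperature on radiative forcing rather than on time and adjusts the slope's standard error for AR(1) residual autocorrelation via an effective-sample-size correction. The synthetic forcing series, sensitivity, and noise parameters below are invented for illustration, and the correction shown is a rough substitute for a full generalized-least-squares fit.

```python
import numpy as np

def ols_slope_ar1(x, y):
    """OLS slope with a standard error inflated for AR(1) residual autocorrelation
    using the effective-sample-size correction n_eff = n * (1 - rho) / (1 + rho)."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n = len(y)
    n_eff = n * (1 - rho) / (1 + rho)
    s2 = resid @ resid / (n - 2)
    se_naive = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    se_adj = se_naive * np.sqrt((n - 2) / max(n_eff - 2, 1.0))
    return beta[1], se_naive, se_adj

# Synthetic example: temperature responds to radiative forcing, with AR(1) natural variability.
rng = np.random.default_rng(8)
years = np.arange(1880, 2017)
forcing = 2.5 * ((years - years[0]) / (years[-1] - years[0])) ** 1.5   # idealised forcing (W m^-2)
noise = np.zeros(len(years))
for t in range(1, len(years)):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.08)
temp = 0.4 * forcing + noise                                           # assumed 0.4 K per W m^-2

slope, se_naive, se_adj = ols_slope_ar1(forcing, temp)
print(f"slope = {slope:.2f} K per W m^-2, naive SE = {se_naive:.3f}, AR(1)-adjusted SE = {se_adj:.3f}")
```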
NASA Astrophysics Data System (ADS)
Avital, Matan; Kamai, Ronnie; Davis, Michael; Dor, Ory
2018-02-01
We present a full probabilistic seismic hazard analysis (PSHA) sensitivity analysis for two sites in southern Israel - one in the near field of a major fault system and one farther away. The PSHA analysis is conducted for alternative source representations, using alternative model parameters for the main seismic sources, such as slip rate and Mmax, among others. The analysis also considers the effect of the ground motion prediction equation (GMPE) on the hazard results. In this way, the two types of epistemic uncertainty - modelling uncertainty and parametric uncertainty - are treated and addressed. We quantify the uncertainty propagation by testing its influence on the final calculated hazard, such that the controlling knowledge gaps are identified and can be treated in future studies. We find that current practice in Israel, as represented by the current version of the building code, grossly underestimates the hazard, by approximately 40 % in short return periods (e.g. 10 % in 50 years) and by as much as 150 % in long return periods (e.g. 10E-5). The analysis shows that this underestimation is most probably due to a combination of factors, including source definitions as well as the GMPE used for analysis.
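The kind of logic-tree sensitivity exercise described above can be sketched by combining hazard curves from alternative branches (source-model and GMPE choices) with epistemic weights and reading off the ground motion at a target annual exceedance probability. The curves, weights, and the 2475-year return period below are hypothetical numbers, not the study's results.

```python
import numpy as np

# Hypothetical hazard curves: annual probability of exceeding each PGA level (g),
# one curve per logic-tree branch (source model x GMPE), with branch weights.
pga = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.75, 1.0])
curves = np.array([
    [2e-2, 8e-3, 2e-3, 8e-4, 2e-4, 6e-5, 2e-5],          # branch A: low slip rate, GMPE 1
    [3e-2, 1.2e-2, 4e-3, 1.6e-3, 5e-4, 1.5e-4, 5e-5],    # branch B: high slip rate, GMPE 1
    [4e-2, 1.8e-2, 6e-3, 2.5e-3, 8e-4, 2.5e-4, 9e-5],    # branch C: high slip rate, GMPE 2
])
weights = np.array([0.4, 0.35, 0.25])

mean_curve = weights @ curves                 # weighted-mean hazard over epistemic branches

def log_pga_at(curve, annual_poe):
    """Interpolate (in log space) the ground motion at a target annual exceedance probability."""
    return np.interp(np.log(annual_poe), np.log(curve[::-1]), np.log(pga[::-1]))

target = 1.0 / 2475                           # roughly 10% probability of exceedance in 50 years
for label, c in [("branch A only", curves[0]), ("weighted mean", mean_curve)]:
    print(label, f"PGA ~ {np.exp(log_pga_at(c, target)):.2f} g")
```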
NASA Astrophysics Data System (ADS)
Ataei-Esfahani, Armin
In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider the case of robust aircraft control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the application of the SOS method, one can describe a stability analysis or control design problem as a convex optimization problem, which can be solved efficiently using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method can provide a reliable and fast approach to stability analysis and control design for low-order systems defined over the space of relatively low-degree polynomials. However, the SOS method is not well suited for control problems involving uncertain systems, especially those with a relatively high number of uncertainties or those with a non-affine uncertainty structure. In order to avoid issues relating to the increased complexity of SOS problems for uncertain systems, we present an algorithm that can be used to transform an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which can guarantee the feasibility of a given solution candidate with an a priori fixed probability of violation and with a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties. The first approach is based on a combination of the PEA and the SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches for parameter-dependent Lyapunov functions. The control design problem is investigated through a case study of a hypersonic aircraft model with parametric uncertainties. Through time-scale decomposition and a series of function approximations, the complexity of the aircraft model is reduced to fall within the capability of SDP solvers. The control design problem is then formulated as a convex problem using the dual of the Lyapunov theorem. A nonlinear robust controller is searched for using the combined PEA/SOS method. The response of the uncertain aircraft model is evaluated for two sets of pilot commands. As the simulation results show, the aircraft remains stable under up to 50% uncertainty in aerodynamic coefficients and can follow the pilot commands.
Robust output feedback stabilization for a flexible marine riser system.
Zhao, Zhijia; Liu, Yu; Guo, Fang
2017-12-06
The aim of this paper is to develop a boundary control for vibration reduction of a flexible marine riser system in the presence of parametric uncertainties and inaccurately obtained system states. To this end, an adaptive output feedback boundary control is proposed to suppress the riser's vibration by fusing observer-based backstepping, high-gain observers and robust adaptive control theory. In addition, parameter adaptive laws are designed to compensate for the system parametric uncertainties, and a disturbance observer is introduced to mitigate the effects of external environmental disturbance. The uniformly bounded stability of the closed-loop system is achieved through rigorous Lyapunov analysis without any discretisation or simplification of the dynamics in time and space, and the state observer error is ensured to converge exponentially to zero as time tends to infinity. Finally, simulation and comparison studies are carried out to illustrate the performance of the proposed control under a proper choice of the design parameters. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Uncertainty Modeling for Robustness Analysis of Control Upset Prevention and Recovery Systems
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Khong, Thuan H.; Shin, Jong-Yeob; Kwatny, Harry; Chang, Bor-Chin; Balas, Gary J.
2005-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems (developed for failure detection, identification, and reconfiguration, as well as upset recovery) need to be evaluated over broad regions of the flight envelope and under extreme flight conditions, and should include various sources of uncertainty. However, formulation of linear fractional transformation (LFT) models for representing system uncertainty can be very difficult for complex parameter-dependent systems. This paper describes a preliminary LFT modeling software tool which uses a matrix-based computational approach that can be directly applied to parametric uncertainty problems involving multivariate matrix polynomial dependencies. Several examples are presented (including an F-16 at an extreme flight condition, a missile model, and a generic example with numerous crossproduct terms), and comparisons are given with other LFT modeling tools that are currently available. The LFT modeling method and preliminary software tool presented in this paper are shown to compare favorably with these methods.
Study of aerodynamic technology for single-cruise engine V/STOL fighter/attack aircraft
NASA Technical Reports Server (NTRS)
Driggers, H. H.; Powers, S. A.; Roush, R. T.
1982-01-01
A conceptual design analysis is performed on a single-engine V/STOL supersonic fighter/attack concept powered by a series-flow tandem fan propulsion system. Forward and aft mounted fans have independent flow paths for V/STOL operation and series flow in high-speed flight. Mission, combat and V/STOL performance is calculated. Detailed aerodynamic estimates are made, and the aerodynamic uncertainties associated with the configuration and estimation methods are identified. A wind tunnel research program is developed to resolve the principal uncertainties and establish a data base for the baseline configuration and parametric variations.
Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M
2016-01-01
Diffusion tensor imaging suffers from an intrinsic low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship of the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a different sensitivity. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults. No significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide more comprehensive analysis.
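To make the bootstrap-derived "cone of uncertainty" of the preceding abstract concrete, here is a minimal synthetic sketch (not the authors' pipeline): diffusion-weighted signals are simulated for a single voxel, the tensor is refit on bootstrap resamples of the measurements, and the cone is taken as the 95th-percentile angle between the resampled and reference principal directions. The b-value, gradient count, tensor and noise level are all assumed toy values.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-voxel DTI experiment (all values hypothetical).
b = 1000.0  # s mm^-2
n_dirs = 30
g = rng.normal(size=(n_dirs, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)          # unit gradient directions
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])              # prolate tensor, mm^2 s^-1
S0 = 1.0
signal = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))
signal += rng.normal(0, 0.02, n_dirs)                   # measurement noise

def design(g):
    # Design matrix for the 6 unique tensor elements in the log-signal model.
    return np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                            2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])

def fit_principal_dir(g, s):
    d, *_ = np.linalg.lstsq(design(g), -np.log(np.clip(s, 1e-6, None) / S0) / b, rcond=None)
    D = np.array([[d[0], d[3], d[4]], [d[3], d[1], d[5]], [d[4], d[5], d[2]]])
    w, V = np.linalg.eigh(D)
    return V[:, np.argmax(w)]                            # principal eigenvector

e_ref = fit_principal_dir(g, signal)

# Repetition bootstrap: resample measurements with replacement, refit, record angle.
angles = []
for _ in range(2000):
    idx = rng.integers(0, n_dirs, n_dirs)
    e = fit_principal_dir(g[idx], signal[idx])
    cosang = np.abs(e @ e_ref)                           # sign-invariant
    angles.append(np.degrees(np.arccos(np.clip(cosang, -1, 1))))

print("cone of uncertainty (95th percentile angle): %.1f deg" % np.percentile(angles, 95))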
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
NASA Astrophysics Data System (ADS)
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes and simulation results are presented for the time delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate changes in the behavior of the recently developed stochastic model of Hazra et al.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
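A minimal sketch of the Bayesian machinery the preceding abstract relies on: a random-walk Metropolis sampler constraining a hypothetical process-rate prefactor and exponent against synthetic observations. The rate law, priors and noise level are illustrative placeholders; BOSS itself predicts size-distribution moments with far richer structure.

import numpy as np

rng = np.random.default_rng(2)

# Toy process-rate law: rate = a * q**p, with q a moment of the drop size
# distribution. Parameters a, p are hypothetical stand-ins for scheme parameters.
q = np.linspace(0.1, 2.0, 40)
a_true, p_true, sigma_obs = 0.8, 1.5, 0.05
obs = a_true * q**p_true + rng.normal(0, sigma_obs, q.size)

def log_post(theta):
    a, p = theta
    if not (0 < a < 5 and 0 < p < 4):          # flat priors on a box
        return -np.inf
    resid = obs - a * q**p
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis sampler.
n_iter, step = 20000, np.array([0.02, 0.03])
chain = np.empty((n_iter, 2))
theta = np.array([1.0, 1.0])
lp = log_post(theta)
for i in range(n_iter):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

post = chain[n_iter // 2:]                      # discard burn-in
print("posterior mean a, p:", post.mean(axis=0))
print("posterior std  a, p:", post.std(axis=0))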
Parametric Robust Control and System Identification: Unified Approach
NASA Technical Reports Server (NTRS)
Keel, L. H.
1996-01-01
During the period of this support, a new control system design and analysis method has been studied. This approach deals with control systems containing uncertainties that are represented in terms of their transfer function parameters. Such a representation of the control system is common, and many physical parameter variations fall into this type of uncertainty. Techniques developed here are capable of providing nonconservative analysis of such control systems with parameter variations. We have also developed techniques to deal with control systems when their state space representations are given rather than transfer functions. In this case, the plant parameters appear as entries of the state space matrices. Finally, a system modeling technique to construct such systems from raw input-output frequency domain data has been developed.
NASA Astrophysics Data System (ADS)
Simkins, J.; Desai, A. R.; Cowdery, E.; Dietze, M.; Rollinson, C.
2016-12-01
The terrestrial biosphere assimilates nearly one fourth of anthropogenic carbon dioxide emissions, providing a significant ecosystem service. Anthropogenic climate change influences the distribution and frequency of weather extremes and can have a momentous impact on this useful function that ecosystems provide. However, most analyses of the impact of extreme events on ecosystem carbon uptake do not integrate across the wide range of structural, parametric, and driver uncertainty that needs to be taken into account to estimate the probability of changes to ecosystem function under shifts in climate patterns. In order to improve ecosystem model forecasts, we integrated and estimated these sources of uncertainty using an open-source informatics workflow, the Predictive ECosystem Analyzer (PEcAn, http://pecanproject.org). PEcAn allows any researcher to parameterize and run multiple ecosystem models and automate extraction of meteorological forcing and estimation of its uncertainty. Trait databases and a uniform protocol for parameterizing and driving models were used to test parametric and structural uncertainty. In order to sample the uncertainty in future projected meteorological drivers, we developed automated extraction routines to acquire site-level three-hourly Coupled Model Intercomparison Project 5 (CMIP5) forcing data from the Geophysical Fluid Dynamics Laboratory general circulation models (CM3, ESM2M, and ESM2G) across the r1i1p1, r3i1p1 and r5i1p1 ensembles and AR5 emission scenarios. We also implemented a site-level high temporal resolution downscaling technique for these forcings, calibrated against half-hourly eddy covariance flux tower observations. Our hypothesis is that parametric and driver uncertainty dominate over model structural uncertainty. In order to test this, we partition the uncertainty budget on the ChEAS regional network of towers in Northern Wisconsin, USA, where the towers are located in forest and wetland ecosystems.
NASA Astrophysics Data System (ADS)
Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.
2013-12-01
Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, in general circulation models (GCMs) representations of precipitation changes are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present the results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to understand the robustness of a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] as a better explanation of precipitation changes in transient climate projections under GCM parametric uncertainty than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].
Analyzing the quality robustness of chemotherapy plans with respect to model uncertainties.
Hoffmann, Anna; Scherrer, Alexander; Küfer, Karl-Heinz
2015-01-01
Mathematical models of chemotherapy planning problems contain various biomedical parameters, whose values are difficult to quantify and thus subject to some uncertainty. This uncertainty propagates into the therapy plans computed on these models, which raises the question of how robust the expected therapy quality is. This work introduces a combined approach for analyzing the quality robustness of plans in terms of dosing levels with respect to model uncertainties in chemotherapy planning. It uses concepts from multi-criteria decision making for studying parameters related to the balancing between the different therapy goals, and concepts from sensitivity analysis for the examination of parameters describing the underlying biomedical processes and their interplay. This approach allows for a profound assessment of how stable a therapy plan's quality is with respect to parametric changes in the underlying mathematical model. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Elfgen, S.; Franck, D.; Hameyer, K.
2018-04-01
Magnetic measurements are indispensable for the characterization of soft magnetic materials used e.g. in electrical machines. Characteristic values are used for quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, the existing influencing factors in the characterization of soft magnetic materials are named and their resulting uncertainty contributions are studied. The analysis of these uncertainty contributions can serve the operator as an additional selection criterion for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard, using a single sheet tester, and its impact on the identification of iron loss parameters is studied.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
NASA Astrophysics Data System (ADS)
Meinke, I.
2003-04-01
A new method is presented to validate cloud parametrization schemes in numerical atmospheric models with satellite data from scanning radiometers. This method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). Due to the limited reliability of former validations there has been a need for a new validation method: up to now, differences between simulated and measured cloud properties have mostly been declared deficiencies of the cloud parametrization scheme without further investigation. Other uncertainties connected with the model or with the measurements have not been taken into account. Therefore, changes in the cloud parametrization scheme based on such validations might not be realistic. The new method estimates the uncertainties of the model and the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies, simulated clouds are classified regarding their parametrization. With this classification the localized model deficiencies are allocated to a certain parametrization scheme. Applying this method to the regional model HRM, the quality of forecasting cloud properties is estimated in detail. The overestimation of simulated clouds at low emissivity heights, especially during the night, is localized as a model deficiency. This is caused by subscale cloudiness. As the simulation of subscale clouds in the regional model HRM is described by a relative humidity parametrization, these deficiencies are connected with this parametrization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
2017-04-01
Here, we implement a variance-based distance metric (Dn) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
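A small Monte Carlo sketch of the sizing idea in the preceding abstract, under hypothetical names and a toy requirement: grow a hyper-sphere of parameter perturbations around the nominal parameters until the sampled worst-case requirement margin turns negative, then compare the resulting critical radius with the radius of the assumed uncertainty model. A real analysis would use the paper's deterministic formulation rather than sampling.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical requirement: g(p) >= 0 must hold for all admissible parameters p.
p_nominal = np.array([0.5, 0.2])

def margin(P):
    # Toy requirement margin g(p) (e.g. a stability or performance margin).
    return 1.0 - 0.8 * P[..., 0] ** 2 - 1.5 * P[..., 1] ** 2

def worst_margin(radius, n_samples=20000):
    # Sample uniformly inside the hyper-sphere ||p - p_nominal|| <= radius.
    d = rng.normal(size=(n_samples, p_nominal.size))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    r = radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / p_nominal.size)
    return margin(p_nominal + d * r).min()

# Bisection on the radius of the largest sphere with nonnegative worst-case margin.
lo, hi = 0.0, 2.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if worst_margin(mid) >= 0 else (lo, mid)

print("critical parametric radius ~ %.3f" % lo)
print("robust to assumed uncertainty model (radius 0.2)?", lo >= 0.2)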
NASA Astrophysics Data System (ADS)
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-04-01
Vibration energy harvesting has been shown as a promising power source for many small-scale applications mainly because of the considerable reduction in the energy consumption of the electronics and scalability issues of the conventional batteries. However, energy harvesters may not be as robust as the conventional batteries and their performance could drastically deteriorate in the presence of uncertainty in their parameters. Hence, study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While all studies have focused on expectation optimization, we propose a new and more practical optimization perspective; optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and as a simple example apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power and then optimize for it under different confidence levels. The results show that there is a significant improvement in the worst-case power of thus designed harvester compared to that of a naively-optimized (deterministically-optimized) harvester.
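A sketch of the worst-case (minimum-power) design idea on a generic resonant-response toy, not the paper's piezoelectric harvester equations: for each candidate tuning, the delivered power is evaluated over sampled perturbations of the natural frequency, and the design maximizing the minimum power is compared with the design maximizing the mean power. Numbers and the response function are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(4)

omega_drive = 1.0          # excitation frequency (normalized)
zeta = 0.05                # damping ratio

def power(omega_n):
    # Normalized power of a damped resonator; peaks when omega_n ~ omega_drive.
    return 1.0 / ((omega_n**2 - omega_drive**2) ** 2 + (2 * zeta * omega_n * omega_drive) ** 2)

# Parametric uncertainty: realized natural frequency deviates from the design value.
deltas = rng.uniform(-0.05, 0.05, size=500)     # +/-5% uncertainty

designs = np.linspace(0.8, 1.2, 401)
worst = np.array([power(w * (1 + deltas)).min() for w in designs])
mean = np.array([power(w * (1 + deltas)).mean() for w in designs])

print("design maximizing mean power      :", designs[np.argmax(mean)])
print("design maximizing worst-case power:", designs[np.argmax(worst)])
print("worst-case power of each design   : %.2f vs %.2f"
      % (worst[np.argmax(mean)], worst[np.argmax(worst)]))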
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.
On-Line Robust Modal Stability Prediction using Wavelet Processing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
Uncertainty relations, zero point energy and the linear canonical group
NASA Technical Reports Server (NTRS)
Sudarshan, E. C. G.
1993-01-01
The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
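A small synthetic sketch of the comparison described above: the error of a 30-day mean rain rate estimated from samples every Δt hours is computed both from the empirical spread over all sampling offsets (non-parametric) and from a simple variance formula with an autocorrelation-adjusted effective sample size (parametric). The rain-generation model and all numbers are assumptions, not the radar data set of the study.

import numpy as np

rng = np.random.default_rng(5)

# Synthetic hourly area-average rain rate for 30 days: intermittent, autocorrelated.
n_hours = 30 * 24
wet = np.zeros(n_hours, bool)
for t in range(1, n_hours):                      # persistent wet/dry spells
    p = 0.9 if wet[t - 1] else 0.05
    wet[t] = rng.uniform() < p
rain = np.where(wet, rng.exponential(2.0, n_hours), 0.0)   # mm h^-1
truth = rain.mean()

dt = 6                                           # sampling interval in hours
# Non-parametric: empirical spread of the sampled mean over all possible offsets.
sampled_means = np.array([rain[off::dt].mean() for off in range(dt)])
nonparam_err = sampled_means.std()

# Parametric: sigma / sqrt(n_eff), with n_eff shrunk by the lag-dt autocorrelation.
x = rain[::dt]
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
n_eff = x.size * (1 - r1) / (1 + r1)
param_err = rain.std() / np.sqrt(max(n_eff, 1.0))

print("true 30-day mean rate : %.3f mm/h" % truth)
print("non-parametric error  : %.3f mm/h" % nonparam_err)
print("parametric error      : %.3f mm/h" % param_err)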
Lombardi, A M
2017-09-18
Stochastic models provide quantitative evaluations about the occurrence of earthquakes. A basic component of this type of model is the uncertainty in defining the main features of an intrinsically random process. Even if, at a very basic level, any attempt to distinguish between types of uncertainty is questionable, a usual way to deal with this topic is to separate epistemic uncertainty, due to lack of knowledge, from aleatory variability, due to randomness. In the present study this problem is addressed in the narrow context of short-term modeling of earthquakes and, specifically, of ETAS modeling. By means of an application of a specific version of the ETAS model to the seismicity of Central Italy, recently struck by a sequence with a main event of Mw6.5, the aleatory and epistemic (parametric) uncertainties are separated and quantified. The main result of the paper is that the parametric uncertainty of the ETAS-type model adopted here is much lower than the aleatory variability in the process. This result points out two main aspects: an analyst has good chances of setting up ETAS-type models, but may still only describe and forecast the earthquake occurrences retrospectively with limited precision and accuracy.
Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M
2014-01-01
Quantitative health impact assessment (HIA) is increasingly being used to assess the health impacts attributable to an environmental policy or intervention. As a consequence, there is a need to assess uncertainties in the assessments because of the uncertainty in the HIA models. In this paper, a framework is developed to quantify the uncertainty in the health impacts of environmental interventions and is applied to evaluate the impacts of poor housing ventilation. The paper describes the development of the framework through three steps: (i) selecting the relevant exposure metric and quantifying the evidence of potential health effects of the exposure; (ii) estimating the size of the population affected by the exposure and selecting the associated outcome measure; (iii) quantifying the health impact and its uncertainty. The framework introduces a novel application for the propagation of uncertainty in HIA, based on fuzzy set theory. Fuzzy sets are used to propagate parametric uncertainty in a non-probabilistic space and are applied to calculate the uncertainty in the morbidity burdens associated with three indoor ventilation exposure scenarios: poor, fair and adequate. The case-study example demonstrates how the framework can be used in practice, to quantify the uncertainty in health impact assessment where there is insufficient information to carry out a probabilistic uncertainty analysis. © 2013.
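A minimal sketch of alpha-cut propagation with triangular fuzzy parameters through a toy health-impact calculation (attributable cases from an exposure prevalence, a relative risk and a baseline rate). The parameter names, numbers and impact formula are placeholders rather than the paper's HIA model; endpoint evaluation of each alpha-cut interval is exact here only because the toy impact function is monotone in every parameter.

import numpy as np
from itertools import product

# Triangular fuzzy numbers (low, mode, high) for three hypothetical parameters.
fuzzy = {
    "prevalence": (0.10, 0.20, 0.35),   # fraction exposed to poor ventilation
    "rel_risk":   (1.2,  1.6,  2.3),    # relative risk of the health outcome
    "baseline":   (50.0, 80.0, 120.0),  # baseline cases per 100,000 per year
}
population = 1_000_000

def alpha_cut(tri, alpha):
    lo, mode, hi = tri
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

def impact(prev, rr, base):
    paf = prev * (rr - 1) / (prev * (rr - 1) + 1)   # population attributable fraction
    return paf * base * population / 1e5            # attributable cases per year

# Propagate each alpha level by evaluating the impact at all interval endpoints.
for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(fuzzy[k], alpha) for k in ("prevalence", "rel_risk", "baseline")]
    vals = [impact(*combo) for combo in product(*cuts)]
    print("alpha=%.1f  attributable cases in [%.0f, %.0f]" % (alpha, min(vals), max(vals)))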
NASA Astrophysics Data System (ADS)
Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu
2017-01-01
Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near to the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation when it comes to real-world application.
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
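A minimal numpy sketch of the kNN resampling idea from the preceding abstract (with hypothetical predictors and numbers): standardize the hydrometeorological conditions, locate the k historical forecasts issued under the most similar conditions, and turn their past errors into an uncertainty interval around the current deterministic forecast.

import numpy as np

rng = np.random.default_rng(6)

# Historical archive: predictors describing forecast conditions (e.g. forecast
# flow, antecedent rainfall) and the corresponding forecast errors (obs - fcst).
n_hist = 2000
predictors = rng.normal(size=(n_hist, 2))
errors = 0.5 * predictors[:, 0] + rng.normal(0, 0.3, n_hist)   # toy error archive

def knn_interval(current, k=100, levels=(0.05, 0.95)):
    # Standardize, find the k most similar historical situations, and take
    # quantiles of their errors as the residual-uncertainty band.
    mu, sd = predictors.mean(0), predictors.std(0)
    dist = np.linalg.norm((predictors - mu) / sd - (current - mu) / sd, axis=1)
    nearest = np.argsort(dist)[:k]
    return np.quantile(errors[nearest], levels)

deterministic_forecast = 12.0                 # m^3 s^-1, hypothetical
current_conditions = np.array([1.2, -0.4])
lo, hi = knn_interval(current_conditions)
print("90%% interval: [%.2f, %.2f] m^3/s"
      % (deterministic_forecast + lo, deterministic_forecast + hi))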
Sommerfreund, J; Arhonditsis, G B; Diamond, M L; Frignani, M; Capodaglio, G; Gerino, M; Bellucci, L; Giuliani, S; Mugnai, C
2010-03-01
A Monte Carlo analysis is used to quantify environmental parametric uncertainty in a multi-segment, multi-chemical model of the Venice Lagoon. Scientific knowledge, expert judgment and observational data are used to formulate prior probability distributions that characterize the uncertainty pertaining to 43 environmental system parameters. The propagation of this uncertainty through the model is then assessed by a comparative analysis of the moments (central tendency, dispersion) of the model output distributions. We also apply principal component analysis in combination with correlation analysis to identify the most influential parameters, thereby gaining mechanistic insights into the ecosystem functioning. We found that modeled concentrations of Cu, Pb, OCDD/F and PCB-180 varied by up to an order of magnitude, exhibiting both contaminant- and site-specific variability. These distributions generally overlapped with the measured concentration ranges. We also found that the uncertainty of the contaminant concentrations in the Venice Lagoon was characterized by two modes of spatial variability, mainly driven by the local hydrodynamic regime, which separate the northern and central parts of the lagoon and the more isolated southern basin. While spatial contaminant gradients in the lagoon were primarily shaped by hydrology, our analysis also shows that the interplay amongst the in-place historical pollution in the central lagoon, the local suspended sediment concentrations and the sediment burial rates exerts significant control on the variability of the contaminant concentrations. We conclude that the probabilistic analysis presented herein is valuable for quantifying uncertainty and probing its cause in over-parameterized models, while some of our results can be used to dictate where additional data collection efforts should focus on and the directions that future model refinement should follow. (c) 2009 Elsevier Inc. All rights reserved.
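A compact sketch of the workflow just described, on a toy single-segment, single-chemical fate model with made-up priors (far simpler than the Venice Lagoon model): sample parameters from their priors, propagate them through the model, summarize the output distribution, and screen parameter influence by correlation with the output.

import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Prior distributions for a few hypothetical environmental parameters.
params = {
    "load":         rng.lognormal(np.log(100.0), 0.4, n),   # contaminant input
    "settling":     rng.uniform(0.5, 2.0, n),                # particle settling
    "burial":       rng.lognormal(np.log(0.002), 0.5, n),    # sediment burial
    "resuspension": rng.uniform(0.0, 0.001, n),              # sediment resuspension
}

def model(p):
    # Toy steady-state water-column concentration: input over total loss.
    loss = 365.0 * p["settling"] + 50.0 * p["burial"] - 20.0 * p["resuspension"]
    return p["load"] / np.maximum(loss, 1e-6)

conc = model(params)
print("median %.3f, 5-95%% range [%.3f, %.3f]"
      % (np.median(conc), *np.percentile(conc, [5, 95])))

# Influence screening: correlation of each parameter with the model output.
for name, values in params.items():
    r = np.corrcoef(values, conc)[0, 1]
    print("%-12s correlation with output: %+.2f" % (name, r))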
Spatial planning using probabilistic flood maps
NASA Astrophysics Data System (ADS)
Alfonso, Leonardo; Mukolwe, Micah; Di Baldassarre, Giuliano
2015-04-01
Probabilistic flood maps account for uncertainty in flood inundation modelling and convey a degree of certainty in the outputs. Major sources of uncertainty include input data, topographic data, model structure, observation data and parametric uncertainty. Decision makers prefer less ambiguous information from modellers; this implies that uncertainty is suppressed to yield binary flood maps. However, suppressing information may lead to surprises or to misleading decisions. Including uncertain information in the decision-making process is therefore desirable and more transparent. To this end, we utilise Prospect theory and information from a probabilistic flood map to evaluate potential decisions. Consequences related to the decisions were evaluated using flood risk analysis. Prospect theory explains how choices are made given options for which probabilities of occurrence are known, and accounts for decision makers' characteristics such as loss aversion and risk seeking. Our results show that decision making is pronounced when there are high gains and losses, implying higher payoffs and penalties, and therefore a higher gamble. Thus the methodology may be appropriately considered when making decisions based on uncertain information.
Funnel Libraries for Real-Time Robust Feedback Motion Planning
2016-07-21
motion plans for a robot that are guaranteed to succeed despite uncertainty in the environment, parametric model uncertainty, and disturbances... resulting funnel library is then used to sequentially compose motion plans at runtime while ensuring the safety of the robot. A major advantage of... the work presented here is that by explicitly taking into account the effect of uncertainty, the robot can evaluate motion plans based on how vulnerable
Robust adaptive precision motion control of hydraulic actuators with valve dead-zone compensation.
Deng, Wenxiang; Yao, Jianyong; Ma, Dawei
2017-09-01
This paper addresses the high-performance motion control of hydraulic actuators with parametric uncertainties, unmodeled disturbances and an unknown valve dead-zone. By constructing a smooth dead-zone inverse, a robust adaptive controller is proposed via the backstepping method, in which an adaptive law is synthesized to deal with parametric uncertainties and a continuous nonlinear robust control law is used to suppress unmodeled disturbances. Since the unknown dead-zone parameters can be estimated by the adaptive law and the effect of the dead-zone can then be compensated effectively via the inverse operation, improved tracking performance can be expected. In addition, the disturbance upper bounds can also be updated online by adaptive laws, which increases the controller's operability in practice. The Lyapunov-based stability analysis shows that excellent asymptotic output tracking with zero steady-state error can be achieved by the developed controller even in the presence of unmodeled disturbance and unknown valve dead-zone. Finally, the proposed control strategy is experimentally tested on a servovalve-controlled hydraulic actuation system subjected to an artificial valve dead-zone. Comparative experimental results are obtained to illustrate the effectiveness of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d -level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
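A toy illustration of the record-length effect described above (not LoTEC, and showing only the generic shrinkage of parametric uncertainty with record length, not the structural plateau the study reports): a two-parameter seasonal flux model is fit to synthetic daily observations of increasing length, and the spread of the fitted parameters across repeated noisy realizations shrinks as the record grows. All parameter values and the noise level are assumptions.

import numpy as np

rng = np.random.default_rng(8)

def flux(doy, base, amp):
    # Toy daily net carbon flux: seasonal cycle of amplitude `amp` around `base`.
    return base + amp * np.sin(2 * np.pi * doy / 365.0)

base_true, amp_true, sigma = 1.0, 3.0, 2.0      # gC m^-2 d^-1, hypothetical

for years in (1, 3, 5, 10, 20):
    doy = np.arange(365 * years)
    X = np.column_stack([np.ones(doy.size), np.sin(2 * np.pi * doy / 365.0)])
    estimates = []
    for _ in range(300):                         # repeated noisy "records"
        obs = flux(doy, base_true, amp_true) + rng.normal(0, sigma, doy.size)
        beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
        estimates.append(beta)
    spread = np.std(estimates, axis=0)
    print("%2d yr record: sd(base)=%.3f  sd(amplitude)=%.3f" % (years, *spread))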
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving different natures of parametric variability and uncertainty. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp, with vagueness assumed to be randomness. Also, when models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals via α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing the uncertainty propagation of parameter values in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Innanen, Kristopher A.; Geng, Yu
2018-06-01
Seismic full-waveform inversion (FWI) methods hold strong potential to recover multiple subsurface elastic properties for hydrocarbon reservoir characterization. Simultaneously updating multiple physical parameters introduces the problem of interparameter trade-off, arising from the simultaneous variations of different physical parameters, which increase the nonlinearity and uncertainty of multiparameter FWI. The coupling effects of different physical parameters are significantly influenced by model parametrization and acquisition arrangement. An appropriate choice of model parametrization is important to successful field data applications of multiparameter FWI. The objective of this paper is to examine the performance of various model parametrizations in isotropic-elastic FWI with walk-away vertical seismic profile (W-VSP) data for unconventional heavy oil reservoir characterization. Six model parametrizations are considered: velocity-density (α, β and ρ′), modulus-density (κ, μ and ρ), Lamé-density (λ, μ′ and ρ‴), impedance-density (I_P, I_S and ρ″), velocity-impedance-I (α′, β′ and I_P′) and velocity-impedance-II (α″, β″ and I_S′). We begin analysing the interparameter trade-off by making use of scattering radiation patterns, which is a common strategy for qualitative parameter resolution analysis. We discuss the advantages and limitations of the scattering radiation patterns and recommend that interparameter trade-offs be evaluated using interparameter contamination kernels, which provide quantitative, second-order measurements of the interparameter contaminations and can be constructed efficiently with an adjoint-state approach. Synthetic W-VSP isotropic-elastic FWI experiments in the time domain verify our conclusions about interparameter trade-offs for various model parametrizations. Density profiles are most strongly influenced by the interparameter contaminations; depending on model parametrization, the inverted density profile can be overestimated, underestimated or spatially distorted. Among the six cases, only the velocity-density parametrization provides stable and informative density features not included in the starting model. Field data applications of multicomponent W-VSP isotropic-elastic FWI in the time domain were also carried out. The heavy oil reservoir target zone, characterized by low α-to-β ratios and low Poisson's ratios, can be identified clearly with the inverted isotropic-elastic parameters.
NASA Astrophysics Data System (ADS)
Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan
2017-09-01
Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing the evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by the focal element and basic probability assignment (BPA), and dealt with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency response of interest, such as the ensemble average of the energy response and the cross-spectrum response, is calculated analytically by using the conventional hybrid FE/SEA method. Inspired by the probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. In order to alleviate the computational burdens for the extreme value analysis, the sub-interval perturbation technique based on the first-order Taylor series expansion is used in ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.
2011-03-01
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
Coding of level of ambiguity within neural systems mediating choice.
Lopez-Paniagua, Dan; Seger, Carol A
2013-01-01
Data from previous neuroimaging studies exploring neural activity associated with uncertainty suggest varying levels of activation associated with changing degrees of uncertainty in neural regions that mediate choice behavior. The present study used a novel task that parametrically controlled the amount of information hidden from the subject; levels of uncertainty ranged from full ambiguity (no information about probability of winning) through multiple levels of partial ambiguity, to a condition of risk only (zero ambiguity with full knowledge of the probability of winning). A parametric analysis compared a linear model in which weighting increased as a function of level of ambiguity, and an inverted-U quadratic model in which partial ambiguity conditions were weighted most heavily. Overall we found that risk and all levels of ambiguity recruited a common "fronto-parietal-striatal" network including regions within the dorsolateral prefrontal cortex, intraparietal sulcus, and dorsal striatum. Activation was greatest across these regions and additional anterior and superior prefrontal regions for the quadratic function which most heavily weighs trials with partial ambiguity. These results suggest that the neural regions involved in decision processes do not merely track the absolute degree of ambiguity or type of uncertainty (risk vs. ambiguity). Instead, recruitment of prefrontal regions may result from greater degree of difficulty in conditions of partial ambiguity: when information regarding reward probabilities important for decision making is hidden or not easily obtained, the subject must engage in a search for tractable information. Additionally, this study identified regions of activity related to the valuation of potential gains associated with stimuli or options (including the orbitofrontal and medial prefrontal cortices and dorsal striatum) and related to winning (including orbitofrontal cortex and ventral striatum).
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters through observational studies or by improving existing parameterizations in the sea ice model.
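To make the emulator-based workflow above concrete, the sketch below (a simplified illustration, not the CICE setup) trains a Gaussian-process surrogate on a small ensemble of runs of a toy three-parameter model and then computes Sobol indices from cheap surrogate evaluations; the parameter names, bounds, and toy model are placeholders standing in for the roughly 40 CICE parameters and the 400-member training ensemble.

```python
# Sketch of a global, variance-based sensitivity workflow: train an emulator
# on a limited ensemble, then compute Sobol indices from cheap emulator calls.
# The toy 3-parameter model stands in for the sea ice model; names are illustrative.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from sklearn.gaussian_process import GaussianProcessRegressor

problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "pond_drainage"],
    "bounds": [[0.1, 0.5], [50.0, 200.0], [0.0, 1.0]],
}

def expensive_model(x):           # placeholder for a full model run
    return x[0] ** 2 + 0.5 * x[1] / 200.0 + np.sin(3.0 * x[2])

# 1) Small training ensemble (analogous to the 400 CICE runs).
rng = np.random.default_rng(0)
lo, hi = np.array(problem["bounds"]).T
X_train = rng.uniform(lo, hi, size=(400, 3))
y_train = np.array([expensive_model(x) for x in X_train])

# 2) Fit the emulator and use it in place of the expensive model.
emulator = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# 3) Saltelli sample of the full parameter space, evaluated on the emulator.
X_sobol = saltelli.sample(problem, 1024)
y_sobol = emulator.predict(X_sobol)

Si = sobol.analyze(problem, y_sobol)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>20s}  S1={s1:.2f}  ST={st:.2f}")
```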
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.
Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs?; Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU); Optimization: What parameter values yield the best performing design or operating condition, given constraints?; Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
NASA Technical Reports Server (NTRS)
Shaw, Eric J.
2001-01-01
This paper will report on the activities of the IAA Launcher Systems Economics Working Group in preparations for its Launcher Systems Development Cost Behavior Study. The Study goals include: improve launcher system and other space system parametric cost analysis accuracy; improve launcher system and other space system cost analysis credibility; and provide launcher system and technology development program managers and other decisionmakers with useful information on development cost impacts of their decisions. The Working Group plans to explore at least the following five areas in the Study: define and explain development cost behavior terms and concepts for use in the Study; identify and quantify sources of development cost and cost estimating uncertainty; identify and quantify significant influences on development cost behavior; identify common barriers to development cost understanding and reduction; and recommend practical, realistic strategies to accomplish reductions in launcher system development cost.
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings on the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
Examined is the sensitivity of separation extent, wall pressure, and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and axisymmetric hypersonic shock-wave/turbulent boundary layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations for future turbulence modeling approaches.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
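As a minimal illustration of why unidentifiabilities matter, the sketch below performs the local, Fisher-information analysis that the abstract contrasts with multiscale sloppiness: for a toy two-parameter model in which only the sum of the rates is identifiable, the near-zero eigenvalue of the Fisher information matrix exposes the unconstrained parameter combination. The model and noise level are illustrative assumptions, not the NF-κB benchmark.

```python
# Local identifiability check via the Fisher information matrix (FIM):
# near-zero eigenvalues flag parameter combinations the data cannot constrain.
# This is the local analysis the abstract argues is insufficient in general;
# the two-parameter model below is purely illustrative.
import numpy as np

def model(theta, t):
    # y(t) = exp(-(k1 + k2) t): only the sum k1 + k2 is identifiable.
    k1, k2 = theta
    return np.exp(-(k1 + k2) * t)

def fisher_information(theta, t, sigma=0.05, eps=1e-6):
    # Numerical sensitivities dy/dtheta_i, assuming iid Gaussian errors.
    J = np.empty((t.size, theta.size))
    for i in range(theta.size):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
    return J.T @ J / sigma**2

t = np.linspace(0.0, 5.0, 50)
theta_hat = np.array([0.8, 0.4])
F = fisher_information(theta_hat, t)
evals, evecs = np.linalg.eigh(F)
print("FIM eigenvalues:", evals)                 # one eigenvalue is ~0
print("unidentified direction:", evecs[:, 0])    # ~(1, -1)/sqrt(2), i.e. k1 - k2
```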
Oddo, Perry C; Lee, Ben S; Garner, Gregory G; Srikrishnan, Vivek; Reed, Patrick M; Forest, Chris E; Keller, Klaus
2017-09-05
Sea levels are rising in many areas around the world, posing risks to coastal communities and infrastructures. Strategies for managing these flood risks present decision challenges that require a combination of geophysical, economic, and infrastructure models. Previous studies have broken important new ground on the considerable tensions between the costs of upgrading infrastructure and the damages that could result from extreme flood events. However, many risk-based adaptation strategies remain silent on certain potentially important uncertainties, as well as the tradeoffs between competing objectives. Here, we implement and improve on a classic decision-analytical model (Van Dantzig 1956) to: (i) capture tradeoffs across conflicting stakeholder objectives, (ii) demonstrate the consequences of structural uncertainties in the sea-level rise and storm surge models, and (iii) identify the parametric uncertainties that most strongly influence each objective using global sensitivity analysis. We find that the flood adaptation model produces potentially myopic solutions when formulated using traditional mean-centric decision theory. Moving from a single-objective problem formulation to one with multiobjective tradeoffs dramatically expands the decision space, and highlights the need for compromise solutions to address stakeholder preferences. We find deep structural uncertainties that have large effects on the model outcome, with the storm surge parameters accounting for the greatest impacts. Global sensitivity analysis effectively identifies important parameter interactions that local methods overlook, and that could have critical implications for flood adaptation strategies. © 2017 Society for Risk Analysis.
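For orientation, the sketch below reproduces the flavor of the classic van Dantzig (1956) trade-off the authors build on: expected discounted flood damages decrease exponentially with dike heightening while investment cost grows, and uncertain inputs shift the cost-minimizing height. All parameter values are illustrative, and the single-objective, mean-centric formulation shown here is exactly the kind of setup the paper argues can be myopic.

```python
# Minimal van Dantzig (1956)-style trade-off: investment cost of raising a dike
# versus discounted expected flood damages. Parameter values are illustrative.
import numpy as np

def total_cost(h, p0=0.0038, alpha=2.6, value=2e10, rate=0.02, k=6.3e6):
    """Expected cost of heightening by h metres above the current crest."""
    p_fail = p0 * np.exp(-alpha * h)          # annual exceedance probability
    damages = p_fail * value / rate           # discounted expected damages
    investment = k * h                        # cost of raising the dike
    return investment + damages

heights = np.linspace(0.0, 5.0, 501)
h_opt = heights[np.argmin(total_cost(heights))]
print(f"cost-minimising heightening ~ {h_opt:.2f} m")

# Crude uncertainty check: re-optimise under sampled parameter values.
rng = np.random.default_rng(1)
h_samples = [heights[np.argmin(total_cost(heights,
                                           p0=rng.lognormal(np.log(0.0038), 0.3),
                                           alpha=rng.normal(2.6, 0.3)))]
             for _ in range(1000)]
print("5-95% range of optimal heightening:", np.percentile(h_samples, [5, 95]))
```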
Compensation of distributed delays in integrated communication and control systems
NASA Technical Reports Server (NTRS)
Ray, Asok; Luck, Rogelio
1991-01-01
The concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensors, controller, and actuators within a control loop are discussed. With the objective of mitigating the detrimental effects of these network induced delays, a predictor-controller algorithm was formulated and analyzed. Robustness of the delay compensation algorithm was investigated relative to parametric uncertainties in plant modeling. The delay compensator was experimentally verified on an IEEE 802.4 network testbed for velocity control of a DC servomotor.
Model reference tracking control of an aircraft: a robust adaptive approach
NASA Astrophysics Data System (ADS)
Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan
2017-05-01
This work presents the design and the corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2014-11-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to a water right of 400 mm, would be less frequently exceeded in the case of the REA ensemble average (45%) than in the case of the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
Water Table Uncertainties due to Uncertainties in Structure and Properties of an Unconfined Aquifer.
Hauser, Juerg; Wellmann, Florian; Trefry, Mike
2018-03-01
We consider two sources of geology-related uncertainty in making predictions of the steady-state water table elevation for an unconfined aquifer, namely the uncertainty in the depth to the base of the aquifer and in the hydraulic conductivity distribution within the aquifer. Stochastic approaches to hydrological modeling commonly use geostatistical techniques to account for hydraulic conductivity uncertainty within the aquifer. In the absence of well data allowing derivation of a relationship between geophysical and hydrological parameters, the use of geophysical data is often limited to constraining the structural boundaries. If we recover the base of an unconfined aquifer from an analysis of geophysical data, then the associated uncertainties are a consequence of the geophysical inversion process. In this study, we illustrate this by quantifying water table uncertainties for the unconfined aquifer formed by the paleochannel network around the Kintyre Uranium deposit in Western Australia. The focus of the Bayesian parametric bootstrap approach employed for the inversion of the available airborne electromagnetic data is the recovery of the base of the paleochannel network and the associated uncertainties. This allows us to then quantify the associated influences on the water table in a conceptualized groundwater usage scenario and compare the resulting uncertainties with uncertainties due to an uncertain hydraulic conductivity distribution within the aquifer. Our modeling shows that neither uncertainties in the depth to the base of the aquifer nor hydraulic conductivity uncertainties alone can capture the patterns of uncertainty in the water table that emerge when the two are combined. © 2017, National Ground Water Association.
Numerical uncertainty in computational engineering and physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M
2009-01-01
Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication is intended to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydro-dynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
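A standard way to put a number on this discretization uncertainty is Richardson extrapolation from systematically refined grids; the worked example below (with made-up solution values and a conventional factor-of-safety bound) shows the arithmetic.

```python
# Solution-verification sketch: estimate the observed order of convergence and
# a Richardson-extrapolated "grid-free" value from three mesh refinements.
# The three solution values and refinement ratio below are illustrative only.
import math

f_coarse, f_medium, f_fine = 0.9712, 0.9824, 0.9857   # coarse -> fine
r = 2.0                                                # mesh refinement ratio

# Observed order of accuracy p from the three solutions.
p = math.log((f_medium - f_coarse) / (f_fine - f_medium)) / math.log(r)

# Richardson extrapolation toward the zero-spacing limit.
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# A simple discretization-uncertainty band (factor-of-safety style bound).
uncertainty = 1.25 * abs(f_exact_est - f_fine)

print(f"observed order p      ~ {p:.2f}")
print(f"extrapolated solution ~ {f_exact_est:.4f}")
print(f"numerical uncertainty ~ +/- {uncertainty:.4f}")
```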
On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza
2015-12-01
In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
NASA Astrophysics Data System (ADS)
Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.
2016-02-01
The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The null-space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of predictive uncertainty (due to soil property (parametric) uncertainty) and the inter-annual climate variability due to year to year differences in CESM climate forcings. After calibrating to measured borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant predictive uncertainties in projected active layer thickness and annual thaw depth-duration even with a specified future climate. Inter-annual climate variability in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly, indicating a shift in subsurface energy utilization in the future climate (latent heat of phase change becomes more important than heat conduction). Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we quantify the relative magnitude of soil property uncertainty to another source of permafrost uncertainty, structural climate model uncertainty. We show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates agricultural gross margin in this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 parameters significantly affect the objective function of the model; in order of influence they are: i) Coefficient of crop yield response to water, ii) Average daily gain in weight of livestock, iii) Exchange of livestock reproduction, iv) Maximum yield of crops, v) Supply of irrigation water and vi) Precipitation. These 6 parameters register sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters that can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
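The sketch below illustrates the One-Factor-At-A-Time screening loop described above on a stand-in gross-margin function; the parameter names, ranges, and sensitivity-index definition are placeholders rather than the authors' agro-economic model.

```python
# One-Factor-At-A-Time (OAT) screening sketch: perturb each input over its
# range while holding the others at baseline, and rank the relative response.
# The gross-margin function and parameter ranges below are placeholders.
import numpy as np

params = {                           # name: (low, baseline, high)
    "yield_response_to_water": (0.6, 1.0, 1.4),
    "daily_weight_gain":       (0.4, 0.7, 1.0),
    "max_crop_yield":          (3.0, 5.0, 7.0),
    "irrigation_supply":       (200.0, 400.0, 600.0),
}

def gross_margin(x):                 # toy stand-in for the agro-economic model
    return (x["yield_response_to_water"] * x["max_crop_yield"] * 120.0
            + 25.0 * x["daily_weight_gain"]
            + 0.15 * x["irrigation_supply"])

baseline = {k: v[1] for k, v in params.items()}
y0 = gross_margin(baseline)

sensitivity = {}
for name, (lo, base, hi) in params.items():
    x = dict(baseline)
    x[name] = lo
    y_lo = gross_margin(x)
    x[name] = hi
    y_hi = gross_margin(x)
    # Relative output change per relative input change over the full range.
    sensitivity[name] = abs((y_hi - y_lo) / y0) / ((hi - lo) / base)

for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:>25s}  index = {s:.2f}")
```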
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
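A minimal version of such a modality-based classification, assuming a Gaussian-mixture criterion (not necessarily the authors' classifier), is sketched below for the ensemble values at a single location: the number of mixture components preferred by BIC labels the location unimodal or multimodal.

```python
# Sketch of a modality-based classification of ensemble predictions at one
# grid location: compare BIC for one-, two- and three-component Gaussian
# mixtures and label the location accordingly. The ensemble values are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Synthetic ensemble at a single location: two diverging clusters of members.
values = np.concatenate([rng.normal(2.0, 0.3, 60), rng.normal(5.0, 0.4, 40)])
X = values.reshape(-1, 1)

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in (1, 2, 3)}
best_k = min(bic, key=bic.get)
label = "unimodal" if best_k == 1 else f"{best_k}-modal"

print("BIC by component count:", {k: round(v, 1) for k, v in bic.items()})
print("location classified as", label)
```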
Uncertainty quantification in Rothermel's Model using an efficient sampling method
Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick
2007-01-01
The purpose of the present work is to quantify parametric uncertainty in Rothermel's wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
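One common, simple approximation of random-forest prediction uncertainty, sketched below with scikit-learn on synthetic data, is to use the spread of the individual trees' predictions at each point; this is an illustrative stand-in and not necessarily the estimator developed in the paper.

```python
# One common approximation of random-forest prediction uncertainty: use the
# spread of the individual trees' predictions at each point. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=500)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

X_new = np.linspace(0, 10, 5).reshape(-1, 1)
per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])  # (trees, points)

mean_pred = per_tree.mean(axis=0)   # equals rf.predict(X_new)
std_pred = per_tree.std(axis=0)     # between-tree spread as an uncertainty proxy
for x, m, s in zip(X_new[:, 0], mean_pred, std_pred):
    print(f"x={x:4.1f}  prediction={m:6.2f}  +/- {s:.2f}")
```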
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
CHRONOBIOLOGY OF HIGH BLOOD PRESSURE
Cornélissen, G.; Halberg, F.; Bakken, E. E.; Wang, Z.; Tarquini, R.; Perfetto, F.; Laffi, G.; Maggioni, C.; Kumagai, Y.; Homolka, P.; Havelková, A.; Dušek, J.; Svačinová, H.; Siegelová, J.; Fišer, B.
2008-01-01
BIOCOS, the project aimed at studying BIOlogical systems in their COSmos, has obtained a great deal of expertise in the fields of blood pressure (BP) and heart rate (HR) monitoring and of marker rhythmometry for the purposes of screening, diagnosis, treatment, and prognosis. Prolonging the monitoring reduces the uncertainty in the estimation of circadian parameters; the current recommendation of BIOCOS requires monitoring for at least 7 days. The BIOCOS approach consists of a parametric and a non-parametric analysis of the data, in which the results from the individual subject are being compared with gender- and age-specified reference values in health. Chronobiological designs can offer important new information regarding the optimization of treatment by timing its administration as a function of circadian and other rhythms. New technological developments are needed to close the loop between the monitoring of blood pressure and the administration of antihypertensive drugs. PMID:19122770
NASA Astrophysics Data System (ADS)
Etemadi, Halimeh; Samadi, S. Zahra; Sharifikia, Mohammad; Smoak, Joseph M.
2016-10-01
Mangrove wetlands exist in the transition zone between terrestrial and marine environments and have remarkable ecological and socio-economic value. This study uses climate change downscaling to address the question of non-stationarity influences on mangrove variations (expansion and contraction) within an arid coastal region. Our two-step approach includes downscaling models and uncertainty assessment, followed by a non-stationary and trend procedure using the Extreme Value Analysis (extRemes code). The Long Ashton Research Station Weather Generator (LARS-WG) model along with two general circulation models (GCMs) (MIHR and HadCM3) were used to downscale climatic variables during current (1968-2011) and future (2011-2030, 2045-2065, and 2080-2099) periods. Parametric and non-parametric bootstrapping uncertainty tests demonstrated that the LARS-WG model skillfully downscaled climatic variables at the 95 % significance level. Downscaling results using the MIHR model show that minimum and maximum temperatures will increase in the future (2011-2030, 2045-2065, and 2080-2099) during winter and summer in a range of +4.21 and +4.7 °C, and +3.62 and +3.55 °C, respectively. HadCM3 analysis also revealed an increase in minimum (~+3.03 °C) and maximum (~+3.3 °C) temperatures during wet and dry seasons. In addition, we examined how much mangrove area has changed during the past decades and, thus, whether climate change non-stationarity impacts mangrove ecosystems. Our results using remote sensing techniques and the non-parametric Mann-Whitney two-sample test indicated a sharp decline in mangrove area during the 1972, 1987, and 1997 periods (p value = 0.002). Non-stationary assessment using the generalized extreme value (GEV) distributions by including mangrove area as a covariate further indicated that the null hypothesis of a stationary climate (no trend) should be rejected due to the very low p values for precipitation (p value = 0.0027), minimum (p value = 0.000000029) and maximum (p value = 0.00016) temperatures. Based on non-stationary analysis and an upward trend in downscaled temperature extremes, climate change may control mangrove development in the future.
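The non-parametric two-sample comparison mentioned above can be reproduced in a few lines; the sketch below applies the Mann-Whitney U test to two synthetic periods of mangrove-area values (the numbers are made up for illustration).

```python
# Sketch of the non-parametric two-sample comparison used above: test whether
# mangrove area differs between an early and a late period. Data are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
area_early = rng.normal(loc=95.0, scale=4.0, size=15)   # km^2, illustrative
area_late  = rng.normal(loc=78.0, scale=5.0, size=15)

stat, p_value = mannwhitneyu(area_early, area_late, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis of equal distributions: area has shifted.")
```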
Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
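As one concrete instance of the techniques reviewed (simulating correlated variates for a given dependence model and marginals), the sketch below draws samples from a Gaussian copula and pushes them through two arbitrary marginals; the correlation level and marginal choices are illustrative.

```python
# Simulating correlated variates for given marginals and a target dependence
# level via a Gaussian copula, one of the techniques reviewed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho = 0.7                                     # dependence parameter (illustrative)
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) Correlated standard normals -> uniforms on [0, 1].
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
u = stats.norm.cdf(z)

# 2) Push the uniforms through the desired marginal inverse CDFs.
x1 = stats.lognorm.ppf(u[:, 0], s=0.5, scale=1.0)   # lognormal marginal
x2 = stats.gamma.ppf(u[:, 1], a=2.0, scale=3.0)     # gamma marginal

r_spearman, _ = stats.spearmanr(x1, x2)
print(f"sample Spearman correlation ~ {r_spearman:.2f}")
```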
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to a water right of 400 mm, would be less frequently exceeded in the case of the REA ensemble average (45%) than in the case of the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
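The contrast between an equally weighted and a reliability-weighted ensemble can be illustrated with a few lines of code; in the sketch below the ensemble members, the reliability-style weights, and the 400 mm limit are synthetic stand-ins rather than the SPARE:WATER results.

```python
# Contrast an equally weighted ensemble mean with a reliability-weighted mean
# (REA-like), and compute how often an irrigation limit of 400 mm is exceeded.
# The ensemble members, their "reliability" weights and the data are synthetic.
import numpy as np

rng = np.random.default_rng(11)
n_years, n_members = 40, 6

# Annual irrigation water requirement (mm) simulated by six ET-model variants.
bias = np.array([-60.0, -20.0, 0.0, 15.0, 40.0, 90.0])      # per-member offsets
truth_like = rng.normal(400.0, 60.0, size=(n_years, 1))
ensemble = truth_like + bias + rng.normal(0.0, 25.0, size=(n_years, n_members))

# Reliability-style weights: higher weight for members closer to the ensemble
# consensus (a stand-in for the REA performance/convergence criteria).
distance = np.abs(ensemble.mean(axis=0) - ensemble.mean())
weights = 1.0 / (distance + 10.0)
weights /= weights.sum()

equal_avg = ensemble.mean(axis=1)
rea_avg = ensemble @ weights

limit = 400.0
print(f"P(exceed {limit:.0f} mm), equal weights:    {np.mean(equal_avg > limit):.2f}")
print(f"P(exceed {limit:.0f} mm), REA-like weights: {np.mean(rea_avg > limit):.2f}")
```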
NASA Astrophysics Data System (ADS)
Ezzedine, S. M.; Pitarka, A.; Vorobiev, O.; Glenn, L.; Antoun, T.
2017-12-01
We have performed three-dimensional high resolution simulations of underground chemical explosions conducted recently in jointed rock outcrop as part of the Source Physics Experiments (SPE) being conducted at the Nevada National Security Site (NNSS). The main goal of the current study is to investigate the effects of the structural and geomechanical properties on the spall phenomena due to underground chemical explosions and its subsequent effect on the seismo-acoustic signature at far distances. Two parametric studies have been undertaken to assess the impact of 1) different conceptual geological models, including single-layer and two-layer models, with and without joints and with and without varying geomechanical properties, and 2) different depths of burst and yields of the chemical explosions. Through these investigations we have explored not only the near-field response of the chemical explosions but also the far-field responses of the seismic and the acoustic signatures. The near-field simulations were conducted using the Eulerian and Lagrangian codes, GEODYN and GEODYN-L, respectively, while the far-field seismic simulations were conducted using the elastic wave propagation code, WPP, and the acoustic response using the Kirchhoff-Helmholtz-Rayleigh time-dependent approximation code, KHR. Through a series of simulations we have recorded the velocity field histories 1) at the ground surface on an acoustic-source patch for the acoustic simulations, and 2) on a seismic-source box for the seismic simulations. We first analyzed the SPE3 experimental data and simulated results, then simulated SPE4-prime, SPE5, and SPE6 to anticipate their seismo-acoustic responses given conditions of uncertainty. SPE experiments were conducted in a granitic formation; we have extended the parametric study to include other geological settings such as dolomite and alluvial formations. These parametric studies enabled us to 1) investigate the key geotechnical and geophysical parameters that impact the seismo-acoustic responses of underground chemical explosions and 2) decipher and rank, through a global sensitivity analysis, the most important parameters to be characterized on site to minimize uncertainties in prediction and discrimination.
Cost-effective conservation of an endangered frog under uncertainty.
Rose, Lucy E; Heard, Geoffrey W; Chee, Yung En; Wintle, Brendan A
2016-04-01
How should managers choose among conservation options when resources are scarce and there is uncertainty regarding the effectiveness of actions? Well-developed tools exist for prioritizing areas for one-time and binary actions (e.g., protect vs. not protect), but methods for prioritizing incremental or ongoing actions (such as habitat creation and maintenance) remain uncommon. We devised an approach that combines metapopulation viability and cost-effectiveness analyses to select among alternative conservation actions while accounting for uncertainty. In our study, cost-effectiveness is the ratio between the benefit of an action and its economic cost, where benefit is the change in metapopulation viability. We applied the approach to the case of the endangered growling grass frog (Litoria raniformis), which is threatened by urban development. We extended a Bayesian model to predict metapopulation viability under 9 urbanization and management scenarios and incorporated the full probability distribution of possible outcomes for each scenario into the cost-effectiveness analysis. This allowed us to discern between cost-effective alternatives that were robust to uncertainty and those with a relatively high risk of failure. We found a relatively high risk of extinction following urbanization if the only action was reservation of core habitat; habitat creation actions performed better than enhancement actions; and cost-effectiveness ranking changed depending on the consideration of uncertainty. Our results suggest that creation and maintenance of wetlands dedicated to L. raniformis is the only cost-effective action likely to result in a sufficiently low risk of extinction. To our knowledge, this is the first study to use Bayesian metapopulation viability analysis to explicitly incorporate parametric and demographic uncertainty into a cost-effectiveness evaluation of conservation actions. The approach offers guidance to decision makers aiming to achieve cost-effective conservation under uncertainty. © 2015 Society for Conservation Biology.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set the thresholds for extreme precipitation in a large basin. Based on the long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show the non-parametric absolute critical value method is easy to use, but unable to reflect the difference of spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution feature of precipitation, but the problem with this method is that the threshold value is sensitive to the size of the rainfall data series and subject to the selection of a percentile, thus making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, selections of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method, providing a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) for the daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
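Two of the threshold approaches compared above are easy to sketch; the example below computes a percentile-based EPT and a parametric EPT from a fitted gamma distribution on synthetic wet-day rainfall (the distribution choice and data are illustrative, and the DFA method is not shown).

```python
# Two of the threshold approaches compared above, on synthetic daily rainfall:
# (a) non-parametric percentile threshold, (b) parametric fit to wet-day
# amounts (a gamma distribution here, purely illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_days = 30 * 365
wet = rng.random(n_days) < 0.3                       # ~30% wet days
rain = np.where(wet, rng.gamma(shape=0.9, scale=12.0, size=n_days), 0.0)

wet_amounts = rain[rain > 0.1]

# (a) Percentile method on wet-day amounts (choice of percentile is subjective).
ept_percentile = np.percentile(wet_amounts, 95)

# (b) Parametric method: fit a distribution, take a high quantile of the fit.
a, loc, scale = stats.gamma.fit(wet_amounts, floc=0.0)
ept_parametric = stats.gamma.ppf(0.95, a, loc=loc, scale=scale)

print(f"95th-percentile threshold: {ept_percentile:6.1f} mm/day")
print(f"gamma-fit 95% threshold:   {ept_parametric:6.1f} mm/day")
```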
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
Sun, Liang; Huo, Wei; Jiao, Zongxia
2017-03-01
This paper studies relative pose control for a rigid spacecraft with parametric uncertainties approaching an unknown tumbling target in a disturbed space environment. State feedback controllers for relative translation and relative rotation are designed in an adaptive nonlinear robust control framework. The element-wise and norm-wise adaptive laws are utilized to compensate the parametric uncertainties of the chaser and target spacecraft, respectively. External disturbances acting on the two spacecraft are treated as a lumped and bounded perturbation input to the system. To achieve the prescribed disturbance attenuation performance index, feedback gains of the controllers are designed by solving linear matrix inequality problems so that lumped disturbance attenuation with respect to the controlled output is ensured in the L2-gain sense. Moreover, in the absence of the lumped disturbance input, asymptotic convergence of the relative pose is proved by using the Lyapunov method. Numerical simulations are performed to show that position tracking and attitude synchronization are accomplished in spite of the presence of couplings and uncertainties. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Bulsei, Julie; Darlington, Meryl; Durand-Zaleski, Isabelle; Azizi, Michel
2018-04-01
Whilst much uncertainty exists as to the efficacy of renal denervation (RDN), the positive results of the DENERHTN study in France confirmed the interest of an economic evaluation in order to assess the efficiency of RDN and inform local decision makers about the costs and benefits of this intervention. The uncertainty surrounding both the outcomes and the costs can be described using health economic methods such as the non-parametric bootstrap. Internationally, numerous health economic studies using a cost-effectiveness model to assess the impact of RDN in terms of cost and effectiveness compared to antihypertensive medical treatment have been conducted. The DENERHTN cost-effectiveness study was the first health economic evaluation specifically designed to assess the cost-effectiveness of RDN using individual data. Using the DENERHTN results as an example, we provide here a summary of the principal methods used to perform a cost-effectiveness analysis.
Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas
Bedford, Tim; Daneshkhah, Alireza
2015-01-01
Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
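The second, surrogate-based model class can be illustrated with a generic polynomial response surface: fit a low-order polynomial of the random inputs to a handful of model runs by least squares, then propagate uncertainty with cheap Monte Carlo on the surrogate. In the sketch below the "contaminated volume" function, the input distributions, and the parameter names are placeholders, not the paper's physics.

```python
# Sketch of a polynomial surrogate of random inputs: fit a low-order polynomial
# response surface to a handful of model runs, then propagate uncertainty with
# cheap Monte Carlo on the surrogate. The "model" below is a stand-in.
import numpy as np
from itertools import combinations_with_replacement

def contaminated_volume(k, phi, rate):        # placeholder physics model
    return 1200.0 * rate / (k * phi) + 30.0 * np.log1p(rate)

def design_matrix(X, degree=2):
    """Monomials of total degree <= degree for each row of X."""
    n, d = X.shape
    cols = [np.ones(n)]
    for p in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), p):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(9)
def sample_inputs(m):                          # illustrative input distributions
    return np.column_stack([rng.lognormal(0.0, 0.5, m),     # conductivity k
                            rng.uniform(0.2, 0.4, m),        # porosity phi
                            rng.lognormal(-1.0, 0.3, m)])    # spill rate

X_train = sample_inputs(60)                    # a few "expensive" runs
y_train = contaminated_volume(*X_train.T)
coef, *_ = np.linalg.lstsq(design_matrix(X_train), y_train, rcond=None)

X_mc = sample_inputs(100_000)                  # cheap Monte Carlo on the surrogate
y_mc = design_matrix(X_mc) @ coef
print(f"mean volume ~ {y_mc.mean():.0f}, 95th percentile ~ {np.percentile(y_mc, 95):.0f}")
```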
On the apparent insignificance of the randomness of flexible joints on large space truss dynamics
NASA Technical Reports Server (NTRS)
Koch, R. M.; Klosner, J. M.
1993-01-01
Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.
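A heavily simplified Monte Carlo version of this exercise is sketched below: joint stiffness is sampled from a probability distribution, a crude effective-stiffness model maps it to the fundamental frequency of a simply supported beam, and the output scatter is compared with the shift caused by mean joint flexibility. The stiffness-reduction formula and all numbers are illustrative assumptions, not the Timoshenko continuum formulation of the paper.

```python
# Monte Carlo sketch of the effect of random joint stiffness on the fundamental
# frequency of a simply supported beam-like truss. The effective-stiffness
# reduction is a crude illustration only; all numbers are illustrative.
import numpy as np

L, EI, mu = 60.0, 5.0e7, 50.0        # length [m], bending stiffness, mass per length

def fundamental_freq(k_joint):
    # Crude softening of the bending stiffness by joint flexibility (assumed form).
    EI_eff = EI / (1.0 + EI / (k_joint * L**2))
    return (np.pi / L) ** 2 * np.sqrt(EI_eff / mu)   # rad/s

rng = np.random.default_rng(2)
k_mean, sigma = 5.0e4, 0.20                          # mean stiffness, ~20% scatter
k_samples = rng.lognormal(np.log(k_mean), sigma, size=20_000)

w_rigid = fundamental_freq(1.0e15)                   # nearly rigid joints
w_samples = fundamental_freq(k_samples)

print(f"rigid-joint frequency:         {w_rigid:.3f} rad/s")
print(f"mean flexible-joint frequency: {w_samples.mean():.3f} rad/s")
print(f"std due to joint randomness:   {w_samples.std():.4f} rad/s")
```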
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators has shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
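The second advantage can be illustrated with a short sketch: given a theta-logistic model that has already been fitted, virtual trajectories simulated from it yield a forecast uncertainty interval. The parameter values below are assumptions for illustration, not estimates from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed "fitted" theta-logistic parameters (illustration only).
r, K, theta, sigma = 0.5, 100.0, 1.2, 0.1

def simulate(n0, steps, rng):
    """Stochastic theta-logistic: log N_{t+1} = log N_t + r*(1 - (N_t/K)**theta) + noise."""
    n = np.empty(steps + 1)
    n[0] = n0
    for t in range(steps):
        growth = r * (1.0 - (n[t] / K) ** theta) + rng.normal(0.0, sigma)
        n[t + 1] = n[t] * np.exp(growth)
    return n

# Generate many virtual futures from the fitted model to quantify forecast uncertainty.
horizon, n_virtual = 20, 2000
finals = np.array([simulate(n0=30.0, steps=horizon, rng=rng)[-1] for _ in range(n_virtual)])
lo, med, hi = np.quantile(finals, [0.05, 0.5, 0.95])
print(f"abundance after {horizon} steps: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```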
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
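The question being answered can be sketched numerically (the paper's own approach is analytical and sampling-free, unlike this sketch): a hard constraint g(p) <= 0 holds over a hyper-rectangle exactly when the maximum of g over the box is non-positive. The constraint function and bounds below are invented, and a gradient-based search may only locate a local worst case.

```python
import numpy as np
from scipy.optimize import minimize

# Hard constraint g(p) <= 0 must hold for every p in the hyper-rectangle.
def g(p):
    p1, p2 = p
    return p1**2 + 0.5 * p1 * p2 - 1.0      # hypothetical constraint function

bounds = [(0.6, 1.0), (-0.5, 0.5)]           # hypothetical uncertainty box (hyper-rectangle)

# Worst case = maximum of g over the box; hard-feasible iff the maximum is <= 0.
# For non-convex g, a local optimizer may miss the global worst case.
res = minimize(lambda p: -g(p), x0=[0.8, 0.0], bounds=bounds)
worst_value, worst_p = -res.fun, res.x
print("worst-case g:", worst_value, "at", worst_p)
print("hard constraint satisfied for all realizations:", worst_value <= 0.0)
```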
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Hypersonic vehicle model and control law development using H(infinity) and mu synthesis
NASA Astrophysics Data System (ADS)
Gregory, Irene M.; Chowdhry, Rajiv S.; McMinn, John D.; Shaughnessy, John D.
1994-10-01
The control system design for a Single Stage To Orbit (SSTO) air breathing vehicle will be central to a successful mission because a precise ascent trajectory will preserve narrow payload margins. The air breathing propulsion system requires the vehicle to fly roughly halfway around the Earth through atmospheric turbulence. The turbulence, the high sensitivity of the propulsion system to inlet flow conditions, the relatively large uncertainty of the parameters characterizing the vehicle, and continuous acceleration make the problem especially challenging. Adequate stability margins must be provided without sacrificing payload mass since payload margins are critical. Therefore, a multivariable control theory capable of explicitly including both uncertainty and performance is needed. The H(infinity) controller in general provides good robustness but can result in conservative solutions for practical problems involving structured uncertainty. Structured singular value mu framework for analysis and synthesis is potentially much less conservative and hence more appropriate for problems with tight margins. An SSTO control system requires: highly accurate tracking of velocity and altitude commands while limiting angle-of-attack oscillations, minimized control power usage, and a stabilized vehicle when atmospheric turbulence and system uncertainty are present. The controller designs using H(infinity) and mu-synthesis procedures were compared. An integrated flight/propulsion dynamic mathematical model of a conical accelerator vehicle was linearized as the vehicle accelerated through Mach 8. Vehicle acceleration through the selected flight condition gives rise to parametric variation that was modeled as a structured uncertainty. The mu-analysis approach was used in the frequency domain to conduct controller analysis and was confirmed by time history plots. Results demonstrate the inherent advantages of the mu framework for this class of problems.
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize the processes. While global sensitivity analysis has been widely used to identify important processes, the process identification is always based on deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it happens often that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification in that identified important processes may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis to identify important parameters, our new method evaluates variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers recharge process and parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, G., E-mail: geert.verdoolaege@ugent.be; Laboratory for Plasma Physics, Royal Military Academy, B-1000 Brussels; Shabbir, A.
Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.
A linear programming approach to characterizing norm bounded uncertainty from experimental data
NASA Technical Reports Server (NTRS)
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Frustration of resonant preheating by exotic kinetic terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahmati, Shohreh; Seahra, Sanjeev S., E-mail: srahmati@unb.ca, E-mail: sseahra@unb.ca
2014-10-01
We study the effects of exotic kinetic terms on parametric resonance during the preheating epoch of the early universe. Specifically, we consider modifications to the action of ordinary matter fields motivated by generalized uncertainty principles, polymer quantization, as well as Dirac-Born-Infeld and k-essence models. To leading order in an "exotic physics" scale, the equations of motion derived from each of these models have the same algebraic form involving a nonlinear self-interaction in the matter sector. Neglecting spatial dependence, we show that the nonlinearity effectively shuts down the parametric resonance after a finite time period. We find numeric evidence that the frustration of parametric resonance persists for spatially inhomogeneous matter in (1+1)-dimensions.
DNA binding sites characterization by means of Rényi entropy measures on nucleotide transitions.
Perera, Alexandre; Vallverdu, Montserrat; Claria, Francesc; Soria, José Manuel; Caminal, Pere
2006-01-01
In this work, parametric information-theory measures for the characterization of binding sites in DNA are extended with the use of transitional probabilities on the sequence. We propose the use of parametric uncertainty measures such as Rényi entropies obtained from the transition probabilities for the study of the binding sites, in addition to nucleotide frequency based Rényi measures. Results are reported in this manuscript comparing transition frequencies (i.e. dinucleotides) and base frequencies for Shannon and parametric Rényi entropies for a number of binding sites found in E. Coli, lambda and T7 organisms. We observe that, for the evaluated datasets, the information provided by both approaches is not redundant, as they evolve differently under increasing Rényi orders.
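A minimal sketch of the quantities involved: Shannon and Rényi entropies computed from dinucleotide (transition) frequencies and from single-base frequencies at one position of an aligned site. The toy sequence fragments are invented.

```python
import numpy as np
from collections import Counter

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in bits); alpha -> 1 recovers the Shannon entropy."""
    p = np.asarray([x for x in p if x > 0.0])
    if np.isclose(alpha, 1.0):
        return float(-(p * np.log2(p)).sum())            # Shannon limit
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

# Toy aligned fragments covering one position pair of a binding site (invented data).
seqs = ["AT", "AT", "AC", "GT", "AT", "AC", "AT", "GT"]

# Dinucleotide (transition) frequencies vs. single-base frequencies at the first position.
dinuc_freq = np.array(list(Counter(seqs).values()), dtype=float)
dinuc_freq /= dinuc_freq.sum()
base_freq = np.array(list(Counter(s[0] for s in seqs).values()), dtype=float)
base_freq /= base_freq.sum()

for alpha in (0.5, 1.0, 2.0):
    print(f"alpha={alpha}: H_dinucleotide={renyi_entropy(dinuc_freq, alpha):.3f} bits, "
          f"H_base={renyi_entropy(base_freq, alpha):.3f} bits")
```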
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
A brief overview on radon measurements in drinking water.
Jobbágy, Viktor; Altzitzoglou, Timotheos; Malo, Petya; Tanner, Vesa; Hult, Mikael
2017-07-01
The aim of this paper is to present information about currently used standard and routine methods for radon analysis in drinking waters. An overview is given about the current situation and the performance of different measurement methods based on literature data. The following parameters are compared and discussed: initial sample volume and sample preparation, detection systems, minimum detectable activity, counting efficiency, interferences, measurement uncertainty, sample capacity and overall turnaround time. Moreover, the parametric levels for radon in drinking water from the different legislations and directives/guidelines on radon are presented. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ketchum, E.
1988-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) will be responsible for performing ground attitude determination for Gamma Ray Observatory (GRO) support. The study reported in this paper provides the FDD and the GRO project with ground attitude determination error information and illustrates several uses of the Generalized Calibration System (GCS). GCS, an institutional software tool in the FDD, automates the computation of the expected attitude determination uncertainty that a spacecraft will encounter during its mission. The GRO project is particularly interested in the uncertainty in the attitude determination using Sun sensors and a magnetometer when both star trackers are inoperable. In order to examine the expected attitude errors for GRO, a systematic approach was developed including various parametric studies. The approach identifies pertinent parameters and combines them to form a matrix of test runs in GCS. This matrix formed the basis for this study.
Scenario based optimization of a container vessel with respect to its projected operating conditions
NASA Astrophysics Data System (ADS)
Wagner, Jonas; Binkowski, Eva; Bronsart, Robert
2014-06-01
In this paper the scenario based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile generated from noon-to-noon reports of a comparable 3600 TEU container vessel and specific development functions representing the growth of global economy during the vessel's service time. In order to consider uncertainties, statistical fluctuations are added. An analysis of these data leads to a number of most probable upcoming operating conditions (OC) the vessel will encounter in the future. According to their respective likelihood, an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametrical optimization workbench FRIENDSHIP Framework. In the following, this evaluation is done with respect to the vessel's calculated effective power, based on the usage of a potential flow code. The evaluation shows that the usage of scenarios within the optimization process has a strong influence on the hull form.
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for the follow-on analysis of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentration to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets. NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective in supporting management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.
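A conceptual sketch of the core NSMC step, under invented numbers: take the SVD of the calibration Jacobian, identify the directions that the observations do not constrain, and perturb the calibrated parameters only along those null-space directions so the fit is (approximately) preserved. The real method additionally re-adjusts the solution-space components and re-runs the model; that re-calibration step is omitted here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sensitivity (Jacobian) matrix: 48 observations x 6 parameters, constructed
# so that only 3 directions in parameter space are well informed by the data.
J = rng.normal(size=(48, 3)) @ rng.normal(size=(3, 6))
p_calibrated = np.array([1.0, 0.3, 2.5, 0.8, 1.7, 0.05])   # invented calibrated values

# SVD of the Jacobian: right singular vectors with ~zero singular values span the null space.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
tol = 1e-8 * s.max()
rank = int((s > tol).sum())
null_basis = Vt[rank:, :].T                                 # columns = null-space directions

# Calibration-constrained parameter sets: random combinations of null-space directions
# added to the calibrated values leave the simulated observations essentially unchanged.
n_sets = 1000
coeffs = rng.normal(scale=0.2, size=(n_sets, null_basis.shape[1]))   # assumed perturbation scale
param_sets = p_calibrated + coeffs @ null_basis.T

# Check: the linearized change in observations is ~zero for null-space perturbations.
print("max |J @ dp| over all sets:", np.abs((param_sets - p_calibrated) @ J.T).max())
```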
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
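A minimal sketch of the global variance-based approach described above, using a cheap analytic stand-in for the sea ice model and plain Monte Carlo sampling in place of Sobol' sequences. The estimators are the standard Saltelli first-order and Jansen total-effect formulas; the test function and sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    """Stand-in for an expensive model output (e.g. sea ice extent) as a function of 3 parameters."""
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

d, N = 3, 50_000
# Two independent sample matrices over the parameter space (here uniform on [-pi, pi];
# the study uses Sobol' low-discrepancy sequences, plain Monte Carlo is used for brevity).
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = toy_model(A), toy_model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # replace column i of A with column i of B
    fABi = toy_model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var           # Saltelli (2010) first-order estimator
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var     # Jansen total-effect estimator
    print(f"parameter {i}: first-order S1={S1:.3f}, total-effect ST={ST:.3f}")
```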
Impact of the HERA I+II combined data on the CT14 QCD global analysis
NASA Astrophysics Data System (ADS)
Dulat, S.; Hou, T.-J.; Gao, J.; Guzzi, M.; Huston, J.; Nadolsky, P.; Pumplin, J.; Schmidt, C.; Stump, D.; Yuan, C.-P.
2016-11-01
A brief description of the impact of the recent HERA run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of PDFs is given. The new CT14HERA2 PDFs at NLO and NNLO are illustrated. They employ the same parametrization used in the CT14 analysis, but with an additional shape parameter for describing the strange quark PDF. The HERA I+II data are reasonably well described by both CT14 and CT14HERA2 PDFs, and differences are smaller than the PDF uncertainties of the standard CT14 analysis. Both sets are acceptable when the error estimates are calculated with the CTEQ-TEA (CT) methodology, and the standard CT14 PDFs are recommended for continued use in the analysis of LHC measurements.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
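The comparison described can be sketched in a few lines: estimate the standard error of the incremental net benefit either from the central limit theorem (difference of independent sample means) or from a non-parametric bootstrap, on simulated skewed cost data. The willingness-to-pay value and all distribution parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 20_000.0                                    # assumed willingness-to-pay per QALY

# Simulated trial arms with skewed (lognormal) costs and normal effects (invented values).
n = 100
cost_t, cost_c = rng.lognormal(9.0, 1.0, n), rng.lognormal(8.8, 1.0, n)
eff_t, eff_c = rng.normal(0.70, 0.2, n), rng.normal(0.65, 0.2, n)

# Per-patient net benefit and the incremental net benefit (INB).
nb_t, nb_c = lam * eff_t - cost_t, lam * eff_c - cost_c
inb = nb_t.mean() - nb_c.mean()

# (1) CLT: standard error of a difference of independent sample means.
se_clt = np.sqrt(nb_t.var(ddof=1) / n + nb_c.var(ddof=1) / n)

# (2) Non-parametric bootstrap: resample each arm with replacement.
boots = []
for _ in range(5000):
    bt = rng.choice(nb_t, n, replace=True)
    bc = rng.choice(nb_c, n, replace=True)
    boots.append(bt.mean() - bc.mean())
se_boot = np.std(boots, ddof=1)

print(f"INB = {inb:.0f}; SE(CLT) = {se_clt:.0f}; SE(bootstrap) = {se_boot:.0f}")
```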
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, L.uis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l(infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Kasnakoğlu, Coşku
2016-01-01
Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances tiny perturbations of these could potentially cause considerable variations in flight characteristics. This research considers the impact of varying these parameters altogether. This is a generalization of examining the effects of particular parameters on selected modes present in existing literature. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for every one, that are utilized in parallel for actual flight. It is demonstrated that an attitude controller built like this can function flawlessly on separate nominal cases, but can become unstable with a perturbation no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, specifically loop-shaping and μ-synthesis are outlined as potential substitutes and are observed to handle large parametric changes of 30% while preserving decent performance. Duplicating the loop-shaping procedure for the outer loop, a complete flight control system is formed. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing exposed to high parametric variations and powerful winds. PMID:27783706
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.
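A minimal sketch of what a Dirichlet process prior looks like in practice: a truncated stick-breaking draw of a random discrete distribution over latent mean abundances, from which sites pick their means (inducing clusters), with Poisson counts on top. The concentration parameter and gamma base distribution are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(11)

def dp_stick_breaking(alpha, base_sampler, n_atoms, rng):
    """Truncated stick-breaking draw of a random distribution from DP(alpha, G0)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining                     # stick-breaking weights
    atoms = base_sampler(n_atoms, rng)              # atom locations drawn from G0
    return weights / weights.sum(), atoms

# Base distribution G0 for the mean abundance at a site: Gamma (illustrative choice).
base = lambda k, rng: rng.gamma(shape=2.0, scale=5.0, size=k)
weights, atoms = dp_stick_breaking(alpha=1.0, base_sampler=base, n_atoms=50, rng=rng)

# Latent mean abundances for 30 sample locations: sites sharing an atom form clusters.
site_means = rng.choice(atoms, size=30, p=weights)
counts = rng.poisson(site_means)                    # observed counts given latent abundance
print("number of distinct abundance clusters:", np.unique(site_means).size)
print("example counts:", counts[:10])
```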
NASA Technical Reports Server (NTRS)
Prakash, OM, II
1991-01-01
Three linear controllers are designed to regulate the end effector of the Space Shuttle Remote Manipulator System (SRMS) operating in Position Hold Mode. In this mode of operation, jet firings of the Orbiter can be treated as disturbances while the controller tries to keep the end effector stationary in an orbiter-fixed reference frame. The three design techniques used include: the Linear Quadratic Regulator (LQR), H2 optimization, and H-infinity optimization. The nonlinear SRMS is linearized by modelling the effects of the significant nonlinearities as uncertain parameters. Each regulator design is evaluated for robust stability in light of the parametric uncertainties using both the small gain theorem with an H-infinity norm and the less conservative mu-analysis test. All three regulator designs offer significant improvement over the current system on the nominal plant. Unfortunately, even after dropping performance requirements and designing exclusively for robust stability, robust stability cannot be achieved. The SRMS suffers from lightly damped poles with real parametric uncertainties. Such a system renders the mu-analysis test, which allows for complex perturbations, too conservative.
Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar
2017-01-01
In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of the entire networks of electricity power, involving generation, transmission and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, no study has evaluated the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models were more reliable than the traditional Network DEA model.
Absolute calibration of a charge-coupled device camera with twin beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meda, A.; Ruo-Berchera, I., E-mail: i.ruoberchera@inrim.it; Degiovanni, I. P.
2014-09-08
We report on the absolute calibration of a Charge-Coupled Device (CCD) camera by exploiting quantum correlation. This method exploits a certain number of spatial pairwise quantum correlated modes produced by spontaneous parametric down-conversion. We develop a measurement model accounting for all the uncertainty contributions, and we reach a relative uncertainty of 0.3% in the low photon flux regime. This represents a significant step forward for the characterization of (scientific) CCDs used in the mesoscopic light regime.
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Bedner, Mary
2017-06-01
Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
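A minimal sketch of the GUM Supplement 1 (Monte Carlo) side of the comparison: propagate assumed input distributions through a simple measurement equation (here a single-point calibration) and read off the standard uncertainty and coverage interval. The measurement equation, the distribution parameters, and the between-run random-effects term are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 200_000                                         # number of Monte Carlo trials

# Assumed input distributions (invented): peak-area ratio, calibrant concentration,
# gravimetric dilution factor, and a random-effects term for between-run variability.
area_ratio = rng.normal(0.842, 0.004, M)
c_calibrant = rng.normal(25.1, 0.10, M)             # µg/g
dilution = rng.normal(2.004, 0.002, M)
between_run = rng.normal(0.0, 0.15, M)              # µg/g, e.g. from a random-effects meta-analysis

# Measurement equation (single-point calibration), applied to every Monte Carlo trial.
c_sample = area_ratio * c_calibrant * dilution + between_run

est, u = c_sample.mean(), c_sample.std(ddof=1)
lo, hi = np.quantile(c_sample, [0.025, 0.975])
print(f"estimate = {est:.2f} µg/g, standard uncertainty = {u:.2f} µg/g, "
      f"95% coverage interval = [{lo:.2f}, {hi:.2f}]")
```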
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
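A minimal sketch of the outlier-exclusion step described (Tukey's fences), followed by a simple central 95% reference interval on the retained values. The simulated analyte results are invented, and the sketch ignores the age dependence, transformation, and confidence-interval steps of the full method.

```python
import numpy as np

rng = np.random.default_rng(2)
values = rng.lognormal(mean=3.0, sigma=0.3, size=300)      # invented analyte results
values = np.append(values, [250.0, 400.0])                 # two artificial outliers

# Tukey's fences: flag points more than 1.5 * IQR outside the quartiles.
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
kept = values[(values >= q1 - 1.5 * iqr) & (values <= q3 + 1.5 * iqr)]

# Central 95% reference interval from the retained observations.
lo, hi = np.percentile(kept, [2.5, 97.5])
print(f"excluded {values.size - kept.size} outliers; reference interval = [{lo:.1f}, {hi:.1f}]")
```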
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain
2015-04-01
River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). An operational version of the BaRatin software and its graphical interface are made available free of charge on request to the authors. J. Le Coz, B. Renard, L. Bonnifait, F. Branger, R. Le Boursicaud (2014). Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.
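A much-simplified sketch of step (2): Bayesian estimation of a single-control power-law rating curve Q = a (h - b)^c from a handful of gaugings with individual uncertainties, using a basic Metropolis sampler, and propagation of the parametric uncertainty to an unmeasured stage. The synthetic gaugings, priors, and tuning constants are invented; the actual BaRatin method additionally handles multiple hydraulic controls, remnant error, and stage uncertainty.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic gaugings: stage h (m), discharge Q (m3/s), and per-gauging uncertainty (std).
h_obs = np.array([0.6, 0.9, 1.3, 1.8, 2.4, 3.1])
true_a, true_b, true_c = 12.0, 0.3, 1.7
q_true = true_a * (h_obs - true_b) ** true_c
u_q = 0.06 * q_true                                  # ~6% gauging uncertainty
q_obs = q_true + rng.normal(0.0, u_q)

def log_post(theta):
    a, b, c = theta
    if a <= 0.0 or c <= 0.0 or b >= h_obs.min():     # keep h - b > 0
        return -np.inf
    q_mod = a * (h_obs - b) ** c
    loglik = -0.5 * np.sum(((q_obs - q_mod) / u_q) ** 2)
    # Weakly informative, hydraulically motivated priors -- invented for this sketch.
    logprior = (-0.5 * ((np.log(a) - np.log(10.0)) / 1.0) ** 2
                - 0.5 * ((b - 0.2) / 0.3) ** 2
                - 0.5 * ((c - 5.0 / 3.0) / 0.5) ** 2)
    return loglik + logprior

# Plain random-walk Metropolis sampler.
n_iter, step = 20_000, np.array([0.8, 0.05, 0.08])
theta, lp = np.array([10.0, 0.2, 1.6]), log_post(np.array([10.0, 0.2, 1.6]))
samples = []
for _ in range(n_iter):
    prop = theta + rng.normal(0.0, step)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[n_iter // 2:])            # discard burn-in

# Propagate the parametric uncertainty to the discharge at an unmeasured stage.
h_new = 2.0
q_new = samples[:, 0] * (h_new - samples[:, 1]) ** samples[:, 2]
print("posterior median Q(h=2.0 m):", np.median(q_new))
print("95% parametric uncertainty interval:", np.quantile(q_new, [0.025, 0.975]))
```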
Robust Control for Microgravity Vibration Isolation using Fixed Order, Mixed H2/Mu Design
NASA Technical Reports Server (NTRS)
Whorton, Mark
2003-01-01
Many space-science experiments need an active isolation system to provide a sufficiently quiescent microgravity environment. Modern control methods provide the potential for both high performance and robust stability in the presence of parametric uncertainties that are characteristic of microgravity vibration isolation systems. While H2 and H(infinity) methods are well established, neither provides the levels of attenuation performance and robust stability in a compensator with low order. Mixed H2/H(infinity) controllers provide a means for maximizing robust stability for a given level of mean-square nominal performance while directly optimizing for controller order constraints. This paper demonstrates the benefit of mixed norm design from the perspective of robustness to parametric uncertainties and controller order for microgravity vibration isolation. A nominal performance metric, analogous to the mu measure for robust stability assessment, is also introduced in order to define an acceptable trade space from which different control methodologies can be compared.
The hadronic corrections to muonic hydrogen Lamb shift from ChPT and the proton radius
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peset, Clara
2016-01-22
We obtain a model independent expression for the muonic hydrogen Lamb shift. The leading hadronic effects are controlled by the chiral theory, which allows for their model independent determination. We give their complete expression including the pion and Delta particles. Out of this analysis and the experimental measurement of the muonic hydrogen Lamb shift we determine the electromagnetic proton radius: r_p = 0.8412(15) fm. This number is at 6.8σ variance with respect to the CODATA value. The parametric control of the uncertainties allows us to obtain a model independent determination of the error, which is dominated by hadronic effects.
Le, Quang A; Bae, Yuna H; Kang, Jenny H
2016-10-01
The EMILIA trial demonstrated that trastuzumab emtansine (T-DM1) significantly increased the median progression-free and overall survival relative to combination therapy with lapatinib plus capecitabine (LC) in patients with HER2-positive advanced breast cancer (ABC) previously treated with trastuzumab and a taxane. We performed an economic analysis of T-DM1 as a second-line therapy compared to LC and monotherapy with capecitabine (C) from both perspectives of the US payer and society. We developed four possible Markov models for ABC to compare the projected life-time costs and outcomes of T-DM1, LC, and C. Model transition probabilities were estimated from the EMILIA and EGF100151 clinical trials. Direct costs of the therapies, major adverse events, laboratory tests, and disease progression, indirect costs (productivity losses due to morbidity and mortality), and health utilities were obtained from published sources. The models used a 3% discount rate and reported costs in 2015 US dollars. Probabilistic sensitivity analysis and model averaging were used to account for model parametric and structural uncertainty. When incorporating both model parametric and structural uncertainty, the resulting incremental cost-effectiveness ratios (ICER) comparing T-DM1 to LC and T-DM1 to C were $183,828 per quality-adjusted life year (QALY) and $126,001/QALY from the societal perspective, respectively. From the payer's perspective, the ICERs were $220,385/QALY (T-DM1 vs. LC) and $168,355/QALY (T-DM1 vs. C). From both perspectives of the US payer and society, T-DM1 is not cost-effective compared to the LC combination therapy at a willingness-to-pay threshold of $150,000/QALY. T-DM1 might have a better chance of being cost-effective compared to capecitabine monotherapy from the US societal perspective.
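A minimal sketch of a three-state Markov cohort model (progression-free, progressed, dead) of the kind used in such analyses, computing expected discounted costs and QALYs and the ICER for two hypothetical therapies. Every transition probability, cost, and utility below is invented and not taken from the EMILIA or EGF100151 trials.

```python
import numpy as np

def cohort_model(P, cycle_cost, utilities, n_cycles=120, disc=0.03, cycle_len=1/12):
    """Run a Markov cohort model; returns total discounted cost and QALYs per patient."""
    state = np.array([1.0, 0.0, 0.0])                # cohort starts progression-free
    cost = qaly = 0.0
    for t in range(n_cycles):
        d = (1.0 + disc) ** (-t * cycle_len)          # annual discounting, monthly cycles
        cost += d * state @ cycle_cost
        qaly += d * cycle_len * (state @ utilities)
        state = state @ P                             # advance the cohort one cycle
    return cost, qaly

# Invented monthly transition matrices, per-cycle costs ($), and utilities for two therapies.
P_new = np.array([[0.94, 0.05, 0.01], [0.00, 0.93, 0.07], [0.0, 0.0, 1.0]])
P_old = np.array([[0.90, 0.08, 0.02], [0.00, 0.92, 0.08], [0.0, 0.0, 1.0]])
cost_new = np.array([9000.0, 4000.0, 0.0])
cost_old = np.array([5000.0, 4000.0, 0.0])
utils = np.array([0.78, 0.55, 0.0])                   # utility weight by health state

c1, q1 = cohort_model(P_new, cost_new, utils)
c0, q0 = cohort_model(P_old, cost_old, utils)
icer = (c1 - c0) / (q1 - q0)
print(f"incremental cost ${c1 - c0:,.0f}, incremental QALYs {q1 - q0:.3f}, ICER ${icer:,.0f}/QALY")
```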
Exploring Land Use and Land Cover Change and Feedbacks in the Global Change Assessment Model
NASA Astrophysics Data System (ADS)
Chen, M.; Vernon, C. R.; Huang, M.; Calvin, K. V.; Le Page, Y.; Kraucunas, I.
2017-12-01
Land Use and Land Cover Change (LULCC) is a major driver of global and regional environmental change. Projections of land use change are thus an essential component in Integrated Assessment Models (IAMs) to study feedbacks between transformation of energy systems and land productivity under the context of climate change. However, the spatial scale of IAMs, e.g., the Global Change Assessment Model (GCAM), is typically larger than the scale of terrestrial processes in the human-Earth system; LULCC downscaling therefore becomes a critical linkage among these multi-scale and multi-sector processes. Parametric uncertainties in LULCC downscaling algorithms, however, have been underexplored, especially in the context of how such uncertainties could propagate to affect energy systems in a changing climate. In this study, we use a LULCC downscaling model, Demeter, to downscale GCAM-based future land use scenarios into fine spatial scales, and explore the sensitivity of downscaled land allocations to key parameters. Land productivity estimates (e.g., biomass production and crop yield) based on the downscaled LULCC scenarios are then fed to GCAM to evaluate how energy systems might change due to altered water and carbon cycle dynamics and their interactions with the human system, which would in turn affect future land use projections. We demonstrate that uncertainties in LULCC downscaling can result in significant differences in simulated scenarios, indicating the importance of quantifying parametric uncertainties in LULCC downscaling models for integrated assessment studies.
DNA binding site characterization by means of Rényi entropy measures on nucleotide transitions.
Perera, A; Vallverdu, M; Claria, F; Soria, J M; Caminal, P
2008-06-01
In this work, parametric information-theory measures for the characterization of binding sites in DNA are extended with the use of transitional probabilities on the sequence. We propose the use of parametric uncertainty measures such as Rényi entropies obtained from the transition probabilities for the study of the binding sites, in addition to nucleotide frequency-based Rényi measures. Results are reported in this work comparing transition frequencies (i.e., dinucleotides) and base frequencies for Shannon and parametric Rényi entropies for a number of binding sites found in E. Coli, lambda and T7 organisms. We observe that the information provided by both approaches is not redundant. Furthermore, under the presence of noise in the binding site matrix we observe overall improved robustness of nucleotide transition-based algorithms when compared with nucleotide frequency-based method.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
Selection of climate policies under the uncertainties in the Fifth Assessment Report of the IPCC
NASA Astrophysics Data System (ADS)
Drouet, L.; Bosetti, V.; Tavoni, M.
2015-10-01
Strategies for dealing with climate change must incorporate and quantify all the relevant uncertainties, and be designed to manage the resulting risks. Here we employ the best available knowledge so far, summarized by the three working groups of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5), to quantify the uncertainty of mitigation costs, climate change dynamics, and economic damage for alternative carbon budgets. We rank climate policies according to different decision-making criteria concerning uncertainty, risk aversion and intertemporal preferences. Our findings show that preferences over uncertainties are as important as the choice of the widely discussed time discount factor. Climate policies consistent with limiting warming to 2 °C above preindustrial levels are compatible with a subset of decision-making criteria and some model parametrizations, but not with the commonly adopted expected utility framework.
Design Analysis Kit for Optimization and Terascale Applications 6.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-19
Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to: (1) enhance understanding of risk, (2) improve products, and (3) assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a computational model. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. The algorithms implemented in Dakota aim to address challenges in performing these analyses with complex science and engineering models from desktop to high performance computers.
Verification of forecast ensembles in complex terrain including observation uncertainty
NASA Astrophysics Data System (ADS)
Dorninger, Manfred; Kloiber, Simon
2017-04-01
Traditionally, verification means to verify a forecast (ensemble) with the truth represented by observations. The observation errors are quite often neglected, the argument being that they are small when compared to the forecast error. In this study, as part of the MesoVICT (Mesoscale Verification Inter-comparison over Complex Terrain) project, it will be shown that observation errors have to be taken into account for verification purposes. The observation uncertainty is estimated from the VERA (Vienna Enhanced Resolution Analysis) and represented via two analysis ensembles which are compared to the forecast ensemble. For the whole study, results from COSMO-LEPS provided by Arpae-SIMC Emilia-Romagna are used as the forecast ensemble. The time period covers the MesoVICT core case from 20-22 June 2007. In a first step, all ensembles are investigated concerning their distribution. Several tests have been performed (Kolmogorov-Smirnov test, Finkelstein-Schafer test, chi-square test, etc.), none of which identifies an exact mathematical distribution. The main focus is therefore on non-parametric statistics (e.g. kernel density estimation, boxplots) and on the deviation between data forced to a normal distribution and the kernel density estimates. In a next step, the observational deviations represented by the analysis ensembles are analysed. In a first approach, scores are calculated repeatedly, with each member of the analysis ensemble in turn regarded as the "true" observation. The results are presented as boxplots for the different scores and parameters. Additionally, the bootstrapping method is also applied to the ensembles. These possible approaches to incorporating observational uncertainty into the computation of statistics will be discussed in the talk.
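The two ways of sampling observational uncertainty mentioned above (recomputing scores with each analysis member as the "truth", and bootstrapping) can be illustrated with a minimal sketch; the Gaussian toy data and the stand-in score below are hypothetical, not the MesoVICT data or the scores actually used.

```python
# Minimal sketch (hypothetical data and score): score spread over analysis
# members treated as "truth", and bootstrap resampling over time steps.
import numpy as np

rng = np.random.default_rng(0)
n_fc, n_an, n_times = 16, 8, 120                    # ensemble sizes, time steps
forecast = rng.normal(0.3, 1.0, (n_times, n_fc))    # forecast ensemble members
analysis = rng.normal(0.0, 0.3, (n_times, n_an))    # analysis (observation) ensemble

def ens_score(ens, obs):
    """Crude ensemble score: mean absolute error of the ensemble mean (stand-in for CRPS)."""
    return np.mean(np.abs(ens.mean(axis=1) - obs))

# 1) Treat each analysis member as the truth -> distribution of scores
scores_members = [ens_score(forecast, analysis[:, j]) for j in range(n_an)]

# 2) Bootstrap over time steps for one reference member
boot = []
for _ in range(1000):
    idx = rng.integers(0, n_times, n_times)
    boot.append(ens_score(forecast[idx], analysis[idx, 0]))

print("score spread over analysis members:", np.round(np.percentile(scores_members, [5, 50, 95]), 3))
print("bootstrap 5-95% interval          :", np.round(np.percentile(boot, [5, 95]), 3))
```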
Estimating trends in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.
2017-06-01
Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only on historical trends but also on uncertainties in future projections. We also investigate the consequence on inferred uncertainties of the choice of a statistical description of internal variability. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
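As a toy illustration of the argument, the sketch below fits a linear trend to synthetic temperature anomalies once against time and once against a made-up radiative forcing series with AR(1) internal variability, and reports the residual lag-1 autocorrelation that a naive analysis would ignore; all numbers are invented, and this is not the authors' statistical model.

```python
# Toy illustration (synthetic data, not the authors' model): trend vs time
# compared with trend vs radiative forcing, with AR(1) internal variability.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
forcing = 0.02 * (years - 1900) ** 1.5 / 10.0        # made-up forcing history (W m^-2)
noise = np.zeros(years.size)
for t in range(1, years.size):                        # AR(1) internal variability
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.08)
temp = 0.4 * forcing + noise                          # synthetic temperature anomaly (K)

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], y - X @ beta

slope_time, r_time = ols_slope(years - years.mean(), temp)
slope_forc, r_forc = ols_slope(forcing, temp)
lag1 = lambda r: np.corrcoef(r[:-1], r[1:])[0, 1]
print(f"trend vs time   : {slope_time:.4f} K/yr, residual lag-1 autocorr {lag1(r_time):.2f}")
print(f"trend vs forcing: {slope_forc:.4f} K/(W m^-2), residual lag-1 autocorr {lag1(r_forc):.2f}")
```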
Park, Gwansik; Forman, Jason; Kim, Taewung; Panzer, Matthew B; Crandall, Jeff R
2018-02-28
The goal of this study was to explore a framework for developing injury risk functions (IRFs) in a bottom-up approach based on responses of parametrically variable finite element (FE) models representing exemplar populations. First, a parametric femur modeling tool was developed and validated using a subject-specific (SS)-FE modeling approach. Second, principal component analysis and regression were used to identify parametric geometric descriptors of the human femur and the distribution of those factors for 3 target occupant sizes (5th, 50th, and 95th percentile males). Third, distributions of material parameters of cortical bone were obtained from the literature for 3 target occupant ages (25, 50, and 75 years) using regression analysis. A Monte Carlo method was then implemented to generate populations of FE models of the femur for target occupants, using the parametric femur modeling tool. Simulations were conducted with each of these models under 3-point dynamic bending. Finally, model-based IRFs were developed using logistic regression analysis, based on the moment at fracture observed in the FE simulations. In total, 100 femur FE models incorporating the variation in the population of interest were generated, and 500,000 moments at fracture were observed (applying 5,000 ultimate strain values to each of the 100 synthesized femur FE models) for each set of target occupant characteristics. Using the framework proposed in this study, model-based IRFs were developed for 3 target male occupant sizes (5th, 50th, and 95th percentiles) and ages (25, 50, and 75 years). The model-based IRF was located within the 95% confidence interval of the test-based IRF for the range of 15 to 70% injury risk. The 95% confidence interval of the developed IRF was almost in line with the mean curve due to the large number of data points. The framework proposed in this study would be beneficial for developing IRFs in a bottom-up manner, with a range of variability informed by the population-based FE model responses. Specifically, this method mitigates the uncertainties in applying empirical scaling and may improve IRF fidelity when a limited number of experimental specimens are available.
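The final logistic-regression step can be sketched as follows; the fracture moments, capacity distribution, and units below are synthetic stand-ins for the FE simulation outputs, so this is only an illustration of the IRF fitting procedure, not the study's actual data or code.

```python
# Hedged sketch: fit an injury risk function (IRF) by logistic regression of a
# binary fracture outcome on peak bending moment; all data here are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
moment = rng.normal(350.0, 60.0, 500)                     # applied peak moments (N m)
threshold = rng.normal(350.0, 60.0, 500)                  # per-model capacity (N m)
fracture = (moment > threshold).astype(float)             # binary injury outcome

def neg_log_lik(beta):
    eta = beta[0] + beta[1] * moment
    p = 1.0 / (1.0 + np.exp(-eta))
    eps = 1e-12
    return -np.sum(fracture * np.log(p + eps) + (1 - fracture) * np.log(1 - p + eps))

beta_hat = minimize(neg_log_lik, x0=np.array([0.0, 0.01])).x
irf = lambda m: 1.0 / (1.0 + np.exp(-(beta_hat[0] + beta_hat[1] * m)))
for m in (250, 350, 450):
    print(f"P(fracture | {m} N m) = {irf(m):.2f}")
```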
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
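A minimal sketch of non-intrusive polynomial chaos, assuming a toy ODE du/dt = -k u with a uniformly distributed rate constant in place of the flow problems studied: Gauss-Legendre quadrature of deterministic solver evaluations yields the chaos coefficients, from which the mean and variance of the solution follow, and a Monte Carlo check is included for comparison.

```python
# Non-intrusive PCE sketch for du/dt = -k*u, u(0)=1, with k ~ U[0.5, 1.5].
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

T, order, n_quad = 1.0, 4, 8
xi, w = leggauss(n_quad)                     # nodes/weights on [-1, 1]
k_nodes = 1.0 + 0.5 * xi                     # map to k in [0.5, 1.5]
u_nodes = np.exp(-k_nodes * T)               # "deterministic solver" evaluations

# Project onto Legendre polynomials P_0..P_order (orthogonal for the uniform density)
coeffs = []
for n in range(order + 1):
    Pn = legval(xi, [0] * n + [1])
    norm = 2.0 / (2 * n + 1)                 # integral of P_n^2 over [-1, 1]
    coeffs.append(np.sum(w * u_nodes * Pn) / norm)

mean = coeffs[0]
var = sum(c ** 2 / (2 * n + 1) for n, c in enumerate(coeffs) if n > 0)
print(f"PCE mean {mean:.5f}, std {np.sqrt(var):.5f}")

# Monte Carlo check
k_mc = np.random.default_rng(3).uniform(0.5, 1.5, 200000)
u_mc = np.exp(-k_mc * T)
print(f"MC  mean {u_mc.mean():.5f}, std {u_mc.std():.5f}")
```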
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).
NASA Astrophysics Data System (ADS)
Wang, Jin; Xu, Fan; Lu, GuoDong
2017-09-01
Simultaneous position and internal force control is more complex for cooperative manipulator systems than for a single manipulator. In the presence of unwanted parametric and modelling uncertainties as well as external disturbances, a decentralised position-synchronised force control scheme is proposed. With a feedforward neural network estimating engine, a precise model of the system dynamics is not required. Unlike conventional cooperative or synchronised controllers, virtual position and virtual synchronisation errors are introduced for internal force tracking control and task space position synchronisation. Meanwhile, joint space synchronisation and force measurement are unnecessary. Together with simulation studies and analysis, the position and the internal force errors are shown to asymptotically converge to zero. Moreover, the controller exhibits different characteristics with selected synchronisation factors. Under certain settings, it can deal with temporary cooperation through an intelligent retreat mechanism, where less internal force occurs and rigid collision can be avoided. Using a Lyapunov stability approach, the controller is proven to be robust in the face of the aforementioned uncertainties.
Bayesian analysis of stage-fall-discharge rating curves and their uncertainties
NASA Astrophysics Data System (ADS)
Mansanarez, Valentin; Le Coz, Jérôme; Renard, Benjamin; Lang, Michel; Pierrefeu, Gilles; Le Boursicaud, Raphaël; Pobanz, Karine
2016-04-01
Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. Building on existing Bayesian approaches, we introduce an original hydraulics-based method for developing SFD rating curves used at twin gauge stations and estimating their uncertainties. Conventional power functions for channel and section controls are used, and transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The difference between the reference levels at the two stations is estimated as another uncertain parameter of the SFD model. The method proposed in this presentation combines hydraulic knowledge (equations of channel or section controls) with the information available in the stage-fall-discharge observations (gauging data). The obtained total uncertainty combines the parametric uncertainty and the remnant uncertainty related to the rating curve model. This method provides a direct estimation of the physical inputs of the rating curve (roughness, width, bed slope, distance between twin gauges, etc.). The performance of the new method is tested using an application case affected by the variable backwater of a run-of-the-river dam: the Rhône river at Valence, France. In particular, a sensitivity analysis to the prior information and to the gauging dataset is performed. At that site, the stage-fall-discharge domain is well documented with gaugings conducted over a range of backwater-affected and unaffected conditions. The performance of the new model was deemed to be satisfactory. Notably, the transition to uniform flow when the overall range of the auxiliary stage is gauged is correctly simulated. The resulting curves are in good agreement with the observations (gaugings) and their uncertainty envelopes are acceptable for computing streamflow records. Similar conclusions were drawn from the application to other similar sites.
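A deliberately simplified sketch of the Bayesian machinery, assuming a single power-law rating curve Q = a(h - b)^c and synthetic gaugings rather than the full twin-gauge SFD model: a random-walk Metropolis sampler explores the posterior of the curve parameters.

```python
# Simplified sketch: Bayesian fit of Q = a*(h - b)^c by random-walk Metropolis.
# The actual SFD model involves twin gauges and backwater transitions.
import numpy as np

rng = np.random.default_rng(4)
a_true, b_true, c_true, sigma = 30.0, 0.2, 1.7, 0.05
h = rng.uniform(0.5, 3.0, 25)                                  # gauged stages (m)
Q = a_true * (h - b_true) ** c_true * np.exp(rng.normal(0, sigma, h.size))

def log_post(theta):
    a, b, c = theta
    if a <= 0 or c <= 0 or b >= h.min():                       # weak priors / physical bounds
        return -np.inf
    resid = np.log(Q) - np.log(a * (h - b) ** c)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta = np.array([20.0, 0.0, 1.5])
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [1.0, 0.02, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])                             # discard burn-in
print("posterior medians (a, b, c):", np.round(np.median(samples, axis=0), 2))
```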
Askin, Amanda Christine; Barter, Garrett; West, Todd H.; ...
2015-02-14
Here, we present a parametric analysis of factors that can influence advanced fuel and technology deployments in U.S. Class 7–8 trucks through 2050. The analysis focuses on the competition between traditional diesel trucks, natural gas vehicles (NGVs), and ultra-efficient powertrains. Underlying the study is a vehicle choice and stock model of the U.S. heavy-duty vehicle market. Moreover, the model is segmented by vehicle class, body type, powertrain, fleet size, and operational type. We find that conventional diesel trucks will dominate the market through 2050, but NGVs could have significant market penetration depending on key technological and economic uncertainties. Compressed natural gas trucks conducting urban trips in fleets that can support private infrastructure are economically viable now and will continue to gain market share. Ultra-efficient diesel trucks, exemplified by the U.S. Department of Energy's SuperTruck program, are the preferred alternative in the long haul segment, but could compete with liquefied natural gas (LNG) trucks if the fuel price differential between LNG and diesel increases. However, the greatest impact in reducing petroleum consumption and pollutant emissions comes from investing in efficiency technologies that benefit all powertrains, especially the conventional diesels that comprise the majority of the stock, instead of incentivizing specific alternatives.
Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid
2014-01-01
Background: Policy decisions regarding climate change mitigation are increasingly incorporating the beneficial and adverse health impacts of greenhouse gas emission reduction strategies. Studies of such co-benefits and co-harms involve modeling approaches requiring a range of analytic decisions that affect the model output. Objective: Our objective was to assess analytic decisions regarding model framework, structure, choice of parameters, and handling of uncertainty when modeling health co-benefits, and to make recommendations for improvements that could increase policy uptake. Methods: We describe the assumptions and analytic decisions underlying models of mitigation co-benefits, examining their effects on modeling outputs, and consider tools for quantifying uncertainty. Discussion: There is considerable variation in approaches to valuation metrics, discounting methods, uncertainty characterization and propagation, and assessment of low-probability/high-impact events. There is also variable inclusion of adverse impacts of mitigation policies, and limited extension of modeling domains to include implementation considerations. Going forward, co-benefits modeling efforts should be carried out in collaboration with policy makers; these efforts should include the full range of positive and negative impacts and critical uncertainties, as well as a range of discount rates, and should explicitly characterize uncertainty. We make recommendations to improve the rigor and consistency of modeling of health co-benefits. Conclusion: Modeling health co-benefits requires systematic consideration of the suitability of model assumptions, of what should be included and excluded from the model framework, and how uncertainty should be treated. Increased attention to these and other analytic decisions has the potential to increase the policy relevance and application of co-benefits modeling studies, potentially helping policy makers to maximize mitigation potential while simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270
NASA Astrophysics Data System (ADS)
Van Steenbergen, N.; Willems, P.
2012-04-01
Reliable flood forecasts are the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; the majority of the larger public is not interested in in-depth information on the uncertainty of the predicted water levels, but is only interested in information on the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with their advantages and disadvantages for a specific audience. A useful method to communicate uncertainty of flood forecasts is probabilistic flood mapping. These maps give a representation of the probability of flooding of a certain area, based on the uncertainty assessment of the flood forecasts. By using this type of map, water managers can focus their attention on the areas with the highest flood probability. Also, the larger public can consult these maps for information on the probability of flooding at their specific location, so that they can take pro-active measures to reduce personal damage. The method of quantifying the uncertainty was implemented in the operational flood forecasting system for the navigable rivers in the Flanders region of Belgium. The method has shown clear benefits during the floods of the last two years.
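The error-matrix idea can be illustrated with synthetic residuals: bin historical forecast errors by forecasted water level and lead time, store percentiles per bin, and look up the bin for a new forecast. The bin edges, percentiles, and residual model below are invented for illustration.

```python
# Illustrative non-parametric error matrix: percentiles of historical residuals
# binned by forecasted water level and lead time; all numbers are synthetic.
import numpy as np

rng = np.random.default_rng(5)
level = rng.uniform(0.0, 5.0, 5000)                         # forecasted water levels (m)
lead = rng.integers(1, 49, 5000)                            # lead times (h)
resid = rng.normal(0, 0.05 + 0.03 * level + 0.002 * lead)   # historical residuals (m)

level_bins = np.linspace(0, 5, 6)
lead_bins = np.array([0, 12, 24, 36, 48])
pcts = [5, 50, 95]
error_matrix = np.full((len(level_bins) - 1, len(lead_bins) - 1, len(pcts)), np.nan)
for i in range(len(level_bins) - 1):
    for j in range(len(lead_bins) - 1):
        sel = ((level >= level_bins[i]) & (level < level_bins[i + 1]) &
               (lead > lead_bins[j]) & (lead <= lead_bins[j + 1]))
        if sel.sum() > 20:
            error_matrix[i, j] = np.percentile(resid[sel], pcts)

# Quantify uncertainty of a new forecast: 3.2 m at 30 h lead time
i = np.searchsorted(level_bins, 3.2) - 1
j = np.searchsorted(lead_bins, 30) - 1
lo, med, hi = error_matrix[i, j]
print(f"forecast 3.20 m -> interval [{3.2 + lo:.2f}, {3.2 + hi:.2f}] m")
```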
NASA Astrophysics Data System (ADS)
Lahiri, B. B.; Ranoo, Surojit; Philip, John
2017-11-01
Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from the temperature rise curves, obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shielding. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, the range used for the linear regression analysis should be stated explicitly so that results can be reproduced. The effect of sample volume to area ratio on the linear heat loss rate is systematically studied and the results are compared using a lumped system thermal model. The various uncertainties involved in SAR estimation are categorized as material uncertainties, thermodynamic uncertainties and parametric uncertainties. The adiabatic reconstruction is found to decrease the uncertainties in SAR measurement by approximately three times. Additionally, a set of experimental guidelines for accurate SAR estimation using the adiabatic reconstruction protocol is also recommended. These results warrant a universal experimental and data analysis protocol for SAR measurements during field induced heating of magnetic fluids under non-adiabatic conditions.
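The sensitivity of SAR to the initial-slope fitting window, which motivates the recommendation to state the regression range explicitly, can be shown with a toy non-adiabatic heating curve; the material constants and curve parameters below are illustrative, not the measured values.

```python
# Toy SAR estimate from the initial slope of a temperature rise curve,
# showing how the result depends on the linear-fit window; values illustrative.
import numpy as np

t = np.linspace(0, 300, 301)                          # time (s)
dT_dt0, tau = 0.03, 120.0                             # initial slope (K/s), loss time scale (s)
temp = 25.0 + dT_dt0 * tau * (1 - np.exp(-t / tau))   # non-adiabatic temperature rise

c_p = 4186.0          # specific heat of carrier fluid (J kg^-1 K^-1), assumed
phi = 0.01            # mass fraction of magnetic material, assumed

def sar_initial_slope(t, T, t_max):
    """SAR from a linear fit of T(t) over the window [0, t_max] seconds."""
    sel = t <= t_max
    slope = np.polyfit(t[sel], T[sel], 1)[0]
    return c_p * slope / phi                          # W per kg of magnetic material

for window in (10, 30, 120):
    print(f"fit window 0-{window:>3d} s: SAR = {sar_initial_slope(t, temp, window):8.1f} W/kg")
```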
Comparison of radiation parametrizations within the HARMONIE-AROME NWP model
NASA Astrophysics Data System (ADS)
Rontu, Laura; Lindfors, Anders V.
2018-05-01
Downwelling shortwave radiation at the surface (SWDS, global solar radiation flux), given by three different parametrization schemes, was compared to observations in HARMONIE-AROME numerical weather prediction (NWP) model experiments over Finland in spring 2017. Simulated fluxes agreed well with each other and with the observations in the clear-sky cases. In cloudy-sky conditions, all schemes tended to underestimate SWDS at the daily level, as compared to the measurements. Large local and temporal differences between the model results and observations were seen, related to the variations and uncertainty of the predicted cloud properties. The results suggest a possibility to benefit from the use of different radiative transfer parametrizations in an NWP model to obtain perturbations for fine-resolution ensemble prediction systems. In addition, we recommend the use of global radiation observations for standard validation of NWP models.
NASA Astrophysics Data System (ADS)
Shoeibi, Samira; Taghavi-Shahri, F.; Khanpour, Hamzeh; Javidan, Kurosh
2018-04-01
In recent years, several experiments at the e-p collider HERA have collected high precision deep-inelastic scattering (DIS) data on the spectra of leading nucleons carrying a large fraction of the proton's energy. In this paper, we have analyzed recent experimental data on the production of forward protons and neutrons in DIS at HERA in the framework of perturbative QCD. We propose a technique based on the fracture functions framework, and extract the nucleon fracture functions (FFs) M2^(n/p)(x, Q2; xL) from a global QCD analysis of DIS data measured by the ZEUS Collaboration at HERA. We have shown that an approach based on the fracture functions formalism allows us to phenomenologically parametrize the nucleon FFs. Considering both leading neutron and leading proton production data at HERA, we present the results for the separate parton distributions for all parton species, including the valence quark densities, the antiquark densities, the strange sea distribution, and the gluon distribution function. We propose several parametrizations for the nucleon FFs and open the possibility of studying these asymmetries. The obtained optimum set of nucleon FFs is accompanied by Hessian uncertainty sets which allow one to propagate uncertainties to other observables of interest. The extracted results for the t-integrated leading neutron F2^LN(3)(x, Q2; xL) and leading proton F2^LP(3)(x, Q2; xL) structure functions are in good agreement with all data analyzed, for a wide range of the fractional momentum variable x as well as the longitudinal momentum fraction xL.
Assessment of SFR Wire Wrap Simulation Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delchini, Marc-Olivier G.; Popov, Emilian L.; Pointer, William David
Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model's input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis to three turbulence models was conducted for a turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STARCCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results for the 3-D pipe, the single pin THORS mesh, and the 7-pin bundle mesh, respectively.
Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images
NASA Astrophysics Data System (ADS)
Ely, G.; Malcolm, A. E.; Poliannikov, O. V.
2017-12-01
Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimation. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create qualitative descriptions of seismic image uncertainty and to put error bounds on quantities of interest such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.
On the synergy of nuclear data for fusion and model assumptions
NASA Astrophysics Data System (ADS)
Avrigeanu, Vlad; Avrigeanu, Marilena
2017-09-01
A deuteron breakup (BU) parametrization is employed in the BU analysis of a recently measured reaction-in-flight (RIF) neutron time-of-flight spectrum, while open questions raised previously on related fast-neutron induced reactions on Zr isotopes are also addressed in a consistent way, together with the use of a recent optical potential for α-particles to understand the large discrepancy between the measured and calculated cross sections of the 94Zr(n,α)91Sr reaction. Thus the synergy between the above-mentioned three distinct subjects may finally lead to smaller uncertainties of the nuclear data for fusion, while the RIF neutron spectra may also be used to support nuclear model assumptions.
Li, Zhaoying; Zhou, Wenjie; Liu, Hao
2016-09-01
This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics including parametric uncertainties, unmodeled model uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal second order sliding mode control for nonlinear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-07-01
In this paper, a chattering-free optimal second order sliding mode control (OSOSMC) method is proposed to stabilize nonlinear systems affected by uncertainties. The nonlinear optimal control strategy is based on the control Lyapunov function (CLF). For ensuring robustness of the optimal controller in the presence of parametric uncertainty and external disturbances, a sliding mode control scheme is realized by combining an integral and a terminal sliding surface. The resulting second order sliding mode can effectively reduce chattering in the control input. Simulation results confirm the superiority of the proposed optimal second order sliding mode control over some existing sliding mode controllers in controlling nonlinear systems affected by uncertainty. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Hartzell, S.
1989-01-01
The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to the solution for the rupture history of a finite fault. The inversion of different waveform data is considered, including both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation of t* with frequency most limit the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation effects is the limiting factor in the resolution of source parameters. -from Author
Robust Control for The G-Limit Microgravity Vibration Isolation System
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
Many microgravity science experiments need an active isolation system to provide a sufficiently quiescent acceleration environment. The g-LIMIT vibration isolation system will provide isolation for Microgravity Science Glovebox experiments in the International Space Station. While standard control system technologies have been demonstrated for these applications, modern control methods have the potential for meeting performance requirements while providing robust stability in the presence of parametric uncertainties that are characteristic of microgravity vibration isolation systems. While H2 and H infinity methods are well established, neither provides the levels of attenuation performance and robust stability in a compensator with low order. Mixed H2/mu controllers provide a means for maximizing robust stability for a given level of mean-square nominal performance while directly optimizing for controller order constraints. This paper demonstrates the benefit of mixed norm design from the perspective of robustness to parametric uncertainties and controller order for microgravity vibration isolation. A nominal performance metric analogous to the mu measure for robust stability assessment is also introduced in order to define an acceptable trade space from which different control methodologies can be compared.
Made-to-measure modelling of observed galaxy dynamics
NASA Astrophysics Data System (ADS)
Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.
2018-01-01
Among dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is one of the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
NASA Astrophysics Data System (ADS)
Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu
2008-09-01
Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
Robust simulation of buckled structures using reduced order modeling
NASA Astrophysics Data System (ADS)
Wiebe, R.; Perez, R. A.; Spottswood, S. M.
2016-09-01
Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
Bayesian-information-gap decision theory with an application to CO2 sequestration
O'Malley, D.; Vesselinov, V. V.
2015-09-04
Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows
NASA Astrophysics Data System (ADS)
Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs
2017-11-01
A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
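The averaging idea can be illustrated with a one-dimensional toy: treat the empirical drag correction factor as a random variable, integrate the point-particle velocity equation for each sample, and estimate the first moments of the velocity. Values and the sampling approach below are illustrative only; the paper's method is based on averaged governing equations rather than brute-force sampling.

```python
# One-dimensional toy: parametric uncertainty in the drag correction factor f
# in dv/dt = f*(u - v)/tau_p, propagated by sampling; values illustrative.
import numpy as np

rng = np.random.default_rng(6)
u_fluid, tau_p, dt, n_steps = 1.0, 0.05, 1e-3, 500
f_samples = rng.normal(1.0, 0.1, 2000)            # uncertain empirical drag correction

def particle_velocity(f):
    v = 0.0
    for _ in range(n_steps):
        v += dt * f * (u_fluid - v) / tau_p       # explicit Euler step of the drag law
    return v

v_end = np.array([particle_velocity(f) for f in f_samples])
print(f"t = {n_steps*dt:.2f} s: mean velocity {v_end.mean():.4f}, std {v_end.std():.4f}")
```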
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
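A small sketch of alpha-cut propagation for fuzzy parameters (the uncertainty treatment underlying the approach, not the HDMR metamodel itself): the first natural frequency of a spring-mass system with triangular fuzzy stiffness and mass is bracketed cut by cut; the vertex evaluation used here is valid because the response is monotone in both inputs, and all numbers are invented.

```python
# Alpha-cut propagation of triangular fuzzy stiffness and mass through the
# natural frequency of a spring-mass system; toy values, vertex evaluation.
import numpy as np
from itertools import product

def tri_interval(lo, peak, hi, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return lo + alpha * (peak - lo), hi - alpha * (hi - peak)

freq = lambda k, m: np.sqrt(k / m) / (2 * np.pi)

for alpha in (0.0, 0.5, 1.0):
    k_int = tri_interval(0.9e5, 1.0e5, 1.1e5, alpha)       # fuzzy stiffness (N/m)
    m_int = tri_interval(9.0, 10.0, 11.0, alpha)           # fuzzy mass (kg)
    vals = [freq(k, m) for k, m in product(k_int, m_int)]  # vertex evaluation (monotone case)
    print(f"alpha = {alpha:3.1f}: frequency in [{min(vals):.2f}, {max(vals):.2f}] Hz")
```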
Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary
NASA Technical Reports Server (NTRS)
Stanford, Bret K.; Massey, Steven J.
2017-01-01
A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.
Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models
NASA Astrophysics Data System (ADS)
Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.
2007-01-01
Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and for analysis of error distributions.
Landing Gear Noise Prediction and Analysis for Tube-and-Wing and Hybrid-Wing-Body Aircraft
NASA Technical Reports Server (NTRS)
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
Improvements and extensions to landing gear noise prediction methods are developed. New features include installation effects such as reflection from the aircraft, gear truck angle effect, local flow calculation at the landing gear locations, gear size effect, and directivity for various gear designs. These new features have not only significantly improved the accuracy and robustness of the prediction tools, but also have enabled applications to unconventional aircraft designs and installations. Systematic validations of the improved prediction capability are then presented, including parametric validations in functional trends as well as validations in absolute amplitudes, covering a wide variety of landing gear designs, sizes, and testing conditions. The new method is then applied to selected concept aircraft configurations in the portfolio of the NASA Environmentally Responsible Aviation Project envisioned for the timeframe of 2025. The landing gear noise levels are on the order of 2 to 4 dB higher than previously reported predictions due to increased fidelity in accounting for installation effects and gear design details. With the new method, it is now possible to reveal and assess the unique noise characteristics of landing gear systems for each type of aircraft. To address the inevitable uncertainties in predictions of landing gear noise models for future aircraft, an uncertainty analysis is given, using the method of Monte Carlo simulation. The standard deviation of the uncertainty in predicting the absolute level of landing gear noise is quantified and determined to be 1.4 EPNL dB.
An adaptive robust controller for time delay maglev transportation systems
NASA Astrophysics Data System (ADS)
Milani, Reza Hamidi; Zarabadipour, Hassan; Shahnazi, Reza
2012-12-01
For engineering systems, uncertainties and time delays are two important issues that must be considered in control design. Uncertainties are often encountered in various dynamical systems due to modeling errors, measurement noise, linearization and approximations. Time delays have always been among the most difficult problems encountered in process control. In practical applications of feedback control, time delay arises frequently and can severely degrade closed-loop system performance and, in some cases, drive the system to instability. Therefore, stability analysis and controller synthesis for uncertain nonlinear time-delay systems are important both in theory and in practice, and many analytical techniques have been developed using delay-dependent Lyapunov functions. In the past decade, the magnetic levitation (maglev) transportation system, as a new system with high functionality, has been the focus of numerous studies. However, maglev transportation systems are highly nonlinear and thus designing controllers for them is challenging. The main topic of this paper is to design an adaptive robust controller for maglev transportation systems with time delay, parametric uncertainties and external disturbances. In this paper, an adaptive robust control (ARC) is designed for this purpose. It should be noted that the adaptive gain is derived from the Lyapunov-Krasovskii synthesis method; therefore asymptotic stability is guaranteed.
Klier, Christine
2012-03-06
The integration of genome-scale, constraint-based models of microbial cell function into simulations of contaminant transport and fate in complex groundwater systems is a promising approach to help characterize the metabolic activities of microorganisms in natural environments. In constraint-based modeling, the specific uptake flux rates of external metabolites are usually determined by Michaelis-Menten kinetic theory. However, extensive data sets based on experimentally measured values are not always available. In this study, a genome-scale model of Pseudomonas putida was used to study the key issue of uncertainty arising from the parametrization of the influx of two growth-limiting substrates: oxygen and toluene. The results showed that simulated growth rates are highly sensitive to substrate affinity constants and that uncertainties in specific substrate uptake rates have a significant influence on the variability of simulated microbial growth. Michaelis-Menten kinetic theory does not, therefore, seem to be appropriate for descriptions of substrate uptake processes in the genome-scale model of P. putida. Microbial growth rates of P. putida in subsurface environments can only be accurately predicted if the processes of complex substrate transport and microbial uptake regulation are sufficiently understood in natural environments and if data-driven uptake flux constraints can be applied.
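A schematic example of the parametrization at issue, with invented numbers and a toy surrogate in place of the genome-scale FBA problem: the specific uptake bound follows Michaelis-Menten kinetics, and the resulting growth estimate shifts noticeably as the affinity constant K_s is varied.

```python
# Schematic Michaelis-Menten uptake bounds and a toy growth surrogate
# (not the actual genome-scale model); numbers are illustrative only.
import numpy as np

def uptake_bound(v_max, K_s, conc):
    """Michaelis-Menten limit on the specific uptake rate (mmol gDW^-1 h^-1)."""
    return v_max * conc / (K_s + conc)

def toy_growth_rate(v_toluene, v_oxygen):
    """Stand-in for the FBA optimum: growth limited by the scarcer resource."""
    return min(0.08 * v_toluene, 0.02 * v_oxygen)

toluene, oxygen = 0.05, 0.2          # ambient concentrations (mM), illustrative
for K_s_tol in (0.005, 0.05, 0.5):   # vary the toluene affinity constant
    v_tol = uptake_bound(7.0, K_s_tol, toluene)
    v_o2 = uptake_bound(20.0, 0.02, oxygen)
    print(f"K_s(toluene) = {K_s_tol:5.3f} mM -> growth ~ {toy_growth_rate(v_tol, v_o2):.3f} 1/h")
```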
Critical discussion on the "observed" water balances of five sub-basins in the Everest region
NASA Astrophysics Data System (ADS)
Chevallier, P.; Eeckman, J.; Nepal, S.; Delclaux, F.; Wagnon, P.; Brun, F.; Koirala, D.
2017-12-01
The hydrometeorological components of five Dudh Koshi River sub-basins on the Nepalese side of Mount Everest have been monitored during four hydrological years (2013-2017), with altitudes ranging from 2000 m to the summit of Everest, areas between 4.65 and 1207 km², and proportions of glaciated area between nil and 45%. This data set is completed with glacier mass balance observations. The analysis of the observed data and the resulting water balances show large uncertainties of different types: aleatory, epistemic or semantic, following the classification proposed by Beven (2016). The discussion is illustrated using results from two modeling approaches, physical (ISBA, Noilhan and Planton, 1996) and conceptual (J2000, Krause, 2001), as well as large scale glacier mass balances obtained by means of a recent remote sensing processing method. References: Beven, K., 2016. Facets of uncertainty: epistemic uncertainty, non-stationarity, likelihood, hypothesis testing, and communication. Hydrological Sciences Journal 61, 1652-1665. doi:10.1080/02626667.2015.1031761 Krause, P., 2001. Das hydrologische Modellsystem J2000: Beschreibung und Anwendung in großen Flußeinzugsgebieten, Schriften des Forschungszentrum Jülich. Reihe Umwelt/Environment; Band 29. Noilhan, J., Planton, S., 1989. A simple parametrization of land surface processes for meteorological models. Monthly Weather Review 536-549.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve errors (parametric and structural) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are eventually discussed.
Robust control of seismically excited cable stayed bridges with MR dampers
NASA Astrophysics Data System (ADS)
YeganehFallah, Arash; Khajeh Ahamd Attari, Nader
2017-03-01
In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructure subjected to seismic and wind loads. However, reliable active and semi-active control requires that information about uncertainties be included in the controller design. In real-world civil structures, parameters such as loading locations, stiffness, mass and damping are time-variant and uncertain; in many cases these uncertainties are modeled as parametric uncertainties. The motivation of this research is to design a robust controller for attenuating the vibrational responses of civil infrastructure while accounting for its dynamic uncertainties. Uncertainties in the structural dynamics parameters are modeled as affine uncertainties in the state-space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and treated as unknown but norm-bounded inputs to the system. A robust H∞ controller is designed for the decoupled system to regulate the evaluation outputs and is robust to the effects of uncertainties, disturbances and sensor noise. The cable-stayed bridge benchmark equipped with MR dampers is considered for the numerical simulation. The simulated results show that the proposed robust controller can effectively mitigate the undesired effects of uncertainties on the system's response under seismic loading.
Non-parametric determination of H and He interstellar fluxes from cosmic-ray data
NASA Astrophysics Data System (ADS)
Ghelfi, A.; Barao, F.; Derome, L.; Maurin, D.
2016-06-01
Context. Top-of-atmosphere (TOA) cosmic-ray (CR) fluxes from satellites and balloon-borne experiments are snapshots of the solar activity imprinted on the interstellar (IS) fluxes. Given a series of snapshots, the unknown IS flux shape and the level of modulation (for each snapshot) can be recovered. Aims: We wish (I) to provide the most accurate determination of the IS H and He fluxes from TOA data alone; (II) to obtain the associated modulation levels (and uncertainties) while fully accounting for the correlations with the IS flux uncertainties; and (III) to inspect whether the minimal force-field approximation is sufficient to explain all the data at hand. Methods: Using H and He TOA measurements, including the recent high-precision AMS, BESS-Polar, and PAMELA data, we performed a non-parametric fit of the IS fluxes JIS (H and He) and of the modulation level φi for each data-taking period. We relied on a Markov chain Monte Carlo (MCMC) engine to extract the probability density function and correlations (hence the credible intervals) of the sought parameters. Results: Although H and He are the most abundant and best measured CR species, several datasets had to be excluded from the analysis because of inconsistencies with other measurements. From the subset of data passing our consistency cut, we provide ready-to-use best-fit values and credible intervals for the H and He IS fluxes from MeV/n to PeV/n energies (with a relative precision in the range 2-10% at 1σ). Given the strong correlation between the JIS and φi parameters, the uncertainties on JIS translate into Δφ ≈ ±30 MV (at 1σ) for all experiments. We also find that the presence of 3He in the He data biases φ towards higher values by ~30 MV. The force-field approximation, despite its limitations, gives an excellent (χ2/d.o.f. = 1.02) description of the recent high-precision TOA H and He fluxes. Conclusions: The analysis must be extended to different charge species and to more realistic modulation models. It would benefit from the AMS-02 unique capability of providing frequent high-precision snapshots of the TOA fluxes over a full solar cycle.
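For readers unfamiliar with the force-field approximation referenced in the conclusions, the sketch below applies its standard transformation, J_TOA(E) = J_IS(E + Φ) · E(E + 2m_p) / [(E + Φ)(E + Φ + 2m_p)], to an illustrative power-law proton flux; the interstellar spectrum and the modulation level φ = 550 MV are placeholders, not the fitted values of the paper.

```python
import numpy as np

M_P = 0.938272  # proton mass, GeV

def force_field_toa(j_is, e_toa, phi):
    """Modulate an interstellar proton flux j_is(E) to top-of-atmosphere with the
    force-field approximation (E = kinetic energy in GeV; for protons, phi in GV ~ GeV)."""
    e_is = e_toa + phi
    return j_is(e_is) * e_toa * (e_toa + 2.0 * M_P) / (e_is * (e_is + 2.0 * M_P))

# Purely illustrative power-law IS flux (not the fitted spectrum of the paper)
j_is = lambda e: 1.0e4 * e**-2.7

e = np.logspace(-1, 2, 50)                       # 0.1 - 100 GeV
print(force_field_toa(j_is, e, phi=0.55)[:3])    # one phi = 550 MV "snapshot"
```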
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
Uncertainties in nuclear transition matrix elements for neutrinoless ββ decay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rath, P. K.
Uncertainties in the nuclear transition matrix elements M(0ν) and MN(0ν) due to the exchange of light and heavy Majorana neutrinos, respectively, have been estimated by calculating sets of twelve nuclear transition matrix elements for the neutrinoless ββ decay of the 94,96Zr, 98,100Mo, 104Ru, 110Pd, 128,130Te and 150Nd isotopes for the 0+ → 0+ transition, considering four different parameterizations of a Hamiltonian with pairing plus multipolar effective two-body interaction and three different parameterizations of Jastrow short-range correlations. Exclusion of nuclear transition matrix elements calculated with the Miller-Spencer parametrization reduces the uncertainties by 10%-15%.
Robust linear quadratic designs with respect to parameter uncertainty
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1992-01-01
The authors derive a linear quadratic regulator (LQR) which is robust to parametric uncertainty by using the overbounding method of I. R. Petersen and C. V. Hollot (1986). The resulting controller is determined from the solution of a single modified Riccati equation. It is shown that, when applied to a structural system, the controller gains add robustness by minimizing the potential energy of uncertain stiffness elements, and minimizing the rate of dissipation of energy through uncertain damping elements. A worst-case disturbance in the direction of the uncertainty is also considered. It is proved that performance robustness has been increased with the robust LQR when compared to a mismatched LQR design where the controller is designed on the nominal system, but applied to the actual uncertain system.
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous parametric inferential statistical measures are calculated from…
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling, driven by the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on single-objective likelihoods (streamflow or LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions in order to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits between cases based on multi-objective likelihoods and those based on single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
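As a rough illustration of how the KLD diagnostic works, the sketch below approximates KL(posterior || prior) from samples using a shared histogram grid; the sample distributions are synthetic stand-ins for MCMC output, and the estimator is deliberately crude (a KDE-based version would be smoother).

```python
import numpy as np

def kld_from_samples(post, prior, bins=60):
    """Approximate KL(posterior || prior) from samples via a shared histogram grid."""
    lo = min(post.min(), prior.min())
    hi = max(post.max(), prior.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(post, bins=edges, density=True)
    q, _ = np.histogram(prior, bins=edges, density=True)
    w = np.diff(edges)
    mask = (p > 0) & (q > 0)          # restrict to bins where both densities are positive
    return np.sum(p[mask] * np.log(p[mask] / q[mask]) * w[mask])

rng = np.random.default_rng(1)
prior = rng.uniform(0.0, 1.0, 50_000)          # uniform prior on a parameter
post_informed = rng.normal(0.4, 0.05, 50_000)  # data-sensitive parameter: posterior far from prior
post_flat = rng.uniform(0.0, 1.0, 50_000)      # insensitive parameter: posterior ~ prior

print(kld_from_samples(post_informed, prior))  # large KLD -> parameter well identified
print(kld_from_samples(post_flat, prior))      # near zero -> prior dominates
```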
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, which is useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, the wild bootstrap was proposed, which can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like the wild bootstrap, residual bootstrap is applicable to single-acquisition schemes, and both are based on regression residuals (so-called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of the measured diffusion-attenuated signals can be modeled, which is in fact the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us choose the optimal approach for estimating uncertainties, which can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimization of DTI methods.
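The residual-bootstrap idea can be illustrated on an ordinary linear least-squares fit, which has the same structure as the log-linearized tensor fit; the sketch below is generic and is not the authors' DTI implementation, and the design matrix, noise level, and residual correction are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear signal model (a stand-in for log diffusion-attenuated signals): y = X @ beta + noise
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.normal(0, 0.1, n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
p = X.shape[1]
resid_c = resid * np.sqrt(n / (n - p))   # simple degrees-of-freedom correction of the residuals

B = 2000
boot = np.empty((B, p))
for b in range(B):
    # Resample residuals with replacement and refit: "model-based resampling"
    y_star = X @ beta_hat + rng.choice(resid_c, size=n, replace=True)
    boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

print("bootstrap SE of slope:", boot[:, 1].std(ddof=1))
```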
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Huiping; Qian, Yun; Zhao, Chun
2015-09-09
In this study, we adopt a parametric sensitivity analysis framework that integrates a quasi-Monte Carlo parameter sampling approach and a surrogate model to examine aerosol effects on the East Asian Monsoon climate simulated in the Community Atmosphere Model (CAM5). A total of 256 CAM5 simulations are conducted to quantify the model responses to the uncertain parameters associated with cloud microphysics parameterizations and aerosol (e.g., sulfate, black carbon (BC), and dust) emission factors and their interactions. Results show that the interaction terms among parameters are important for quantifying the sensitivity of fields of interest, especially precipitation, to the parameters. The relative importance of cloud-microphysics parameters and emission factors (strength) depends on the evaluation metrics or model fields of interest, and the presence of uncertainty in cloud microphysics imposes an additional challenge in quantifying the impact of aerosols on cloud and climate. Due to their different optical and microphysical properties and spatial distributions, sulfate, BC, and dust aerosols have very different impacts on the East Asian Monsoon through aerosol-cloud-radiation interactions. The climatic effects of aerosol do not always respond monotonically to changes in the emission factors. The spatial patterns of both the sign and the magnitude of aerosol-induced changes in radiative fluxes, cloud, and precipitation can differ, depending on the aerosol type, when parameters are sampled in different ranges of values. We also identify the cloud microphysical parameters that have the most significant impact on the climatic effects induced by sulfate, BC and dust, respectively, in East Asia.
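A minimal sketch of the sampling-plus-surrogate workflow is given below, assuming a toy three-parameter model in place of CAM5 and a quadratic polynomial response surface in place of the study's surrogate; `scipy.stats.qmc.Sobol` stands in for the quasi-Monte Carlo design, and the parameter ranges are hypothetical.

```python
import numpy as np
from scipy.stats import qmc

# Toy model standing in for the climate model: the output depends on three "parameters"
def model(theta):
    x1, x2, x3 = theta.T
    return 2.0 * x1 + x2**2 + 0.5 * x1 * x3      # includes an interaction term on purpose

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
u = sampler.random_base2(m=8)                    # 2^8 = 256 quasi-random samples
lo = np.array([0.0, 0.0, 0.0])
hi = np.array([1.0, 2.0, 1.0])                   # hypothetical parameter ranges
theta = qmc.scale(u, lo, hi)
y = model(theta)

# Quadratic surrogate with pairwise interactions, fit by least squares
x1, x2, x3 = theta.T
A = np.column_stack([np.ones_like(x1), x1, x2, x3,
                     x1**2, x2**2, x3**2, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("interaction coefficients (x1*x2, x1*x3, x2*x3):", coef[7:])
```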
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Kyoung Mo; Jee, Kye Kwang; Pyo, Chang Ryul
The basis of the leak-before-break (LBB) concept is to demonstrate that piping will leak significantly before a double-ended guillotine break (DEGB) occurs. This is demonstrated by quantifying and evaluating the leak process and prescribing safe shutdown of the plant on the basis of the monitored leak rate. The application of LBB to power plant design has reduced plant cost while improving plant integrity. Several evaluations employing LBB analysis on system piping based on DEGB design have been completed. However, the application of LBB to main steam (MS) piping, which is LBB-applicable piping, has not been performed due to several uncertainties associated with the occurrence of steam hammer and dynamic strain aging (DSA). The objective of this paper is to demonstrate the applicability of the LBB design concept to main steam lines manufactured with SA106 Gr.C carbon steel. Based on the material properties, including fracture toughness and tensile properties obtained from comprehensive material tests for base and weld metals, a parametric study was performed as described in this paper. The PICEP code was used to determine the leak size crack (LSC) and the FLET code was used to perform the stability assessment of the MS piping. The effects of the material properties obtained from the tests were evaluated to determine the LBB applicability for the MS piping. This parametric study shows that the MS piping is a strong candidate for design using LBB analysis.
Emissions from ships in the northwestern United States.
Corbett, James J
2002-03-15
Recent inventory efforts have focused on developing nonroad inventories for emissions modeling and policy insights. Characterizing these inventories geographically and explicitly treating the uncertainties that result from limited emissions testing, incomplete activity and usage data, and other important input parameters currently pose the largest methodological challenges. This paper presents a commercial marine vessel (CMV) emissions inventory for Washington and Oregon using detailed statistics regarding fuel consumption, vessel movements, and cargo volumes for the Columbia and Snake River systems. The inventory estimates emissions for oxides of nitrogen (NOx), particulate matter (PM), and oxides of sulfur (SOx). This analysis estimates that annual NOx emissions from marine transportation in the Columbia and Snake River systems in Washington and Oregon equal 6900 t of NOx (as NO2) per year, 2.6 times greater than previous NOx inventories for this region. Statewide CMV NOx emissions are estimated to be 9800 t of NOx per year. By relying on a "bottom-up" fuel consumption model that includes vessel characteristics and transit information, the river system inventory may be more accurate than previous estimates. This inventory provides modelers with bounded parametric inputs for sensitivity analysis in pollution modeling. The ability to parametrically model the uncertainty in commercial marine vessel inventories also will help policy-makers determine whether better policy decisions can be enabled through further vessel testing and improved inventory resolution.
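For a single vessel transit, the "bottom-up" activity-based calculation reduces to engine energy multiplied by an emission factor; the sketch below uses entirely hypothetical engine power, load factor, transit hours, and NOx emission factor, not values from the Columbia/Snake River inventory.

```python
# Minimal "bottom-up" activity-based emission estimate for one vessel transit.
# All numbers are illustrative placeholders, not values from the inventory.
installed_power_kw = 7_500      # main engine rating
load_factor = 0.65              # fraction of rated power used in transit
transit_hours = 20.0
ef_nox_g_per_kwh = 12.0         # assumed NOx emission factor for a marine diesel

energy_kwh = installed_power_kw * load_factor * transit_hours
nox_tonnes = energy_kwh * ef_nox_g_per_kwh / 1.0e6
print(f"NOx for this transit: {nox_tonnes:.2f} t")
```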
Error models for official mortality forecasts.
Alho, J M; Spencer, B D
1990-09-01
"The Office of the Actuary, U.S. Social Security Administration, produces alternative forecasts of mortality to reflect uncertainty about the future.... In this article we identify the components and assumptions of the official forecasts and approximate them by stochastic parametric models. We estimate parameters of the models from past data, derive statistical intervals for the forecasts, and compare them with the official high-low intervals. We use the models to evaluate the forecasts rather than to develop different predictions of the future. Analysis of data from 1972 to 1985 shows that the official intervals for mortality forecasts for males or females aged 45-70 have approximately a 95% chance of including the true mortality rate in any year. For other ages the chances are much less than 95%." excerpt
NASA Astrophysics Data System (ADS)
Shao, Xingling; Liu, Jun; Wang, Honglun
2018-05-01
In this paper, a robust back-stepping output feedback trajectory tracking controller is proposed for quadrotors subject to parametric uncertainties and external disturbances. Based on the hierarchical control principle, the quadrotor dynamics is decomposed into translational and rotational subsystems to facilitate the back-stepping control design. With given model information incorporated into observer design, a high-order extended state observer (ESO) that relies only on position measurements is developed to estimate the remaining unmeasurable states and the lumped disturbances in rotational subsystem simultaneously. To overcome the problem of "explosion of complexity" in the back-stepping design, the sigmoid tracking differentiator (STD) is introduced to compute the derivative of virtual control laws. The advantage is that the proposed controller via output-feedback scheme not only can ensure good tracking performance using very limited information of quadrotors, but also has the ability of handling the undesired uncertainties. The stability analysis is established using the Lyapunov theory. Simulation results demonstrate the effectiveness of the proposed control scheme in achieving a guaranteed tracking performance with respect to an 8-shaped reference trajectory.
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
Adaptive relative pose control of spacecraft with model couplings and uncertainties
NASA Astrophysics Data System (ADS)
Sun, Liang; Zheng, Zewei
2018-02-01
The spacecraft pose tracking control problem for an uncertain pursuer approaching a space target is studied in this paper. After modeling the nonlinearly coupled dynamics of the relative translational and rotational motions between the two spacecraft, position tracking and attitude synchronization controllers are developed independently using a robust adaptive control approach. The unknown kinematic couplings, parametric uncertainties, and bounded external disturbances are handled with adaptive updating laws. It is proved via the Lyapunov method that the pose tracking errors converge to zero asymptotically. Spacecraft close-range rendezvous and proximity operations are introduced as an example to validate the effectiveness of the proposed control approach.
A probabilistic strategy for parametric catastrophe insurance
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Martina, Mario; Stephenson, David; Youngman, Benjamin
2017-04-01
Economic losses due to natural hazards have shown an upward trend since 1980, which is expected to continue. Recent years have seen a growing worldwide commitment towards the reduction of disaster losses. This requires effective management of disaster risk at all levels, a part of which involves reducing financial vulnerability to disasters ex-ante, ensuring that necessary resources will be available following such events. One way to achieve this is through risk transfer instruments. These can be based on different types of triggers, which determine the conditions under which payouts are made after an event. This study focuses on parametric triggers, where payouts are determined by the occurrence of an event exceeding specified physical parameters at a given location, or at multiple locations, or over a region. This type of product offers a number of important advantages, and its adoption is increasing. The main drawback of parametric triggers is their susceptibility to basis risk, which arises when there is a mismatch between triggered payouts and the occurrence of loss events. This is unavoidable in said programmes, as their calibration is based on models containing a number of different sources of uncertainty. Thus, a deterministic definition of the loss event triggering parameters appears flawed. However, often for simplicity, this is the way in which most parametric models tend to be developed. This study therefore presents an innovative probabilistic strategy for parametric catastrophe insurance. It is advantageous as it recognizes uncertainties and minimizes basis risk while maintaining a simple and transparent procedure. A logistic regression model is constructed here to represent the occurrence of loss events based on certain loss index variables, obtained through the transformation of input environmental variables. Flood-related losses due to rainfall are studied. The resulting model is able, for any given day, to issue probabilities of occurrence of loss events. Due to the nature of parametric programmes, it is still necessary to clearly define when a payout is due or not, and so a decision threshold probability above which a loss event is considered to occur must be set, effectively converting the issued probabilities into deterministic binary outcomes. Model skill and value are evaluated over the range of possible threshold probabilities, with the objective of defining the optimal one. The predictive ability of the model is assessed. In terms of value assessment, a decision model is proposed, allowing users to quantify monetarily their expected expenses when different combinations of model event triggering and actual event occurrence take place, directly tackling the problem of basis risk.
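A compact sketch of the trigger-calibration step is shown below: a logistic regression maps a synthetic loss index to event probabilities, and the decision threshold is chosen to minimize an assumed cost of basis risk (false triggers versus missed events). The data, cost values, and use of `scikit-learn` are illustrative, not the study's calibrated model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic daily data: a rainfall-based loss index and whether a loss event occurred
n = 5_000
loss_index = rng.gamma(shape=2.0, scale=10.0, size=n)
p_true = 1.0 / (1.0 + np.exp(-(loss_index - 40.0) / 5.0))
event = rng.random(n) < p_true

clf = LogisticRegression().fit(loss_index.reshape(-1, 1), event)
p_hat = clf.predict_proba(loss_index.reshape(-1, 1))[:, 1]   # probability of a loss event

# Choose the decision threshold that minimizes an assumed expected cost of basis risk
payout, uncovered_loss = 1.0, 3.0   # illustrative cost of a false trigger vs. a missed event
thresholds = np.linspace(0.05, 0.95, 19)
costs = [(payout * np.sum(~event & (p_hat >= t)) +
          uncovered_loss * np.sum(event & (p_hat < t))) / n for t in thresholds]
print("optimal threshold:", thresholds[int(np.argmin(costs))])
```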
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by allowing different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e., λ and γ) indicates that the weight coefficient is the dominant factor for system efficiency compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
A non-parametric consistency test of the ΛCDM model with Planck CMB data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr
Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
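The consistency test can be mimicked in a few lines: fit a GP with an RBF-plus-white-noise kernel to the residuals of the data about a best-fit model and check whether the reconstructed mean departs from zero. The sketch below uses synthetic residuals and `scikit-learn`'s GP regressor rather than the Planck spectra or the authors' pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Residuals of a data set with respect to a model best fit (synthetic stand-in)
x = np.linspace(0, 10, 200)[:, None]
resid = rng.normal(0, 1.0, 200)          # consistent case: pure noise
# resid += 0.8 * np.sin(x.ravel())       # uncomment to inject a smooth "inconsistency"

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, resid)
mean, std = gp.predict(x, return_std=True)

# If the model is consistent with the data, the reconstructed mean should hug zero
print("max |reconstructed residual| in sigma units:", np.max(np.abs(mean) / std))
```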
Statistical plant set estimation using Schroeder-phased multisinusoidal input design
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
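For reference, a Schroeder-phased multisine with N equal-amplitude harmonics uses phases φ_k = -π k(k - 1)/N, which concentrates input energy at the design frequencies while keeping the crest factor low. The sketch below generates such a signal; the sampling rate, harmonic count, and normalization are illustrative choices, not the paper's input design.

```python
import numpy as np

def schroeder_multisine(n_harmonics, n_samples, f0=1.0, fs=None):
    """Flat-amplitude multisine with Schroeder phases phi_k = -pi*k*(k-1)/N."""
    if fs is None:
        fs = 2.5 * n_harmonics * f0                 # comfortably above Nyquist for the top harmonic
    t = np.arange(n_samples) / fs
    k = np.arange(1, n_harmonics + 1)
    phases = -np.pi * k * (k - 1) / n_harmonics
    # Sum of equal-amplitude sinusoids at the design frequencies k*f0
    u = np.sum(np.cos(2 * np.pi * np.outer(t, k * f0) + phases), axis=1)
    return t, u / n_harmonics

t, u = schroeder_multisine(n_harmonics=32, n_samples=4096)
print("crest factor:", np.max(np.abs(u)) / np.sqrt(np.mean(u**2)))
```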
Model uncertainties of the 2002 update of California seismic hazard maps
Cao, T.; Petersen, M.D.; Frankel, A.D.
2005-01-01
In this article we present and explore the source and ground-motion model uncertainty and parametric sensitivity for the 2002 update of the California probabilistic seismic hazard maps. Our approach is to implement a Monte Carlo simulation that allows for independent sampling from fault to fault in each simulation. The source-distance dependent characteristics of the uncertainty maps of seismic hazard are explained by the fundamental uncertainty patterns from four basic test cases, in which the uncertainties from one-fault and two-fault systems are studied in detail. The California coefficient of variation (COV, ratio of the standard deviation to the mean) map for peak ground acceleration (10% of exceedance in 50 years) shows lower values (0.1-0.15) along the San Andreas fault system and other class A faults than along class B faults (0.2-0.3). High COV values (0.4-0.6) are found around the Garlock, Anacapa-Dume, and Palos Verdes faults in southern California and around the Maacama fault and Cascadia subduction zone in northern California.
NASA Astrophysics Data System (ADS)
Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun
2015-04-01
This work presents a new robust model reference adaptive control (MRAC) for the control of vibration caused by the vehicle engine, using an electromagnetic active engine mount. The vibration isolation performance of the active mount with the robust controller is evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental tests. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness of the controller in the face of large uncertainties. The results show that the proposed controller provides robust vibration control performance even in the presence of large uncertainties, achieving effective vibration isolation.
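A scalar stand-in for gradient MRAC with σ-modification is sketched below: an unstable first-order plant is driven to follow a stable reference model, and the σ terms keep the adaptive gains bounded under imperfect conditions. Plant, reference-model, and adaptation parameters are all illustrative, not the identified engine-mount model.

```python
import numpy as np

# Scalar direct MRAC (gradient method) with sigma-modification; parameters are illustrative.
a, b = 1.0, 1.0            # unknown plant xdot = a*x + b*u (only the sign of b assumed known)
a_m, b_m = -2.0, 2.0       # stable reference model
gamma, sigma = 10.0, 0.1   # adaptation gain and sigma-modification (robustness) term

dt, T = 1e-3, 10.0
x = x_m = 0.0
k_x = k_r = 0.0
for i in range(int(T / dt)):
    t = i * dt
    r = np.sin(2 * np.pi * 0.5 * t)             # reference command
    u = k_x * x + k_r * r
    e = x - x_m                                 # model-following error
    # Gradient adaptive laws; the -sigma*k terms bound the gains under disturbances
    k_x += dt * (-gamma * e * x - gamma * sigma * k_x)
    k_r += dt * (-gamma * e * r - gamma * sigma * k_r)
    x   += dt * (a * x + b * u)                 # plant (Euler integration)
    x_m += dt * (a_m * x_m + b_m * r)           # reference model

print(f"final tracking error: {abs(x - x_m):.4f}, gains k_x={k_x:.2f}, k_r={k_r:.2f}")
```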
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
Final Report. Analysis and Reduction of Complex Networks Under Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef M.; Coles, T.; Spantini, A.
2013-09-30
The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure—e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties—e.g., whether a link is present in a network—and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How does one construct dynamical models on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank-reduction in stochastic eigensystems—transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.
A Cartesian parametrization for the numerical analysis of material instability
Mota, Alejandro; Chen, Qiushi; Foulk, III, James W.; ...
2016-02-25
We examine four parametrizations of the unit sphere in the context of material stability analysis by means of the singularity of the acoustic tensor. We then propose a Cartesian parametrization for vectors that lie in a cube of side length two and use these vectors in lieu of unit normals to test for the loss of the ellipticity condition. This parametrization is then used to construct a tensor akin to the acoustic tensor. It is shown that both of these tensors become singular at the same time and in the same planes in the presence of a material instability. Furthermore, the performance of the Cartesian parametrization is compared against the other parametrizations, with the results of these comparisons showing that, in general, the Cartesian parametrization is more robust and more numerically efficient than the others.
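A minimal numerical version of the test is sketched below for an isotropic (hence stable) material: the acoustic tensor A_jk = n_i C_ijkl n_l is evaluated for direction vectors swept over one face of the cube [-1, 1]^3 instead of the unit sphere, and the minimum determinant is tracked; a sign change would indicate loss of ellipticity. The stiffness values and grid resolution are illustrative, and only one cube face is swept for brevity.

```python
import numpy as np

def acoustic_tensor(C, n):
    """A_jk = n_i C_ijkl n_l for a direction vector n (need not be unit length)."""
    return np.einsum('i,ijkl,l->jk', n, C, n)

def isotropic_stiffness(lam, mu):
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# Cartesian parametrization: sweep vectors over a face of the cube [-1,1]^3 (no trig needed);
# scaling n by a positive factor rescales det(A) but never changes its sign.
C = isotropic_stiffness(lam=1.0, mu=1.0)      # illustrative, stable material
grid = np.linspace(-1.0, 1.0, 41)
min_det = np.inf
for y in grid:
    for z in grid:
        n = np.array([1.0, y, z])             # a point on the x = +1 face of the cube
        min_det = min(min_det, np.linalg.det(acoustic_tensor(C, n)))
print("minimum determinant over sampled directions:", min_det)
```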
Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah
2015-01-01
Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae with the addition of palm oil mill effluent (POME) as a nutrient supplier. The maximum yield (YP/S) attained was 0.464 g bioethanol/g glucose present in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine the probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also performed to evaluate the impact of each kinetic parameter on the fermentation performance. It was found that bioethanol fermentation depends strongly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
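The Monte Carlo step can be illustrated with a toy Monod batch model in place of the fitted kinetics: uncertain parameters are sampled, each sample is propagated through the fermentation model, and the resulting distribution of Y_P/S is summarized. The parameter distributions and model structure below are illustrative, not the calibrated model of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def batch_fermentation(mu_max, K_s, Y_xs, Y_ps, S0=100.0, X0=0.5, dt=0.1, t_end=48.0):
    """Toy Monod batch model: substrate -> biomass, with growth-associated product formation."""
    X, S, P = X0, S0, 0.0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (K_s + S)
        dX = mu * X * dt
        X += dX
        S = max(S - dX / Y_xs, 0.0)
        P += Y_ps * dX / Y_xs          # product formed per unit substrate consumed
    return P / (S0 - S)                # overall yield Y_P/S (g product / g substrate used)

# Monte Carlo over uncertain kinetic parameters (distributions are illustrative only)
n = 500
yields = [batch_fermentation(mu_max=rng.normal(0.35, 0.03),
                             K_s=rng.normal(1.5, 0.2),
                             Y_xs=rng.normal(0.10, 0.01),
                             Y_ps=rng.normal(0.46, 0.02)) for _ in range(n)]
print(f"yield 2.5-97.5 percentile: "
      f"{np.percentile(yields, 2.5):.3f}-{np.percentile(yields, 97.5):.3f} g/g")
```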
Parametric Methods for Dynamic 11C-Phenytoin PET Studies.
Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A
2017-03-01
In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1 with comparable TRT performance compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan durations. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
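For orientation, Logan graphical analysis estimates VT as the late-time slope of the plot of the integral of the tissue curve (normalized by the tissue curve) against the integral of the plasma curve (also normalized by the tissue curve). The sketch below demonstrates this on synthetic one-tissue-compartment data; the kinetic parameters and input function are illustrative and unrelated to 11C-phenytoin.

```python
import numpy as np

# Synthetic one-tissue-compartment data; parameters are illustrative, not tracer-specific
K1, k2 = 0.3, 0.1                    # mL/cm3/min, 1/min  ->  true VT = K1/k2 = 3.0
t = np.linspace(0, 60, 601)          # 60-min scan on a 0.1-min grid
Cp = 10.0 * t * np.exp(-t / 4.0) + 0.5 * np.exp(-t / 60.0)   # toy arterial input function

Ct = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, t.size):           # Euler integration of dCt/dt = K1*Cp - k2*Ct
    Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])

def cumtrapz(y, x):
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

late = t > 20.0                      # use only the late, linear portion of the Logan plot
x_logan = cumtrapz(Cp, t)[late] / Ct[late]
y_logan = cumtrapz(Ct, t)[late] / Ct[late]
VT, intercept = np.polyfit(x_logan, y_logan, 1)
print(f"Logan-plot VT = {VT:.2f} (true value 3.0)")
```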
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The variables parameterized include operating characteristics for four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. The annual fuel consumption for selected parametric combinations is presented, and a breakeven economic analysis is presented for one parametric combination. The results show that fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
Fast computation of the multivariable stability margin for real interrelated uncertain parameters
NASA Technical Reports Server (NTRS)
Sideris, Athanasios; Sanchez Pena, Ricardo S.
1988-01-01
A novel algorithm for computing the multivariable stability margin for checking the robust stability of feedback systems with real parametric uncertainty is proposed. This method eliminates the need for the frequency search involved in another given algorithm by reducing it to checking a finite number of conditions. These conditions have a special structure, which allows a significant improvement on the speed of computations.
Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6
NASA Astrophysics Data System (ADS)
Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.
2017-01-01
This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R.; Brooks, Dusty Marie
In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
Some Open Issues on Rockfall Hazard Analysis in Fractured Rock Mass: Problems and Prospects
NASA Astrophysics Data System (ADS)
Ferrero, Anna Maria; Migliazza, Maria Rita; Pirulli, Marina; Umili, Gessica
2016-09-01
Risk is part of every sector of engineering design. It is a consequence of the uncertainties connected with cognitive boundaries and with the natural variability of the relevant variables. In soil and rock engineering, in particular, uncertainties are linked to geometrical and mechanical aspects and to the model used for the problem schematization. While the uncertainties due to cognitive gaps could be filled by improving the quality of numerical codes and measuring instruments, nothing can be done to remove the randomness of natural variables, except defining their variability with stochastic approaches. Probabilistic analyses represent a useful tool for running parametric analyses and for identifying the more significant aspects of a given phenomenon: they can be used for a rational quantification and mitigation of risk. The connection between the cognitive level and the probability of failure is at the base of the determination of hazard, which is often quantified through the assignment of safety factors. But these factors suffer from conceptual limits, which can only be overcome by adopting mathematically sound techniques that have so far seen little use (Einstein et al. in Rock Mechanics in Civil and Environmental Engineering, CRC Press, London, 3-13, 2010; Brown in J Rock Mech Geotech Eng 4(3):193-204, 2012). The present paper describes the problems and the more reliable techniques used to quantify the uncertainties that characterize the large number of parameters involved in rock slope hazard assessment, through a real case specifically related to rockfall. Limits of the existing approaches and future developments of the research are also provided.
Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.
Bias error reduction using ratios to baseline experiments. Heat transfer case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakroun, W.; Taylor, R.P.; Coleman, H.W.
1993-10-01
Employing a set of experiments devoted to examining the effect of surface finish (riblets) on convective heat transfer as an example, this technical note seeks to explore the notion that precision uncertainties in experiments can be reduced by repeated trials and averaging. This scheme for bias error reduction can give considerable advantage when parametric effects are investigated experimentally. When the results of an experiment are presented as a ratio with the baseline results, a large reduction in the overall uncertainty can be achieved when all the bias limits in the variables of the experimental result are fully correlated with those of the baseline case. 4 refs.
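The ratio-to-baseline effect follows from standard bias-limit propagation for r = X1/X2: the relative bias limit is sqrt((B1/X1)^2 + (B2/X2)^2 - 2ρ(B1/X1)(B2/X2)), so fully correlated biases of equal relative size cancel in the ratio. A small numerical check with hypothetical 5% bias limits (not values from the riblet experiments) is given below.

```python
import numpy as np

# Relative bias limits of a ratio r = X1/X2 with correlation rho between the two bias limits.
b_rel_test = 0.05        # assumed 5% bias limit on the test-surface result
b_rel_base = 0.05        # assumed 5% bias limit on the baseline result

def ratio_bias(b1, b2, rho):
    """Relative bias limit of X1/X2 given relative bias limits b1, b2 and their correlation rho."""
    return np.sqrt(b1**2 + b2**2 - 2.0 * rho * b1 * b2)

print("uncorrelated biases :", ratio_bias(b_rel_test, b_rel_base, rho=0.0))  # ~7.1 %
print("fully correlated    :", ratio_bias(b_rel_test, b_rel_base, rho=1.0))  # 0 %: bias cancels
```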
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results show that, relative to a no-error baseline, the uncertainties range from five to thirty percent for the most optimistic case.
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
Universality of Electron Distributions in Extensive Air Showers
NASA Astrophysics Data System (ADS)
Śmiałkowski, Andrzej; Giller, Maria
2018-02-01
Based on extensive air shower simulations, it is shown that electron distributions with respect to two angles determining the electron direction at a given shower age, for a fixed electron energy and lateral distance, are universal. This means that the distributions do not depend on the primary particle energy or mass (thus, neither on the interaction model), shower zenith angle, or shower to shower fluctuations, if they are taken at the same shower age. Together with previous work showing the universality of the distributions of the electron energy, lateral distance (integrated over angles), and angle (integrated over lateral distance) for fixed electron energy, this paper completes a full universal description of the electron states at various shower ages. Analytical parametrizations of the full electron states are given. It is also shown that some distributions can be described by a number of variables smaller than five, with the new ones being products of old ones raised to some power. The accuracy of the present parametrization is sufficiently good to apply to showers with a primary energy uncertainty of 14% (as is the case at the Pierre Auger Observatory). The shower fluctuations in the chosen bins of the multidimensional variable space are about 6%, determining the minimum uncertainty needed for the parametrization of the universal distributions. An analytical way of estimating the effect of the geomagnetic field is given. Thanks to the universality of the electron distribution in any shower, a new method of shower reconstruction can be worked out from the data from observatories using the fluorescence technique. The light fluxes (both fluorescence and Cherenkov) for any shower age can be exactly predicted for a shower with any primary energy and shower maximum depth, so that the two quantities can be obtained by best fitting the predictions to the measurements.
Goll, Daniel S.; Brovkin, Victor; Liski, Jari; ...
2015-08-12
The quantification of sources and sinks of carbon from land use and land cover changes (LULCC) is uncertain. We investigated how the parametrization of LULCC and of organic matter decomposition, as well as initial land cover, affects the historical and future carbon fluxes in an Earth System Model (ESM). Using the land component of the Max Planck Institute ESM, we found that the historical (1750–2010) LULCC flux varied up to 25% depending on the fraction of biomass which enters the atmosphere directly due to burning or is used in short-lived products. The uncertainty in the decadal LULCC fluxes of the recent past due to the parametrization of decomposition and direct emissions was 0.6 Pg C yr-1, which is 3 times larger than the uncertainty previously attributed to model and method in general. Preindustrial natural land cover had a larger effect on decadal LULCC fluxes than the aforementioned parameter sensitivity (1.0 Pg C yr-1). Regional differences between reconstructed and dynamically computed land covers, in particular, at low latitudes, led to differences in historical LULCC emissions of 84–114 Pg C, globally. This effect is larger than the effects of forest regrowth, shifting cultivation, or climate feedbacks and comparable to the effect of differences among studies in the terminology of LULCC. Finally, in general, we find that the practice of calibrating the net land carbon balance to provide realistic boundary conditions for the climate component of an ESM hampers the applicability of the land component outside its primary field of application.
Unconventional nozzle tradeoff study. [space tug propulsion]
NASA Technical Reports Server (NTRS)
Obrien, C. J.
1979-01-01
Plug cluster engine design, performance, weight, envelope, operational characteristics, development cost, and payload capability were evaluated and comparisons were made with other space tug engine candidates using oxygen/hydrogen propellants. Parametric performance data were generated for existing developed or high technology thrust chambers clustered around a plug nozzle of very large diameter. The uncertainties in the performance prediction of plug cluster engines with large gaps between the modules (thrust chambers) were evaluated. The major uncertainty involves the aerodynamics of the flow from discrete nozzles and the failure of this flow to achieve the pressure ratio corresponding to the defined area ratio for a plug cluster. This uncertainty was reduced through a cluster design in which the plug contour is formed from the cluster of high area ratio bell nozzles that have been scarfed. Lightweight, high area ratio bell nozzles were achieved through the use of AGCarb (carbon-carbon cloth) nozzle extensions.
Goal-oriented Site Characterization in Hydrogeological Applications: An Overview
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F.; Rubin, Y.
2011-12-01
In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction, and decision support should be satisfied with efficient and rational field campaigns. In this work, we provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference, and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times, etc.) while accounting for parametric and model uncertainty in the geostatistical characterization, in forcing terms, and in measurement error. The appealing aspects of the framework lie in its goal-oriented character and in its direct link to the confidence in a specified decision. We illustrate how these concepts could be applied to a human health risk problem where uncertainty from both hydrogeological and health parameters is accounted for.
Sliding mode control method having terminal convergence in finite time
NASA Technical Reports Server (NTRS)
Venkataraman, Subramanian T. (Inventor); Gulati, Sandeep (Inventor)
1994-01-01
An object of this invention is to provide robust nonlinear controllers for robotic operations in unstructured environments based upon a new class of closed loop sliding control methods, sometimes denoted terminal sliders, where the new class will enforce closed-loop control convergence to equilibrium in finite time. Improved performance results from the elimination of high frequency control switching previously employed for robustness to parametric uncertainties. Improved performance also results from the dependence of terminal slider stability upon the rate of change of uncertainties over the sliding surface rather than the magnitude of the uncertainty itself for robust control. Terminal sliding mode control also yields improved convergence where convergence time is finite and is to be controlled. A further object is to apply terminal sliders to robot manipulator control and benchmark performance with the traditional computed torque control method and provide for design of control parameters.
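A toy illustration, not the patented controller: the sketch below integrates a scalar terminal-attractor error dynamic e_dot = -k*|e|^(q/p)*sign(e) (with q/p < 1) alongside a linear slider, to show the finite-time versus asymptotic convergence contrast the abstract emphasizes. The gains, exponents, and tolerances are arbitrary assumptions.

```python
import numpy as np

# Terminal-attractor error dynamics e_dot = -k * |e|**(q/p) * sign(e), with
# p > q > 0 and q/p < 1, reach e = 0 in the finite time
# t* = p * |e0|**((p-q)/p) / (k * (p - q)), unlike the asymptotic decay of the
# linear slider e_dot = -k * e.  Gains and exponents below are arbitrary.
k, p, q = 2.0, 5, 3
e0, dt, t_end = 1.0, 1e-4, 8.0

def time_to_converge(rhs, e0, tol=1e-6):
    """Forward-Euler integration until |e| drops below tol (or t_end)."""
    e, t = e0, 0.0
    while t < t_end and abs(e) > tol:
        e += dt * rhs(e)
        t += dt
    return t

t_terminal = time_to_converge(lambda e: -k * np.sign(e) * abs(e) ** (q / p), e0)
t_linear = time_to_converge(lambda e: -k * e, e0)
t_star = p * abs(e0) ** ((p - q) / p) / (k * (p - q))

print(f"terminal slider: |e| < 1e-6 at t = {t_terminal:.3f} s "
      f"(analytic finite reaching time {t_star:.3f} s)")
print(f"linear slider:   |e| < 1e-6 at t = {t_linear:.3f} s (asymptotic decay)")
```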
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minima of spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
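A rough sketch of an approximate parametric bootstrap for the minimum of a fitted exposure-response curve, in the spirit of the estimator described above. It uses a cubic polynomial fitted to synthetic data as a stand-in for the spline model; the data, polynomial degree, and sample sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily series: U-shaped log mortality rate versus temperature, with noise.
temp = rng.uniform(-5, 35, 1500)
true_mmt = 21.0
log_rate = 0.002 * (temp - true_mmt) ** 2 + rng.normal(0, 0.05, temp.size)

# Fit a cubic polynomial as a stand-in for the spline exposure-response curve;
# cov=True returns the coefficient covariance of the least-squares fit.
coef, cov = np.polyfit(temp, log_rate, 3, cov=True)
grid = np.linspace(temp.min(), temp.max(), 500)

def curve_min(c):
    """Temperature at which the fitted curve is lowest (the MMT)."""
    return grid[np.argmin(np.polyval(c, grid))]

# Approximate parametric bootstrap: resample coefficient vectors from their
# asymptotic multivariate normal distribution and locate each curve's minimum.
boot_coefs = rng.multivariate_normal(coef, cov, size=2000)
mmt_samples = np.array([curve_min(c) for c in boot_coefs])

ci = np.percentile(mmt_samples, [2.5, 97.5])
print(f"MMT estimate {curve_min(coef):.1f} C, "
      f"95% CI ({ci[0]:.1f}, {ci[1]:.1f}) C, SE {mmt_samples.std(ddof=1):.2f} C")
```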
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Kumar, Rohini; Pechlivanidis, Illias; Breuer, Lutz; Wortmann, Michel; Vetter, Tobias; Flörke, Martina; Chamorro, Alejandro; Schäfer, David; Shah, Harsh; Zeng, Xiaofan
2016-04-01
The quantification of the predictive uncertainty in hydrologic models and its attribution to its main sources is of particular interest in climate change studies. In recent years, a number of studies have been aimed at assessing the ability of hydrologic models (HMs) to reproduce extreme hydrologic events. Disentangling the overall uncertainty of streamflow (including its derived low-flow characteristics) into individual contributions stemming from forcings and model structure has also been studied. Based on recent literature, there is controversy as to which source is the largest (e.g., Teng et al. 2012, Bosshard et al. 2013, Prudhomme et al. 2014). Little has also been done to estimate the relative impact of the parametric uncertainty of the HMs on the overall uncertainty of low-flow characteristics. The ISI-MIP2 project provides a unique opportunity to understand the propagation of forcing and model structure uncertainties into century-long time series of drought characteristics. This project defines a consistent framework to deal with compatible initial conditions for the HMs and a set of standardized historical and future forcings. Moreover, the ensemble of hydrologic model predictions varies across a broad range of climate scenarios and regions. To achieve this goal, we use six preconditioned hydrologic models (HYPE, HBV, mHM, SWIM, VIC, and WaterGAP3) set up in seven large continental river basins: Amazon, Blue Nile, Ganges, Niger, Mississippi, Rhine, and Yellow. These models are forced with bias-corrected outputs of five CMIP5 general circulation models (GCMs) under four extreme representative concentration pathway (RCP) scenarios (i.e., 2.6, 4.5, 6.0, and 8.5 W m-2) for the period 1971-2099. Simulated streamflow is transformed into a monthly runoff index (RI) to analyze the attribution of GCM and HM uncertainty to drought magnitude and duration over time. Uncertainty contributions are investigated during three periods: (1) 2006-2035, (2) 2036-2065, and (3) 2070-2099. Results presented in Samaniego et al. 2015 (submitted) indicate that GCM uncertainty mostly dominates over HM uncertainty for predictions of runoff drought characteristics, irrespective of the selected RCP and region. For the mHM model, in particular, GCM uncertainty always dominates over parametric uncertainty. In general, the overall uncertainty increases with time. The larger the radiative forcing of the RCP, the larger the uncertainty in drought characteristics; however, the propagation of the GCM uncertainty onto a drought characteristic depends largely upon the hydro-climatic regime. While our study emphasizes the need for multi-model ensembles for the assessment of future drought projections, the agreement between GCM forcings is still too weak to draw conclusive recommendations. References: Samaniego, L., Kumar, R., Pechlivanidis, I. G., Breuer, L., Wortmann, M., Vetter, T., Flörke, M., Chamorro, A., Schäfer, D., Shah, H., Zeng, X.: Propagation of forcing and model uncertainty into hydrological drought characteristics in a multi-model century-long experiment in continental river basins. Submitted to Climatic Change, December 2015. Bosshard et al. 2013, doi:10.1029/2011WR011533. Prudhomme et al. 2014, doi:10.1073/pnas.1222473110. Teng et al. 2012, doi:10.1175/JHM-D-11-058.1.
Influencing agent group behavior by adjusting cultural trait values.
Tuli, Gaurav; Hexmoor, Henry
2010-10-01
Social reasoning and norms among individuals that share cultural traits are largely fashioned by those traits. We have explored predominant sociological and cultural traits. We offer a methodology for parametrically adjusting relevant traits. This exploratory study heralds a capability to deliberately tune cultural group traits in order to produce a desired group behavior. To validate our methodology, we implemented a prototypical agent-based simulated test bed demonstrating an exemplar intelligence, surveillance, and reconnaissance scenario. A group of simulated agents traverses a hostile territory while a user adjusts their cultural group trait settings. Group and individual utilities are dynamically observed against parametric values for the selected traits. Uncertainty avoidance index and individualism are the cultural traits we examined in depth. Upon training on the correspondence between cultural values and system utilities, users deliberately produce the desired system utilities by issuing changes to trait values. Specific cultural traits are without meaning outside of their context. Efficacy and timely application of traits in a given context do yield desirable results. This paper heralds a path for the control of large systems via parametric cultural adjustments.
Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4
NASA Astrophysics Data System (ADS)
Gasore, J.; Prinn, R. G.
2012-12-01
The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997, Cohen & Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost as the number of uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations, urban versus rural for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, which is a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution is assumed, with a priori fixed scale and location parameters, to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of black carbon, organic carbon, and sulfate. We have carried out Monte Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a significant variation in response to the variation in input parameters. However, a substantial variation at regional and temporal scales has been found. Tatang, M. A., Pan, W., Prinn, R. G., McRae, G. J., An efficient method for parametric uncertainty analysis of numerical geophysical models, J. Geophys. Res., 102, 21925-21932, 1997. Cohen, J. B., and R. G. Prinn, Development of a fast, urban chemistry metamodel for inclusion in global models, Atmos. Chem. Phys., 11, 7629-7656, doi:10.5194/acp-11-7629-2011, 2011. Emmons, L. K., Walters, S., Hess, P. G., Lamarque, J.-F., Pfister, G. G., Fillmore, D., Granier, C., Guenther, A., Kinnison, D., Laepple, T., Orlando, J., Tie, X., Tyndall, G., Wiedinmyer, C., Baughcum, S. L., Kloster, S., Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4), Geosci. Model Dev., 3, 43-67, 2010.
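A generic sketch of building a low-order polynomial chaos surrogate and then sampling it by Monte Carlo, in the spirit of the PCM workflow described above. The two-parameter stand-in model, the regression (rather than Gauss-point collocation) fit, and all numerical settings are assumptions; this is not the MOZART-4 analysis.

```python
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(2)

def expensive_model(x):
    """Stand-in for one grid cell's aerosol surface-area response to two
    standardized lognormal size-distribution parameters (purely illustrative)."""
    return np.exp(0.3 * x[0]) + 0.5 * x[1] ** 2 + 0.2 * x[0] * x[1]

dim, order = 2, 2
multi_idx = [m for m in product(range(order + 1), repeat=dim) if sum(m) <= order]

def pce_basis(x):
    """Tensor products of probabilists' Hermite polynomials He_k(x_i)."""
    return np.array([np.prod([hermeval(x[i], np.eye(order + 1)[k])
                              for i, k in enumerate(m)]) for m in multi_idx])

# A modest number of full-model runs fit the expansion coefficients by regression
# (the PCM proper places the runs at collocation points of higher-order polynomials).
x_train = rng.standard_normal((4 * len(multi_idx), dim))
A = np.array([pce_basis(x) for x in x_train])
y = np.array([expensive_model(x) for x in x_train])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Cheap Monte Carlo on the surrogate gives the output distribution.
x_mc = rng.standard_normal((20000, dim))
y_mc = np.array([pce_basis(x) @ coeffs for x in x_mc])
print(f"surrogate mean {y_mc.mean():.3f}, std {y_mc.std():.3f}")
```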
O'Sullivan, Finbarr; Muzi, Mark; Spence, Alexander M; Mankoff, David M; O'Sullivan, Janet N; Fitzgerald, Niall; Newman, George C; Krohn, Kenneth A
2009-06-01
Kinetic analysis is used to extract metabolic information from dynamic positron emission tomography (PET) uptake data. The theory of indicator dilutions, developed in the seminal work of Meier and Zierler (1954), provides a probabilistic framework for representation of PET tracer uptake data in terms of a convolution between an arterial input function and a tissue residue. The residue is a scaled survival function associated with tracer residence in the tissue. Nonparametric inference for the residue, a deconvolution problem, provides a novel approach to kinetic analysis, critically one that is not reliant on specific compartmental modeling assumptions. A practical computational technique based on regularized cubic B-spline approximation of the residence time distribution is proposed. Nonparametric residue analysis allows formal statistical evaluation of specific parametric models to be considered. This analysis needs to properly account for the increased flexibility of the nonparametric estimator. The methodology is illustrated using data from a series of cerebral studies with PET and fluorodeoxyglucose (FDG) in normal subjects. Comparisons are made between key functionals of the residue (tracer flux, flow, etc.) resulting from a parametric analysis (the standard two-compartment model of Phelps et al. 1979) and a nonparametric analysis. Strong statistical evidence against the compartment model is found. Primarily these differences relate to the representation of the early temporal structure of the tracer residence, largely a function of the vascular supply network. There are convincing physiological arguments against the representations implied by the compartmental approach, but this is the first time that a rigorous statistical confirmation using PET data has been reported. The compartmental analysis produces suspect values for flow but, notably, the impact on the metabolic flux, though statistically significant, is limited to deviations on the order of 3%-4%. The general advantage of the nonparametric residue analysis is the ability to provide a valid kinetic quantitation in the context of studies where there may be heterogeneity or other uncertainty about the accuracy of a compartmental model approximation of the tissue residue.
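A simplified, self-contained sketch of residue estimation by regularized non-negative B-spline deconvolution on synthetic data, illustrating the convolution model described above. The input function, knot placement, penalty weight, and noise level are illustrative assumptions and do not reproduce the authors' estimator.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Time grid (min), a simple arterial input function, and a "true" residue
# (a survival-like function) used only to synthesize a tissue uptake curve.
t = np.linspace(0, 60, 241)
dt = t[1] - t[0]
cp = t ** 2 * np.exp(-t / 1.5)                                 # illustrative input
r_true = 0.3 * np.exp(-t / 25) + 0.1 * np.exp(-t / 2) + 0.02
tac = dt * np.convolve(cp, r_true)[: t.size]                   # tissue curve = Cp * R
tac += rng.normal(0, 0.02 * tac.max(), t.size)

# Cubic B-spline basis for the residue on a coarse, clamped knot set.
knots = np.r_[[0.0] * 4, np.linspace(5, 55, 8), [60.0] * 4]
n_basis = len(knots) - 4
B = np.column_stack([BSpline(knots, np.eye(n_basis)[j], 3)(t) for j in range(n_basis)])

# Convolving Cp with each basis function gives the design matrix; a
# second-difference penalty on the coefficients supplies the regularization.
A = dt * np.column_stack([np.convolve(cp, B[:, j])[: t.size] for j in range(n_basis)])
D = np.diff(np.eye(n_basis), 2, axis=0)
lam = 1.0
coef, _ = nnls(np.vstack([A, lam * D]), np.r_[tac, np.zeros(D.shape[0])])

r_hat = B @ coef                                               # non-negative residue estimate
print(f"early-time residue: estimated {r_hat[1]:.3f}, synthetic truth {r_true[1]:.3f}")
```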
NASA Astrophysics Data System (ADS)
Gayvoronskiy, S. A.; Ezangina, T. A.; Khozhaev, I. V.
2018-03-01
The paper examines the dynamics of a submersible underwater garage under significant sea oscillation. In the course of this research, a mathematical model of the electromechanical depth control system was developed that accounts for the interval parametric uncertainty of the system and the distributed mass of the tether. The influence of sea oscillation on the submergence of underwater garages and on their depth stabilization was analyzed.
Certified Reduced Basis Model Characterization: a Frequentistic Uncertainty Framework
2011-01-11
Stochastic Analysis and Design of Heterogeneous Microstructural Materials System
NASA Astrophysics Data System (ADS)
Xu, Hongyi
An advanced materials system refers to new materials that are composed of multiple traditional constituents but have complex microstructure morphologies, which lead to properties superior to those of conventional materials. To accelerate the development of new advanced materials systems, the objective of this dissertation is to develop a computational design framework and the associated techniques for design automation of microstructural materials systems, with an emphasis on addressing the uncertainties associated with the heterogeneity of microstructural materials. Five key research tasks are identified: design representation, design evaluation, design synthesis, material informatics and uncertainty quantification. Design representation of microstructure includes statistical characterization and stochastic reconstruction. This dissertation develops a new descriptor-based methodology, which characterizes 2D microstructures using descriptors of composition, dispersion and geometry. Statistics of 3D descriptors are predicted based on 2D information to enable 2D-to-3D reconstruction. An efficient sequential reconstruction algorithm is developed to reconstruct statistically equivalent random 3D digital microstructures. In design evaluation, a stochastic decomposition and reassembly strategy is developed to deal with the high computational costs and uncertainties induced by material heterogeneity. The properties of Representative Volume Elements (RVEs) are predicted by stochastically reassembling Statistical Volume Elements (SVEs) with stochastic properties into a coarse representation of the RVE. In design synthesis, a new descriptor-based design framework is developed, which integrates computational methods of microstructure characterization and reconstruction, sensitivity analysis, Design of Experiments (DOE), metamodeling, and optimization to enable parametric optimization of the microstructure for achieving the desired material properties. Material informatics is studied to efficiently reduce the dimension of the microstructure design space. This dissertation develops a machine learning-based methodology to identify the key microstructure descriptors that highly impact properties of interest. In uncertainty quantification, a comparative study on data-driven random process models is conducted to provide guidance for choosing the most accurate model in statistical uncertainty quantification. Two new goodness-of-fit metrics are developed to provide quantitative measurements of random process models' accuracy. The benefits of the proposed methods are demonstrated by the example of designing the microstructure of polymer nanocomposites. This dissertation provides material-generic, intelligent modeling/design methodologies and techniques to accelerate the process of analyzing and designing new microstructural materials systems.
Implementation of a new fuzzy vector control of induction motor.
Rafa, Souad; Larabi, Abdelkader; Barazane, Linda; Manceur, Malik; Essounbouli, Najib; Hamzaoui, Abdelaziz
2014-05-01
The aim of this paper is to present a new approach to controlling an induction motor using type-1 fuzzy logic. The induction motor model is nonlinear, uncertain, and strongly coupled. The vector control technique, which is based on the inverse model of the induction motor, solves the coupling problem. Unfortunately, in practice this decoupling is not satisfied because of model uncertainties. Indeed, the presence of the uncertainties led us to use human expertise such as fuzzy logic techniques. In order to maintain the decoupling and to overcome the problem of sensitivity to parametric variations, the field-oriented control is replaced by a new control block. The simulation results show that both control schemes, in their basic configuration, provide comparable performance regarding the decoupling. However, the fuzzy vector control provides insensitivity to parametric variations compared with the classical one. The fuzzy vector control scheme is successfully implemented in real time using a digital signal processor board dSPACE 1104. The efficiency of this technique is also verified experimentally at different dynamic operating conditions such as sudden load changes, parameter variations, speed changes, etc. The fuzzy vector control is found to be well suited for induction motor applications. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Problems of the design of low-noise input devices. [parametric amplifiers]
NASA Technical Reports Server (NTRS)
Manokhin, V. M.; Nemlikher, Y. A.; Strukov, I. A.; Sharfov, Y. A.
1974-01-01
An analysis is given of the requirements placed on the elements of parametric centimeter-waveband amplifiers for achieving minimal noise temperatures. A low-noise semiconductor parametric amplifier using germanium parametric diodes for a receiver operating in the 4 GHz band was developed and tested, confirming that all requirements can be satisfied.
Planck intermediate results. XLIX. Parity-violation constraints from polarization data
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Benabed, K.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Carron, J.; Chiang, H. C.; Colombo, L. P. L.; Comis, B.; Contreras, D.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Dusini, S.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Henrot-Versillé, S.; Herranz, D.; Hivon, E.; Huang, Z.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Langer, M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Levrier, F.; Liguori, M.; Lilje, P. B.; Lindholm, V.; López-Caniego, M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Meinhold, P. R.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Morgante, G.; Moss, A.; Natoli, P.; Pagano, L.; Paoletti, D.; Patanchon, G.; Patrizii, L.; Perotto, L.; Pettorino, V.; Piacentini, F.; Polastri, L.; Polenta, G.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Ruiz-Granados, B.; Sandri, M.; Savelainen, M.; Scott, D.; Sirignano, C.; Sirri, G.; Spencer, L. D.; Suur-Uski, A.-S.; Tauber, J. A.; Tavagnacco, D.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Wehus, I. K.; Zacchei, A.; Zonca, A.
2016-12-01
Parity-violating extensions of the standard electromagnetic theory cause in vacuo rotation of the plane of polarization of propagating photons. This effect, also known as cosmic birefringence, has an impact on the cosmic microwave background (CMB) anisotropy angular power spectra, producing non-vanishing T-B and E-B correlations that are otherwise null when parity is a symmetry. Here we present new constraints on an isotropic rotation, parametrized by the angle α, derived from Planck 2015 CMB polarization data. To increase the robustness of our analyses, we employ two complementary approaches, in harmonic space and in map space, the latter based on a peak stacking technique. The two approaches provide estimates for α that are in agreement within statistical uncertainties and are very stable against several consistency tests. Considering the T-B and E-B information jointly, we find α = 0.31° ± 0.05° (stat.) ± 0.28° (syst.) from the harmonic analysis and α = 0.35° ± 0.05° (stat.) ± 0.28° (syst.) from the stacking approach. These constraints are compatible with no parity violation and are dominated by the systematic uncertainty in the orientation of Planck's polarization-sensitive bolometers.
VizieR Online Data Catalog: Dark matter in dSph galaxies (Charbonnier+, 2011)
NASA Astrophysics Data System (ADS)
Charbonnier, A.; Combet, C.; Daniel, M.; Funk, S.; Hinton, J. A.; Maurin, D.; Power, C.; Read, J. I.; Sarkar, S.; Walker, M. G.; Wilkinson, M. I.
2012-07-01
Due to their large dynamical mass-to-light ratios, dwarf spheroidal galaxies (dSphs) are promising targets for the indirect detection of dark matter (DM) in γ-rays. We examine their detectability by present and future γ-ray observatories. The key innovative features of our analysis are as follows: (i) we take into account the angular size of the dSphs; while nearby objects have higher γ-ray flux, their larger angular extent can make them less attractive targets for background-dominated instruments; (ii) we derive DM profiles and the astrophysical J-factor (which parametrizes the expected γ-ray flux, independently of the choice of DM particle model) for the classical dSphs directly from photometric and kinematic data. We assume very little about the DM profile, modelling this as a smooth split-power-law distribution, with and without subclumps; (iii) we use a Markov chain Monte Carlo technique to marginalize over unknown parameters and determine the sensitivity of our derived J-factors to both model and measurement uncertainties; and (iv) we use simulated DM profiles to demonstrate that our J-factor determinations recover the correct solution within our quoted uncertainties. (6 data files).
Uncertain Environmental Footprint of Current and Future Battery Electric Vehicles.
Cox, Brian; Mutel, Christopher L; Bauer, Christian; Mendoza Beltran, Angelica; van Vuuren, Detlef P
2018-04-17
The future environmental impacts of battery electric vehicles (EVs) are very important given their expected dominance in future transport systems. Previous studies have shown these impacts to be highly uncertain, though a detailed treatment of this uncertainty is still lacking. We help to fill this gap by using Monte Carlo and global sensitivity analysis to quantify parametric uncertainty and also consider two additional factors that have not yet been addressed in the field. First, we include changes to driving patterns due to the introduction of autonomous and connected vehicles. Second, we deeply integrate scenario results from the IMAGE integrated assessment model into our life cycle database to include the impacts of changes to the electricity sector on the environmental burdens of producing and recharging future EVs. Future EVs are expected to have 45-78% lower climate change impacts than current EVs. Electricity used for charging is the largest source of variability in results, though vehicle size, lifetime, driving patterns, and battery size also strongly contribute to variability. We also show that it is imperative to consider changes to the electricity sector when calculating upstream impacts of EVs, as without this, results could be overestimated by up to 75%.
Planck intermediate results: XLIX. Parity-violation constraints from polarization data
Aghanim, N.; Ashdown, M.; Aumont, J.; ...
2016-12-12
Parity-violating extensions of the standard electromagnetic theory cause in vacuo rotation of the plane of polarization of propagating photons. This effect, also known as cosmic birefringence, has an impact on the cosmic microwave background (CMB) anisotropy angular power spectra, producing non-vanishing T-B and E-B correlations that are otherwise null when parity is a symmetry. Here we present new constraints on an isotropic rotation, parametrized by the angle α, derived from Planck 2015 CMB polarization data. To increase the robustness of our analyses, we employ two complementary approaches, in harmonic space and in map space, the latter based on a peak stacking technique. The two approaches provide estimates for α that are in agreement within statistical uncertainties and are very stable against several consistency tests. Considering the T-B and E-B information jointly, we find α = 0.31° ± 0.05° (stat.) ± 0.28° (syst.) from the harmonic analysis and α = 0.35° ± 0.05° (stat.) ± 0.28° (syst.) from the stacking approach. These constraints are compatible with no parity violation and are dominated by the systematic uncertainty in the orientation of Planck's polarization-sensitive bolometers.
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
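A minimal sketch of sequential optimal experiment selection using a greedy D-optimality criterion on a toy two-parameter exponential model, illustrating the general idea of choosing experiments that constrain parameter uncertainty. The criterion, model, candidate designs, and noise level are stand-ins chosen for brevity, not the design method used in the paper.

```python
import numpy as np

# Toy observable y(t) = exp(-k1 t) + exp(-k2 t); each candidate "experiment"
# measures y at a different set of time points with known noise sigma.
theta = np.array([0.3, 1.5])          # assumed true rate constants
sigma = 0.05
candidates = [np.linspace(0.1, tmax, 6) for tmax in (1, 2, 5, 10, 20)]

def sensitivities(times, k1, k2):
    """Analytic dy/dtheta at each time point (rows: times, cols: parameters)."""
    return np.column_stack([-times * np.exp(-k1 * times),
                            -times * np.exp(-k2 * times)])

def fisher(times):
    S = sensitivities(times, *theta)
    return S.T @ S / sigma ** 2

# Greedy D-optimal selection: repeatedly add the candidate that most increases
# log det of the accumulated Fisher information (repeats allowed).
info = 1e-6 * np.eye(2)               # weak prior keeps the determinant finite
chosen = []
for _ in range(3):
    gains = [np.linalg.slogdet(info + fisher(c))[1] for c in candidates]
    best = int(np.argmax(gains))
    chosen.append(best)
    info = info + fisher(candidates[best])

post_cov = np.linalg.inv(info)        # asymptotic parameter covariance
print("chosen candidate experiments:", chosen)
print("approximate parameter standard errors:", np.sqrt(np.diag(post_cov)).round(4))
```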
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, defined as the probability of the observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain at high flows than during periods of low flow. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, (2) we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of the errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the use of fully parametric distributions as likelihood functions, and that they could help us to better capture the statistical properties of errors and make more reliable predictions.
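A compact sketch of the idea of a semiparametric, Gaussian-copula error model: empirical (rank) marginals, Gaussian scores, a lag-1 copula correlation, and a pairwise copula log-likelihood term. The synthetic error series, the KDE marginal density, and the pairwise (composite) treatment of autocorrelation are simplifying assumptions rather than the authors' formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# "Past" model errors (residuals), made autocorrelated and heteroscedastic on purpose.
n = 1000
e = np.empty(n)
e[0] = rng.normal()
for i in range(1, n):
    e[i] = 0.7 * e[i - 1] + rng.normal(0, 0.5)
e *= np.linspace(0.5, 2.0, n)

# Semiparametric marginal: an empirical (rank) CDF of the past errors.
sorted_e = np.sort(e)
def marginal_cdf(x):
    return (np.searchsorted(sorted_e, x, side="right") + 0.5) / (len(sorted_e) + 1)

# Map errors to Gaussian scores and estimate the lag-1 copula correlation.
z = stats.norm.ppf(marginal_cdf(e))
rho = np.corrcoef(z[:-1], z[1:])[0, 1]
kde = stats.gaussian_kde(e)          # smooth marginal density for the likelihood

def copula_log_likelihood(errors):
    """Log density of an error series: KDE marginals plus a pairwise (lag-1)
    Gaussian-copula term, a composite-likelihood simplification."""
    zz = stats.norm.ppf(marginal_cdf(errors))
    ll = np.sum(np.log(kde(errors)))
    pair = (-0.5 * np.log(1 - rho ** 2)
            - (rho ** 2 * (zz[:-1] ** 2 + zz[1:] ** 2) - 2 * rho * zz[:-1] * zz[1:])
            / (2 * (1 - rho ** 2)))
    return ll + np.sum(pair)

print(f"estimated lag-1 copula correlation: {rho:.3f}")
print(f"log-likelihood of the first 100 errors: {copula_log_likelihood(e[:100]):.1f}")
```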
Evaluating the lower-tropospheric COSMIC GPS radio occultation sounding quality over the Arctic
NASA Astrophysics Data System (ADS)
Yu, Xiao; Xie, Feiqin; Ao, Chi O.
2018-04-01
Lower-tropospheric moisture and temperature measurements are crucial for understanding weather prediction and climate change. Global Positioning System radio occultation (GPS RO) has been demonstrated as a high-quality observation technique with high vertical resolution and sub-kelvin temperature precision from the upper troposphere to the stratosphere. In the tropical lower troposphere, particularly the lowest 2 km, the quality of RO retrievals is known to be degraded and is a topic of active research. However, it is not clear whether similar problems exist at high latitudes, particularly over the Arctic, which is characterized by smooth ocean surface and often negligible moisture in the atmosphere. In this study, 3-year (2008-2010) GPS RO soundings from COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) over the Arctic (65-90° N) show uniform spatial sampling with average penetration depth within 300 m above the ocean surface. Over 70 % of RO soundings penetrate deep into the lowest 300 m of the troposphere in all non-summer seasons. However, the fraction of such deeply penetrating profiles reduces to only about 50-60 % in summer, when near-surface moisture and its variation increase. Both structural and parametric uncertainties of GPS RO soundings were also analyzed. The structural uncertainty (due to different data processing approaches) is estimated to be within ˜ 0.07 % in refractivity, ˜ 0.72 K in temperature, and ˜ 0.05 g kg-1 in specific humidity below 10 km, which is derived by comparing RO retrievals from two independent data processing centers. The parametric uncertainty (internal uncertainty of RO sounding) is quantified by comparing GPS RO with near-coincident radiosonde and European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim profiles. A systematic negative bias up to ˜ 1 % in refractivity below 2 km is only seen in the summer, which confirms the moisture impact on GPS RO quality.
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations, requiring 3.5 h on a single 12-core node, were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may require days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
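A schematic parametric bootstrap loop on a toy linear inverse problem, standing in for the nonlinear ERT inversion described above: estimate a noise level, perturb the predicted data with that noise, re-invert deterministically many times, and summarize the ensemble. The forward operator, regularization, and noise model are illustrative assumptions, not the Cranfield workflow.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear stand-in for the forward model: d = G m + noise, where m is a
# (log-)resistivity change image and G a smoothing kernel; the real ERT problem
# is nonlinear, but the bootstrap loop has the same shape.
n_model, n_data = 50, 80
x = np.linspace(0, 1, n_model)
G = np.exp(-((np.linspace(0, 1, n_data)[:, None] - x[None, :]) ** 2) / 0.01)
m_true = np.exp(-((x - 0.4) ** 2) / 0.005)                 # "CO2 plume" anomaly
sigma = 0.03 * np.abs(G @ m_true).max()                    # noise level, e.g. from reciprocals
d_obs = G @ m_true + rng.normal(0, sigma, n_data)

def invert(d, lam=0.5):
    """Deterministic Tikhonov-regularized inversion (stand-in for the ERT inversion)."""
    return np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d)

# Parametric bootstrap: perturb the predicted data with the estimated noise model,
# re-invert each realization, and summarize the ensemble of recovered models.
m_hat = invert(d_obs)
d_pred = G @ m_hat
boot = np.array([invert(d_pred + rng.normal(0, sigma, n_data)) for _ in range(500)])

print(f"max bootstrap standard deviation (model units): {boot.std(axis=0).max():.4f}")
```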
Dennis, Robin L.; Schwede, Donna B.; Bash, Jesse O.; Pleim, Jon E.; Walker, John T.; Foley, Kristen M.
2013-01-01
Reactive nitrogen (Nr) is removed by surface fluxes (air–surface exchange) and wet deposition. The chemistry and physics of the atmosphere result in a complicated system in which competing chemical sources and sinks exist and impact that removal. Therefore, uncertainties are best examined with complete regional chemical transport models that simulate these feedbacks. We analysed several uncertainties in regional air quality model resistance analogue representations of air–surface exchange for unidirectional and bi-directional fluxes and their effect on the continental Nr budget. Model sensitivity tests of key parameters in dry deposition formulations showed that uncertainty estimates of continental total nitrogen deposition are surprisingly small, 5 per cent or less, owing to feedbacks in the chemistry and rebalancing among removal pathways. The largest uncertainties (5%) occur with the change from a unidirectional to a bi-directional NH3 formulation followed by uncertainties in bi-directional compensation points (1–4%) and unidirectional aerodynamic resistance (2%). Uncertainties have a greater effect at the local scale. Between unidirectional and bi-directional formulations, single grid cell changes can be up to 50 per cent, whereas 84 per cent of the cells have changes less than 30 per cent. For uncertainties within either formulation, single grid cell change can be up to 20 per cent, but for 90 per cent of the cells changes are less than 10 per cent. PMID:23713122
Dennis, Robin L; Schwede, Donna B; Bash, Jesse O; Pleim, Jon E; Walker, John T; Foley, Kristen M
2013-07-05
Reactive nitrogen (Nr) is removed by surface fluxes (air-surface exchange) and wet deposition. The chemistry and physics of the atmosphere result in a complicated system in which competing chemical sources and sinks exist and impact that removal. Therefore, uncertainties are best examined with complete regional chemical transport models that simulate these feedbacks. We analysed several uncertainties in regional air quality model resistance analogue representations of air-surface exchange for unidirectional and bi-directional fluxes and their effect on the continental Nr budget. Model sensitivity tests of key parameters in dry deposition formulations showed that uncertainty estimates of continental total nitrogen deposition are surprisingly small, 5 per cent or less, owing to feedbacks in the chemistry and rebalancing among removal pathways. The largest uncertainties (5%) occur with the change from a unidirectional to a bi-directional NH3 formulation followed by uncertainties in bi-directional compensation points (1-4%) and unidirectional aerodynamic resistance (2%). Uncertainties have a greater effect at the local scale. Between unidirectional and bi-directional formulations, single grid cell changes can be up to 50 per cent, whereas 84 per cent of the cells have changes less than 30 per cent. For uncertainties within either formulation, single grid cell change can be up to 20 per cent, but for 90 per cent of the cells changes are less than 10 per cent.
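A small sketch contrasting the unidirectional resistance analogue with a bi-directional, compensation-point formulation, which is the switch responsible for the largest uncertainty reported above. The resistance values, concentrations, and compensation points are illustrative numbers, and the compensation point is taken directly as an input rather than computed from temperature and emission potential.

```python
def deposition_velocity(ra, rb, rc):
    """Unidirectional resistance analogue: Vd = 1 / (Ra + Rb + Rc)  [m s-1]."""
    return 1.0 / (ra + rb + rc)

def unidirectional_flux(c_air, ra, rb, rc):
    """Dry deposition flux, always directed toward the surface: F = -Vd * C_air."""
    return -deposition_velocity(ra, rb, rc) * c_air

def bidirectional_flux(c_air, chi_canopy, ra, rb):
    """Bi-directional analogue with a canopy compensation point chi_canopy;
    the sign depends on whether chi_canopy exceeds the air concentration
    (negative = deposition, positive = emission)."""
    return (chi_canopy - c_air) / (ra + rb)

# Illustrative numbers only: resistances in s m-1, concentrations in ug m-3.
ra, rb, rc = 30.0, 15.0, 100.0
c_air = 2.0
print(f"unidirectional flux: {unidirectional_flux(c_air, ra, rb, rc):+.4f} ug m-2 s-1")
for chi in (0.5, 4.0):
    f = bidirectional_flux(c_air, chi, ra, rb)
    print(f"bi-directional flux (chi_canopy = {chi} ug m-3): {f:+.4f} ug m-2 s-1")
```

With these illustrative numbers the unidirectional form always deposits, while the bi-directional form switches from deposition to emission as the assumed compensation point rises above the air concentration, which is why single grid cells can change by tens of percent between formulations.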
Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L
2013-02-01
The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status, with the membership of water quality in the various output fuzzy sets or classes provided as percentiles and histograms, which allows the actual water condition to be classified more accurately. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. Copyright © 2012 Elsevier Ltd. All rights reserved.
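A toy illustration of coupling Monte Carlo sampling from kernel-smoothed monitoring data to a small fuzzy inference rule base. The two variables, membership functions, rules, and defuzzification below are deliberately minimal assumptions and not the index proposed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic monitoring data for two variables (illustrative units): dissolved
# oxygen (mg/L) and turbidity (NTU); kernel densities describe their variability.
do_obs = rng.normal(6.5, 1.2, 200).clip(0, 12)
turb_obs = rng.lognormal(2.5, 0.6, 200)
kde_do, kde_turb = stats.gaussian_kde(do_obs), stats.gaussian_kde(turb_obs)

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzy_wqi(do, turb):
    """Tiny Mamdani-style index in [0, 100]: two rules, min for AND, and a
    weighted average of the rule outputs as a crude defuzzification."""
    good = min(tri(do, 4, 8, 12), tri(turb, 0, 5, 40))      # rule 1 -> quality 90
    poor = max(tri(do, 0, 2, 5), tri(turb, 20, 80, 200))    # rule 2 -> quality 20
    w = good + poor
    return 50.0 if w == 0 else (90.0 * good + 20.0 * poor) / w

# Monte Carlo propagation of the input variability through the fuzzy index.
n = 5000
do_s = kde_do.resample(n)[0]
turb_s = kde_turb.resample(n)[0]
wqi = np.array([fuzzy_wqi(d, t) for d, t in zip(do_s, turb_s)])
print(f"WQI median {np.median(wqi):.0f}, 10th-90th percentile {np.percentile(wqi, [10, 90])}")
```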
The structure of the proton in the LHC precision era
NASA Astrophysics Data System (ADS)
Gao, Jun; Harland-Lang, Lucian; Rojo, Juan
2018-05-01
We review recent progress in the determination of the parton distribution functions (PDFs) of the proton, with emphasis on the applications for precision phenomenology at the Large Hadron Collider (LHC). First of all, we introduce the general theoretical framework underlying the global QCD analysis of the quark and gluon internal structure of protons. We then present a detailed overview of the hard-scattering measurements, and the corresponding theory predictions, that are used in state-of-the-art PDF fits. We emphasize here the role that higher-order QCD and electroweak corrections play in the description of recent high-precision collider data. We present the methodology used to extract PDFs in global analyses, including the PDF parametrization strategy and the definition and propagation of PDF uncertainties. Then we review and compare the most recent releases from the various PDF fitting collaborations, highlighting their differences and similarities. We discuss the role that QED corrections and photon-initiated contributions play in modern PDF analysis. We provide representative examples of the implications of PDF fits for high-precision LHC phenomenological applications, such as Higgs coupling measurements and searches for high-mass New Physics resonances. We conclude this report by discussing some selected topics relevant for the future of PDF determinations, including the treatment of theoretical uncertainties, the connection with lattice QCD calculations, and the role of PDFs at future high-energy colliders beyond the LHC.
NASA Astrophysics Data System (ADS)
Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan
2017-10-01
This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
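A short sketch of the Bernstein-polynomial profile parametrization described above: the basis functions are built from binomial coefficients, the VS profile is their weighted sum over normalized depth, and the bounded-perturbation (stability) property follows from the basis functions being non-negative and summing to one. The coefficient values, polynomial order, and depth range are illustrative assumptions.

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Matrix of Bernstein basis functions B_{k,n}(x) for x in [0, 1];
    rows are evaluation points, columns the n+1 basis functions."""
    x = np.asarray(x, dtype=float)[:, None]
    k = np.arange(n + 1)
    binom = np.array([comb(n, int(kk)) for kk in k], dtype=float)
    return binom * x ** k * (1 - x) ** (n - k)

def vs_profile(coeffs, depth, max_depth):
    """Shear-wave velocity profile as a Bernstein expansion of normalized depth;
    the basis coefficients are the model parameters sampled in the inversion."""
    x = np.clip(np.asarray(depth, dtype=float) / max_depth, 0.0, 1.0)
    return bernstein_basis(len(coeffs) - 1, x) @ np.asarray(coeffs, dtype=float)

depth = np.linspace(0, 300, 61)                      # m
coeffs = np.array([150, 180, 260, 420, 600])         # m/s, illustrative order-4 model
vs = vs_profile(coeffs, depth, max_depth=300)

# Stability property: because the basis functions are non-negative and sum to one,
# a +20 m/s change in one coefficient changes the profile by at most 20 m/s anywhere.
dv = np.abs(vs_profile(coeffs + np.array([0, 0, 20, 0, 0]), depth, 300) - vs)
print(f"Vs(0 m) = {vs[0]:.0f} m/s, Vs(300 m) = {vs[-1]:.0f} m/s, "
      f"max change from perturbation = {dv.max():.1f} m/s")
```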
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of the data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which is used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
Smith, Colin R; Vignos, Michael F; Lenhart, Rachel L; Kaiser, Jarred; Thelen, Darryl G
2016-02-01
The study objective was to investigate the influence of coronal plane alignment and ligament properties on total knee replacement (TKR) contact loads during walking. We created a subject-specific knee model of an 83-year-old male who had an instrumented TKR. The knee model was incorporated into a lower extremity musculoskeletal model and included deformable contact, ligamentous structures, and six degrees-of-freedom (DOF) tibiofemoral and patellofemoral joints. A novel numerical optimization technique was used to simultaneously predict muscle forces, secondary knee kinematics, ligament forces, and joint contact pressures from standard gait analysis data collected on the subject. The nominal knee model predictions of medial, lateral, and total contact forces during gait agreed well with TKR measures, with root-mean-square (rms) errors of 0.23, 0.22, and 0.33 body weight (BW), respectively. Coronal plane component alignment did not affect total knee contact loads, but did alter the medial-lateral load distribution, with 4 deg varus and 4 deg valgus rotations in component alignment inducing +17% and -23% changes in the first peak medial tibiofemoral contact forces, respectively. A Monte Carlo analysis showed that uncertainties in ligament stiffness and reference strains induce ±0.2 BW uncertainty in tibiofemoral force estimates over the gait cycle. Ligament properties had substantial influence on the TKR load distributions, with the medial collateral ligament and iliotibial band (ITB) properties having the largest effects on medial and lateral compartment loading, respectively. The computational framework provides a viable approach for virtually designing TKR components, considering parametric uncertainty and predicting the effects of joint alignment and soft tissue balancing procedures on TKR function during movement.
Smith, Colin R.; Vignos, Michael F.; Lenhart, Rachel L.; Kaiser, Jarred; Thelen, Darryl G.
2016-01-01
The study objective was to investigate the influence of coronal plane alignment and ligament properties on total knee replacement (TKR) contact loads during walking. We created a subject-specific knee model of an 83-year-old male who had an instrumented TKR. The knee model was incorporated into a lower extremity musculoskeletal model and included deformable contact, ligamentous structures, and six degrees-of-freedom (DOF) tibiofemoral and patellofemoral joints. A novel numerical optimization technique was used to simultaneously predict muscle forces, secondary knee kinematics, ligament forces, and joint contact pressures from standard gait analysis data collected on the subject. The nominal knee model predictions of medial, lateral, and total contact forces during gait agreed well with TKR measures, with root-mean-square (rms) errors of 0.23, 0.22, and 0.33 body weight (BW), respectively. Coronal plane component alignment did not affect total knee contact loads, but did alter the medial–lateral load distribution, with 4 deg varus and 4 deg valgus rotations in component alignment inducing +17% and −23% changes in the first peak medial tibiofemoral contact forces, respectively. A Monte Carlo analysis showed that uncertainties in ligament stiffness and reference strains induce ±0.2 BW uncertainty in tibiofemoral force estimates over the gait cycle. Ligament properties had substantial influence on the TKR load distributions, with the medial collateral ligament and iliotibial band (ITB) properties having the largest effects on medial and lateral compartment loading, respectively. The computational framework provides a viable approach for virtually designing TKR components, considering parametric uncertainty and predicting the effects of joint alignment and soft tissue balancing procedures on TKR function during movement. PMID:26769446
NASA Astrophysics Data System (ADS)
Roobaert, Alizee; Laruelle, Goulven; Landschützer, Peter; Regnier, Pierre
2017-04-01
In lakes, rivers, estuaries and the ocean, the quantification of air-water CO2 exchange (FCO2) is still characterized by large uncertainties, partly due to the lack of agreement over the parameterization of the gas exchange velocity (k). Although the ocean is generally regarded as the best constrained system, because k is only controlled by the wind speed, numerous formulations are still currently used, leading to potentially large differences in FCO2. Here, a quantitative global spatial analysis of FCO2 is presented using several k-wind speed formulations in order to compare the effect of the choice of parameterization of k on FCO2. This analysis is performed at a 1 degree resolution using a sea surface pCO2 product generated using a two-step artificial neural network by Landschützer et al. (2015) over the 1991-2011 period. Four different global wind speed datasets (CCMP, ERA, NCEP 1 and NCEP 2) are also used to assess the effect of the choice of one wind speed product over another when calculating the global and regional oceanic FCO2. Results indicate that the choice of wind speed product only leads to small discrepancies globally (6 %), except with NCEP 2, which produces a more intense global FCO2 compared with the other wind products. Regionally, these differences are even more pronounced. For a given wind speed product, the choice of parameterization of k yields global FCO2 differences ranging from 7 % to 16 % depending on the wind product used. We also provide latitudinal profiles of FCO2 and its uncertainty, calculated by combining all combinations of the different k-relationships and the four wind speed products. Wind speeds >14 m s-1, which only account for 7 % of all observations, contribute disproportionately to the global oceanic FCO2 and, for this range of wind speeds, the uncertainty induced by the choice of formulation for k reaches its maximum (about 50 %).
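A brief numerical comparison of commonly published quadratic and hybrid k-wind-speed parameterizations, to show how the choice of k propagates into FCO2 = k·K0·ΔpCO2. The solubility, Schmidt number, wind speed, and ΔpCO2 values are illustrative assumptions, and the coefficients are quoted from the usual published forms rather than taken from this study.

```python
import numpy as np

# Commonly published quadratic and hybrid gas-transfer-velocity formulations
# (k in cm/h at Schmidt number 660); coefficients quoted from the usual forms,
# used here only to illustrate the spread among parameterizations.
K_FORMULAS = {
    "Wanninkhof 1992":  lambda u: 0.31 * u ** 2,
    "Wanninkhof 2014":  lambda u: 0.251 * u ** 2,
    "Ho et al. 2006":   lambda u: 0.266 * u ** 2,
    "Nightingale 2000": lambda u: 0.333 * u + 0.222 * u ** 2,
}

def fco2(u10, dpco2_uatm, k_func, k0_mol_m3_atm=32.0, schmidt=660.0):
    """Air-sea CO2 flux (mol m-2 yr-1): F = k * K0 * dpCO2; solubility K0 and
    the Schmidt number are fixed illustrative values."""
    k_cm_per_h = k_func(u10) * (schmidt / 660.0) ** -0.5
    k_m_per_yr = k_cm_per_h * 0.01 * 24 * 365
    return k_m_per_yr * k0_mol_m3_atm * dpco2_uatm * 1e-6

u10, dpco2 = 7.0, -50.0        # m/s and uatm (negative = ocean uptake), illustrative
fluxes = {name: fco2(u10, dpco2, f) for name, f in K_FORMULAS.items()}
for name, value in fluxes.items():
    print(f"{name:16s}: {value:+.2f} mol m-2 yr-1")

spread = (max(fluxes.values()) - min(fluxes.values())) / abs(np.mean(list(fluxes.values())))
print(f"relative spread across k formulations: {spread:.0%}")
```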
Influence of model reduction on uncertainty of flood inundation predictions
NASA Astrophysics Data System (ADS)
Romanowicz, R. J.; Kiczko, A.; Osuch, M.
2012-04-01
Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedance, e.g. a 100 or 500 year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of the flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions and the estimates of model parameters, which are usually identified by solving the inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for an urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters under three configurations: with all the bridges and dikes taken into account, with a reduced number of them, and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted to compensate for the reduction in model structure. However, the price we pay is reduced model robustness. Apart from this relatively simple question regarding reduced model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that these uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by the National Science Centre of Poland (grant 2011/01/B/ST10/06866).
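The pseudo-Bayesian (GLUE-type) step can be summarized as: sample roughness parameters, score each simulation with an informal likelihood, and form likelihood-weighted quantiles of the predicted inundation variable. The sketch below illustrates this with a toy stage-response function standing in for a 1-D hydraulic model; parameter ranges, the acceptance threshold and the observation error scale are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def flood_model(manning_n, discharge):
    # toy stand-in for a 1-D hydraulic (HEC-RAS-like) stage response
    return 2.0 + 8.0 * manning_n + 1.5e-3 * discharge

obs_stage, obs_q = 4.1, 1200.0
n_samples = rng.uniform(0.02, 0.08, size=5000)          # prior range for Manning's n
sim = flood_model(n_samples, obs_q)

resid = sim - obs_stage
likelihood = np.exp(-0.5 * (resid / 0.2) ** 2)           # informal Gaussian-type score
keep = likelihood > 0.1 * likelihood.max()               # behavioural threshold
w = likelihood[keep] / likelihood[keep].sum()

pred = flood_model(n_samples[keep], 1.2 * obs_q)         # predict a larger flood
order = np.argsort(pred)
q05, q95 = np.interp([0.05, 0.95], np.cumsum(w[order]), pred[order])
print(f"90% predictive band for stage at the larger flow: {q05:.2f}-{q95:.2f} m")
```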
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.
2011-12-01
In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e., prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach based on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, an attempt is made to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds. AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
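For reference, the log-sinh transformation of Wang et al. (2012) applied to a variable y (e.g., a simulated or forecast flow) with parameters a and b takes the form shown below; modelling errors as homoscedastic in the transformed space is what counteracts the heteroscedasticity noted above.

```latex
z \;=\; \frac{1}{b}\,\ln\!\bigl(\sinh(a + b\,y)\bigr), \qquad a,\, b > 0
```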
Schmidt, K; Witte, H
1999-11-01
Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example, for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis, capable of detecting interrelations between individual signal components, has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as true physiological data. We employed standard algorithms available for MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and the discrimination from other peaks.
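As a point of reference for the non-parametric side of this comparison, the direct bispectrum estimate averages triple products of Fourier coefficients over segments, B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; off-axis peaks indicate quadratic phase coupling between components. The sketch below is a minimal segment-averaging implementation in Python rather than the MATLAB toolbox routines used in the study; the segment length and test signal are arbitrary.

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Direct (non-parametric) bispectrum estimate by segment averaging."""
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    window = np.hanning(nfft)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for seg in segments:
        X = np.fft.fft(seg * window)
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2 - f1):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / len(segments)

# Test signal with quadratic coupling: components at 12, 20 and 32 = 12 + 20 Hz
fs = 256.0
t = np.arange(4096) / fs
x = (np.cos(2 * np.pi * 12 * t) + np.cos(2 * np.pi * 20 * t)
     + 0.5 * np.cos(2 * np.pi * 32 * t))
B = bispectrum(x)
print(np.unravel_index(np.argmax(B), B.shape))  # peak at the coupled frequency pair
```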
Flexural analysis of palm fiber reinforced hybrid polymer matrix composite
NASA Astrophysics Data System (ADS)
Venkatachalam, G.; Gautham Shankar, A.; Raghav, Dasarath; Santhosh Kiran, R.; Mahesh, Bhargav; Kumar, Krishna
2015-07-01
Uncertainty in the future availability of fossil fuels and global warming have increased the need for more environmentally friendly materials. In this work, an attempt is made to fabricate a hybrid polymer matrix composite. The blend is a mixture of General Purpose Resin and Cashew Nut Shell Liquid, a natural resin extracted from the cashew plant. Palm fiber, which has high strength, is used as the reinforcement material. The fiber is treated with an alkali (NaOH) solution to increase its strength and adhesiveness. A parametric study of flexural strength is carried out by varying alkali concentration, duration of alkali treatment and fiber volume. A Taguchi L9 orthogonal array is used in the design-of-experiments procedure for simplification. With the help of the ANOVA technique, regression equations are obtained which give the level of influence of each parameter on the flexural strength of the composite.
NASA Astrophysics Data System (ADS)
Takeya, Kouichi; Sasaki, Eiichi; Kobayashi, Yusuke
2016-01-01
A bridge vibration energy harvester has been proposed in this paper using a tuned dual-mass damper system, named hereafter Tuned Mass Generator (TMG). A linear electromagnetic transducer has been applied to harvest and make use of the unused reserve of energy the aforementioned damper system absorbs. The benefits of using dual-mass systems over single-mass systems for power generation have been clarified according to the theory of vibrations. TMG parameters have been determined considering multi-domain parameters, and TMG has been tuned using a newly proposed parameter design method. Theoretical analysis results have shown that for effective energy harvesting, it is essential that TMG has robustness against uncertainties in bridge vibrations and tuning errors, and the proposed parameter design method for TMG has demonstrated this feature.
Stochastic Model of Seasonal Runoff Forecasts
NASA Astrophysics Data System (ADS)
Krzysztofowicz, Roman; Watada, Leslie M.
1986-03-01
Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.
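Under the stated assumptions (a normal prior on the seasonal runoff W and a linear-normal forecast error model F | W ~ N(a + bW, σ²)), the Bayesian information processor yields a conjugate normal posterior; a compact statement of the update is given below. The symbols m, s, a, b and σ are generic stand-ins for the prior and error-model parameters estimated from the Snake River forecast records; the paper's own notation may differ.

```latex
W \mid F = f \;\sim\; \mathcal{N}\!\left(
  \frac{m/s^{2} \;+\; b\,(f - a)/\sigma^{2}}{1/s^{2} \;+\; b^{2}/\sigma^{2}},\;\;
  \left(\frac{1}{s^{2}} + \frac{b^{2}}{\sigma^{2}}\right)^{-1}
\right),
\qquad W \sim \mathcal{N}(m, s^{2}).
```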
First-Order Parametric Model of Reflectance Spectra for Dyed Fabrics
2016-02-19
Unclassified, unlimited distribution. Point of contact: Daniel Aiken, (202) 279-5293. Keywords: parametric modeling; inverse/direct analysis. This report describes a first-order parametric model of reflectance spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dyes considered contain spectral features that are of interest to the U.S. Navy. An appendix presents dielectric response functions for dyes obtained by inverse analysis.
NASA Astrophysics Data System (ADS)
Bhuiyan, M. A. E.; Nikolopoulos, E. I.; Anagnostou, E. N.
2017-12-01
Quantifying the uncertainty of global precipitation datasets is beneficial when using these precipitation products in hydrological applications, because precipitation uncertainty propagation through hydrologic modeling can significantly affect the accuracy of the simulated hydrologic variables. In this research the Iberian Peninsula is used as the study area, with a study period spanning eleven years (2000-2010). This study evaluates the performance of multiple hydrologic models forced with combined global rainfall estimates derived using a Quantile Regression Forests (QRF) technique. The QRF technique utilizes three satellite precipitation products (CMORPH, PERSIANN, and 3B42 V7); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. A high-resolution precipitation dataset based on ground observations (SAFRAN), available at 5 km/1 h resolution, is used as reference. Through the QRF blending framework the stochastic error model produces error-adjusted ensemble precipitation realizations, which are used to force four global hydrological models (JULES (Joint UK Land Environment Simulator), WaterGAP3 (Water-Global Assessment and Prognosis), ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems) and SURFEX (Surface Externalisée)) to simulate three hydrologic variables (surface runoff, subsurface runoff and evapotranspiration). The models are also forced with the reference precipitation to generate reference-based hydrologic simulations. This study presents a comparative analysis of multiple hydrologic model simulations for different hydrologic variables and the impact of the blending algorithm on the simulated hydrologic variables. Results show how precipitation uncertainty propagates through the different hydrologic model structures and how the blending algorithm reduces the error in the simulated hydrologic variables.
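A minimal illustration of the quantile-forest idea is sketched below: per-tree predictions from a random forest are pooled and empirical quantiles taken, which approximates Quantile Regression Forests (Meinshausen, 2006) rather than reproducing the exact blending framework of the study; the predictors and the synthetic "reference" rainfall are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# toy predictors standing in for satellite rainfall, reanalysis, soil moisture, elevation
X = rng.normal(size=(500, 4))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)   # "reference" rainfall

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(X, y)

X_new = rng.normal(size=(3, 4))
tree_preds = np.stack([tree.predict(X_new) for tree in rf.estimators_])  # (trees, samples)
q10, q50, q90 = np.percentile(tree_preds, [10, 50, 90], axis=0)
print(np.c_[q10, q50, q90])   # spread usable to build error-adjusted precipitation realizations
```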
Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D
2016-06-01
Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity.
Minsley, Burke J.
2011-01-01
A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, ‘best’ model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favorably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment.
Path planning in uncertain flow fields using ensemble method
NASA Astrophysics Data System (ADS)
Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.
2016-10-01
An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.
Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A
2001-01-01
This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.
Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, Kelly J.; Covell, Peter F.
2005-01-01
NASA has initiated the development of methodologies, techniques and tools needed for analysis and simulation of stage separation of next generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed; it is a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.
Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, K. J.; Covell, Peter F.
2007-01-01
NASA has initiated the development of methodologies, techniques and tools needed for analysis and simulation of stage separation of next generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed; it is a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.
Parametric versus Cox's model: an illustrative analysis of divorce in Canada.
Balakrishnan, T R; Rao, K V; Krotki, K J; Lapierre-adamcyk, E
1988-06-01
Recent demographic literature clearly recognizes the importance of survival models in the analysis of cross-sectional event histories. Of the various survival models, Cox's (1972) partially parametric model has been very popular due to its simplicity and readily available computer software for estimation, sometimes at the cost of precision and parsimony of the model. This paper focuses on parametric failure time models for event history analysis, such as the Weibull, lognormal, loglogistic, and exponential models. The authors also test the goodness of fit of these parametric models against Cox's proportional hazards model, taking the Kaplan-Meier estimate as the baseline. As an illustration, the authors reanalyze the Canadian Fertility Survey data on first marriage dissolution with parametric models. Though the parametric model estimates were not very different from each other, the loglogistic model gave a slightly better fit. When 8 covariates were used in the analysis, it was found that the coefficients were similar across the models, and the overall conclusions about the relative risks would not have been different. The findings reveal that in marriage dissolution, the differences according to demographic and socioeconomic characteristics may be far more important than is generally found in many studies. Therefore, one should not treat the population as homogeneous in analyzing survival probabilities of marriages, other than for cursory analysis of overall trends.
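A minimal sketch of this kind of comparison, assuming the Python lifelines package is available, is given below; the synthetic columns stand in for marriage-duration records and their names are invented, so this illustrates the model comparison rather than the paper's actual analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter, LogLogisticAFTFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "duration": rng.weibull(1.3, n) * 15.0 + 0.1,     # years to dissolution (toy data)
    "observed": rng.integers(0, 2, n),                # 1 = dissolved, 0 = censored
    "age_at_marriage": rng.normal(24, 4, n),
    "premarital_cohab": rng.integers(0, 2, n),
})

# Semi-parametric Cox model versus fully parametric failure-time models
cox = CoxPHFitter().fit(df, duration_col="duration", event_col="observed")
weibull = WeibullAFTFitter().fit(df, duration_col="duration", event_col="observed")
loglog = LogLogisticAFTFitter().fit(df, duration_col="duration", event_col="observed")

print(weibull.AIC_, loglog.AIC_)                      # compare the parametric fits
print(cox.summary[["coef", "exp(coef)"]])             # covariate effects for contrast
```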
Entry, Descent, and Landing technological barriers and crewed MARS vehicle performance analysis
NASA Astrophysics Data System (ADS)
Subrahmanyam, Prabhakar; Rasky, Daniel
2017-05-01
Mars has been explored historically only by robotic craft, but a crewed mission encompasses several new engineering challenges: high ballistic coefficient entry, hypersonic decelerators, guided entry for reaching intended destinations within acceptable landing-ellipse error margins, and payload mass are all critical factors for evaluation. A comprehensive EDL parametric analysis has been conducted in support of a high mass landing architecture by evaluating three types of vehicles - a 70° sphere cone, an Ellipsled, and the SpaceX hybrid architecture called Red Dragon - as potential candidate options for crewed entry vehicles. Aerocapture at a Martian orbit of about 400 km and subsequent entry-from-orbit scenarios were investigated at velocities of 6.75 km/s and 4 km/s, respectively. A study of the aerocapture corridor over a range of entry velocities (6-9 km/s) suggests that a hypersonic L/D of 0.3 is sufficient for Martian aerocapture. Parametric studies conducted by varying aeroshell diameters from 10 m to 15 m for several entry masses up to 150 mt are summarized, and the results reveal that vehicles with entry masses in the range of about 40-80 mt are capable of delivering cargo with a mass on the order of 5-20 mt. For vehicles with an entry mass of 20 mt to 80 mt, probabilistic Monte Carlo analyses of 5000 cases per vehicle were run to determine the final landing ellipse and to quantify the statistical uncertainties associated with the trajectory and attitude conditions during atmospheric entry. Strategies and current technological challenges for a human-rated Entry, Descent, and Landing to the Martian surface are presented in this study.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than applied as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Acoustic emission based damage localization in composites structures using Bayesian identification
NASA Astrophysics Data System (ADS)
Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.
2017-05-01
Acoustic emission based damage detection in composite structures relies on detecting ultra-high-frequency packets of acoustic waves emitted from damage sources (such as fibre breakage and fatigue fracture, amongst others) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem in which the measured signals are linked back to the location of the source. This in turn enables rapid deployment of mitigative measures. The presence of a significant amount of uncertainty associated with the operating conditions and measurements makes the problem of damage identification quite challenging. The uncertainties stem from the fact that the measured signals are affected by irregular geometries, manufacturing imprecision, imperfect boundary conditions, and existing damage/structural degradation, amongst others. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic model of the parametrized input and output of the acoustic emission system with experimental data to give probabilistic descriptors of damage locations. A response surface modelling the acoustic emission as a function of parametrized damage signals collected from sensors is calibrated with a training dataset using Bayesian inference. This is used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data are utilized in conjunction with the calibrated acoustic emission model to infer the probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of a carbon fibre panel with stiffeners, and damage source behaviour is experimentally simulated using standard H-N sources. The methodology presented in this study would be applicable in its current form to structural damage detection under varying operational loads, which will be investigated in future studies.
EEG Correlates of Fluctuation in Cognitive Performance in an Air Traffic Control Task
2014-11-01
Non-parametric statistical analysis was used to identify neurophysiological patterns due to the time-on-task effect, with significant changes in EEG power. Keywords: EEG, cognitive performance, power spectral analysis, non-parametric analysis. Document is available to the public through the Internet.
Fan, Zhen; Dani, Melanie; Femminella, Grazia D; Wood, Melanie; Calsolaro, Valeria; Veronese, Mattia; Turkheimer, Federico; Gentleman, Steve; Brooks, David J; Hinz, Rainer; Edison, Paul
2018-07-01
Neuroinflammation and microglial activation play an important role in amnestic mild cognitive impairment (MCI) and Alzheimer's disease. In this study, we investigated the spatial distribution of neuroinflammation in MCI subjects, using spectral analysis (SA) to generate parametric maps and quantify 11C-PBR28 PET, and compared these with compartmental and other kinetic models of quantification. Thirteen MCI subjects and nine healthy controls were enrolled in this study. Subjects underwent 11C-PBR28 PET scans with arterial cannulation. Spectral analysis with an arterial plasma input function was used to generate 11C-PBR28 parametric maps. These maps were then compared with regional 11C-PBR28 VT (volume of distribution) obtained using a two-tissue compartment model and Logan graphical analysis. Amyloid load was also assessed with 18F-Flutemetamol PET. With SA, three component peaks were identified in addition to blood volume. The 11C-PBR28 impulse response function (IRF) at 90 min produced the lowest coefficient of variation. Single-subject analysis using this IRF demonstrated microglial activation in five out of seven amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake revealed a group-wise significant increase in neuroinflammation in amyloid-positive MCI subjects versus healthy controls in multiple cortical association areas, and particularly in the temporal lobe. Interestingly, compartmental analysis detected a group-wise increase in 11C-PBR28 binding in the thalamus of amyloid-positive MCI subjects, while Logan parametric maps did not perform well. This study demonstrates for the first time that spectral analysis can be used to generate parametric maps of 11C-PBR28 uptake, and is able to detect microglial activation in amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake allow voxel-wise single-subject analysis and could be used to evaluate microglial activation in individual subjects.
Parametric Modelling of As-Built Beam Framed Structure in Bim Environment
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.
2017-02-01
A complete documentation and conservation of a historic timber roof requires the integration of geometric modelling, management of attribute and dynamic information, and results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that makes it possible to integrate traditional geometric modelling, parametric element management and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, as typical BIM software, provides the platform for parametric modelling and information management, an API plugin was developed that automatically creates the parametric beam elements and links them together with strict relationships. The plugin under development is introduced in the paper; it can obtain the parametric beam model via the Autodesk Revit API from total station points and terrestrial laser scanning data. The results show the potential of automating parametric modelling through interactive API development in a BIM environment. It also integrates the separate data processing and different platforms into the uniform Revit software.
Helgesson, P; Sjöstrand, H
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or has a model defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
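A minimal sketch of the prior-augmented least-squares idea, assuming SciPy is available: the prior enters as extra residuals (theta − theta_prior)/sigma_prior appended to the weighted data residuals, so Levenberg-Marquardt minimizes data and prior misfit jointly. Peak shapes, prior values and the synthetic data are invented for illustration, and a full treatment would also iterate the data uncertainties from the fitted expectation, as the text notes.

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-5.0, 5.0, 200)

def model(theta, x):
    a1, mu1, a2, mu2, sigma = theta
    return (a1 * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / sigma) ** 2))

rng = np.random.default_rng(3)
true = np.array([60.0, -1.0, 40.0, 1.2, 0.8])
y = rng.poisson(model(true, x)).astype(float)
sigma_y = np.sqrt(np.maximum(y, 1.0))                 # Poisson counting uncertainty

theta_prior = np.array([55.0, -0.9, 45.0, 1.0, 0.9])  # assumed prior knowledge
sigma_prior = np.array([20.0, 0.3, 20.0, 0.3, 0.3])

def residuals(theta):
    r_data = (model(theta, x) - y) / sigma_y
    r_prior = (theta - theta_prior) / sigma_prior      # prior as extra residuals
    return np.concatenate([r_data, r_prior])

fit = least_squares(residuals, theta_prior, method="lm")
cov = np.linalg.inv(fit.jac.T @ fit.jac)               # approximate parameter covariance
print(fit.x)
print(np.sqrt(np.diag(cov)))
```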
Ebner, Jacqueline H; Labatut, Rodrigo A; Rankin, Matthew J; Pronto, Jennifer L; Gooch, Curt A; Williamson, Anahita A; Trabold, Thomas A
2015-09-15
Anaerobic codigestion (AcoD) can address food waste disposal and manure management issues while delivering clean, renewable energy. Quantifying greenhouse gas (GHG) emissions due to implementation of AcoD is important to achieve this goal. A lifecycle analysis was performed on the basis of data from an on-farm AcoD in New York, resulting in a 71% reduction in GHG, or net reduction of 37.5 kg CO2e/t influent relative to conventional treatment of manure and food waste. Displacement of grid electricity provided the largest reduction, followed by avoidance of alternative food waste disposal options and reduced impacts associated with storage of digestate vs undigested manure. These reductions offset digester emissions and the net increase in emissions associated with land application in the AcoD case relative to the reference case. Sensitivity analysis showed that using feedstock diverted from high impact disposal pathways, control of digester emissions, and managing digestate storage emissions were opportunities to improve the AcoD GHG benefits. Regional and parametrized emissions factors for the storage emissions and land application phases would reduce uncertainty.
Performance assessments of nuclear waste repositories--A dialogue on their value and limitations
Ewing, Rodney C.; Tierney, Martin S.; Konikow, Leonard F.; Rechard, Rob P.
1999-01-01
Performance Assessment (PA) is the use of mathematical models to simulate the long-term behavior of engineered and geologic barriers in a nuclear waste repository; methods of uncertainty analysis are used to assess effects of parametric and conceptual uncertainties associated with the model system upon the uncertainty in outcomes of the simulation. PA is required by the U.S. Environmental Protection Agency as part of its certification process for geologic repositories for nuclear waste. This paper is a dialogue to explore the value and limitations of PA. Two “skeptics” acknowledge the utility of PA in organizing the scientific investigations that are necessary for confident siting and licensing of a repository; however, they maintain that the PA process, at least as it is currently implemented, is an essentially unscientific process with shortcomings that may provide results of limited use in evaluating actual effects on public health and safety. Conceptual uncertainties in a PA can be so great that results can be confidently applied only over short time ranges, the antithesis of the purpose behind long-term, geologic disposal. Two “proponents” of PA agree that performance assessment is unscientific, but only in the sense that PA is an engineering analysis that uses existing scientific knowledge to support public policy decisions, rather than an investigation intended to increase fundamental knowledge of nature; PA has different goals and constraints than a typical scientific study. The “proponents” describe an ideal, six-step process for conducting generalized PA, here called probabilistic systems analysis (PSA); they note that virtually all scientific content of a PA is introduced during the model-building steps of a PSA, and they contend that a PA based on simple but scientifically acceptable mathematical models can provide useful and objective input to regulatory decision makers. The value of the results of any PA must lie between these two views and will depend on the level of knowledge of the site, the degree to which models capture actual physical and chemical processes, the time over which extrapolations are made, and the proper evaluation of health risks attending implementation of the repository. The challenge is in evaluating whether the quality of the PA matches the needs of decision makers charged with protecting the health and safety of the public.
Strong stabilization servo controller with optimization of performance criteria.
Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor
2011-07-01
Synthesis of a simple robust controller with a pole placement technique and an H∞ metric is the method used for control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial using the Manabe standard polynomial form and parametric solutions. Parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions, the robustness of the closed-loop system is assessed through uncertainty models and evaluation of the H∞ norm ‖·‖∞. The design procedure and the optimization are performed with a differential evolution (DE) algorithm. The DE optimization method determines a suboptimal solution throughout the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial's coefficients, and are very convenient for automated design of closed-loop control and for application in optimization algorithms such as DE. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
A new simple form of quark mixing matrix
NASA Astrophysics Data System (ADS)
Qin, Nan; Ma, Bo-Qiang
2011-01-01
Although different parametrizations of quark mixing matrix are mathematically equivalent, the consequences of experimental analysis may be distinct. Based on the triminimal expansion of Kobayashi-Maskawa matrix around the unit matrix, we propose a new simple parametrization. Compared with the Wolfenstein parametrization, we find that the new form is not only consistent with the original one in the hierarchical structure, but also more convenient for numerical analysis and measurement of the CP-violating phase. By discussing the relation between our new form and the unitarity boomerang, we point out that along with the unitarity boomerang, this new parametrization is useful in hunting for new physics.
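For context, the standard Wolfenstein form against which the new parametrization is compared is, to O(λ³),

```latex
V_{\mathrm{CKM}} \simeq
\begin{pmatrix}
1-\tfrac{\lambda^{2}}{2} & \lambda & A\lambda^{3}(\rho - i\eta)\\[2pt]
-\lambda & 1-\tfrac{\lambda^{2}}{2} & A\lambda^{2}\\[2pt]
A\lambda^{3}(1-\rho - i\eta) & -A\lambda^{2} & 1
\end{pmatrix}
+ \mathcal{O}(\lambda^{4}).
```

The triminimal form proposed in the paper is not reproduced here; this expression is quoted only as the familiar hierarchical benchmark referred to in the abstract.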
Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J
2017-07-01
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.
2014-05-01
The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analysis, however, showed that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, the standard PSHA estimates are by far unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility of efficiently computing synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of ground motion scenarios can be defined at both national and local scales, the latter considering the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer. At the scenario scale, quick parametric studies can be easily performed to understand the influence of the model characteristics on the computed ground shaking scenarios. For massive parametric tests, or for the repeated generation of large scale hazard maps, the methodology can take advantage of more advanced computational platforms, ranging from GRID computing infrastructures to HPC dedicated clusters up to Cloud computing. In such a way, scientists can deal efficiently with the variety and complexity of the potential earthquake sources, and perform parametric studies to characterize the related uncertainties. NDSHA provides realistic time series of expected ground motion readily applicable for seismic engineering analysis and other mitigation actions. The methodology has been successfully applied to strategic buildings, lifelines and cultural heritage sites, and for the purpose of seismic microzoning in several urban areas worldwide. A web application is currently being developed that facilitates access to the NDSHA methodology and the related outputs by end users, who are interested in reliable territorial planning and in the design and construction of buildings and infrastructures in seismic areas. At the same time, the web application is also shaping up as an advanced educational tool to explore interactively how seismic waves are generated at the source, propagate inside structural models, and build up ground shaking scenarios.
We illustrate the preliminary results obtained from a multiscale application of the NDSHA approach to the territory of India, zooming from large-scale hazard maps of ground shaking at bedrock to the definition of local-scale earthquake scenarios for selected sites in the Gujarat state (NW India). The study aims to provide the community (e.g. authorities and engineers) with advanced information for earthquake risk mitigation, which is particularly relevant to Gujarat in view of the rapid development and urbanization of the region.
Panaceas, uncertainty, and the robust control framework in sustainability science
Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan
2007-01-01
A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
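The sensitivity of the classic optimal policy can be illustrated with a small numerical sketch of the Gordon-Schaefer model: the steady-state profit-maximizing effort is computed for nominal parameters and then re-evaluated when the intrinsic growth rate is perturbed. All parameter values and the profit function below are illustrative assumptions, not those of the paper.

```python
import numpy as np

def steady_state_profit(effort, r, K, q=0.05, price=10.0, cost=1.0):
    """Equilibrium profit of a Gordon-Schaefer fishery under constant effort."""
    x_eq = K * (1.0 - q * effort / r)          # stock where growth balances harvest
    if x_eq <= 0:
        return -np.inf                          # effort too high: stock collapses
    return price * q * effort * x_eq - cost * effort

efforts = np.linspace(0.0, 15.0, 301)
r_nominal, K_nominal = 0.5, 100.0
profits_nominal = [steady_state_profit(e, r_nominal, K_nominal) for e in efforts]
e_opt = efforts[int(np.argmax(profits_nominal))]   # "optimal" effort for nominal r, K

for r_true in (0.35, 0.5, 0.65):
    best = max(steady_state_profit(e, r_true, K_nominal) for e in efforts)
    realized = steady_state_profit(e_opt, r_true, K_nominal)
    print(f"r = {r_true}: profit forgone by the nominal-optimal effort = {best - realized:.1f}")
```

When the true growth rate is lower than assumed, the effort tuned to nominal values overharvests the stock and forgoes profit, illustrating the performance-robustness trade-off discussed above.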
NASA Astrophysics Data System (ADS)
Norton, Alexander J.; Rayner, Peter J.; Koffi, Ernest N.; Scholze, Marko
2018-04-01
The synthesis of model and observational information using data assimilation can improve our understanding of the terrestrial carbon cycle, a key component of the Earth's climate-carbon system. Here we provide a data assimilation framework for combining observations of solar-induced chlorophyll fluorescence (SIF) and a process-based model to improve estimates of terrestrial carbon uptake or gross primary production (GPP). We then quantify and assess the constraint SIF provides on the uncertainty in global GPP through model process parameters in an error propagation study. By incorporating 1 year of SIF observations from the GOSAT satellite, we find that the parametric uncertainty in global annual GPP is reduced by 73 % from ±19.0 to ±5.2 Pg C yr-1. This improvement is achieved through strong constraint of leaf growth processes and weak to moderate constraint of physiological parameters. We also find that the inclusion of uncertainty in shortwave down-radiation forcing has a net-zero effect on uncertainty in GPP when incorporated into the SIF assimilation framework. This study demonstrates the powerful capacity of SIF to reduce uncertainties in process-based model estimates of GPP and the potential for improving our predictive capability of this uncertain carbon flux.
Robust Bayesian Experimental Design for Conceptual Model Discrimination
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2015-12-01
A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as uncertainty sources on potential pumping and observation locations.
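The Gauss-Hermite step amounts to evaluating the expectation of an information measure over a normally distributed future observation; a minimal sketch with NumPy is given below, where the placeholder function g stands in for the Box-Hill entropy decrease rather than reproducing the design criterion of the study.

```python
import numpy as np

# Physicists' Gauss-Hermite nodes/weights: E[g(D)] with D ~ N(mu, sd^2) is
# approximated by (1/sqrt(pi)) * sum_i w_i * g(mu + sqrt(2)*sd*x_i).
nodes, weights = np.polynomial.hermite.hermgauss(20)

def expect_normal(g, mu, sd):
    return np.sum(weights * g(mu + np.sqrt(2.0) * sd * nodes)) / np.sqrt(np.pi)

g = lambda d: np.log1p(d ** 2)        # placeholder information measure
print(expect_normal(g, mu=1.0, sd=0.5))
```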
NASA Astrophysics Data System (ADS)
Cooper, W. A.; Spuler, S. M.; Spowart, M.; Lenschow, D. H.; Friesen, R. B.
2014-03-01
A new laser air-motion sensor measures the true airspeed with an uncertainty of less than 0.1 m s-1 (standard error) and so reduces uncertainty in the measured component of the relative wind along the longitudinal axis of the aircraft to about the same level. The calculated pressure expected from that airspeed at the inlet of a pitot tube then provides a basis for calibrating the measurements of dynamic and static pressure, reducing standard-error uncertainty in those measurements to less than 0.3 hPa and the precision applicable to steady flight conditions to about 0.1 hPa. These improved measurements of pressure, combined with high-resolution measurements of geometric altitude from the Global Positioning System, then indicate (via integrations of the hydrostatic equation during climbs and descents) that the offset and uncertainty in temperature measurement for one research aircraft are +0.3 ± 0.3 °C. For airspeed, pressure and temperature these are significant reductions in uncertainty vs. those obtained from calibrations using standard techniques. Finally, it is shown that the new laser air-motion sensor, combined with parametrized fits to correction factors for the measured dynamic and ambient pressure, provides a measurement of temperature that is independent of any other temperature sensor.
Model-free aftershock forecasts constructed from similar sequences in the past
NASA Astrophysics Data System (ADS)
van der Elst, N.; Page, M. T.
2017-12-01
The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of outcomes of past sequences, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting may be lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
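A hedged sketch of one way to implement the similarity weighting and weighted outcome distribution described above. The Poisson weighting shown (scoring a past sequence's count against the target count taken as the intensity) is one simple interpretation, not necessarily the authors' exact construction; all data are illustrative.

```python
import numpy as np
from scipy.stats import poisson

# Weight each past sequence by the Poisson probability of its early aftershock
# count given the target sequence's count as the intensity, then build a
# similarity-weighted empirical distribution of eventual outcomes.
def similarity_forecast(target_count, past_counts, past_outcomes,
                        quantiles=(0.05, 0.5, 0.95)):
    weights = poisson.pmf(past_counts, mu=max(target_count, 1e-9))
    weights = weights / weights.sum()
    order = np.argsort(past_outcomes)
    cdf = np.cumsum(weights[order])
    outcomes_sorted = np.asarray(past_outcomes)[order]
    return [float(np.interp(q, cdf, outcomes_sorted)) for q in quantiles]

# Illustrative data: counts in the first day and eventual week-long totals.
past_counts = np.array([3, 10, 6, 1, 8, 5])
past_outcomes = np.array([12, 55, 30, 4, 41, 22])
print(similarity_forecast(target_count=5, past_counts=past_counts,
                          past_outcomes=past_outcomes))
```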
NASA Astrophysics Data System (ADS)
Yuan, X.; Wang, L.; Zhang, M.
2017-12-01
Rainfall deficit in the crop growing seasons is usually accompanied by heat waves. Abnormally high temperature increases evapotranspiration and decreases soil moisture rapidly, and ultimately results in a type of drought with a rapid onset, short duration, but devastating impact, which is called "flash drought". With the increase in global temperature, flash drought is expected to occur more frequently. However, there is no consensus on the definition of flash drought so far. Moreover, large uncertainty exists in the estimation of flash drought and its trend, and the underlying mechanism for its long-term change is not clear. In this presentation, a parametric multivariate drought index that characterizes the joint probability distribution of key variables of flash drought will be developed, and the historical changes in flash drought over China will be analyzed. In addition, a set of land surface model simulations driven by IPCC CMIP5 models with different forcings and future scenarios will be used for the detection and attribution of flash drought change. This study is targeted at quantifying the influences of natural and anthropogenic climate change on flash drought, projecting its future change and the corresponding uncertainty, and improving our understanding of the variation of flash drought and its underlying mechanism in a changing climate.
NASA Technical Reports Server (NTRS)
Stewart, R. B.; Grose, W. L.
1975-01-01
Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, cloud stabilized geometry, atmospheric coefficients, the effects of exhaust plume afterburning of carbon monoxide CO, assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud alongwind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide CO2, thus reducing toxic hazards due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.
NASA Astrophysics Data System (ADS)
Ševeček, P.; Brož, M.; Nesvorný, D.; Enke, B.; Durda, D.; Walsh, K.; Richardson, D. C.
2017-11-01
We report on our study of asteroidal breakups, i.e. fragmentations of targets, subsequent gravitational reaccumulation and formation of small asteroid families. We focused on parent bodies with diameters D_pb = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code combined with an efficient N-body integrator. We assumed various projectile sizes, impact velocities and impact angles (125 runs in total). The resulting size-frequency distributions are significantly different from scaled-down simulations with D_pb = 100 km targets (Durda et al., 2007). We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions for N-body simulations of small asteroid families. Finally, we discuss a number of uncertainties related to SPH simulations.
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of an evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the former is retained while the latter is eliminated from further study. However, these approaches ignore the loss of the interaction effects between the retained parameters and the eliminated ones, which also contribute to the total sensitivity indices. As a result, the wrong sensitive parameters may be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters remain. We use CLM-CASA, a global terrestrial model, as an example to verify our findings, with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over the Sobol'-based selection, and the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods that emphasize parameter interactions.
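A hedged sketch of the backward-elimination idea behind DGSAM: recompute sensitivity indices among the remaining parameters after each removal, and drop the least influential one until two remain. The crude binned variance-of-conditional-means index below stands in for a proper Sobol'/FAST total-order estimator; the test function and bounds are illustrative.

```python
import numpy as np

# Crude placeholder for a sensitivity index: variance of conditional means over
# a coarse binning of one parameter, normalized by total variance (not Sobol').
def sensitivity_index(samples, output, j, bins=10):
    edges = np.quantile(samples[:, j], np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(samples[:, j], edges[1:-1]), 0, bins - 1)
    cond_means = np.array([output[idx == b].mean()
                           for b in range(bins) if np.any(idx == b)])
    return cond_means.var() / output.var()

def dgsam_like(model, bounds, n=4000, rng=np.random.default_rng(0)):
    """Iteratively remove the least influential parameter until two remain."""
    active = list(range(len(bounds)))
    while len(active) > 2:
        samples = rng.uniform([bounds[i][0] for i in active],
                              [bounds[i][1] for i in active],
                              size=(n, len(active)))
        fixed = np.array([(lo + hi) / 2 for lo, hi in bounds])  # freeze removed params
        X = np.tile(fixed, (n, 1))
        X[:, active] = samples
        y = np.apply_along_axis(model, 1, X)
        s = [sensitivity_index(samples, y, k) for k in range(len(active))]
        active.pop(int(np.argmin(s)))                           # drop least important
    return active

model = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2] + 0.2 * x[0] * x[3]
bounds = [(-1.0, 1.0)] * 4
print("retained parameters:", dgsam_like(model, bounds))
```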
NASA Astrophysics Data System (ADS)
Sun, Liang; Zheng, Zewei
2017-04-01
An adaptive relative pose control strategy is proposed for a pursuer spacecraft in proximity operations with a tumbling target. The relative position vector between the two spacecraft is required to point towards the docking port of the target while their attitudes must be synchronized. Accounting for the thrust misalignment of the pursuer, an integrated controller for the relative translational and rotational dynamics is developed using norm-wise adaptive estimation. Parametric uncertainties, unknown coupled dynamics, and bounded external disturbances are compensated online by adaptive update laws. It is proved via Lyapunov stability theory that the tracking errors of the relative pose converge to zero asymptotically. Numerical simulations including six degrees-of-freedom rigid body dynamics are performed to demonstrate the effectiveness of the proposed controller.
Probabilistic Reasoning for Plan Robustness
NASA Technical Reports Server (NTRS)
Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.
2005-01-01
A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Common approximation and novel methods are compared for over-constrained and lightly constrained domains. The system compares a few common approximation methods for an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.
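A minimal sketch of the kind of projection described above, assuming Gaussian (parametric) activity durations and a simple serial chain of activities; the numbers and deadline are illustrative, and this is not the planner's actual state-projection machinery.

```python
import math

# Represent each activity duration as a Gaussian (mean, std). For a serial
# chain the total duration is Gaussian with summed means and variances, so a
# deadline-violation probability follows in closed form from the normal CDF.
activities = [(5.0, 1.0), (3.0, 0.5), (7.0, 2.0)]   # (mean, std) per activity

mean_total = sum(m for m, _ in activities)
std_total = math.sqrt(sum(s * s for _, s in activities))

deadline = 17.0
z = (deadline - mean_total) / std_total
p_violation = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
print(f"total ~ N({mean_total:.1f}, {std_total:.2f}^2); "
      f"P(miss deadline) = {p_violation:.3f}")
```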
Carbon cycle uncertainty in the Alaskan Arctic
NASA Astrophysics Data System (ADS)
Fisher, J. B.; Sikka, M.; Oechel, W. C.; Huntzinger, D. N.; Melton, J. R.; Koven, C. D.; Ahlström, A.; Arain, A. M.; Baker, I.; Chen, J. M.; Ciais, P.; Davidson, C.; Dietze, M.; El-Masri, B.; Hayes, D.; Huntingford, C.; Jain, A.; Levy, P. E.; Lomas, M. R.; Poulter, B.; Price, D.; Sahoo, A. K.; Schaefer, K.; Tian, H.; Tomelleri, E.; Verbeeck, H.; Viovy, N.; Wania, R.; Zeng, N.; Miller, C. E.
2014-02-01
Climate change is leading to a disproportionately large warming in the high northern latitudes, but the magnitude and sign of the future carbon balance of the Arctic are highly uncertain. Using 40 terrestrial biosphere models for Alaska, we provide a baseline of terrestrial carbon cycle structural and parametric uncertainty, defined as the multi-model standard deviation (σ) relative to the mean (x̄) for each quantity. Mean annual uncertainty (σ/x̄) was largest for net ecosystem exchange (NEE) (-0.01 ± 0.19 kg C m-2 yr-1), then net primary production (NPP) (0.14 ± 0.33 kg C m-2 yr-1), autotrophic respiration (Ra) (0.09 ± 0.20 kg C m-2 yr-1), gross primary production (GPP) (0.22 ± 0.50 kg C m-2 yr-1), ecosystem respiration (Re) (0.23 ± 0.38 kg C m-2 yr-1), CH4 flux (2.52 ± 4.02 g CH4 m-2 yr-1), heterotrophic respiration (Rh) (0.14 ± 0.20 kg C m-2 yr-1), and soil carbon (14.0 ± 9.2 kg C m-2). The spatial patterns in regional carbon stocks and fluxes varied widely, with some models showing NEE for Alaska as a strong carbon sink, others as a strong carbon source, and still others as carbon neutral. Additionally, a feedback (i.e., sensitivity) analysis was conducted of 20th century NEE to CO2 fertilization (β) and climate (γ), which showed that the uncertainty in γ was twice as large as that in β, with neither indicating that the Alaskan Arctic is shifting towards a certain net carbon sink or source. Finally, AmeriFlux data from two sites in the Alaskan Arctic are used to evaluate the regional patterns; observed seasonal NEE was captured within multi-model uncertainty. This assessment of carbon cycle uncertainties may be used as a baseline for the improvement of experimental and modeling activities, as well as a reference for future trajectories in carbon cycling with climate change in the Alaskan Arctic.
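A minimal sketch of the multi-model uncertainty metric used above (across-model mean x̄ and standard deviation σ per flux); the values below are made up purely for illustration.

```python
import numpy as np

# Multi-model structural/parametric uncertainty: for each flux, report the
# across-model mean (x_bar) and standard deviation (sigma).
# Values below are illustrative only, in kg C m-2 yr-1.
fluxes = {
    "NEE": np.array([-0.20, 0.15, -0.05, 0.10, -0.03]),
    "GPP": np.array([0.60, 0.95, 1.10, 0.75, 0.85]),
}
for name, vals in fluxes.items():
    x_bar, sigma = vals.mean(), vals.std(ddof=1)
    print(f"{name}: {x_bar:+.2f} +/- {sigma:.2f}")
```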
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
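A minimal sketch of a pointwise non-parametric 1D bootstrap confidence band for a mean trajectory. Note this shows only the pointwise construction; the field-wide (e.g., RFT-corrected) bands discussed above are not implemented here, and the data are synthetic.

```python
import numpy as np

# Non-parametric 1D bootstrap: resample subjects with replacement and take
# pointwise percentiles of the resampled mean trajectory.
def bootstrap_ci_1d(trajectories, n_boot=2000, alpha=0.05,
                    rng=np.random.default_rng(1)):
    n = trajectories.shape[0]
    means = np.empty((n_boot, trajectories.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        means[b] = trajectories[idx].mean(axis=0)
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi

t = np.linspace(0, 1, 101)
data = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(0).normal(size=(20, t.size))
lo, hi = bootstrap_ci_1d(data)
print(lo[:3], hi[:3])
```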
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Michael P.; Goldsmith, C. Franklin; Klippenstein, Stephen J.
2015-07-16
We have developed a multi-scale approach (Burke, M. P.; Klippenstein, S. J.; Harding, L. B. Proc. Combust. Inst. 2013, 34, 547–555.) to kinetic model formulation that directly incorporates elementary kinetic theories as a means to provide reliable, physics-based extrapolation to unexplored conditions. Here, we extend and generalize the multi-scale modeling strategy to treat systems of considerable complexity – involving multi-well reactions, potentially missing reactions, non-statistical product branching ratios, and non-Boltzmann (i.e. non-thermal) reactant distributions. The methodology is demonstrated here for a subsystem of low-temperature propane oxidation, as a representative system for low-temperature fuel oxidation. A multi-scale model is assembled and informed by a wide variety of targets that include ab initio calculations of molecular properties, rate constant measurements of isolated reactions, and complex systems measurements. Active model parameters are chosen to accommodate both "parametric" and "structural" uncertainties. Theoretical parameters (e.g. barrier heights) are included as active model parameters to account for parametric uncertainties in the theoretical treatment; experimental parameters (e.g. initial temperatures) are included to account for parametric uncertainties in the physical models of the experiments. RMG software is used to assess potential structural uncertainties due to missing reactions. Additionally, branching ratios among product channels are included as active model parameters to account for structural uncertainties related to difficulties in modeling sequences of multiple chemically activated steps. The approach is demonstrated here for interpreting time-resolved measurements of OH, HO2, n-propyl, i-propyl, propene, oxetane, and methyloxirane from photolysis-initiated low-temperature oxidation of propane at pressures from 4 to 60 Torr and temperatures from 300 to 700 K. In particular, the multi-scale informed model provides a consistent quantitative explanation of both ab initio calculations and time-resolved species measurements. The present results show that interpretations of OH measurements are significantly more complicated than previously thought – in addition to barrier heights for key transition states considered previously, OH profiles also depend on additional theoretical parameters for R + O2 reactions, secondary reactions, QOOH + O2 reactions, and treatment of non-Boltzmann reaction sequences. Extraction of physically rigorous information from those measurements may require more sophisticated treatment of all of those model aspects, as well as additional experimental data under more conditions, to discriminate among possible interpretations and ensure model reliability. Keywords: Optimization, Uncertainty quantification, Chemical mechanism, Low-Temperature Oxidation, Non-Boltzmann
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. The parametric models used in this study relate the component mass to vehicle dimensions and to key mission environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
Microwave Analysis with Monte Carlo Methods for ECH Transmission Lines
NASA Astrophysics Data System (ADS)
Kaufman, M. C.; Lau, C.; Hanson, G. R.
2018-03-01
A new code framework, MORAMC, is presented which models transmission line (TL) systems consisting of overmoded circular waveguide and other components, including miter bends and transmission line gaps. The transmission line is modeled as a set of mode converters in series, where each component is composed of one or more converters. The parametrization of each mode converter can account for the fabrication tolerances of physically realizable components. These tolerances, as well as the precision to which these TL systems can be installed and aligned, set a practical lower limit on the uncertainty with which the microwave performance of the system can be calculated. Because of this, Monte Carlo methods are a natural fit and are employed to calculate the probability distribution that a given TL can deliver a required power and mode purity. Several examples are given to demonstrate the usefulness of MORAMC in optimizing TL systems.
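A hedged toy sketch of the Monte Carlo tolerance idea only, not MORAMC's mode-converter physics: sample per-component misalignments within an assumed tolerance, map them to mode-purity loss through an assumed quadratic sensitivity, and estimate the probability of meeting a delivered-purity requirement. All coefficients and limits are invented for illustration.

```python
import numpy as np

# Hypothetical toy tolerance study: each of N components loses mode purity
# roughly quadratically with its misalignment; Monte Carlo over alignment
# tolerances gives the probability that the whole line meets a purity target.
rng = np.random.default_rng(42)
n_components, n_trials = 8, 100_000
tolerance_mrad = 0.5       # assumed 1-sigma alignment error per component
loss_coeff = 0.02          # assumed fractional purity loss per mrad^2

tilts = rng.normal(0.0, tolerance_mrad, size=(n_trials, n_components))
per_component_purity = np.clip(1.0 - loss_coeff * tilts ** 2, 0.0, 1.0)
total_purity = per_component_purity.prod(axis=1)

requirement = 0.95
print(f"median purity: {np.median(total_purity):.3f}, "
      f"P(purity >= {requirement}) = {(total_purity >= requirement).mean():.3f}")
```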
New Products and Perspectives from the Global Precipitation Measurement (GPM) Mission
NASA Astrophysics Data System (ADS)
Kummerow, C. D.; Randel, D.; Petkovic, V.
2016-12-01
The Global Precipitation Measurement (GPM) mission was launched in February 2014 as a joint mission between JAXA of Japan and NASA of the United States. GPM carries a state-of-the-art dual-frequency precipitation radar and a multi-channel passive microwave radiometer that acts not only to enhance the radar's retrieval capability, but also as a reference for a constellation of existing satellites carrying passive microwave sensors. In March of 2016, GPM released Version 4 of its precipitation products, which consist of radar, radiometer, and combined radar/radiometer products. Version 4 is the first in which a fully parametric radiometer algorithm has been implemented. This talk will focus on the consistency among the constellation radiometers, and on what the inconsistencies can tell us about the fundamental uncertainties within the rainfall products. This analysis will then be used to drive a bigger picture of how GPM's latest results inform the global water and energy budgets.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, development has been limited in the context of linear mixed effect models, and in particular, for small area estimation. Linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data and fit a varying coefficient linear mixed effect model in which the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite sample simulation studies.
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
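A hedged Monte Carlo sketch of the failure-probability metric named above, using a probabilistic uncertainty model for the parameters and a stand-in requirement function; nothing here is a controller or requirement from the paper, and the simple binomial upper bound is only one of several possible bounds.

```python
import numpy as np

# Estimate the probability that a closed-loop requirement g(p) <= 0 is violated
# over a probabilistic uncertainty model for the parameters p, plus a simple
# 95% binomial upper bound on that probability. g is a hypothetical stand-in.
def requirement_violation(p):
    # hypothetical requirement: a "settling metric" must stay below a limit
    return 0.4 * p[:, 0] ** 2 + 0.8 * p[:, 1] - 1.0   # > 0 means failure

rng = np.random.default_rng(7)
n = 200_000
params = rng.normal(loc=[0.0, 0.2], scale=[0.5, 0.3], size=(n, 2))
fail = requirement_violation(params) > 0.0

p_hat = fail.mean()
upper = p_hat + 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / n)
print(f"failure probability ~ {p_hat:.4f} (95% upper bound ~ {upper:.4f})")
```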
Aab, Alexander
2014-12-31
We report a study of the distributions of the depth of maximum, Xmax, of extensive air-shower profiles with energies above 10^17.8 eV as observed with the fluorescence telescopes of the Pierre Auger Observatory. The analysis method for selecting a data sample with minimal sampling bias is described in detail, as well as the experimental cross-checks and systematic uncertainties. Furthermore, we discuss the detector acceptance and the resolution of the Xmax measurement and provide parametrizations thereof as a function of energy. Finally, the energy dependence of the mean and standard deviation of the Xmax distributions are compared to air-shower simulations for different nuclear primaries and interpreted in terms of the mean and variance of the logarithmic mass distribution at the top of the atmosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Fang, Hongwei; Xu, Xingya; ...
2017-08-01
Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). Here, we formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. Furthermore, the correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. Our study highlights the importance of considering parametric uncertainty in estimating uncertainty/variability associated with simulated P transport.
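A hedged toy sketch of the "sample Kd from its measured variability" idea only, not the paper's SPDE transport model: draw Kd from an assumed lognormal fit and propagate it to the dissolved fraction of phosphorus under simple equilibrium partitioning. The distribution parameters and suspended-sediment concentration are invented for illustration.

```python
import numpy as np

# Toy equilibrium-partitioning sketch: sample the partition coefficient Kd from
# an assumed lognormal distribution and propagate it to the dissolved fraction
# f_diss = 1 / (1 + Kd * SS) for an assumed suspended-sediment concentration.
rng = np.random.default_rng(3)
n = 50_000
kd = rng.lognormal(mean=np.log(300.0), sigma=0.6, size=n)   # L/kg, assumed
ss = 2.0e-4                                                 # kg/L (200 mg/L), assumed

f_dissolved = 1.0 / (1.0 + kd * ss)
print(f"dissolved fraction: median {np.median(f_dissolved):.2f}, "
      f"5-95% range {np.percentile(f_dissolved, 5):.2f}-"
      f"{np.percentile(f_dissolved, 95):.2f}")
```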
Bayesian nonparametric adaptive control using Gaussian processes.
Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A
2015-03-01
Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become ineffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
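A hedged sketch of the nonparametric piece only: fitting a Gaussian process to sampled values of an unknown "uncertainty" function and querying its predictive mean and standard deviation, here with scikit-learn. The MRAC update laws and stability machinery from the paper are not shown, and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP to noisy samples of a stand-in uncertainty function, then query its
# predictive mean and std (the quantities an adaptive element would use).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X, y)

X_query = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)
print(np.round(mean, 2), np.round(std, 2))
```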
Model-based nonlinear control of hydraulic servo systems: Challenges, developments and perspectives
NASA Astrophysics Data System (ADS)
Yao, Jianyong
2018-06-01
Hydraulic servo systems play a significant role in industry and usually act as a core component in control and power transmission. Although linear theory-based control methods are well established, advanced controller design for hydraulic servo systems to achieve high performance remains an unending pursuit along with the development of modern industry. Essential nonlinearity is a unique feature that makes model-based nonlinear control attractive, as it benefits from prior knowledge of the servo-valve-controlled hydraulic system. In this paper, the challenges in model-based nonlinear control, the latest developments, and brief perspectives for hydraulic servo systems are discussed: modelling uncertainty in hydraulic systems is a major challenge, which includes parametric uncertainty and time-varying disturbance; some specific requirements also give rise to ad hoc difficulties, such as nonlinear friction during low-velocity tracking, severe disturbance, periodic disturbance, etc.; to handle these various challenges, nonlinear solutions including parameter adaptation, nonlinear robust control, state and disturbance observation, backstepping design and so on are proposed and integrated, and theoretical analysis together with numerous applications reveals their capability to solve the pertinent problems; finally, some perspectives and associated research topics (measurement noise, constraints, inner valve dynamics, input nonlinearity, etc.) in nonlinear hydraulic servo control are briefly explored and discussed.
Mid-frequency Band Dynamics of Large Space Structures
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.; Adams, Douglas S.
2004-01-01
High and low intensity dynamic environments experienced by a spacecraft during launch and on-orbit operations, respectively, induce structural loads and motions, which are difficult to reliably predict. Structural dynamics in low- and mid-frequency bands are sensitive to component interface uncertainty and non-linearity as evidenced in laboratory testing and flight operations. Analytical tools for prediction of linear system response are not necessarily adequate for reliable prediction of mid-frequency band dynamics and analysis of measured laboratory and flight data. A new MATLAB toolbox, designed to address the key challenges of mid-frequency band dynamics, is introduced in this paper. Finite-element models of major subassemblies are defined following rational frequency-wavelength guidelines. For computational efficiency, these subassemblies are described as linear, component mode models. The complete structural system model is composed of component mode subassemblies and linear or non-linear joint descriptions. Computation and display of structural dynamic responses are accomplished employing well-established, stable numerical methods, modern signal processing procedures and descriptive graphical tools. Parametric sensitivity and Monte-Carlo based system identification tools are used to reconcile models with experimental data and investigate the effects of uncertainties. Models and dynamic responses are exported for employment in applications, such as detailed structural integrity and mechanical-optical-control performance analyses.
The Historical and InstruMental SEismic cataLogue for France (HIMSELF)
NASA Astrophysics Data System (ADS)
Manchuel, Kevin; Traversa, Paola; Baumont, David; Cara, Michel; Nayman, Emmanuelle; Durouchoux, Christophe
2017-04-01
In regions that undergo low deformation rates, as is the case for metropolitan France, the use of historical seismicity in addition to instrumental seismicity is necessary for seismic hazard assessment. The goal is to extend the observation time window to better assess the seismogenic behavior of the crust and of specific geological structures. This paper presents the strategy adopted to develop a parametric earthquake catalogue, using Mw as the reference magnitude scale, that covers metropolitan France for both instrumental and historical times. Work performed in the framework of the SiHex (Cara et al., 2015) and SIGMA (EDF-CEA-AREVA-ENEL) projects, on instrumental and historical earthquakes respectively, is combined to produce the Historical and InstruMental SEismic cataLogue for France (HIMSELF). The SiHex catalogue is composed of 40 000 natural earthquakes, for which the hypocentral location (inferred from a 1D homogeneous location process and regional observatory estimates) and the Mw magnitude (from specific analysis of crustal wave coda - ML-LDG > 4.0 - and magnitude conversion laws) are given. In the frame of the SIGMA research program, an integrated study of historical seismicity is carried out, from the calibration of Empirical Macroseismic Prediction Equations (EMPEs) in Mw (Baumont et al., submitted) to their application to earthquakes of the SISFRANCE macroseismic database (BRGM, EDF, IRSN), through a dedicated strategy developed by Traversa et al. (submitted) to compute their Mw magnitude and depth. This inversion process takes into account the main macroseismic field specificities reported by SISFRANCE with a logic tree (LT) approach. It also captures the epistemic uncertainties associated with the macroseismic data and with the selection of EMPEs. For events that exhibit a poorly constrained macroseismic field (mainly old, cross-border, or offshore earthquakes), joint inversion of Mw and depth is not possible and an a priori depth needs to be set to calculate Mw. Regional a priori depths are defined here based on the analysis of the distribution of depths computed for earthquakes with a well constrained macroseismic field, for which joint inversion of Mw and depth is possible. In the end, the seismological parameters of 27% of the SISFRANCE earthquakes are jointly inverted, and for the remaining 73% Mw is calculated assuming a priori depths. The HIMSELF catalogue is composed of the SIGMA historical parametric catalogue from 463 to 1965 and of the SiHex instrumental catalogue from 1965 to 2009. All magnitudes are expressed in Mw, which makes the catalogue directly usable as an input for seismic hazard studies carried out in either a probabilistic or a deterministic way. Uncertainties on magnitudes and depths for historical earthquakes are provided in this study following the calculation scheme presented in Traversa et al. (submitted). Uncertainties on magnitudes for instrumental events are from Cara et al. (2016).
Royston, Patrick; Parmar, Mahesh K B
2014-08-07
Most randomized controlled trials with a time-to-event outcome are designed and analysed under the proportional hazards assumption, with a target hazard ratio for the treatment effect in mind. However, the hazards may be non-proportional. We address how to design a trial under such conditions, and how to analyse the results. We propose to extend the usual approach, a logrank test, to also include the Grambsch-Therneau test of proportional hazards. We test the resulting composite null hypothesis using a joint test for the hazard ratio and for time-dependent behaviour of the hazard ratio. We compute the power and sample size for the logrank test under proportional hazards, and from that we compute the power of the joint test. For the estimation of relevant quantities from the trial data, various models could be used; we advocate adopting a pre-specified flexible parametric survival model that supports time-dependent behaviour of the hazard ratio. We present the mathematics for calculating the power and sample size for the joint test. We illustrate the methodology in real data from two randomized trials, one in ovarian cancer and the other in treating cellulitis. We show selected estimates and their uncertainty derived from the advocated flexible parametric model. We demonstrate in a small simulation study that when a treatment effect either increases or decreases over time, the joint test can outperform the logrank test in the presence of both patterns of non-proportional hazards. Those designing and analysing trials in the era of non-proportional hazards need to acknowledge that a more complex type of treatment effect is becoming more common. Our method for the design of the trial retains the tools familiar in the standard methodology based on the logrank test, and extends it to incorporate a joint test of the null hypothesis with power against non-proportional hazards. For the analysis of trial data, we propose the use of a pre-specified flexible parametric model that can represent a time-dependent hazard ratio if one is present.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and make evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism assumption by comparing the datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, the analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
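A hedged sketch of the comparison idea: score each candidate sensitivity-parameter value by how close the data it simulates sit to the observed data, using average nearest-neighbour distances. The distance statistic and the toy MNAR simulator below are simple stand-ins, not the paper's exact construction.

```python
import numpy as np
from scipy.spatial import cKDTree

# Score a simulated dataset by the average nearest-neighbour distance from
# observed points to simulated points (smaller = more plausible).
def knn_score(observed, simulated):
    tree = cKDTree(simulated)
    dist, _ = tree.query(observed, k=1)
    return float(dist.mean())

def simulate_mnar(delta, n, rng):
    # hypothetical MNAR mechanism: the chance of being observed depends on the
    # value itself, shifted by the sensitivity parameter delta
    full = rng.normal(0.0, 1.0, size=(n, 1))
    keep = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(full[:, 0] + delta)))
    return full[keep]

rng = np.random.default_rng(5)
observed = rng.normal(0.3, 1.0, size=(300, 1))
for delta in [-1.0, 0.0, 1.0]:
    sims = simulate_mnar(delta, 2000, rng)
    print(f"delta={delta:+.1f}: mean NN distance = {knn_score(observed, sims):.3f}")
```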
NASA Astrophysics Data System (ADS)
Chou, Shuo-Ju
2011-12-01
In recent years the United States has shifted from a threat-based acquisition policy that developed systems for countering specific threats to a capabilities-based strategy that emphasizes the acquisition of systems that provide critical national defense capabilities. This shift in policy, in theory, allows for the creation of an "optimal force" that is robust against current and future threats regardless of the tactics and scenario involved. In broad terms, robustness can be defined as the insensitivity of an outcome to "noise" or non-controlled variables. Within this context, the outcome is the successful achievement of defense strategies and the noise variables are the tactics and scenarios that will be associated with current and future enemies. Unfortunately, a lack of system capability, budget, and schedule robustness against technology performance and development uncertainties has led to major setbacks in recent acquisition programs. This lack of robustness stems from the fact that immature technologies have uncertainties in their expected performance, development cost, and schedule that cause variations in system effectiveness and in program development budget and schedule requirements. Unfortunately, the Technology Readiness Assessment (TRA) process currently used by acquisition program managers and decision-makers to measure technology uncertainty at critical program decision junctions does not adequately capture the impact of technology performance and development uncertainty on program capability and development metrics. The Technology Readiness Level metric employed by the TRA to describe the uncertainties of program technology elements provides only a qualitative and non-descriptive estimate of those uncertainties. In order to assess program robustness, specifically requirements robustness, against technology performance and development uncertainties, a new process is needed. This process should provide acquisition program managers and decision-makers with the ability to assess or measure the robustness of program requirements against such uncertainties. A literature review of techniques for forecasting technology performance and development uncertainties and their subsequent impacts on capability, budget, and schedule requirements led to the conclusion that an analysis process coupling a probabilistic analysis technique, such as Monte Carlo simulation, with quantitative and parametric models of technology performance impact and technology development time and cost requirements would allow the probabilities of meeting specific constraints on these requirements to be established. These probability-of-requirements-success metrics can then be used as a quantitative and probabilistic measure of program requirements robustness against technology uncertainties. Combined with a Multi-Objective Genetic Algorithm optimization process and a computer-based Decision Support System, critical information regarding requirements robustness against technology uncertainties can be captured and quantified for acquisition decision-makers. This results in a more informed and justifiable selection of program technologies during initial program definition, as well as in the formulation of program development and risk management strategies.
To meet the stated research objective, the ENhanced TEchnology Robustness Prediction and RISk Evaluation (ENTERPRISE) methodology was formulated to provide a structured and transparent process for integrating these enabling techniques into a probabilistic and quantitative assessment of acquisition program requirements robustness against technology performance and development uncertainties. In order to demonstrate the capabilities of the ENTERPRISE method and test the research hypotheses, a demonstration application of the method was performed on a notional program for acquiring a carrier-based Suppression of Enemy Air Defenses (SEAD) capability using Unmanned Combat Aircraft Systems (UCAS) and their enabling technologies. The results of this implementation provided valuable insights regarding the benefits and inner workings of the methodology, as well as its limitations, which should be addressed in the future to narrow the gap between the current state and the desired state.
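A hedged, hypothetical sketch of the probability-of-requirements-success idea described above: sample technology performance, development cost, and schedule from assumed uncertainty distributions and report the fraction of samples that meet every constraint. The distributions, metrics, and limits below are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical "probability of requirements success" via Monte Carlo: sample
# uncertain outcomes and count the fraction meeting all requirement constraints.
rng = np.random.default_rng(11)
n = 100_000

range_nmi = rng.normal(1100.0, 120.0, n)                 # capability metric
cost_musd = rng.lognormal(np.log(450.0), 0.25, n)        # development cost
schedule_months = rng.triangular(48, 60, 84, n)          # development schedule

success = (range_nmi >= 1000.0) & (cost_musd <= 550.0) & (schedule_months <= 72.0)
print(f"P(all requirements met) = {success.mean():.3f}")
```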
A Semi-parametric Transformation Frailty Model for Semi-competing Risks Survival Data
Jiang, Fei; Haneuse, Sebastien
2016-01-01
In the analysis of semi-competing risks data interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ2. When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators is derived and small-sample operating characteristics evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer. PMID:28439147
NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasi-variational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION
NASA Technical Reports Server (NTRS)
Yam, Y.
1994-01-01
The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sample-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured with a curve fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is typically done in ground structural testing, AU-FREDI identifies only the key transfer function parameters and uncertainty bounds that are necessary for on-line design and tuning of robust controllers. AU-FREDI's system identification algorithms are independent of the JPL-LSCL environment, and can easily be extracted and modified for use with input/output data files. The basic approach of AU-FREDI's system identification algorithms is to non-parametrically identify the sampled data in the frequency domain using either stochastic or sine-dwell input, and then to obtain a parametric model of the transfer function by curve-fitting techniques. A cross-spectral analysis of the output error is used to determine the additive uncertainty in the estimated transfer function. The nominal transfer function estimate and the estimate of the associated additive uncertainty can be used for robust control analysis and design. AU-FREDI's I/O data transfer routines are tailored to the environment of the CALTECH/ JPL-LSCL which included a special operating system to interface with the testbed. Input commands for a particular experiment (wideband, narrowband, or sine-dwell) were computed on-line and then issued to respective actuators by the operating system. The operating system also took measurements through displacement sensors and passed them back to the software for storage and off-line processing. 
In order to make use of AU-FREDI's I/O data transfer routines, a user would need to provide an operating system capable of overseeing such functions between the software and the experimental setup at hand. The program documentation contains information designed to support users in either providing such an operating system or modifying the system identification algorithms for use with input/output data files. It provides a history of the theoretical, algorithmic and software development efforts including operating system requirements and listings of some of the various special purpose subroutines which were developed and optimized for Lahey FORTRAN compilers on IBM PC-AT computers before the subroutines were integrated into the system software. Potential purchasers are encouraged to purchase and review the documentation before purchasing the AU-FREDI software. AU-FREDI is distributed in DEC VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. AU-FREDI was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2015-01-01
The paper presents an over-parametrization free certainty equivalence state feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.
QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices
NASA Technical Reports Server (NTRS)
Hess, R. A.; Henderson, D. K.
1996-01-01
A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.
1984-09-28
... variables before simulation of the model - search for reality checks - express uncertainty as a probability density distribution. ... probability that the software contains errors. This prior is updated as test failure data are accumulated. Only a p of 1 (software known to contain ...) ... discussed; both parametric and nonparametric versions are presented. It is shown by the author that the bootstrap underlies the jackknife method and ...
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
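The contrast between the interval types above can be illustrated with a minimal sketch in Python with NumPy; the posterior samples and the regression estimate below are synthetic placeholders, not output from UCODE_2014 or DREAM:

```python
import numpy as np

# Hypothetical posterior samples for one model parameter, e.g. produced by a
# DREAM-style MCMC sampler after discarding burn-in (values are synthetic).
rng = np.random.default_rng(0)
posterior_samples = rng.lognormal(mean=0.0, sigma=0.4, size=20000)

# MCMC Bayesian credible interval: read percentiles directly from the samples,
# with no linearity or Gaussian-error assumption.
credible_95 = np.percentile(posterior_samples, [2.5, 97.5])

# Linear confidence interval: assumes a Gaussian error model around an
# optimized estimate with a standard error from regression statistics.
estimate, std_err = 1.08, 0.45
confidence_95 = (estimate - 1.96 * std_err, estimate + 1.96 * std_err)

print("95% credible interval  :", credible_95)
print("95% linear confidence interval:", confidence_95)
```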
Minsley, B.J.
2011-01-01
A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, 'best' model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favourably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment. © 2011 Geophysical Journal International © 2011 RAS.
NASA Astrophysics Data System (ADS)
Beutler, Florian; Saito, Shun; Seo, Hee-Jong; Brinkmann, Jon; Dawson, Kyle S.; Eisenstein, Daniel J.; Font-Ribera, Andreu; Ho, Shirley; McBride, Cameron K.; Montesano, Francesco; Percival, Will J.; Ross, Ashley J.; Ross, Nicholas P.; Samushia, Lado; Schlegel, David J.; Sánchez, Ariel G.; Tinker, Jeremy L.; Weaver, Benjamin A.
2014-09-01
We analyse the anisotropic clustering of the Baryon Oscillation Spectroscopic Survey (BOSS) CMASS Data Release 11 (DR11) sample, which consists of 690 827 galaxies in the redshift range 0.43 < z < 0.7 and has a sky coverage of 8498 deg². We perform our analysis in Fourier space using a power spectrum estimator suggested by Yamamoto et al. We measure the multipole power spectra in a self-consistent manner for the first time in the sense that we provide a proper way to treat the survey window function and the integral constraint, without the commonly used assumption of an isotropic power spectrum and without the need to split the survey into subregions. The main cosmological signals exploited in our analysis are the baryon acoustic oscillations and the signal of redshift space distortions, both of which are distorted by the Alcock-Paczynski effect. Together, these signals allow us to constrain the distance ratio D_V(z_eff)/r_s(z_d) = 13.89 ± 0.18, the Alcock-Paczynski parameter F_AP(z_eff) = 0.679 ± 0.031 and the growth rate of structure f(z_eff)σ_8(z_eff) = 0.419 ± 0.044 at the effective redshift z_eff = 0.57. We emphasize that our constraints are robust against possible systematic uncertainties. In order to ensure this, we perform a detailed systematics study against CMASS mock galaxy catalogues and N-body simulations. We find that such systematics will lead to 3.1 per cent uncertainty for fσ_8 if we limit our fitting range to k = 0.01-0.20 h Mpc⁻¹, where the statistical uncertainty is expected to be three times larger. We did not find significant systematic uncertainties for D_V/r_s or F_AP. Combining our data set with Planck to test General Relativity (GR) through the simple γ-parametrization, where the growth rate is given by f(z) = Ω_m(z)^γ, reveals a ~2σ tension between the data and the prediction by GR. The tension between our result and GR can be traced back to a tension in the clustering amplitude σ_8 between CMASS and Planck.
Kramer, Gerbrand Maria; Frings, Virginie; Heijtel, Dennis; Smit, E F; Hoekstra, Otto S; Boellaard, Ronald
2017-06-01
The objective of this study was to validate several parametric methods for quantification of 3'-deoxy-3'-18F-fluorothymidine (18F-FLT) PET in advanced-stage non-small cell lung carcinoma (NSCLC) patients with an activating epidermal growth factor receptor mutation who were treated with gefitinib or erlotinib. Furthermore, we evaluated the impact of noise on accuracy and precision of the parametric analyses of dynamic 18F-FLT PET/CT to assess the robustness of these methods. Methods: Ten NSCLC patients underwent dynamic 18F-FLT PET/CT at baseline and 7 and 28 d after the start of treatment. Parametric images were generated using plasma input Logan graphic analysis and 2 basis functions-based methods: a 2-tissue-compartment basis function model (BFM) and spectral analysis (SA). Whole-tumor-averaged parametric pharmacokinetic parameters were compared with those obtained by nonlinear regression of the tumor time-activity curve using a reversible 2-tissue-compartment model with blood volume fraction. In addition, 2 statistically equivalent datasets were generated by countwise splitting the original list-mode data, each containing 50% of the total counts. Both new datasets were reconstructed, and parametric pharmacokinetic parameters were compared between the 2 replicates and the original data. Results: After the settings of each parametric method were optimized, distribution volumes (V_T) obtained with Logan graphic analysis, BFM, and SA all correlated well with those derived using nonlinear regression at baseline and during therapy (R² ≥ 0.94; intraclass correlation coefficient > 0.97). SA-based V_T images were most robust to increased noise on a voxel level (repeatability coefficient, 16% vs. >26%). Yet BFM generated the most accurate K_1 values (R² = 0.94; intraclass correlation coefficient, 0.96). Parametric K_1 data showed a larger variability in general; however, no differences were found in robustness between methods (repeatability coefficient, 80%-84%). Conclusion: Both BFM and SA can generate quantitatively accurate parametric 18F-FLT V_T images in NSCLC patients before and during therapy. SA was more robust to noise, yet BFM provided more accurate parametric K_1 data. We therefore recommend BFM as the preferred parametric method for analysis of dynamic 18F-FLT PET/CT studies; however, SA can also be used. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Effect of monthly areal rainfall uncertainty on streamflow simulation
NASA Astrophysics Data System (ADS)
Ndiritu, J. G.; Mkhize, N.
2017-08-01
Areal rainfall is mostly obtained from point rainfall measurements that are sparsely located, and several studies have shown that this results in large areal rainfall uncertainties at the daily time step. However, water resources assessment is often carried out at a monthly time step, and streamflow simulation is usually an essential component of this assessment. This study set out to quantify monthly areal rainfall uncertainties and assess their effect on streamflow simulation. This was achieved by: (i) quantifying areal rainfall uncertainties and using these to generate stochastic monthly areal rainfalls, and (ii) finding out how the quality of monthly streamflow simulation and streamflow variability change if stochastic areal rainfalls are used instead of historic areal rainfalls. Tests on monthly rainfall uncertainty were carried out using data from two South African catchments, while streamflow simulation was confined to one of them. A non-parametric model that had been applied at a daily time step was used for stochastic areal rainfall generation, and the Pitman catchment model calibrated using the SCE-UA optimizer was used for streamflow simulation. 100 randomly-initialised calibration-validation runs using 100 stochastic areal rainfalls were compared with 100 runs obtained using the single historic areal rainfall series. By using 4 rain gauges alternately to obtain areal rainfall, the resulting differences in areal rainfall averaged to 20% of the mean monthly areal rainfall, and rainfall uncertainty was therefore highly significant. Pitman model simulations obtained coefficients of efficiency averaging 0.66 and 0.64 in calibration and validation using historic rainfalls, while the respective values using stochastic areal rainfalls were 0.59 and 0.57. Average bias was less than 5% in all cases. The streamflow ranges using historic rainfalls averaged to 29% of the mean naturalised flow in calibration and validation, and the respective average ranges using stochastic monthly rainfalls were 86 and 90% of the mean naturalised streamflow. In calibration, 33% of the naturalised flows were located within the streamflow ranges obtained with historic rainfall simulations, and using stochastic rainfalls increased this to 66%. In validation, the respective percentages of naturalised flows located within the simulated streamflow ranges were 32 and 72%. The analysis reveals that monthly areal rainfall uncertainty is significant and incorporating it into streamflow simulation would add validity to the results.
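A minimal sketch of the range-coverage comparison described above, in Python with NumPy; the ensembles and observed series are synthetic stand-ins for the Pitman model runs and the naturalised flows:

```python
import numpy as np

def range_coverage(simulated, observed):
    """Fraction of observed monthly flows lying inside the ensemble range.

    simulated : array of shape (n_runs, n_months), one row per calibration run
    observed  : array of shape (n_months,), e.g. naturalised streamflow
    """
    lower = simulated.min(axis=0)
    upper = simulated.max(axis=0)
    inside = (observed >= lower) & (observed <= upper)
    return inside.mean()

# Synthetic illustration: 100 runs, 240 months (all values hypothetical).
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=50.0, size=240)
runs_historic = obs * rng.normal(1.0, 0.15, size=(100, 240))    # narrow spread
runs_stochastic = obs * rng.normal(1.0, 0.45, size=(100, 240))  # wider spread

print("coverage, historic rainfall  :", range_coverage(runs_historic, obs))
print("coverage, stochastic rainfall:", range_coverage(runs_stochastic, obs))
```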
NASA Astrophysics Data System (ADS)
Islam, Siraj Ul; Déry, Stephen J.
2017-03-01
This study evaluates predictive uncertainties in the snow hydrology of the Fraser River Basin (FRB) of British Columbia (BC), Canada, using the Variable Infiltration Capacity (VIC) model forced with several high-resolution gridded climate datasets. These datasets include the Canadian Precipitation Analysis and the thin-plate smoothing splines (ANUSPLIN), North American Regional Reanalysis (NARR), University of Washington (UW) and Pacific Climate Impacts Consortium (PCIC) gridded products. Uncertainties are evaluated at different stages of the VIC implementation, starting with the driving datasets, optimization of model parameters, and model calibration during cool and warm phases of the Pacific Decadal Oscillation (PDO). The inter-comparison of the forcing datasets (precipitation and air temperature) and their VIC simulations (snow water equivalent - SWE - and runoff) reveals widespread differences over the FRB, especially in mountainous regions. The ANUSPLIN precipitation shows a considerable dry bias in the Rocky Mountains, whereas the NARR winter air temperature is 2 °C warmer than the other datasets over most of the FRB. In the VIC simulations, the elevation-dependent changes in the maximum SWE (maxSWE) are more prominent at higher elevations of the Rocky Mountains, where the PCIC-VIC simulation accumulates too much SWE and ANUSPLIN-VIC yields an underestimation. Additionally, at each elevation range, the day of maxSWE varies from 10 to 20 days between the VIC simulations. The snow melting season begins early in the NARR-VIC simulation, whereas the PCIC-VIC simulation delays the melting, indicating seasonal uncertainty in SWE simulations. When compared with the observed runoff for the Fraser River main stem at Hope, BC, the ANUSPLIN-VIC simulation shows considerable underestimation of runoff throughout the water year owing to reduced precipitation in the ANUSPLIN forcing dataset. The NARR-VIC simulation yields more winter and spring runoff and earlier decline of flows in summer due to a nearly 15-day earlier onset of the FRB springtime snowmelt. Analysis of the parametric uncertainty in the VIC calibration process shows that the choice of the initial parameter range plays a crucial role in defining the model hydrological response for the FRB. Furthermore, the VIC calibration process is biased toward cool and warm phases of the PDO and the choice of proper calibration and validation time periods is important for the experimental setup. Overall the VIC hydrological response is prominently influenced by the uncertainties involved in the forcing datasets rather than those in its parameter optimization and experimental setups.
Parametric analysis of ATM solar array.
NASA Technical Reports Server (NTRS)
Singh, B. K.; Adkisson, W. B.
1973-01-01
The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
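The third and fourth programs described above amount to fitting and then evaluating a polynomial. A minimal sketch in Python with NumPy follows; the cell characteristic values and the polynomial order are hypothetical illustrations, not SKYLAB test data:

```python
import numpy as np

# Hypothetical solar-cell characteristic (e.g. open-circuit voltage in volts)
# measured at several simulator temperatures in degrees Celsius.
temperature = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
v_oc = np.array([0.62, 0.58, 0.54, 0.50, 0.46])

# Fit a polynomial describing the characteristic versus temperature,
# analogous to the polynomial fit program described above.
coeffs = np.polyfit(temperature, v_oc, deg=2)

# Use the fitted coefficients to generate parametric data at conditions
# other than those tested, e.g. at 60 degrees C.
v_oc_at_60 = np.polyval(coeffs, 60.0)
print("polynomial coefficients:", coeffs)
print("predicted V_oc at 60 C :", v_oc_at_60)
```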
The parametric resonance—from LEGO Mindstorms to cold atoms
NASA Astrophysics Data System (ADS)
Kawalec, Tomasz; Sierant, Aleksandra
2017-07-01
We show an experimental setup based on a popular LEGO Mindstorms set, allowing us to both observe and investigate the parametric resonance phenomenon. The presented method is simple but covers a variety of student activities like embedded software development, conducting measurements, data collection and analysis. It may be used during science shows, as part of student projects, and to illustrate parametric resonance in mechanics or even quantum physics during lectures or classes. The parametrically driven LEGO pendulum gains energy in a spectacular way, increasing its amplitude from 10° to about 100° within a few tens of seconds. We also provide a short description of a wireless absolute orientation sensor that may be used in quantitative analysis of driven or free pendulum movement.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns that arise in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis could be a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data for studying their association with disease and health.
Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters
NASA Astrophysics Data System (ADS)
Kim, T.; Kim, Y. S.
2017-12-01
The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observed data have statistical stationarity, and a parametric method considering the parameters of a probability distribution is applied. For a parametric method, it is necessary to collect sufficient reliable data; however, snowfall observations in Korea need to be supplemented because the number of days with snowfall observations and the mean maximum daily snowfall depth are decreasing due to climate change. In this study, we conducted frequency analysis for snowfall using the Bootstrap method and the SIR algorithm, which are resampling methods that can overcome the problems of insufficient data. For the 58 meteorological stations distributed evenly across Korea, the probabilistic snowfall depth was estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth obtained by frequency analysis is decreased at most stations, and the rate of change at most stations was found to be consistent between the parametric and non-parametric frequency analyses. This study shows that the resampling methods can be used for frequency analysis of snowfall depth when the observed samples are insufficient, and the approach can be applied to the interpretation of other natural disasters such as summer typhoons with seasonal characteristics. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
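As an illustration of the non-parametric resampling idea, the following sketch estimates a return-period snowfall depth by bootstrapping annual maxima (Python with NumPy); the depths, return period, and interval levels are hypothetical, and the SIR step is omitted:

```python
import numpy as np

def bootstrap_return_level(annual_maxima, return_period, n_boot=10000, seed=0):
    """Non-parametric bootstrap estimate of a T-year snowfall depth.

    The empirical quantile at non-exceedance probability 1 - 1/T is computed
    for each resample, giving a distribution of plausible return levels.
    """
    rng = np.random.default_rng(seed)
    p = 1.0 - 1.0 / return_period
    n = len(annual_maxima)
    levels = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(annual_maxima, size=n, replace=True)
        levels[b] = np.quantile(resample, p)
    return np.median(levels), np.percentile(levels, [5, 95])

# Hypothetical annual maximum daily snowfall depths (cm) at one station.
depths = np.array([12.0, 8.5, 22.1, 5.0, 17.3, 9.8, 30.2, 14.6, 11.0, 19.4,
                   7.2, 25.8, 10.1, 16.0, 13.3])
median_level, (lo, hi) = bootstrap_return_level(depths, return_period=10)
print(f"10-year depth ≈ {median_level:.1f} cm (90% bootstrap interval {lo:.1f}-{hi:.1f})")
```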
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
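The Monte Carlo workflow described above can be sketched as follows; the reaction model here is a toy placeholder for a scripted NETPATH run, and all concentrations and coefficients of variation are hypothetical:

```python
import numpy as np

def run_reaction_model(concentrations):
    """Placeholder for a NETPATH-style inverse model run.

    In the workflow described above, a script would write these perturbed
    dissolved-ion concentrations to a NETPATH input file, run the simulation,
    and parse source-water fractions and carbon-14 travel time from the output.
    Here a toy function stands in for that external call.
    """
    return {"mixing_fraction": 0.4 + 0.002 * (concentrations["Ca"] - 50.0),
            "travel_time_yr": 8000.0 + 40.0 * (concentrations["DIC"] - 300.0)}

# Mean concentrations (mg/L) and coefficients of variation for each constraint
# (values are hypothetical).
means = {"Ca": 50.0, "Mg": 20.0, "DIC": 300.0}
cv = {"Ca": 0.05, "Mg": 0.08, "DIC": 0.10}

rng = np.random.default_rng(42)
results = []
for _ in range(5000):
    sample = {ion: rng.normal(m, cv[ion] * m) for ion, m in means.items()}
    results.append(run_reaction_model(sample))

travel_times = np.array([r["travel_time_yr"] for r in results])
print("carbon-14 travel time: mean %.0f yr, 5th-95th percentile %.0f-%.0f yr"
      % (travel_times.mean(), *np.percentile(travel_times, [5, 95])))
```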
Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data
NASA Astrophysics Data System (ADS)
Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.
2017-12-01
The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality), which impacts on the achievable constraint. However, using all eight observable quantities together we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as we reduce a large sample of model variants (over 1 million) that cover the full parametric uncertainty to around 1% that are observationally plausible. Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties including measurement uncertainties, structural model uncertainties and the model discrepancy from reality. Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive set of real aerosol and radiation observations taken from ground stations, flight campaigns and satellite. This research has been supported by the UK-China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund, and by the NERC funded GASSP project.
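A minimal sketch of the observational-constraint idea, in which ensemble members inconsistent with an observable are discarded and the spread of the unobservable forcing shrinks; the ensemble, the observable's relationship to forcing, and the measurement values are synthetic, not HadGEM-UKCA output:

```python
import numpy as np

rng = np.random.default_rng(7)
n_members = 1_000_000  # ensemble of model variants sampled from parameter space

# Hypothetical ensemble outputs: aerosol forcing (unobservable) and one
# observable quantity (e.g. aerosol optical depth) correlated with it.
forcing = rng.normal(-1.2, 0.6, n_members)
observable = 0.15 - 0.08 * forcing + rng.normal(0.0, 0.03, n_members)

# Observational constraint: retain only members consistent with a measurement
# of the observable within its uncertainty (values are illustrative).
obs_value, obs_sigma = 0.24, 0.02
plausible = np.abs(observable - obs_value) < 2.0 * obs_sigma

spread_before = forcing.std()
spread_after = forcing[plausible].std()
print(f"plausible members: {plausible.mean():.1%} of ensemble")
print(f"forcing spread reduced by {100 * (1 - spread_after / spread_before):.0f}%")
```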
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
The management of rivers for improving safety, shipping and environment requires conscious effort on the part of river managers. River engineers design hydraulic works to tackle various challenges, from increasing flow conveyance to ensuring minimal water depths for environmental flow and inland shipping. Last year saw the completion of such large scale river engineering in the 'Room for the River' programme for the Dutch Rhine River system, in which several dozen human interventions were built to increase flood safety. Engineering works in rivers are not completed in isolation from society. Rather, their benefits (increased safety, landscaping beauty) and their disadvantages (expropriation, hindrance) directly affect inhabitants. Therefore river managers are required to carefully defend their plans. The effect of engineering works on river dynamics is being evaluated using hydraulic river models. Two-dimensional numerical models based on the shallow water equations provide the predictions necessary to make decisions on designs and future plans. However, like all environmental models, these predictions are subject to uncertainty. In recent years progress has been made in the identification of the main sources of uncertainty for hydraulic river models. Two of the most important sources are boundary conditions and hydraulic roughness (Warmink et al. 2013). The result of these sources of uncertainty is that the identification of a single, deterministic prediction model is a non-trivial task. This is a well-understood problem in other fields as well, most notably hydrology, and is known as equifinality. However, the particular case of human intervention modelling with hydraulic river models compounds the equifinality problem. The model that provides the reference baseline situation is usually identified through calibration and afterwards modified for the engineering intervention. This results in two distinct models, the evaluation of which yields the effect of the proposed intervention. The implicit assumption underlying such analysis is that both models are commensurable. We hypothesize that they are commensurable only to a certain extent. In an idealised study we have demonstrated that prediction performance loss should be expected with increasingly large engineering works. When accounting for parametric uncertainty of floodplain roughness in model identification, we see uncertainty bounds for predicted effects of interventions increase with increasing intervention scale. Calibration of these types of models therefore seems to have a shelf-life, beyond which calibration no longer improves prediction. Therefore a qualification scheme for model use is required that can be linked to model validity. In this study, we characterize model use along three dimensions: extrapolation (using the model with different external drivers), extension (using the model for different output or indicators) and modification (using modified models). Such use of models is expected to have implications for the applicability of surrogate modelling for efficient uncertainty analysis as well, which is recommended for future research. Warmink, J. J.; Straatsma, M. W.; Huthoff, F.; Booij, M. J. & Hulscher, S. J. M. H. 2013. Uncertainty of design water levels due to combined bed form and vegetation roughness in the Dutch river Waal. Journal of Flood Risk Management 6, 302-318. DOI: 10.1111/jfr3.12014
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Gawiser, Eric
2017-06-01
The Dense Basis SED fitting method reveals previously inaccessible information about the number and duration of star formation episodes and the timing of stellar mass assembly as well as uncertainties in these quantities, in addition to accurately recovering traditional SED parameters including M*, SFR and dust attenuation. This is done using basis Star Formation Histories (SFHs) chosen by comparing the goodness-of-fit of mock galaxy SEDs to the goodness-of-reconstruction of their SFHs, trained and validated using three independent datasets of mock galaxies at z=1 from SAMs, Hydrodynamic simulations and stochastic realizations. Of the six parametrizations of SFHs considered, we reject the traditional parametrizations of constant and exponential SFHs and suggest four novel improvements, quantifying the bias and scatter of each parametrization. We then apply the method to a sample of 1100 CANDELS GOODS-S galaxies at 1
Bower, Hannah; Andersson, Therese M-L; Crowther, Michael J; Dickman, Paul W; Lambe, Mats; Lambert, Paul C
2018-04-01
Expected or reference mortality rates are commonly used in the calculation of measures such as relative survival in population-based cancer survival studies and standardized mortality ratios. These expected rates are usually presented according to age, sex, and calendar year. In certain situations, stratification of expected rates by other factors is required to avoid potential bias if interest lies in quantifying measures according to such factors as, for example, socioeconomic status. If data are not available on a population level, information from a control population could be used to adjust expected rates. We have presented two approaches for adjusting expected mortality rates using information from a control population: a Poisson generalized linear model and a flexible parametric survival model. We used a control group from BCBaSe (a register-based, matched breast cancer cohort in Sweden with diagnoses between 1992 and 2012) to illustrate the two methods using socioeconomic status as a risk factor of interest. Results showed that the Poisson and flexible parametric survival approaches estimate similar adjusted mortality rates according to socioeconomic status. The additional uncertainty involved in the methods to estimate stratified, expected mortality rates described in this study can be accounted for using a parametric bootstrap, but this might make little difference if using a large control population.
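A minimal sketch of the Poisson-rate adjustment with a parametric bootstrap, using statsmodels; the deaths, person-years, covariates, and group labels are hypothetical, and the bootstrap here draws coefficients from their asymptotic normal distribution rather than refitting to simulated data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical control-population data: deaths and person-years by age group
# and socioeconomic status (ses = 0/1), used to adjust expected rates.
age = np.repeat([60, 65, 70, 75], 2)
ses = np.tile([0, 1], 4)
person_years = np.array([4000, 3500, 3800, 3300, 3600, 3100, 3400, 2900])
deaths = np.array([40, 52, 55, 70, 78, 95, 110, 135])

X = sm.add_constant(np.column_stack([age, ses]))
model = sm.GLM(deaths, X, family=sm.families.Poisson(),
               offset=np.log(person_years))
fit = model.fit()

# Parametric bootstrap: resample coefficients from their estimated normal
# distribution to propagate the extra uncertainty into the adjusted rates.
rng = np.random.default_rng(3)
draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=5000)
x_new = np.array([1.0, 70, 1])      # constant, age 70, ses group 1
rates = np.exp(draws @ x_new)       # expected mortality rate per person-year
print("adjusted rate: %.4f (95%% interval %.4f-%.4f)"
      % (np.exp(fit.params @ x_new), *np.percentile(rates, [2.5, 97.5])))
```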
Adaptive Control for Microgravity Vibration Isolation System
NASA Technical Reports Server (NTRS)
Yang, Bong-Jun; Calise, Anthony J.; Craig, James I.; Whorton, Mark S.
2005-01-01
Most active vibration isolation systems that try to provide a quiescent acceleration environment for space science experiments have utilized linear design methods. In this paper, we address adaptive control augmentation of an existing classical controller that employs a high-gain acceleration feedback together with a low-gain position feedback to center the isolated platform. The control design accounts for parametric and dynamic uncertainties because the hardware of the isolation system is built as a payload-level isolator, and the acceleration sensor exhibits a significant bias. A neural network is incorporated to adaptively compensate for the system uncertainties, and a high-pass filter is introduced to mitigate the effect of the measurement bias. Simulations show that the adaptive control improves the performance of the existing acceleration controller and keeps the deviation of the isolated platform at the level achieved by the existing control system.
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the linear bias parameter b0 can be estimated to within 1%-2% accuracy at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
Power flow analysis of two coupled plates with arbitrary characteristics
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
In the last progress report (Feb. 1988) some results were presented for a parametric analysis of the vibrational power flow between two coupled plate structures using the mobility power flow approach. The results reported then were for changes in the structural parameters of the two plates, but with the two plates identical in their structural characteristics. Herein, this limitation is removed. The vibrational power input and output are evaluated for different values of the structural damping loss factor for the source and receiver plates. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. The results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits derived from using the mobility power flow approach are examined.
A Strategy for a Parametric Flood Insurance Using Proxies
NASA Astrophysics Data System (ADS)
Haraguchi, M.; Lall, U.
2017-12-01
Traditionally, the design of flood control infrastructure and flood plain zoning require the estimation of return periods, which have been calculated by river hydraulic models together with rainfall-runoff models. However, this multi-step modeling process introduces significant uncertainty into the assessment of inundation. In addition, land use change and a changing climate alter the potential losses, as well as make the modeling results obsolete. For these reasons, there is a strong need to create parametric indexes for the financial risk transfer of large flood events, to enable rapid response and recovery. Hence, this study examines the possibility of developing a parametric flood index at the national or regional level in Asia, which can be quickly mobilized after catastrophic floods. Specifically, we compare a single trigger based on a rainfall index with multiple triggers using rainfall and streamflow indices by conducting case studies in Bangladesh and Thailand. The proposed methodology is 1) selecting suitable indices of rainfall and streamflow (if available), 2) identifying trigger levels for specified return periods for losses using stepwise and logistic regressions, 3) measuring the performance of the indices, and 4) deriving return periods of selected windows and trigger levels. Based on the methodology, actual trigger levels were identified for Bangladesh and Thailand. Models based on multiple triggers reduced basis risk, an inherent problem in index insurance. The proposed parametric flood index can be applied to countries with similar geographic and meteorological characteristics, and serves as a promising method for ex-ante risk financing for developing countries. This work is intended to be a preliminary work supporting future work on pricing risk transfer mechanisms in ex-ante risk finance.
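Step 2 of the methodology, identifying a trigger level through logistic regression, can be sketched as follows (Python with scikit-learn); the rainfall index, loss flags, and the 50% probability rule below are illustrative assumptions rather than the calibration used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical record: a seasonal rainfall index (mm) and a binary
# flag for whether insured flood losses exceeded a payout threshold.
rng = np.random.default_rng(11)
rain_index = rng.gamma(shape=6.0, scale=60.0, size=200)
loss_exceeded = (rain_index + rng.normal(0, 60, 200) > 450).astype(int)

# Logistic model linking the index to the probability of a qualifying loss.
clf = LogisticRegression().fit(rain_index.reshape(-1, 1), loss_exceeded)

# Trigger level: index value at which the modeled exceedance probability
# crosses 50% (a payout would be released above this level).
grid = np.linspace(rain_index.min(), rain_index.max(), 1000).reshape(-1, 1)
prob = clf.predict_proba(grid)[:, 1]
trigger = grid[np.argmax(prob >= 0.5), 0]
print(f"estimated trigger level: {trigger:.0f} mm")
```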
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system in the feedback control loop is considered globally uncertain. Parametric uncertainty is modeled in interval form. The system parameter is undergoing degradation (the prognostic candidate) and its degradation model is assumed to be known a priori. The detection of degradation commencement is done in a passive manner which involves interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs). The latter forms an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (the prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the estimation of the state of health (the state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters is achieved in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties, which arise from noisy measurements, the parametric degradation process, environmental conditions, etc., are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
Pant, Sanjay; Lombardi, Damiano
2015-10-01
A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
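A minimal sketch of the k-nearest-neighbour entropy estimation underlying such an information-gain calculation (Python with SciPy); the Kozachenko-Leonenko form used here and the Gaussian samples are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-nearest-neighbour estimate of differential entropy.

    samples : array of shape (n, d) drawn from the distribution of interest
    (e.g. a prior or posterior over model parameters).
    """
    x = np.atleast_2d(samples)
    n, d = x.shape
    tree = cKDTree(x)
    # distance to the k-th neighbour, excluding the point itself
    r = tree.query(x, k=k + 1)[0][:, -1]
    log_vd = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)  # unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r))

# Example: expected information gain about a parameter = prior entropy minus
# posterior entropy (here with synthetic Gaussian samples).
rng = np.random.default_rng(5)
prior = rng.normal(0.0, 2.0, size=(5000, 1))
posterior = rng.normal(0.7, 0.5, size=(5000, 1))
print("information gain ≈ %.2f nats" % (knn_entropy(prior) - knn_entropy(posterior)))
```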
Vendrell, Oriol; Brill, Michael; Gatti, Fabien; Lauvergnat, David; Meyer, Hans-Dieter
2009-06-21
Quantum dynamical calculations are reported for the zero point energy, several low-lying vibrational states, and the infrared spectrum of the H(5)O(2)(+) cation. The calculations are performed by the multiconfiguration time-dependent Hartree (MCTDH) method. A new vector parametrization based on a mixed Jacobi-valence description of the system is presented. With this parametrization the potential energy surface coupling is reduced with respect to a full Jacobi description, providing a better convergence of the n-mode representation of the potential. However, new coupling terms appear in the kinetic energy operator. These terms are derived and discussed. A mode-combination scheme based on six combined coordinates is used, and the representation of the 15-dimensional potential in terms of a six-combined mode cluster expansion including up to some 7-dimensional grids is discussed. A statistical analysis of the accuracy of the n-mode representation of the potential at all orders is performed. Benchmark, fully converged results are reported for the zero point energy, which lie within the statistical uncertainty of the reference diffusion Monte Carlo result for this system. Some low-lying vibrationally excited eigenstates are computed by block improved relaxation, illustrating the applicability of the approach to large systems. Benchmark calculations of the linear infrared spectrum are provided, and convergence with increasing size of the time-dependent basis and as a function of the order of the n-mode representation is studied. The calculations presented here make use of recent developments in the parallel version of the MCTDH code, which are briefly discussed. We also show that the infrared spectrum can be computed, to a very good approximation, within D(2d) symmetry, instead of the G(16) symmetry used before, in which the complete rotation of one water molecule with respect to the other is allowed, thus simplifying the dynamical problem.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; the F-test, lack-of-fit test and residual normal probability plots implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effect of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
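For reference, first-order Sobol' indices of the kind mentioned above can be estimated with a plain Monte Carlo (Saltelli-style) scheme; the sketch below uses a toy response function and uniform factor ranges rather than the fitted regression models from the study:

```python
import numpy as np

def first_order_sobol(model, bounds, n=20000, seed=0):
    """Monte Carlo estimate of first-order Sobol' sensitivity indices.

    model  : callable taking an (n, d) array of inputs, returning n outputs
    bounds : list of (low, high) tuples for each uniformly distributed factor
    Uses the Saltelli-style estimator S_i = mean(f_B * (f_ABi - f_A)) / Var(Y).
    """
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    f_A, f_B = model(A), model(B)
    var_y = np.var(np.concatenate([f_A, f_B]))
    indices = []
    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]  # replace factor i with the independent sample
        indices.append(np.mean(f_B * (model(AB_i) - f_A)) / var_y)
    return np.array(indices)

# Toy response standing in for the fitted mass-loss model: factor 0 dominates.
def toy_model(x):
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]

print(first_order_sobol(toy_model, bounds=[(0, 1), (0, 1), (0, 1)]))
```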
Design and Analysis of Morpheus Lander Flight Control System
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Yang, Lee; Fritz, Mathew; Nguyen, Louis H.; Johnson, Wyatt R.; Hart, Jeremy J.
2014-01-01
The Morpheus Lander is a vertical takeoff and landing test bed vehicle developed to demonstrate the system performance of the Guidance, Navigation and Control (GN&C) system capability for the integrated autonomous landing and hazard avoidance system hardware and software. The Morpheus flight control system design must be robust to various mission profiles. This paper presents a design methodology for employing numerical optimization to develop the Morpheus flight control system. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics and propellant slosh. Under the assumption that the Morpheus time-varying dynamics and control system can be frozen over a short period of time, the flight controllers are designed to stabilize all selected frozen-time control systems in the presence of parametric uncertainty. Both control gains in the inner attitude control loop and guidance gains in the outer position control loop are designed to maximize the vehicle performance while ensuring robustness. The flight control system designs provided herein have been demonstrated to provide stable control systems in both Draper Ares Stability Analysis Tool (ASAT) and the NASA/JSC Trick-based Morpheus time domain simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corte, Frans de; Vandenberghe, Dimitri; Wispelaere, Antoine de
In luminescence dating of sediments, one of the most interesting tools for the determination of the annual radiation dose is Ge γ-ray spectrometry. Indeed, it yields information both on the content of the radioelements K, Th, and U, and on the occurrence, in geological times, of disequilibria in the Th and U decay series. In the present work, two methodological variants of the γ-spectrometric analysis were tested, which largely depend on the quality of the nuclear decay data involved: (1) a parametric calibration of the sediment measurements, and (2) the correction for the heavy spectral interference of the 226Ra 186.2 keV peak by 235U at 185.7 keV. The performance of these methods was examined via the analysis of three Certified Reference Materials, with the introduction of γ-ray intensity data originating from ENSDF. Relevant conclusions were drawn as to the accuracy of the data and their quoted uncertainties.
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights and significance of vertices. Centrality analysis typically applies a method based on only one property of the graph vertices. In graph theory, centrality is analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, page-rank, status, Katz centrality and eigenvector centrality. We propose a new method of multi-parametric centrality, which includes a number of basic properties of the network member simultaneously. A mathematical model of the multi-parametric centrality method is developed, and its results are compared with those of the existing centrality methods. To evaluate the results of the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis demonstrates the accuracy of the presented method, which simultaneously includes a number of basic properties of the vertices.
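A toy version of a multi-parametric (composite) centrality can be assembled from standard NetworkX measures; the min-max normalization and equal weights below are illustrative choices, not the weighting scheme of the paper:

```python
import networkx as nx
import numpy as np

def multi_parametric_centrality(G, weights=(0.25, 0.25, 0.25, 0.25)):
    """Composite centrality combining several per-vertex measures.

    Each classical measure is min-max normalized and combined with the given
    weights; the scheme here is illustrative only.
    """
    measures = [nx.degree_centrality(G),
                nx.closeness_centrality(G),
                nx.betweenness_centrality(G),
                nx.eigenvector_centrality(G, max_iter=1000)]
    scores = {}
    for w, m in zip(weights, measures):
        vals = np.array(list(m.values()))
        lo, hi = vals.min(), vals.max()
        for node, v in m.items():
            norm = (v - lo) / (hi - lo) if hi > lo else 0.0
            scores[node] = scores.get(node, 0.0) + w * norm
    return scores

G = nx.karate_club_graph()
top = sorted(multi_parametric_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print("most central vertices:", top)
```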
Winfield, Jessica M.; Payne, Geoffrey S.; Weller, Alex; deSouza, Nandita M.
2016-01-01
Multi-parametric magnetic resonance imaging (mpMRI) offers a unique insight into tumor biology by combining functional MRI techniques that inform on cellularity (diffusion-weighted MRI), vascular properties (dynamic contrast-enhanced MRI), and metabolites (magnetic resonance spectroscopy), and has scope to provide valuable information for prognostication and response assessment. Challenges in the application of mpMRI in the clinic include the technical considerations in acquiring good quality functional MRI data, development of robust techniques for analysis, and clinical interpretation of the results. This article summarizes the technical challenges in acquisition and analysis of multi-parametric MRI data before reviewing the key applications of multi-parametric MRI in clinical research and practice. PMID:27748710
A case study to quantify prediction bounds caused by model-form uncertainty of a portal frame
NASA Astrophysics Data System (ADS)
Van Buren, Kendra L.; Hall, Thomas M.; Gonzales, Lindsey M.; Hemez, François M.; Anton, Steven R.
2015-01-01
Numerical simulations, irrespective of the discipline or application, are often plagued by arbitrary numerical and modeling choices. Arbitrary choices can originate from kinematic assumptions, for example the use of 1D beam, 2D shell, or 3D continuum elements, mesh discretization choices, boundary condition models, and the representation of contact and friction in the simulation. This work takes a step toward understanding the effect of arbitrary choices and model-form assumptions on the accuracy of numerical predictions. The application is the simulation of the first four resonant frequencies of a one-story aluminum portal frame structure under free-free boundary conditions. The main challenge of the portal frame structure resides in modeling the joint connections, for which different modeling assumptions are available. To study this model-form uncertainty, and compare it to other types of uncertainty, two finite element models are developed using solid elements, and with differing representations of the beam-to-column and column-to-base plate connections: (i) contact stiffness coefficients or (ii) tied nodes. Test-analysis correlation is performed to compare the lower and upper bounds of numerical predictions obtained from parametric studies of the joint modeling strategies to the range of experimentally obtained natural frequencies. The approach proposed is, first, to characterize the experimental variability of the joints by varying the bolt torque, method of bolt tightening, and the sequence in which the bolts are tightened. The second step is to convert what is learned from these experimental studies to models that "envelope" the range of observed bolt behavior. We show that this approach, that combines small-scale experiments, sensitivity analysis studies, and bounding-case models, successfully produces lower and upper bounds of resonant frequency predictions that match those measured experimentally on the frame structure. (Approved for unlimited, public release, LA-UR-13-27561).
Incentive Control Strategies for Decision Problems with Parametric Uncertainties
NASA Astrophysics Data System (ADS)
Cansever, Derya H.
The central theme of this thesis is the design of incentive control policies in large scale systems with hierarchical decision structures, under the stipulation that the objective functionals of the agents at the lower level of the hierarchy are uncertain to the top-level controller (the leader). These uncertainties are modeled as a finite -dimensional parameter vector whose exact value constitutes private information to the relevant agent at the lower level. The approach we have adopted is to design incentive policies for the leader such that the dependence of the decision of the agents on the uncertain parameter is minimized. We have identified several classes of problems for which this approach is feasible. In particular, we have constructed policies whose performance is arbitrarily close to the solution of a version of the same problem that does not involve uncertainties. We have also shown that for a certain class of problem wherein the leader observes a linear combination of the agents' decisions, the leader can achieve the performance he would obtain if he had observed each decision separately.
Advanced Control Synthesis for Reverse Osmosis Water Desalination Processes.
Phuc, Bui Duc Hong; You, Sam-Sang; Choi, Hyeung-Six; Jeong, Seok-Kwon
2017-11-01
In this study, robust control synthesis has been applied to a reverse osmosis desalination plant whose product water flow and salinity are chosen as the two controlled variables. The reverse osmosis process was selected for study because it typically uses less energy than thermal distillation. The aim of the robust design is to overcome the limitations of classical controllers in dealing with large parametric uncertainties, external disturbances, sensor noise, and unmodeled process dynamics. The analyzed desalination process is modeled as a multi-input multi-output (MIMO) system with varying parameters. The control system is decoupled using a feed-forward decoupling method to reduce the interactions between control channels. Both nominal and perturbed reverse osmosis systems have been analyzed using structured singular values for their stability and performance. Simulation results show that the system responses meet all the control requirements against various uncertainties. Finally, the reduced-order controller provides excellent robust performance, achieving decoupling, disturbance attenuation, and noise rejection. It can help to reduce membrane cleaning, increase robustness against uncertainties, and lower the energy consumption for process monitoring.
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
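A minimal sketch of the hierarchical AR(1) error-model idea described above, assuming hypothetical residuals between measured and modelled fields and treating the autoregressive coefficient and innovation standard deviation as sampled error-model parameters; this is an illustration, not the paper's implementation:

```python
import numpy as np

def ar1_log_likelihood(residuals, phi, sigma):
    """Gaussian log-likelihood for residuals assumed to follow an AR(1)
    (lag-1 autocorrelated) error process with coefficient phi and
    innovation standard deviation sigma."""
    r = np.asarray(residuals, dtype=float)
    # Whiten the series: innovations e_t = r_t - phi * r_{t-1}
    innovations = r[1:] - phi * r[:-1]
    n = innovations.size
    ll = -0.5 * n * np.log(2.0 * np.pi * sigma**2) \
         - 0.5 * np.sum(innovations**2) / sigma**2
    # Stationary contribution of the first residual, r_1 ~ N(0, sigma^2 / (1 - phi^2))
    var0 = sigma**2 / (1.0 - phi**2)
    ll += -0.5 * np.log(2.0 * np.pi * var0) - 0.5 * r[0]**2 / var0
    return ll

# Within an MCMC sampler, phi and sigma would themselves be sampled as
# hierarchical error-model parameters alongside the seabed parameters.
print(ar1_log_likelihood(np.random.randn(100), phi=0.3, sigma=1.0))
```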
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources. The historical cost data may be in error by some unknown amount. The managers' evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the distribution underlying the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
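A schematic sketch of the parameter-scanning step behind the PVI measure, under the simplifying assumption of a one-at-a-time scan; the risk_model callable is a hypothetical stand-in for the anomaly risk model:

```python
import numpy as np

def parameter_vulnerability_importance(risk_model, nominal, credible_ranges, n_grid=50):
    """For each uncertain parameter, scan its credible range (others held at
    nominal values) and record the maximum conditional risk.  Parameters whose
    scan produces high risk flag potential anomaly vulnerabilities.

    risk_model      : callable mapping a parameter dict to a scalar risk metric
    nominal         : dict of nominal parameter values
    credible_ranges : dict of (low, high) tuples per parameter
    """
    pvi = {}
    for name, (low, high) in credible_ranges.items():
        risks = []
        for value in np.linspace(low, high, n_grid):
            params = dict(nominal)
            params[name] = value
            risks.append(risk_model(params))
        pvi[name] = max(risks)
    # Rank parameters by the risk they can produce within their credible range
    return sorted(pvi.items(), key=lambda kv: kv[1], reverse=True)
```

Parameters near the top of the ranking are those whose credible ranges contain values producing high system risk, and hence warrant further investigation or uncertainty reduction.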
NASA Astrophysics Data System (ADS)
Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.
2017-12-01
In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamic with random initial conditions.
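As a rough guide to the structure described in the abstract (corrected large-scale advection, multiplicative transport noise, and heterogeneous, possibly anisotropic diffusion), the modified tracer evolution can be written schematically as below; the notation (corrected drift w*, white-in-time velocity sigma dB_t, variance tensor a) is assumed here and the exact coefficients and signs follow the authors' derivation:

```latex
% Schematic only: not the talk's own notation.
\[
  \mathrm{d}q
  + \bigl(\boldsymbol{w}^{*}\,\mathrm{d}t + \boldsymbol{\sigma}\,\mathrm{d}\boldsymbol{B}_t\bigr)\cdot\nabla q
  = \tfrac{1}{2}\,\nabla\!\cdot\!\bigl(\mathbf{a}\,\nabla q\bigr)\,\mathrm{d}t,
  \qquad \mathbf{a} = \boldsymbol{\sigma}\boldsymbol{\sigma}^{\mathsf{T}} .
\]
```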
A Parametric Analysis of HELSTAR
1983-12-01
AFIT/GSO/OS/83D-7: A Parametric Analysis of HELSTAR. Thesis by James Miklasevich, Captain, USAF.
A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods
ERIC Educational Resources Information Center
Ritter, Nicola L.
2012-01-01
Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…
A determination of the charm content of the proton: The NNPDF Collaboration.
Ball, Richard D; Bertone, Valerio; Bonvini, Marco; Carrazza, Stefano; Forte, Stefano; Guffanti, Alberto; Hartland, Nathan P; Rojo, Juan; Rottoli, Luca
2016-01-01
We present an unbiased determination of the charm content of the proton, in which the charm parton distribution function (PDF) is parametrized on the same footing as the light quarks and the gluon in a global PDF analysis. This determination relies on the NLO calculation of deep-inelastic structure functions in the FONLL scheme, generalized to account for massive charm-initiated contributions. When the EMC charm structure function dataset is included, it is well described by the fit, and PDF uncertainties in the fitted charm PDF are significantly reduced. We then find that the fitted charm PDF vanishes within uncertainties at a scale [Formula: see text] GeV for all [Formula: see text], independent of the value of [Formula: see text] used in the coefficient functions. We also find some evidence that the charm PDF at large [Formula: see text] and low scales does not vanish, but rather has an "intrinsic" component, very weakly scale dependent and almost independent of the value of [Formula: see text], carrying less than [Formula: see text] of the total momentum of the proton. The uncertainties in all other PDFs are only slightly increased by the inclusion of fitted charm, while the dependence of these PDFs on [Formula: see text] is reduced. The increased stability with respect to [Formula: see text] persists at high scales and is the main implication of our results for LHC phenomenology. Our results show that if the EMC data are correct, then the usual approach in which charm is perturbatively generated leads to biased results for the charm PDF, though at small x this bias could be reabsorbed if the uncertainty due to the charm mass and missing higher orders were included. We show that LHC data for processes, such as high [Formula: see text] and large rapidity charm pair production and [Formula: see text] production, have the potential to confirm or disprove the implications of the EMC data.
Host Model Uncertainty in Aerosol Radiative Forcing Estimates - The AeroCom Prescribed Experiment
NASA Astrophysics Data System (ADS)
Stier, P.; Kinne, S.; Bellouin, N.; Myhre, G.; Takemura, T.; Yu, H.; Randles, C.; Chung, C. E.
2012-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. However, even for the case of identical aerosol emissions, the simulated direct aerosol radiative forcings show significant diversity among the AeroCom models (Schulz et al., 2006). Our analysis of aerosol absorption in the AeroCom models indicates a larger diversity in the translation from given aerosol radiative properties (absorption optical depth) to actual atmospheric absorption than in the translation of a given atmospheric burden of black carbon to the radiative properties (absorption optical depth). The large diversity is caused by differences in the simulated cloud fields, radiative transfer, the relative vertical distribution of aerosols and clouds, and the effective surface albedo. This indicates that differences in host model (GCM or CTM hosting the aerosol module) parameterizations contribute significantly to the simulated diversity of aerosol radiative forcing. The magnitude of these host model effects in global aerosol model and satellite-retrieved aerosol radiative forcing estimates cannot be estimated from the diagnostics of the "standard" AeroCom forcing experiments. To quantify the contribution of differences in the host models to the simulated aerosol radiative forcing and absorption we conduct the AeroCom Prescribed experiment, a simple aerosol model and satellite retrieval intercomparison with prescribed highly idealised aerosol fields. Quality checks, such as diagnostic output of the 3D aerosol fields as implemented in each model, ensure the comparability of the aerosol implementation in the participating models. The simulated forcing variability among the models and retrievals is a direct measure of the contribution of host model assumptions to the uncertainty in the assessment of the aerosol radiative effects. We will present the results from the AeroCom Prescribed experiment with a focus on the attribution of the simulated variability to parametric and structural model uncertainties. This work will help to prioritise areas for future model improvements and ultimately lead to uncertainty reduction.
Parametrically excited non-linear multidegree-of-freedom systems with repeated natural frequencies
NASA Astrophysics Data System (ADS)
Tezak, E. G.; Nayfeh, A. H.; Mook, D. T.
1982-12-01
A method for analyzing multidegree-of-freedom systems having a repeated natural frequency subjected to a parametric excitation is presented. Attention is given to the ordering of the various terms (linear and non-linear) in the governing equations. The analysis is based on the method of multiple scales. As a numerical example involving a parametric resonance, panel flutter is discussed in detail in order to illustrate the type of results one can expect to obtain with this analysis. Some of the analytical results are verified by a numerical integration of the governing equations.
Equation of state and QCD transition at finite temperature
NASA Astrophysics Data System (ADS)
Bazavov, A.; Bhattacharya, T.; Cheng, M.; Christ, N. H.; Detar, C.; Ejiri, S.; Gottlieb, Steven; Gupta, R.; Heller, U. M.; Huebner, K.; Jung, C.; Karsch, F.; Laermann, E.; Levkova, L.; Miao, C.; Mawhinney, R. D.; Petreczky, P.; Schmidt, C.; Soltz, R. A.; Soeldner, W.; Sugar, R.; Toussaint, D.; Vranas, P.
2009-07-01
We calculate the equation of state in 2+1 flavor QCD at finite temperature with physical strange quark mass and almost physical light quark masses using lattices with temporal extent Nτ=8. Calculations have been performed with two different improved staggered fermion actions, the asqtad and p4 actions. Overall, we find good agreement between results obtained with these two O(a2) improved staggered fermion discretization schemes. A comparison with earlier calculations on coarser lattices is performed to quantify systematic errors in current studies of the equation of state. We also present results for observables that are sensitive to deconfining and chiral aspects of the QCD transition on Nτ=6 and 8 lattices. We find that deconfinement and chiral symmetry restoration happen in the same narrow temperature interval. In an appendix we present a simple parametrization of the equation of state that can easily be used in hydrodynamic model calculations. In this parametrization we include an estimate of current uncertainties in the lattice calculations which arise from cutoff and quark mass effects.
Light Absorption Enhancement of Black Carbon Aerosol Constrained by Particle Morphology.
Wu, Yu; Cheng, Tianhai; Liu, Dantong; Allan, James D; Zheng, Lijuan; Chen, Hao
2018-06-19
The radiative forcing of black carbon aerosol (BC) is one of the largest sources of uncertainty in climate change assessments. Contrasting results of BC absorption enhancement (Eabs) after aging are estimated by field measurements and modeling studies, causing ambiguous parametrizations of BC solar absorption in climate models. Here we quantify Eabs using a theoretical model parametrized by the complex particle morphology of BC at different aging scales. We show that Eabs continuously increases with aging and stabilizes with a maximum of ∼3.5, suggesting that previous seemingly contrasting results of Eabs can be explicitly described by BC aging with corresponding particle morphology. We also report that current climate models using the Mie core-shell model may overestimate Eabs at a certain aging stage with a rapid rise of Eabs, which is commonly observed in the ambient atmosphere. A correction coefficient for this overestimation is suggested to improve model predictions of BC climate impact.
NASA Astrophysics Data System (ADS)
Lall, U.; Allaire, M.; Ceccato, P.; Haraguchi, M.; Cian, F.; Bavandi, A.
2017-12-01
Catastrophic floods can pose a significant challenge for response and recovery. A key bottleneck in the speed of response is the availability of funds to a country's or region's finance ministry to mobilize resources. Parametric instruments, where the release of funds is tied to the exceedance of a specified index or threshold rather than to loss verification, are well suited for this purpose. However, designing an appropriate index that is not subject to manipulation and accurately reflects the need is a challenge, especially in developing countries, which have short hydroclimatic and loss records and where rapid land use change has led to significant changes in exposure and hydrology over time. The use of long records of rainfall from climate re-analyses, and of flooded area and land use from remote sensing, to design and benchmark a parametric index considering the uncertainty and representativeness of potential loss is explored with applications to Bangladesh and Thailand. Prospects for broader applicability and limitations are discussed.
NASA Astrophysics Data System (ADS)
Qattan, I. A.; Homouz, D.; Riahi, M. K.
2018-04-01
In this work, we improve on and extend to low- and high-Q2 values the extractions of the two-photon-exchange (TPE) amplitudes and the ratio Pl/PlBorn(ɛ ,Q2) using world data on electron-proton elastic scattering cross section σR(ɛ ,Q2) with an emphasis on data covering the high-momentum region, up to Q2=5.20 (GeV/c ) 2 , to better constrain the TPE amplitudes in this region. We provide a new parametrization of the TPE amplitudes, along with an estimate of the fit uncertainties. We compare the results to several previous phenomenological extractions and hadronic TPE predictions. We use the new parametrization of the TPE amplitudes to extract the ratio Pl/PlBorn(ɛ ,Q2) , and then compare the results to previous extractions, several theoretical calculations, and direct measurements at Q2=2.50 (GeV/c ) 2 .
DNN-state identification of 2D distributed parameter systems
NASA Astrophysics Data System (ADS)
Chairez, I.; Fuentes, R.; Poznyak, A.; Poznyak, T.; Escudero, M.; Viana, L.
2012-02-01
There are many examples in science and engineering which are reduced to a set of partial differential equations (PDEs) through a process of mathematical modelling. Nevertheless there exist many sources of uncertainties around the aforementioned mathematical representation. Moreover, to find exact solutions of those PDEs is not a trivial task especially if the PDE is described in two or more dimensions. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set to an arbitrary accuracy. In this article, a strategy based on the differential neural network (DNN) for the non-parametric identification of a mathematical model described by a class of two-dimensional (2D) PDEs is proposed. The adaptive laws for weights ensure the 'practical stability' of the DNN-trajectories to the parabolic 2D-PDE states. To verify the qualitative behaviour of the suggested methodology, here a non-parametric modelling problem for a distributed parameter plant is analysed.
NASA Astrophysics Data System (ADS)
Fischbach, J. R.; Johnson, D.
2017-12-01
Louisiana's Comprehensive Master Plan for a Sustainable Coast is a 50-year plan designed to reduce flood risk and minimize land loss while allowing for the continued provision of economic and ecosystem services from this critical coastal region. Conceived in 2007 in response to hurricanes Katrina and Rita in 2005, the master plan is updated on a five-year planning cycle by the state's Coastal Protection and Restoration Authority (CPRA). Under the plan's middle-of-the-road (Medium) environmental scenario, the master plan is projected to reduce expected annual damage from storm surge flooding by approximately 65% relative to a future without action: from $5.3 billion to $2.2 billion in 2040, and from $12.1 billion to $3.7 billion in 2065. The Coastal Louisiana Risk Assessment model (CLARA) is used to estimate the risk reduction impacts of projects that have been considered for implementation as part of the plan. Evaluation of projects involves estimation of cost effectiveness in multiple future time periods and under a range of environmental uncertainties (e.g., the rates of sea level rise and land subsidence, changes in future hurricane intensity and frequency), operational uncertainties (e.g., system fragility), and economic uncertainties (e.g., patterns of population change and asset exposure). Between the 2012 and 2017 planning cycles, many improvements were made to the CLARA model. These included changes to the model's spatial resolution and definition of policy-relevant spatial units, an improved treatment of parametric uncertainty and uncertainty propagation between model components, the addition of a module to consider critical infrastructure exposure, and a new population growth model. CPRA also developed new scenarios for analysis in 2017 that were responsive to new scientific literature and that accommodate a new approach to modeling coastal morphology. In this talk, we discuss how CLARA has evolved over the 2012 and 2017 planning cycles in response to the needs of policy makers and CPRA managers. While changes will be illustrated through examples from Louisiana's 2017 Coastal Master Plan, we endeavor to provide generalizable and actionable insights about how modeling choices should be guided by the decision support process being used by planners.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
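A minimal one-dimensional illustration of the projection step in stochastic collocation for generalized polynomial chaos, using Gauss-Hermite quadrature and probabilists' Hermite polynomials from NumPy; the model function is hypothetical, and the sparse, anisotropic, adaptive, and HDMR constructions discussed in the thesis are not shown:

```python
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial, sqrt, pi

# Hypothetical model response as a function of one standard-normal input xi
# (a stand-in for, e.g., a perturbed thermal conductivity mapped to a QoI).
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 6                                   # polynomial order of the expansion
nodes, weights = H.hermegauss(order + 1)    # Gauss-Hermite_e collocation points
weights = weights / sqrt(2.0 * pi)          # normalise to the standard-normal measure

u = model(nodes)
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    basis_k = np.zeros(order + 1)
    basis_k[k] = 1.0
    Hk = H.hermeval(nodes, basis_k)         # He_k evaluated at the nodes
    coeffs[k] = np.sum(weights * u * Hk) / factorial(k)   # projection: E[u He_k] / k!

# Moments follow directly from the expansion coefficients.
mean = coeffs[0]
variance = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print(mean, variance)
```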
Communicating uncertainties in earth sciences in view of user needs
NASA Astrophysics Data System (ADS)
de Vries, Wim; Kros, Hans; Heuvelink, Gerard
2014-05-01
Uncertainties are inevitable in all results obtained in the earth sciences, regardless of whether these are based on field observations, experimental research or predictive modelling. When informing decision and policy makers or stakeholders, it is important that these uncertainties are also communicated. In communicating results, it is important to apply a "Progressive Disclosure of Information (PDI)" from non-technical information through more specialised information, according to the user needs. Generalized information is generally directed towards non-scientific audiences and intended for policy advice. Decision makers have to be aware of the implications of the uncertainty associated with results, so that they can account for it in their decisions. Detailed information on the uncertainties is generally intended for scientific audiences to give insight into underlying approaches and results. When communicating uncertainties, it is important to distinguish between scientific results that allow presentation in terms of probabilistic measures of uncertainty and more intrinsic uncertainties and errors that cannot be expressed in mathematical terms. Examples of earth science research that allow probabilistic measures of uncertainty, involving sophisticated statistical methods, are uncertainties in spatial and/or temporal variations in results of: • Observations, such as soil properties measured at sampling locations. In this case, the interpolation uncertainty, caused by a lack of data collected in space, can be quantified by e.g. kriging standard deviation maps or animations of conditional simulations. • Experimental measurements, comparing impacts of treatments at different sites and/or under different conditions. In this case, an indication of the average and range in measured responses to treatments can be obtained from a meta-analysis, summarizing experimental findings between replicates and across studies, sites, ecosystems, etc. • Model predictions due to uncertain model parameters (parametric variability). These uncertainties can be quantified by uncertainty propagation methods such as Monte Carlo simulation methods. Examples of intrinsic uncertainties that generally cannot be expressed in mathematical terms are errors or biases in: • Results of experiments and observations due to inadequate sampling and errors in analyzing data in the laboratory and even in data reporting. • Results of (laboratory) experiments that are limited to a specific domain or performed under circumstances that differ from field circumstances. • Model structure, due to lack of knowledge of the underlying processes. Structural uncertainty, which may cause model inadequacy/bias, is inherent in model approaches since models are approximations of reality. Intrinsic uncertainties often occur in an emerging field where ongoing new findings, either experiments, field observations or new model findings, challenge earlier work. In this context, climate scientists working within the IPCC have adopted a lexicon to communicate confidence in their findings, comprising "very high", "high", "medium", "low" and "very low" confidence. In fact, there are also statistical methods to gain insight into uncertainties in model predictions due to model assumptions (i.e. model structural error). Examples are comparing model results with independent observations or a systematic intercomparison of predictions from multiple models.
In the latter case, Bayesian model averaging techniques can be used, in which each model considered gets an assigned prior probability of being the 'true' model. This approach works well with statistical (regression) models, but extension to physically-based models is cumbersome. An alternative is the use of state-space models in which structural errors are represented as (additive) noise terms. In this presentation, we focus on approaches that are relevant at the science-policy interface, including multiple scientific disciplines and policy makers with different subject areas. Approaches to communicate uncertainties in results of observations or model predictions are discussed, distinguishing results that include probabilistic measures of uncertainty and more intrinsic uncertainties. Examples concentrate on uncertainties in nitrogen (N) related environmental issues, including: • Spatio-temporal trends in atmospheric N deposition, in view of the policy question whether there is a declining or increasing trend. • Carbon response to N inputs to terrestrial ecosystems, based on meta-analysis of N addition experiments and other approaches, in view of the policy relevance of N emission control. • Calculated spatial variations in the emissions of nitrous-oxide and ammonia, in view of the need for emission policies at different spatial scales. • Calculated N emissions and losses by model intercomparisons, in view of the policy need to apply no-regret decisions with respect to the control of those emissions.
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
Exploring prediction uncertainty of spatial data in geostatistical and machine learning Approaches
NASA Astrophysics Data System (ADS)
Klump, J. F.; Fouedjio, F.
2017-12-01
Geostatistical methods such as kriging with external drift as well as machine learning techniques such as quantile regression forest have been intensively used for modelling spatial data. In addition to providing predictions for target variables, both approaches are able to deliver a quantification of the uncertainty associated with the prediction at a target location. Geostatistical approaches are, by essence, adequate for providing such prediction uncertainties and their behaviour is well understood. However, they often require significant data pre-processing and rely on assumptions that are rarely met in practice. Machine learning algorithms such as random forest regression, on the other hand, require less data pre-processing and are non-parametric. This makes the application of machine learning algorithms to geostatistical problems an attractive proposition. The objective of this study is to compare kriging with external drift and quantile regression forest with respect to their ability to deliver reliable prediction uncertainties of spatial data. In our comparison we use both simulated and real world datasets. Apart from classical performance indicators, comparisons make use of accuracy plots, probability interval width plots, and the visual examinations of the uncertainty maps provided by the two approaches. By comparing random forest regression to kriging we found that both methods produced comparable maps of estimated values for our variables of interest. However, the measure of uncertainty provided by random forest seems to be quite different to the measure of uncertainty provided by kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. These preliminary results raise questions about assessing the risks associated with decisions based on the predictions from geostatistical and machine learning algorithms in a spatial context, e.g. mineral exploration.
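A small sketch of the kind of comparison described, using scikit-learn; a Gaussian process stands in for kriging, and quantile-loss gradient boosting stands in for quantile regression forest (which scikit-learn does not provide directly); the data, kernel, and quantile levels are illustrative only:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                 # spatial coordinates
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.2, 200)
X_new = rng.uniform(0, 10, size=(50, 2))              # prediction locations

# Kriging-style prediction: Gaussian process with a predictive standard deviation
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.05))
gp.fit(X, y)
gp_mean, gp_std = gp.predict(X_new, return_std=True)

# Machine-learning prediction intervals via quantile loss
q_lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)
interval_width = q_hi.predict(X_new) - q_lo.predict(X_new)

# Compare the two uncertainty measures at the prediction locations
print(np.corrcoef(1.96 * 2 * gp_std, interval_width)[0, 1])
```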
Shock Layer Radiation Modeling and Uncertainty for Mars Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth
2012-01-01
A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the radiative heating for the Mars Pathfinder probe is predicted to be nearly 20 W/cm2. In contrast to previous studies, this value is shown to be significant relative to the convective heating.
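The root-sum-square combination stated in the abstract, with the quoted upper-end values worked through:

```latex
\[
  u_{\mathrm{total}} = \sqrt{u_{\mathrm{flowfield}}^{2} + u_{\mathrm{radiation}}^{2}},
  \qquad \text{e.g.}\;\; \sqrt{200^{2} + 167^{2}}\,\% \approx 260\,\% .
\]
```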
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Type-2 fuzzy logic control of a 2-DOF helicopter (TRMS system)
NASA Astrophysics Data System (ADS)
Zeghlache, Samir; Kara, Kamel; Saigaa, Djamel
2014-09-01
The helicopter dynamics include nonlinearities and parametric uncertainties and are subject to unknown external disturbances. Such complicated dynamics call for sophisticated control algorithms that can deal with these difficulties. In this paper, a type-2 fuzzy logic PID controller is proposed for the TRMS (twin rotor MIMO system) control problem. Using triangular membership functions and based on a human operator's experience, two controllers are designed to control the position of the yaw and the pitch angles of the TRMS. Simulation results are given to illustrate the effectiveness of the proposed control scheme.
Control of nonlinear systems using terminal sliding modes
NASA Technical Reports Server (NTRS)
Venkataraman, S. T.; Gulati, S.
1992-01-01
The development of an approach to control synthesis for robust robot operations in unstructured environments is discussed. To enhance control performance with full model information, the authors introduce the notion of terminal convergence and develop control laws based on a class of sliding modes, denoted as terminal sliders. They demonstrate that terminal sliders provide robustness to parametric uncertainty without having to resort to high-frequency control switching, as in the case of conventional sliders. It is shown that the proposed method leads to greater guaranteed precision in all control cases discussed.
NASA Astrophysics Data System (ADS)
Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.
2018-03-01
This study investigates the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stingy sensation and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. This suggests that the parametric approach can be applicable for consumer studies with large numbers of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variances).
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and least data via maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
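A simplified Monte Carlo sketch of scoring a single additional observation location by its expected decrease in Shannon entropy of the model probabilities, assuming Gaussian BMA predictive distributions; the full design problem treated in the paper (multiple observations, heteroscedastic errors, posterior-probability threshold) is not reproduced:

```python
import numpy as np

def expected_entropy_decrease(prior_probs, means, sds, n_mc=5000, rng=None):
    """Monte Carlo estimate of the expected decrease in Shannon entropy of the
    model probabilities after observing one additional (Gaussian) datum at a
    candidate location.  means[m], sds[m] are the predictive mean and standard
    deviation of that datum under model m (assumed Gaussian here)."""
    rng = np.random.default_rng() if rng is None else rng
    prior_probs = np.asarray(prior_probs, float)
    means, sds = np.asarray(means, float), np.asarray(sds, float)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    h_prior = entropy(prior_probs)
    h_post = 0.0
    for _ in range(n_mc):
        m = rng.choice(len(prior_probs), p=prior_probs)      # draw a model...
        y = rng.normal(means[m], sds[m])                     # ...and a future datum
        lik = np.exp(-0.5 * ((y - means) / sds) ** 2) / sds  # Gaussian likelihoods
        post = prior_probs * lik
        post /= post.sum()
        h_post += entropy(post) / n_mc
    return h_prior - h_post                                  # expected information gain

# Rank two hypothetical well locations by their discrimination value: the
# location whose predictions differ more across models scores higher.
loc_A = expected_entropy_decrease([0.5, 0.3, 0.2], means=[10, 12, 15], sds=[1, 1, 1])
loc_B = expected_entropy_decrease([0.5, 0.3, 0.2], means=[10, 10.2, 10.1], sds=[1, 1, 1])
print(loc_A > loc_B)
```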
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
Autonomous Pointing Control of a Large Satellite Antenna Subject to Parametric Uncertainty
Wu, Shunan; Liu, Yufei; Radice, Gianmarco; Tan, Shujun
2017-01-01
With the development of satellite mobile communications, large antennas are now widely used. The precise pointing of the antenna’s optical axis is essential for many space missions. This paper addresses the challenging problem of high-precision autonomous pointing control of a large satellite antenna. The pointing dynamics are first presented. The proportional–derivative feedback and structural filter to perform pointing maneuvers and suppress antenna vibrations are then presented. An adaptive controller to estimate actual system frequencies in the presence of modal parameter uncertainty is proposed. In order to reduce periodic errors, the modified controllers, which include the proposed adaptive controller and an active disturbance rejection filter, are then developed. The system stability and robustness are analyzed and discussed in the frequency domain. Numerical results are finally provided, and the results have demonstrated that the proposed controllers have good autonomy and robustness. PMID:28287450
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
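A brief Python sketch of the workflow the abstract describes (maximum likelihood fits of candidate densities to raw habitat-use data and a quantitative selection among them), using hypothetical depth data and AIC as one possible selection criterion; the R code in the paper's appendix is the authoritative implementation:

```python
import numpy as np
from scipy import stats

# Hypothetical habitat-use observations, e.g. water depths (m) at fish locations
depths = np.array([0.4, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4, 1.6, 1.9, 2.3])

candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)             # MLE with location fixed at zero
    loglik = np.sum(dist.logpdf(depths, *params))
    k = len(params) - 1                           # free parameters (loc is fixed)
    results[name] = {"params": params, "AIC": 2 * k - 2 * loglik}

best = min(results, key=lambda n: results[n]["AIC"])
print(best, results[best])
# The selected density, rescaled so its mode equals 1, would serve as the HSC
# curve; refitting bootstrap resamples of `depths` would express estimation
# uncertainty in the curve.
```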
A Bayesian alternative for multi-objective ecohydrological model specification
NASA Astrophysics Data System (ADS)
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
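A schematic log-posterior for the two-objective case discussed above, in which the priors placed on the error standard deviations play the role of the ad-hoc weighting in a multi-objective calibration; the model interface and the iid Gaussian error assumption are simplifications for illustration, not the study's exact likelihood:

```python
import numpy as np

def log_posterior(theta, sigma_q, sigma_lai, q_obs, q_sim, lai_obs, lai_sim,
                  prior_logpdf_theta, prior_logpdf_sigma_q, prior_logpdf_sigma_lai):
    """Joint Gaussian log-posterior for a two-objective (streamflow + LAI)
    calibration.  q_sim and lai_sim are the model outputs at parameters theta;
    the error standard deviations are treated as parameters with their own
    priors, which effectively weights the two objectives."""
    ll_q = -0.5 * np.sum(((q_obs - q_sim) / sigma_q) ** 2) \
           - q_obs.size * np.log(sigma_q)
    ll_lai = -0.5 * np.sum(((lai_obs - lai_sim) / sigma_lai) ** 2) \
             - lai_obs.size * np.log(sigma_lai)
    lp = prior_logpdf_theta(theta) \
         + prior_logpdf_sigma_q(sigma_q) + prior_logpdf_sigma_lai(sigma_lai)
    return ll_q + ll_lai + lp
```

An MCMC sampler evaluating this function would yield joint posterior samples of the model parameters and both error scales, rather than a single Pareto front.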
Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation
NASA Astrophysics Data System (ADS)
Guillas, S.; Day, S. J.; Joakim, B.
2016-12-01
We present new high-resolution tsunami wave propagation and coastal inundation simulations for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since these turn out not to have an influence on the resulting tsunami waves at the coast. We model the impact of another parameter linearly as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
Parametric Analysis of Light Truck and Automobile Maintenance
DOT National Transportation Integrated Search
1979-05-01
Utilizing the Automotive and Light Truck Service and Repair Data Base developed in the campanion report, parametric analyses were made of the relationships between maintenance costs, schduled and unschduled, and vehicle parameters; body class, manufa...
Parametric resonance in the early Universe—a fitting analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es
Particle production via parametric resonance in the early Universe is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works often present only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
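For orientation, under the common assumption of a quadratic interaction (1/2) g^2 phi^2 chi^2 and neglecting expansion, the linear stage that the abstract cautions against relying on reduces to a Mathieu-type mode equation (this textbook form is provided here for context and is not taken from the paper):

```latex
\[
  \ddot{\chi}_k + \bigl(k^{2} + g^{2}\Phi^{2}\sin^{2}(mt)\bigr)\chi_k = 0
  \;\;\Longrightarrow\;\;
  \chi_k'' + \bigl(A_k - 2q\cos 2z\bigr)\chi_k = 0,
  \quad z = mt,\;\; q = \frac{g^{2}\Phi^{2}}{4m^{2}},\;\; A_k = \frac{k^{2}}{m^{2}} + 2q ,
\]
```

where modes inside the Mathieu instability bands grow exponentially; the paper's fits describe what happens after this linear picture breaks down.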
Combined non-parametric and parametric approach for identification of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz
2018-03-01
Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.
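A compact sketch of the recursive parametric step, using recursive least squares with a forgetting factor on an ARX structure as a stand-in for the recursive ARMAX estimation used in the paper; model orders and the forgetting factor are illustrative:

```python
import numpy as np

def rls_arx(u, y, na=2, nb=2, forgetting=0.98):
    """Recursive least squares for a time-varying ARX model
        y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb].
    A forgetting factor below one lets the estimates track slowly varying
    parameters, e.g. modes extracted by the wavelet-based filters."""
    n_par = na + nb
    theta = np.zeros(n_par)
    P = np.eye(n_par) * 1e4                      # large initial covariance
    history = np.zeros((len(y), n_par))
    for t in range(max(na, nb), len(y)):
        phi = np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
        k = P @ phi / (forgetting + phi @ P @ phi)      # gain vector
        theta = theta + k * (y[t] - phi @ theta)        # parameter update
        P = (P - np.outer(k, phi) @ P) / forgetting     # covariance update
        history[t] = theta
    return history   # per-sample parameter trajectories (modal variation in time)
```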
Nonparametric predictive inference for combining diagnostic tests with parametric copula
NASA Astrophysics Data System (ADS)
Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.
2017-09-01
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while modelling their dependence structure with a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely maximum likelihood estimation (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
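As an illustrative sketch only (synthetic test scores and a Gaussian copula stand in for the data and copula families considered in the paper), the following fits the copula dependence parameter by maximum likelihood from pseudo-observations:

```python
# Hedged sketch: fit a Gaussian copula to two diagnostic test scores by MLE,
# one concrete way to model the dependence described in the abstract. Data,
# variable names, and the Gaussian family are illustrative assumptions.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# synthetic scores for two diagnostic tests on the same subjects
x = rng.normal(size=200)
y = 0.6 * x + 0.8 * rng.normal(size=200)

# pseudo-observations: empirical CDF values in (0, 1)
u = stats.rankdata(x) / (len(x) + 1)
v = stats.rankdata(y) / (len(y) + 1)
zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)

def neg_loglik(rho):
    # log-density of the Gaussian copula at (u, v), parametrized by rho
    r = rho[0]
    ll = (-0.5 * np.log(1 - r**2)
          - (r**2 * (zu**2 + zv**2) - 2 * r * zu * zv) / (2 * (1 - r**2)))
    return -np.sum(ll)

res = optimize.minimize(neg_loglik, x0=[0.0], bounds=[(-0.99, 0.99)])
print("MLE of copula correlation:", res.x[0])
```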
2014-01-01
Background Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. Methods We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. Results In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. Conclusions The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes. PMID:24888356
Sadatsafavi, Mohsen; Marra, Carlo; Aaron, Shawn; Bryan, Stirling
2014-06-03
Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes.
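A minimal sketch of the Bayesian-bootstrap idea behind this approach (synthetic single-arm data, and the incorporation of external evidence is omitted; the willingness-to-pay value is an assumption): Dirichlet weights over patients yield posterior draws of mean cost, mean effect, and net monetary benefit.

```python
# Hedged sketch of the Bayesian bootstrap for a trial-based CEA: Dirichlet
# weights over patients give posterior draws of the mean cost and mean effect.
import numpy as np

rng = np.random.default_rng(1)
n = 150
cost = rng.gamma(shape=2.0, scale=1000.0, size=n)     # per-patient costs (synthetic)
effect = rng.normal(loc=0.10, scale=0.20, size=n)     # per-patient QALY gains (synthetic)

draws = []
for _ in range(2000):
    w = rng.dirichlet(np.ones(n))                     # Bayesian bootstrap weights
    draws.append((np.sum(w * cost), np.sum(w * effect)))
draws = np.array(draws)

wtp = 30000.0                                         # willingness to pay per QALY (assumed)
nmb = wtp * draws[:, 1] - draws[:, 0]                 # net monetary benefit per draw
print("posterior mean cost  :", round(draws[:, 0].mean(), 1))
print("posterior mean effect:", round(draws[:, 1].mean(), 3))
print("P(net benefit > 0)   :", np.mean(nmb > 0))
```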
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize on unseen data. Inference is performed with Markov Chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be computed analytically, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings from appropriate candidate-generating densities. We further show that the corresponding Markov Chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
Four photon parametric amplification [in unbiased Josephson junction]
NASA Technical Reports Server (NTRS)
Parrish, P. T.; Feldman, M. J.; Ohta, H.; Chiao, R. Y.
1974-01-01
An analysis is presented describing four-photon parametric amplification in an unbiased Josephson junction. Central to the theory is the model of the Josephson effect as a nonlinear inductance. Linear, small signal analysis is applied to the two-fluid model of the Josephson junction. The gain, gain-bandwidth product, high frequency limit, and effective noise temperature are calculated for a cavity reflection amplifier. The analysis is extended to multiple (series-connected) junctions and subharmonic pumping.
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
Recent years have seen many developments in patient-specific models, driven by the increase in computational power and their potential to provide more information on human pathophysiology. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot and evaluate differences in shape between healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
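A minimal sketch of a PCA-based parametric shape model (synthetic landmark coordinates stand in for the foot meshes of the study; all sizes and weights are illustrative assumptions): extract principal modes from flattened shapes and generate a new shape from mode weights.

```python
# Hedged sketch: PCA shape model on flattened (x, y, z) landmark vectors.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_landmarks = 20, 50

# synthetic population: a base shape plus three latent modes of variation
base = rng.normal(size=n_landmarks * 3)
latent_modes = rng.normal(size=(3, n_landmarks * 3))
scores = rng.normal(size=(n_subjects, 3)) * np.array([3.0, 2.0, 1.0])
shapes = base + scores @ latent_modes + 0.1 * rng.normal(size=(n_subjects, n_landmarks * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# principal modes via SVD of the centered data matrix
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 modes:", round(explained[:3].sum(), 3))

# generate a new plausible shape: mean + weighted sum of the first modes
weights = np.array([1.0, -0.5, 0.25]) * (s[:3] / np.sqrt(n_subjects - 1))
new_shape = mean_shape + weights @ Vt[:3]
print("generated shape vector length:", new_shape.shape[0])
```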
Adaptive integral robust control and application to electromechanical servo systems.
Deng, Wenxiang; Yao, Jianyong
2017-03-01
This paper proposes a continuous adaptive integral robust control with robust integral of the sign of the error (RISE) feedback for a class of uncertain nonlinear systems, in which the RISE feedback gain is adapted online to ensure robustness against disturbances without prior knowledge of the bound of the additive disturbances. In addition, an adaptive compensation term integrated with the proposed adaptive RISE feedback term is also constructed to further reduce design conservatism when the system also exhibits parametric uncertainties. Lyapunov analysis reveals that the proposed controllers guarantee that the tracking errors converge asymptotically to zero with continuous control efforts. To illustrate the high-performance nature of the developed controllers, numerical simulations are provided. At the end of this work, an application case of an actual electromechanical servo system driven by a motor is also studied, with some specific design considerations, and comparative experimental results are obtained to verify the effectiveness of the proposed controllers. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Preliminary Investigation on the Behavior of Pore Air Pressure During Rainfall Infiltration
NASA Astrophysics Data System (ADS)
Ashraf Mohamad Ismail, Mohd; Min, Ng Soon; Hasliza Hamzah, Nur; Hazreek Zainal Abidin, Mohd; Madun, Aziman; Tajudin, Saiful Azhar Ahmad
2018-04-01
This paper focuses on a preliminary investigation of pore air pressure behaviour during rainfall infiltration, in order to substantiate the mechanism of rainfall-induced slope failure. The actual behaviour of pore air pressure during infiltration is yet to be clearly understood, as it is regularly assumed to be atmospheric. Numerical modelling of a one-dimensional (1D) soil column was utilized in this study to provide a preliminary insight into this highlighted uncertainty. A parametric study was performed by applying rainfall intensities of 1.85 × 10⁻³ m/s and 1.16 × 10⁻⁴ m/s on glass beads to simulate intense and modest rainfall conditions. Analysis results show that the high rainfall intensity causes more development of pore air pressure compared to the low rainfall intensity. This is because at high rainfall intensity the rainwater cannot displace the pore air smoothly, thus confining the pore air. Therefore, the effect of pore air pressure has to be taken into consideration, particularly during heavy rainfall.
Wang, Shuo; Yu, Rongjun; Tyszka, J. Michael; Zhen, Shanshan; Kovach, Christopher; Sun, Sai; Huang, Yi; Hurlemann, Rene; Ross, Ian B.; Chung, Jeffrey M.; Mamelak, Adam N.; Adolphs, Ralph; Rutishauser, Ueli
2017-01-01
The human amygdala is a key structure for processing emotional facial expressions, but it remains unclear what aspects of emotion are processed. We investigated this question with three different approaches: behavioural analysis of 3 amygdala lesion patients, neuroimaging of 19 healthy adults, and single-neuron recordings in 9 neurosurgical patients. The lesion patients showed a shift in behavioural sensitivity to fear, and amygdala BOLD responses were modulated by both fear and emotion ambiguity (the uncertainty that a facial expression is categorized as fearful or happy). We found two populations of neurons, one whose response correlated with increasing degree of fear, or happiness, and a second whose response primarily decreased as a linear function of emotion ambiguity. Together, our results indicate that the human amygdala processes both the degree of emotion in facial expressions and the categorical ambiguity of the emotion shown and that these two aspects of amygdala processing can be most clearly distinguished at the level of single neurons. PMID:28429707
Progress in and prospects for fluvial flood modelling.
Wheater, H S
2002-07-15
Recent floods in the UK have raised public and political awareness of flood risk. There is an increasing recognition that flood management and land-use planning are linked, and that decision-support modelling tools are required to address issues of climate and land-use change for integrated catchment management. In this paper, the scientific context for fluvial flood modelling is discussed, current modelling capability is considered and research challenges are identified. Priorities include (i) appropriate representation of spatial precipitation, including scenarios of climate change; (ii) development of a national capability for continuous hydrological simulation of ungauged catchments; (iii) improved scientific understanding of impacts of agricultural land-use and land-management change, and the development of new modelling approaches to represent those impacts; (iv) improved representation of urban flooding, at both local and catchment scale; (v) appropriate parametrizations for hydraulic simulation of in-channel and flood-plain flows, assimilating available ground observations and remotely sensed data; and (vi) a flexible decision-support modelling framework, incorporating developments in computing, data availability, data assimilation and uncertainty analysis.
NASA Astrophysics Data System (ADS)
Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun
2018-03-01
This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using a particle swarm optimization (PSO) algorithm combined with analysis of the system hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time and exhibits good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
Dual adaptive control: Design principles and applications
NASA Technical Reports Server (NTRS)
Mookerjee, Purusottam
1988-01-01
The design of an actively adaptive dual controller based on an approximation of the stochastic dynamic programming equation for a multi-step horizon is presented. A dual controller that can enhance identification of the system while controlling it at the same time is derived for multi-dimensional problems. This dual controller uses sensitivity functions of the expected future cost with respect to the parameter uncertainties. A passively adaptive cautious controller and the actively adaptive dual controller are examined. In many instances, the cautious controller is seen to turn off while the latter avoids the turn-off of the control and the slow convergence of the parameter estimates, characteristic of the cautious controller. The algorithms have been applied to a multi-variable static model which represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. Monte Carlo comparisons based on parametric and nonparametric statistical analysis indicate the superiority of the dual controller over the baseline controller.
NASA Technical Reports Server (NTRS)
1973-01-01
Parametric studies and subsystem comparisons for the orbital radar mapping mission to planet Venus are presented. Launch vehicle requirements and primary orbiter propulsion system requirements are evaluated. The systems parametric analysis indicated that orbit size and orientation interrelated with almost all of the principal spacecraft systems and influenced significantly the definition of orbit insertion propulsion requirements, weight in orbit capability, radar system design, and mapping strategy.
Parametric Analysis and Safety Concepts of CWR Track Buckling.
DOT National Transportation Integrated Search
1993-12-01
The report presents a comprehensive study of continuous welded rail (CWR) track buckling strength as influenced by the range of all key parameters such as the lateral, torsional and longitudinal resistance, vehicle loads, etc. The parametric study pr...
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
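As a hedged illustration of the indirect route described above (not the authors' reconstruction code), the sketch below applies Patlak graphical analysis to one synthetic time-activity curve; in the indirect method the same regression is repeated pixel-by-pixel on reconstructed dynamic images. All curves and constants are assumed values.

```python
# Hedged sketch of Patlak graphical analysis on a single synthetic TAC.
import numpy as np

t = np.linspace(1, 60, 30)                       # minutes, mid-frame times
cp = 10.0 * np.exp(-0.1 * t) + 1.0               # assumed plasma input function
Ki_true, V0_true = 0.05, 0.3
cum_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = Ki_true * cum_cp + V0_true * cp             # irreversible-uptake tissue TAC

# Patlak plot: ct/cp versus (integral of cp)/cp becomes linear at late times
x = cum_cp / cp
y = ct / cp
late = t > 20                                    # use only the linear late phase
Ki, V0 = np.polyfit(x[late], y[late], 1)
print(f"estimated Ki = {Ki:.4f} (true {Ki_true}), intercept = {V0:.3f}")
```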
NASA Astrophysics Data System (ADS)
Garner, G. G.; Reed, P. M.; Keller, K.
2014-12-01
Integrated assessment models (IAMs) are often used with the intent to aid in climate change decisionmaking. Numerous studies have analyzed the effects of parametric and/or structural uncertainties in IAMs, but uncertainties regarding the problem formulation are often overlooked. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the problem formulation. The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decisionmakers, however, may be concerned with a broader range of values and preferences that are not captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing both (ii) the costs of abatement and (iii) the damages due to climate change. We derive a set of Pareto-optimal solutions over which decisionmakers can trade-off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using receiver operating characteristic curve. Geometric accuracy evaluation showed an agreement with area under the curves value of 0.87 between the constructed mesh and raw MRA data sets. Parametric meshing yielded on-average, 36.6% and 21.7% orthogonal and equiangular skew quality improvement over the unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pouillot, Régis; Delignette-Muller, Marie Laure
2010-09-01
Quantitative risk assessment has emerged as a valuable tool to enhance the scientific basis of regulatory decisions in the food safety domain. This article introduces the use of two new computing resources (R packages) specifically developed to help risk assessors in their projects. The first package, "fitdistrplus", gathers tools for choosing and fitting a parametric univariate distribution to a given dataset. The data may be continuous or discrete. Continuous data may be right-, left- or interval-censored as is frequently obtained with analytical methods, with the possibility of various censoring thresholds within the dataset. Bootstrap procedures then allow the assessor to evaluate and model the uncertainty around the parameters and to transfer this information into a quantitative risk assessment model. The second package, "mc2d", helps to build and study two dimensional (or second-order) Monte-Carlo simulations in which the estimation of variability and uncertainty in the risk estimates is separated. This package easily allows the transfer of separated variability and uncertainty along a chain of conditional mathematical and probabilistic models. The usefulness of these packages is illustrated through a risk assessment of hemolytic and uremic syndrome in children linked to the presence of Escherichia coli O157:H7 in ground beef. These R packages are freely available at the Comprehensive R Archive Network (cran.r-project.org). Copyright 2010 Elsevier B.V. All rights reserved.
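A hedged Python sketch of the second-order (two-dimensional) Monte Carlo idea that the mc2d package automates (the dose-response model and all numbers are illustrative assumptions): the outer loop samples uncertain parameters, the inner loop samples inter-individual variability.

```python
# Hedged sketch of a second-order Monte Carlo separating uncertainty from variability.
import numpy as np

rng = np.random.default_rng(3)
n_unc, n_var = 200, 5000

risks = np.empty((n_unc, n_var))
for i in range(n_unc):
    # uncertainty: parameters drawn once per outer iteration
    log_mean_dose = rng.normal(-2.0, 0.3)
    r_param = rng.beta(2.0, 50.0)              # uncertain dose-response parameter
    # variability: exposure varies across individuals for fixed parameters
    dose = rng.lognormal(mean=log_mean_dose, sigma=1.0, size=n_var)
    risks[i] = 1.0 - np.exp(-r_param * dose)   # exponential dose-response

# summarize: variability within each row, uncertainty as spread across rows
median_risk = np.median(risks, axis=1)
print("median individual risk, 95% uncertainty interval:",
      np.percentile(median_risk, [2.5, 97.5]))
```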
Estimate of B(B̄ → X_sγ) at O(α_s²)
NASA Astrophysics Data System (ADS)
Misiak, M.; Asatrian, H. M.; Bieri, K.; Czakon, M.; Czarnecki, A.; Ewerth, T.; Ferroglia, A.; Gambino, P.; Gorbahn, M.; Greub, C.; Haisch, U.; Hovhannisyan, A.; Hurth, T.; Mitov, A.; Poghosyan, V.; Ślusarczyk, M.; Steinhauser, M.
2007-01-01
Combining our results for various O(α_s²) corrections to the weak radiative B-meson decay, we are able to present the first estimate of the branching ratio at the next-to-next-to-leading order in QCD. We find B(B̄ → X_sγ) = (3.15 ± 0.23) × 10⁻⁴ for E_γ > 1.6 GeV in the B̄-meson rest frame. The four types of uncertainties: nonperturbative (5%), parametric (3%), higher-order (3%), and m_c-interpolation ambiguity (3%) have been added in quadrature to obtain the total error.
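For readers who want to check the quoted error budget, the following minimal snippet (an illustrative addition, not part of the original abstract) adds the four relative uncertainties in quadrature:

```python
# Hedged check of the quoted error budget: four relative uncertainties
# combined in quadrature and applied to the central value.
import math
central = 3.15e-4
rel = [0.05, 0.03, 0.03, 0.03]   # nonperturbative, parametric, higher-order, m_c-interpolation
total = central * math.sqrt(sum(r**2 for r in rel))
print(f"total uncertainty ~ {total:.2e}")   # ~ 0.23e-4, matching the quoted +/- 0.23 x 10^-4
```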
NASA Technical Reports Server (NTRS)
Chamitoff, Gregory Errol
1992-01-01
Intelligent optimization methods are applied to the problem of real-time flight control for a class of airbreathing hypersonic vehicles (AHSV). The extreme flight conditions that will be encountered by single-stage-to-orbit vehicles, such as the National Aerospace Plane, present a tremendous challenge to the entire spectrum of aerospace technologies. Flight control for these vehicles is particularly difficult due to the combination of nonlinear dynamics, complex constraints, and parametric uncertainty. An approach that utilizes all available a priori and in-flight information to perform robust, real time, short-term trajectory planning is presented.
Diffractive heavy quark production in AA collisions at the LHC at NLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machado, M. M.; Ducati, M. B. Gay; Machado, M. V. T.
2011-07-15
The single and double diffractive cross sections for heavy quarks production are evaluated at NLO accuracy for hadronic and heavy ion collisions at the LHC. Diffractive charm and bottom production is the main subject of this work, providing predictions for CaCa, PbPb and pPb collisions. The hard diffraction formalism is considered using the Ingelman-Schlein model where a recent parametrization for the Pomeron structure function (DPDF) is applied. Absorptive corrections are taken into account as well. The diffractive ratios are estimated and theoretical uncertainties are discussed. Comparison with competing production channels is also presented.
Diffractive heavy quark production in AA collisions at the LHC at NLO
NASA Astrophysics Data System (ADS)
Machado, M. M.; Ducati, M. B. Gay; Machado, M. V. T.
2011-07-01
The single and double diffractive cross sections for heavy quarks production are evaluated at NLO accuracy for hadronic and heavy ion collisions at the LHC. Diffractive charm and bottom production is the main subject of this work, providing predictions for CaCa, PbPb and pPb collisions. The hard diffraction formalism is considered using the Ingelman-Schlein model where a recent parametrization for the Pomeron structure function (DPDF) is applied. Absorptive corrections are taken into account as well. The diffractive ratios are estimated and theoretical uncertainties are discussed. Comparison with competing production channels is also presented.
Nonlinear discrete-time multirate adaptive control of non-linear vibrations of smart beams
NASA Astrophysics Data System (ADS)
Georgiou, Georgios; Foutsitzi, Georgia A.; Stavroulakis, Georgios E.
2018-06-01
The nonlinear adaptive digital control of a smart piezoelectric beam is considered. It is shown that in a sampled-data context, a multirate control strategy provides an appropriate framework in order to achieve vibration regulation, ensuring the stability of the whole control system. Under parametric uncertainties in the model parameters (damping ratios, frequencies, levels of nonlinearities and cross-coupling, control input parameters), the scheme is completed with an adaptation law deduced from hyperstability concepts. This results in the asymptotic satisfaction of the control objectives at the sampling instants. Simulation results are presented.
Ultrasonic guided wave tomography for wall thickness mapping in pipes
NASA Astrophysics Data System (ADS)
Willey, Carson L.
Corrosion and erosion damage pose fundamental challenges to operation of oil and gas infrastructure. In order to manage the life of critical assets, plant operators must implement inspection programs aimed at assessing the severity of wall thickness loss (WTL) in pipelines, vessels, and other structures. Maximum defect depth determines the residual life of these structures and therefore represents one of the key parameters for robust damage mitigation strategies. In this context, continuous monitoring with permanently installed sensors has attracted significant interest and currently is the subject of extensive research worldwide. Among the different monitoring approaches being considered, significant promise is offered by the combination of guided ultrasonic wave technology with the principles of model based inversion under the paradigm of what is now referred to as guided wave tomography (GWT). Guided waves are attractive because they propagate inside the wall of a structure over a large distance. This can yield significant advantages over conventional pulse-echo thickness gage sensors that provide insufficient area coverage -- typically limited to the sensor footprint. While significant progress has been made in the application of GWT to plate-like structures, extension of these methods to pipes poses a number of fundamental challenges that have prevented the development of sensitive GWT methods. This thesis focuses on these challenges to address the complex guided wave propagation in pipes and to account for parametric uncertainties that are known to affect model based inversion and which are unavoidable in real field applications. The main contribution of this work is the first demonstration of a sensitive GWT method for accurately mapping the depth of defects in pipes. This is achieved by introducing a novel forward model that can extract information related to damage from the complex waveforms measured by pairs of guided wave transducers mounted on the pipe. An inversion method that iteratively uses the forward model is then developed to form a map of wall thickness for the entire pipe section comprised between two ring arrays of ultrasonic transducers that encircle the pipe. It is shown that time independent parametric uncertainties relative to the pipe manufacturing tolerances, transducers position, and ultrasonic properties of the material of the pipe can be minimized through a differential approach that is aimed at determining the change in state of the pipe relative to a reference condition. On the other hand, time dependent parametric uncertainties, such as those caused by temperature variations, can be addressed by exploiting the spatial diversity of array measurements and the non-contact nature of electromagnetic acoustic transducers (EMATs). The range of possible applications of GWT to pipes is investigated through theoretical and numerical studies aimed at developing an understanding of how the performance of GWT varies depending on damage morphology, pipe geometry, and array configuration.
NASA Astrophysics Data System (ADS)
Mooney, Robin P.; McFadden, Shaun
2017-12-01
In-situ observation of crystal growth in transparent media allows us to observe solidification phase change in real-time. These systems are analogous to opaque systems such as metals. The interpretation of transient 2-dimensional area projections from 3-dimensional phase change phenomena occurring in a bulky sample is problematic due to uncertainty of impingement and hidden nucleation events; in stereology this problem is known as over-projection. This manuscript describes and demonstrates a continuous model for nucleation and growth using the well-established Johnson-Mehl-Avrami-Kolmogorov model, and provides a method to relate 3-dimensional volumetric data (nucleation events, volume fraction) to observed data in a 2-dimensional projection (nucleation count, area fraction). A parametric analysis is performed; the projection phenomenon is shown to be significant in cases where nucleation is occurring continuously with a relatively large variance. In general, area fraction on a projection plane will overestimate the volume fraction within the sample and the nuclei count recorded on the projection plane will underestimate the number of real nucleation events. The statistical framework given in this manuscript provides a methodology to deal with the differences between the observed (projected) data and the real (volumetric) measures.
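A minimal sketch of the underlying JMAK (Avrami) transformed-fraction law referenced above, with an illustrative rate constant and exponent (the 2-D projection correction developed in the paper is not reproduced here):

```python
# Hedged sketch of the JMAK (Avrami) model: transformed volume fraction vs time.
import numpy as np

t = np.linspace(0.0, 100.0, 101)      # seconds
K, n = 1e-4, 2.5                      # assumed Avrami rate constant and exponent
volume_fraction = 1.0 - np.exp(-K * t**n)
print("transformed volume fraction at t = 60 s:",
      round(float(np.interp(60.0, t, volume_fraction)), 3))
```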
Critical aspects of data analysis for quantification in laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Motto-Ros, V.; Syvilay, D.; Bassel, L.; Negre, E.; Trichard, F.; Pelascini, F.; El Haddad, J.; Harhira, A.; Moncayo, S.; Picard, J.; Devismes, D.; Bousquet, B.
2018-02-01
In this study, a collaborative contest focused on LIBS data processing has been conducted in an original way since the participants did not share the same samples to be analyzed on their own LIBS experiments but a set of LIBS spectra obtained from one single experiment. Each participant was asked to provide the predicted concentrations of several elements for two glass samples. The analytical contest revealed a wide diversity of results among participants, even when the same spectral lines were considered for the analysis. Then, a parametric study was conducted to investigate the influence of each step during the data processing. This study was based on several analytical figures of merit such as the determination coefficient, uncertainty, limit of quantification and prediction ability (i.e., trueness). Then, it was possible to interpret the results provided by the participants, emphasizing the fact that the type of data extraction, baseline modeling as well as the calibration model play key roles in the quantification performance of the technique. This work provides a set of recommendations based on a systematic evaluation of the quantification procedure with the aim of optimizing the methodological steps toward the standardization of LIBS.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - a combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
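The following toy sketch (an assumption of this rewrite, not CM2.1 or the study's actual assimilation system) shows the core idea of ensemble-based parameter estimation: treat the uncertain parameter as part of the ensemble and apply an ensemble Kalman update against an observation.

```python
# Hedged sketch of stochastic ensemble Kalman parameter estimation on a toy model.
import numpy as np

rng = np.random.default_rng(4)
n_ens = 50
true_param = 0.8
obs_err = 0.05
obs = 2.0 * true_param + rng.normal(0.0, obs_err)     # single noisy observation

param_ens = rng.normal(0.5, 0.2, size=n_ens)          # prior parameter ensemble
pred_ens = 2.0 * param_ens                            # model-predicted observation per member

# Kalman gain from the parameter-observation ensemble covariance
cov_py = np.cov(param_ens, pred_ens)[0, 1]
gain = cov_py / (np.var(pred_ens, ddof=1) + obs_err**2)

# perturbed-observation update of each ensemble member
obs_pert = obs + rng.normal(0.0, obs_err, size=n_ens)
param_post = param_ens + gain * (obs_pert - pred_ens)
print("prior mean %.3f -> posterior mean %.3f (true %.1f)"
      % (param_ens.mean(), param_post.mean(), true_param))
```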
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success in modeling the 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unmerited assumptions made in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
The Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates for analyzing survival data than Cox. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy between 2006 and March 2016. To investigate the factors influencing the event time of neuropathy, variables significant in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under the ROC curve were used to evaluate the relative goodness of fit of the models and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using the Kaplan-Meier method, the survival time to neuropathy was estimated as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis with the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest AIC value, was the best-fitting model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitting model.
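As a hedged illustration of the model-comparison step only (simulated times, no clinical covariates, and simple two-parameter distributions in place of the study's full regression models), the sketch below fits Weibull and log-normal survival models to right-censored data by maximum likelihood and compares their AIC values:

```python
# Hedged sketch: censored MLE fits of two parametric survival models, compared by AIC.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
n = 300
t_event = rng.lognormal(mean=4.0, sigma=0.6, size=n)   # true event times (months)
t_cens = rng.uniform(20.0, 150.0, size=n)              # censoring times
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)              # 1 = event observed

def neg_loglik(params, dist):
    # parameters are kept on the log scale to enforce positivity
    d = dist(*np.exp(params))
    ll = event * d.logpdf(time) + (1 - event) * d.logsf(time)
    return -np.sum(ll)

def aic(dist, x0):
    res = optimize.minimize(neg_loglik, x0=x0, args=(dist,), method="Nelder-Mead")
    return 2 * len(x0) + 2 * res.fun                   # AIC = 2k - 2 logL

weibull = lambda c, scale: stats.weibull_min(c, scale=scale)
lognorm = lambda s, scale: stats.lognorm(s, scale=scale)
print("Weibull AIC   :", round(aic(weibull, np.log([1.5, 60.0])), 1))
print("log-normal AIC:", round(aic(lognorm, np.log([0.6, 55.0])), 1))
```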
QUANTIFYING ALTERNATIVE SPLICING FROM PAIRED-END RNA-SEQUENCING DATA.
Rossell, David; Stephan-Otto Attolini, Camille; Kroiss, Manuel; Stöcker, Almond
2014-03-01
RNA-sequencing has revolutionized biomedical research and, in particular, our ability to study gene alternative splicing. The problem has important implications for human health, as alternative splicing may be involved in malfunctions at the cellular level and multiple diseases. However, the high-dimensional nature of the data and the existence of experimental biases pose serious data analysis challenges. We find that the standard data summaries used to study alternative splicing are severely limited, as they ignore a substantial amount of valuable information. Current data analysis methods are based on such summaries and are hence sub-optimal. Further, they have limited flexibility in accounting for technical biases. We propose novel data summaries and a Bayesian modeling framework that overcome these limitations and determine biases in a non-parametric, highly flexible manner. These summaries adapt naturally to the rapid improvements in sequencing technology. We provide efficient point estimates and uncertainty assessments. The approach allows the study of alternative splicing patterns for individual samples and can also be the basis for downstream analyses. We found a several-fold improvement in estimation mean square error compared to popular approaches in simulations, and substantially higher consistency between replicates in experimental data. Our findings indicate the need to adjust the routine summarization and analysis of alternative splicing RNA-seq studies. We provide a software implementation in the R package casper.
Stability analysis of magnetized neutron stars - a semi-analytic approach
NASA Astrophysics Data System (ADS)
Herbrik, Marlene; Kokkotas, Kostas D.
2017-04-01
We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme on polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water with distinct biogeochemical and thermal properties mix and interact with each other. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, for a comprehensive understanding of the biogeochemical processes in the hyporheic zone, a coupled thermo-hydro-biogeochemical model is needed. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we 1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, 2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, 3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and 4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty at each level of the hierarchy. The objectives of the research are to 1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and 2) quantify the carbon consumption under different climate change scenarios in the hyporheic zone.
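A minimal sketch of a variance-based (first-order Sobol) sensitivity analysis on a toy stand-in for the coupled model; the three inputs, their ranges, and the response function are illustrative assumptions, and the pick-freeze estimator below is one common choice rather than the study's exact implementation.

```python
# Hedged sketch: first-order Sobol indices via a pick-freeze (Saltelli-style) estimator.
import numpy as np

rng = np.random.default_rng(6)
n, k = 20000, 3

def model(x):
    # toy response: driven mainly by x0, weakly by x1, negligibly by x2
    return np.sin(x[:, 0]) + 0.1 * x[:, 1] ** 2 + 0.05 * x[:, 2]

A = rng.uniform(-np.pi, np.pi, size=(n, k))
B = rng.uniform(-np.pi, np.pi, size=(n, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # replace column i with the B sample
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y   # first-order index estimator
    print(f"first-order index for input {i}: {S1:.2f}")
```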
Studies of QCD structure in high-energy collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadolsky, Pavel M.
2016-06-26
"Studies of QCD structure in high-energy collisions" is a research project in theoretical particle physics at Southern Methodist University funded by US DOE Award DE-SC0013681. The award furnished bridge funding for one year (2015/04/15-2016/03/31) between the periods funded by Nadolsky's DOE Early Career Research Award DE-SC0003870 (in 2010-2015) and a DOE grant DE-SC0010129 for the SMU Department of Physics (starting in April 2016). The primary objective of the research is to provide theoretical predictions for Run-2 of the CERN Large Hadron Collider (LHC). The LHC physics program relies on state-of-the-art predictions in the field of quantum chromodynamics. The main effort of our group went into the global analysis of parton distribution functions (PDFs) employed by the bulk of LHC computations. Parton distributions describe the internal structure of protons during ultrarelativistic collisions. A new generation of CTEQ parton distribution functions (PDFs), CT14, was released in summer 2015 and quickly adopted by the HEP community. The new CT14 parametrizations of PDFs were obtained using benchmarked NNLO calculations and the latest data from LHC and Tevatron experiments. The group developed advanced methods for the PDF analysis and estimation of uncertainties in LHC predictions associated with the PDFs. We invented and refined a new 'meta-parametrization' technique that streamlines the usage of PDFs in Higgs boson production and other numerous LHC processes, by combining PDFs from various groups using multivariate stochastic sampling. In 2015, the PDF4LHC working group recommended that LHC experimental collaborations use 'meta-parametrizations' as a standard technique for computing PDF uncertainties. Finally, to include new QCD processes into the global fits, our group worked on several (N)NNLO calculations.
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Parametric models of reflectance spectra for dyed fabrics
NASA Astrophysics Data System (ADS)
Aiken, Daniel C.; Ramsey, Scott; Mayo, Troy; Lambrakos, Samuel G.; Peak, Joseph
2016-05-01
This study examines parametric modeling of NIR reflectivity spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dye considered for prototype analysis is triarylamine dye. The fabrics considered are camouflage textiles characterized by color variations. The results of this study provide validation of the constructed parametric models, within reasonable error tolerances for practical applications, including NIR spectral characteristics in camouflage textiles, for purposes of simulating NIR spectra corresponding to various dye concentrations in host fabrics, and potentially to mixtures of dyes.
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
Numerical prediction of 3-D ejector flows
NASA Technical Reports Server (NTRS)
Roberts, D. W.; Paynter, G. C.
1979-01-01
The use of parametric flow analysis, rather than parametric scale testing, to support the design of an ejector system offers a number of potential advantages. The application of available 3-D flow analyses to the design of ejectors can be subdivided into several key elements. These are numerics, turbulence modeling, data handling and display, and testing in support of analysis development. Experimental and predicted jet exhaust flows for the Boeing 727 aircraft are examined.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gasbarri, Paolo; Monti, Riccardo; Campolo, Giovanni; Toglia, Chiara
2012-12-01
The design of large space structures (LSS) requires the use of design and analysis tools that include different disciplines. For such spacecraft it is in fact mandatory that mechanical design and guidance, navigation and control (GNC) design are developed within a common framework. One of the key points in the development of LSS is related to dynamic phenomena. These phenomena usually lead to two different interpretations. The former is related to the overall motion of the spacecraft, i.e., the motion of the centre of gravity and the motion around the centre of gravity. The latter is related to the local motion of the elastic elements, which leads to oscillations. These oscillations have in turn a disturbing effect on the motion of the spacecraft. From an engineering perspective, the structural model of flexible spacecraft is generally obtained via FEM, involving thousands of degrees of freedom (DOFs). Many of them are not significant from the attitude control point of view. One of the procedures to reduce the structural DOFs is tied to the modal decomposition technique. In the present paper a technique to develop a control-oriented structural model will be proposed. Starting from a detailed FE model of the spacecraft and using a special modal condensation approach, a continuous model is defined. With this transformation the number of DOFs necessary to study the coupled elastic/rigid dynamics is reduced. The final dynamic model will be suitable for the control design implementation. In order to properly design a satellite controller, it is important to recall that the characteristic parameters of the satellite are uncertain. The effect that uncertainties have on control performance must be investigated. A possible solution is that, after the attitude controller is designed on the nominal model, a Verification and Validation (V&V) process is performed to guarantee correct functionality under a large number of scenarios. The V&V process can be very lengthy and expensive: difficulty and cost increase with the overall system dimension, which depends on the number of uncertainties. Uncertain parameters have to be investigated parametrically to determine the robust performance of the control laws via gridding approaches. In particular, in this paper we propose to consider two methods: (i) a conventional Monte Carlo analysis, and (ii) a worst-case analysis, i.e., an optimization process to find an estimate of the true worst-case behaviour. Both techniques allow one to verify that the design is robust enough to meet the system performance specification in the presence of uncertainties.
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
NASA Astrophysics Data System (ADS)
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to assess a fixed covariate in right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analyses of the parametric regression survival model. Bias, mean bias, and coverage probability were used to assess performance. Different sample sizes (50, 100, 150, and 200) were employed to examine the impact of the parametric regression model on right-censored data. The R statistical software was used to implement the simulation of right-censored data. The final simulation model was then compared with right-censored lung cancer data from Malaysia. It was found that varying the shape and scale parameters across different sample sizes helps to improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for the survival data of lung cancer patients in Malaysia.
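A minimal sketch of one simulation replicate of this kind is shown below: right-censored Weibull survival times with a single fixed covariate are generated and a Weibull proportional-hazards model is fitted by maximum likelihood. The study used R; numpy/scipy is used here purely for illustration, and the true parameter values and censoring mechanism are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, shape, scale, beta = 200, 1.5, 0.05, 0.7          # assumed true values

x = rng.binomial(1, 0.5, size=n)                      # fixed binary covariate
u = rng.uniform(size=n)
# Inverse-CDF sampling from a Weibull PH model: S(t|x) = exp(-scale * t^shape * e^(beta*x))
t_event = (-np.log(u) / (scale * np.exp(beta * x))) ** (1.0 / shape)
t_cens = rng.exponential(scale=np.quantile(t_event, 0.8), size=n)  # assumed censoring
t = np.minimum(t_event, t_cens)
d = (t_event <= t_cens).astype(float)                 # 1 = event, 0 = censored

def negloglik(p):
    log_scale, log_shape, b = p
    lam, gam = np.exp(log_scale), np.exp(log_shape)
    eta = b * x
    log_h = np.log(lam) + np.log(gam) + (gam - 1.0) * np.log(t) + eta
    log_S = -lam * t ** gam * np.exp(eta)
    return -np.sum(d * log_h + log_S)

fit = minimize(negloglik, x0=[np.log(0.1), 0.0, 0.0], method="Nelder-Mead")
print("estimates (scale, shape, beta):", np.exp(fit.x[0]), np.exp(fit.x[1]), fit.x[2])
print("bias in beta:", fit.x[2] - beta)
```

Repeating this replicate many times and averaging the bias and coverage over replications gives the performance summaries described in the abstract.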
Chaotic map clustering algorithm for EEG analysis
NASA Astrophysics Data System (ADS)
Bellotti, R.; De Carlo, F.; Stramaglia, S.
2004-03-01
The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with that obtained through parametric algorithms, such as K-means and deterministic annealing, and with a supervised multi-layer perceptron. While supervised neural networks need a training phase, performed by means of data tagged by the genetic test, and the parametric methods require an a priori choice of the number of classes to find, the chaotic map clustering gives natural evidence of the pathological class, without any training or supervision, thus providing a new, efficient methodology for the recognition of patterns associated with Huntington's disease.
Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung
2014-10-01
Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping, which produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any particular compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly used compartment models and the major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.
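As one concrete GA example, the sketch below implements the Logan plot for a reversible tracer with a plasma input, where the slope of the late-time linear portion estimates the total distribution volume V_T. The input function, the one-tissue-compartment tissue curve, and the t* = 30 min cut-off are simulated assumptions for demonstration only.

```python
import numpy as np

# Logan graphical analysis (reversible tracer, plasma input): after time t*,
# plotting  int_0^t C_T / C_T(t)  against  int_0^t C_p / C_T(t)  is linear
# with slope equal to the total distribution volume V_T.
t = np.linspace(0.1, 90.0, 180)                      # minutes
Cp = 10.0 * t * np.exp(-t / 4.0)                     # assumed plasma input function
K1, k2 = 0.3, 0.1                                    # one-tissue-compartment rates
dt = t[1] - t[0]
# Tissue TAC via discrete convolution: C_T(t) = K1 * int Cp(s) exp(-k2 (t-s)) ds
Ct = K1 * np.convolve(Cp, np.exp(-k2 * t))[: len(t)] * dt

int_Cp = np.cumsum(Cp) * dt
int_Ct = np.cumsum(Ct) * dt
x_logan = int_Cp / Ct
y_logan = int_Ct / Ct

late = t > 30.0                                      # assumed linear region (t* = 30 min)
slope, intercept = np.polyfit(x_logan[late], y_logan[late], 1)
print("Logan V_T estimate:", slope, "(true K1/k2 =", K1 / k2, ")")
```

Applying this fit voxel by voxel to a dynamic image yields the V_T parametric map discussed above.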
Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.
2015-01-01
The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
Community drinking water quality monitoring data: utility for public health research and practice.
Jones, Rachael M; Graber, Judith M; Anderson, Robert; Rockne, Karl; Turyk, Mary; Stayner, Leslie T
2014-01-01
Environmental Public Health Tracking (EPHT) tracks the occurrence and magnitude of environmental hazards and associated adverse health effects over time. The EPHT program has formally expanded its scope to include finished drinking water quality. Our objective was to describe the features, strengths, and limitations of using finished drinking water quality data from community water systems (CWSs) for EPHT applications, focusing on atrazine and nitrogen compounds in 8 Midwestern states. Water quality data were acquired after meeting with state partners and reviewed and merged for analysis. Data and the coding of variables, particularly with respect to censored results (nondetects), were not standardized between states. Monitoring frequency varied between CWSs and between atrazine and nitrates, but this was in line with regulatory requirements. Cumulative distributions of all contaminants were not the same in all states (Peto-Prentice test P < .001). Atrazine results were highly censored in all states (76.0%-99.3%); higher concentrations were associated with increased measurement frequency and surface water as the CWS source water type. Nitrate results showed substantial state-to-state variability in censoring (20.5%-100%) and in associations between concentrations and the CWS source water type. Statistical analyses of these data are challenging due to high rates of censoring and uncertainty about the appropriateness of parametric assumptions for time-series data. Although monitoring frequency was consistent with regulations, the magnitude of time gaps coupled with uncertainty about CWS service areas may limit linkage with health outcome data.
An appraisal of statistical procedures used in derivation of reference intervals.
Ichihara, Kiyoshi; Boyd, James C
2010-11-01
When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to the final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. The Box-Cox power transformation has often been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles of the sorted data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of the transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small, owing to additional variability in parametrically determined RLs introduced by estimation of the parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
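The contrast between the two computation routes can be sketched as follows: a parametric reference interval obtained by Box-Cox transformation and back-transformation of mean ± 1.96 SD, versus the non-parametric 2.5th/97.5th percentiles. The simulated skewed analyte values (400 reference individuals) are an assumption for illustration.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(2)
values = rng.lognormal(mean=1.0, sigma=0.4, size=400)   # assumed 400 reference individuals

# Parametric route: Box-Cox transformation toward a Gaussian shape,
# mean +/- 1.96 SD on the transformed scale, then back-transformation.
z, lmbda = stats.boxcox(values)
lo_z = z.mean() - 1.96 * z.std(ddof=1)
hi_z = z.mean() + 1.96 * z.std(ddof=1)
param_ri = inv_boxcox(np.array([lo_z, hi_z]), lmbda)

# Non-parametric route: 2.5th and 97.5th percentiles of the sorted data.
nonparam_ri = np.percentile(values, [2.5, 97.5])

print("parametric RI:    ", np.round(param_ri, 2))
print("non-parametric RI:", np.round(nonparam_ri, 2))
```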
Study of Aerothermodynamic Modeling Issues Relevant to High-Speed Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.
2014-01-01
This paper examines the application of state-of-the-art coupled ablation and radiation simulations to high-speed sample return vehicles, such as those returning from Mars or an asteroid. A defining characteristic of these entries is that the surface recession rates and temperatures are driven by nonequilibrium convective and radiative heating through a boundary layer with significant surface blowing and ablation products. Measurements relevant to validating the simulation of these phenomena are reviewed and the Stardust entry is identified as providing the best relevant measurements. A coupled ablation and radiation flowfield analysis is presented that implements a finite-rate surface chemistry model. Comparisons between this finite-rate model and an equilibrium ablation model show that, while good agreement is seen for diffusion-limited oxidation cases, the finite-rate model predicts up to 50% lower char rates than the equilibrium model at sublimation conditions. Both the equilibrium and finite-rate models predict significant negative mass flux at the surface due to sublimation of atomic carbon. A sensitivity analysis to flowfield and surface chemistry rates shows that, for a sample return capsule at 10, 12, and 14 km/s, the sublimation rates for C and C3 provide the largest changes to the convective flux, radiative flux, and char rate. A parametric uncertainty analysis of the radiative heating due to radiation modeling parameters indicates uncertainties ranging from 27% at 10 km/s to 36% at 14 km/s. Applying the developed coupled analysis to the Stardust entry results in temperatures within 10% of those inferred from observations, and final recession values within 20% of measurements, which improves upon the 60% over-prediction at the stagnation point obtained through an uncoupled analysis. Emission from CN Violet is shown to be over-predicted by nearly an order of magnitude, which is consistent with the results of previous independent analyses. Finally, the coupled analysis is applied to a 14 km/s Earth entry representative of a Mars sample return. Although the radiative heating provides a larger fraction of the total heating, the influence of ablation and radiation on the flowfield is shown to be similar to Stardust.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq) performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection. PMID:27258018
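As a simple illustration of automatic tremor detection from accelerometry, the sketch below flags a segment as tremor when the power in an assumed 4-12 Hz band exceeds a fraction of total power (Welch periodogram). The sampling rate, band limits, and threshold are assumptions, and this is not the exact "High Freq" method evaluated in the study.

```python
import numpy as np
from scipy.signal import welch

FS = 100.0          # Hz, assumed accelerometer sampling rate
BAND = (4.0, 12.0)  # Hz, assumed tremor band
THRESHOLD = 0.4     # assumed fraction of total power

def is_tremor(segment, fs=FS, band=BAND, threshold=THRESHOLD):
    """Flag a segment as tremor when the band-power fraction exceeds the threshold."""
    f, pxx = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    in_band = (f >= band[0]) & (f <= band[1])
    band_fraction = pxx[in_band].sum() / pxx[1:].sum()   # skip the DC bin
    return band_fraction > threshold

# Example: 3-second synthetic segments, one containing a 6 Hz tremor component
t = np.arange(0, 3.0, 1.0 / FS)
rng = np.random.default_rng(3)
quiet = 0.05 * rng.standard_normal(t.size)
tremor = 0.5 * np.sin(2 * np.pi * 6.0 * t) + quiet
print(is_tremor(quiet), is_tremor(tremor))   # expected: False True
```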
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Key, Kerry; Bodin, Thomas; Myer, David; Constable, Steven
2014-12-01
We apply a reversible-jump Markov chain Monte Carlo method to sample the Bayesian posterior model probability density function of 2-D seafloor resistivity as constrained by marine controlled source electromagnetic data. This density function of earth models conveys information on which parts of the model space are illuminated by the data. Whereas conventional gradient-based inversion approaches require subjective regularization choices to stabilize this highly non-linear and non-unique inverse problem and provide only a single solution with no model uncertainty information, the method we use entirely avoids model regularization. The result of our approach is an ensemble of models that can be visualized and queried to provide meaningful information about the sensitivity of the data to the subsurface, and the level of resolution of model parameters. We represent models in 2-D using a Voronoi cell parametrization. To make the 2-D problem practical, we use a source-receiver common midpoint approximation with 1-D forward modelling. Our algorithm is transdimensional and self-parametrizing where the number of resistivity cells within a 2-D depth section is variable, as are their positions and geometries. Two synthetic studies demonstrate the algorithm's use in the appraisal of a thin, segmented, resistive reservoir which makes for a challenging exploration target. As a demonstration example, we apply our method to survey data collected over the Scarborough gas field on the Northwest Australian shelf.
A TIERED APPROACH TO PERFORMING UNCERTAINTY ANALYSIS IN CONDUCTING EXPOSURE ANALYSIS FOR CHEMICALS
The WHO/IPCS draft Guidance Document on Characterizing and Communicating Uncertainty in Exposure Assessment provides guidance on recommended strategies for conducting uncertainty analysis as part of human exposure analysis. Specifically, a tiered approach to uncertainty analysis ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coolens, Catherine, E-mail: catherine.coolens@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario
2015-01-01
Objectives: Development of perfusion imaging as a biomarker requires more robust methodologies for quantification of tumor physiology that allow assessment of volumetric tumor heterogeneity over time. This study proposes a parametric method for automatically analyzing perfused tissue from volumetric dynamic contrast-enhanced (DCE) computed tomography (CT) scans and assesses whether this 4-dimensional (4D) DCE approach is more robust and accurate than conventional, region-of-interest (ROI)-based CT methods in quantifying tumor perfusion with preliminary evaluation in metastatic brain cancer. Methods and Materials: Functional parameter reproducibility and analysis of sensitivity to imaging resolution and arterial input function were evaluated in image sets acquired from a 320-slice CT with a controlled flow phantom and patients with brain metastases, whose treatments were planned for stereotactic radiation surgery and who consented to a research ethics board-approved prospective imaging biomarker study. A voxel-based temporal dynamic analysis (TDA) methodology was used at baseline, at day 7, and at day 20 after treatment. The ability to detect changes in kinetic parameter maps in clinical data sets was investigated for both 4D TDA and conventional 2D ROI-based analysis methods. Results: A total of 7 brain metastases in 3 patients were evaluated over the 3 time points. The 4D TDA method showed improved spatial efficacy and accuracy of perfusion parameters compared to ROI-based DCE analysis (P<.005), with a reproducibility error of less than 2% when tested with DCE phantom data. Clinically, changes in the transfer constant from the blood plasma into the extracellular extravascular space (Ktrans) were seen when using TDA, with substantially smaller errors than the 2D method on both day 7 post radiation surgery (±13%; P<.05) and by day 20 (±12%; P<.04). Standard methods showed a decrease in Ktrans but with large uncertainty (111.6 ± 150.5)%. Conclusions: Parametric voxel-based analysis of 4D DCE CT data resulted in greater accuracy and reliability in measuring changes in perfusion CT-based kinetic metrics, which have the potential to be used as biomarkers in patients with metastatic brain cancer.
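For readers unfamiliar with voxel-wise kinetic mapping, the sketch below fits the standard Tofts model to simulated tissue enhancement curves to produce a per-voxel Ktrans estimate. The arterial input function, parameter values, and use of curve_fit are illustrative assumptions, not the 4D TDA methodology of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard Tofts model:
#   C_t(t) = Ktrans * int_0^t Cp(tau) * exp(-(Ktrans/ve) * (t - tau)) dtau
t = np.linspace(0.0, 5.0, 60)                 # minutes
dt = t[1] - t[0]
aif = 5.0 * t * np.exp(-t / 0.5)              # assumed population-like AIF

def tofts(t, ktrans, ve):
    kep = ktrans / ve
    return ktrans * np.convolve(aif, np.exp(-kep * t))[: len(t)] * dt

# Simulate a handful of "voxels" with known parameters plus noise
rng = np.random.default_rng(4)
true_ktrans, true_ve = 0.25, 0.3
voxels = [tofts(t, true_ktrans, true_ve) + 0.01 * rng.standard_normal(t.size)
          for _ in range(4)]

ktrans_map = []
for ct in voxels:
    popt, _ = curve_fit(tofts, t, ct, p0=[0.1, 0.2], bounds=([1e-4, 1e-3], [2.0, 1.0]))
    ktrans_map.append(popt[0])
print("fitted Ktrans per voxel:", np.round(ktrans_map, 3), "true:", true_ktrans)
```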
Yadage and Packtivity - analysis preservation using parametrized workflows
NASA Astrophysics Data System (ADS)
Cranmer, Kyle; Heinrich, Lukas
2017-10-01
Preserving data analyses produced by the collaborations at LHC in a parametrized fashion is crucial in order to maintain reproducibility and re-usability. We argue for a declarative description in terms of individual processing steps - “packtivities” - linked through a dynamic directed acyclic graph (DAG) and present an initial set of JSON schemas for such a description and an implementation - “yadage” - capable of executing workflows of analysis preserved via Linux containers.
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying components, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) so that the latter can infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation with singular-value-decomposition-based method, the parametric time-frequency analysis with filter-based method, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.
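The demodulation step alone can be illustrated for a single FM component as in the sketch below: the instantaneous phase is estimated from the analytic signal, a smooth phase law is fitted, and its conjugate is applied so that the component collapses to a narrow band. The signal parameters are assumptions, and separating overlapped components, which is the focus of the paper, additionally requires the latent feature model.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50.0 * t + 15.0 * t ** 2))     # linear-FM test signal (50-110 Hz)

analytic = hilbert(x)
phase = np.unwrap(np.angle(analytic))
coeffs = np.polyfit(t, phase, deg=3)                   # smooth phase-law estimate
phase_fit = np.polyval(coeffs, t)

# Remove the estimated FM but keep a 50 Hz carrier, so the demodulated
# component concentrates near 50 Hz instead of sweeping across the band.
demod = analytic * np.exp(-1j * (phase_fit - 2 * np.pi * 50.0 * t))
spec = np.abs(np.fft.rfft(demod.real))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print("peak frequency after demodulation:", freqs[np.argmax(spec)], "Hz")
```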
Jenouvrier, Stéphanie; Holland, Marika; Stroeve, Julienne; Barbraud, Christophe; Weimerskirch, Henri; Serreze, Mark; Caswell, Hal
2012-09-01
Sea ice conditions in the Antarctic affect the life cycle of the emperor penguin (Aptenodytes forsteri). We present a population projection for the emperor penguin population of Terre Adélie, Antarctica, by linking demographic models (stage-structured, seasonal, nonlinear, two-sex matrix population models) to sea ice forecasts from an ensemble of IPCC climate models. Based on maximum likelihood capture-mark-recapture analysis, we find that seasonal sea ice concentration anomalies (SICa) affect adult survival and breeding success. Demographic models show that both deterministic and stochastic population growth rates are maximized at intermediate values of annual SICa, because neither the complete absence of sea ice, nor heavy and persistent sea ice, would provide satisfactory conditions for the emperor penguin. We show that under some conditions the stochastic growth rate is positively affected by the variance in SICa. We identify an ensemble of five general circulation climate models whose output closely matches the historical record of sea ice concentration in Terre Adélie. The output of this ensemble is used to produce stochastic forecasts of SICa, which in turn drive the population model. Uncertainty is included by incorporating multiple climate models and by a parametric bootstrap procedure that includes parameter uncertainty due to both model selection and estimation error. The median of these simulations predicts a decline of the Terre Adélie emperor penguin population of 81% by the year 2100. We find a 43% chance of an even greater decline, of 90% or more. The uncertainty in population projections reflects large differences among climate models in their forecasts of future sea ice conditions. One such model predicts population increases over much of the century, but overall, the ensemble of models predicts that population declines are far more likely than population increases. We conclude that climate change is a significant risk for the emperor penguin. Our analytical approach, in which demographic models are linked to IPCC climate models, is powerful and generally applicable to other species and systems. © 2012 Blackwell Publishing Ltd.
Drought and heatwaves in Europe: historical reconstruction and future projections
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Rakovec, Olda; Wood, Eric; Sheffield, Justin; Pan, Ming; Wanders, Niko; Prudhomme, Christel
2017-04-01
Heat waves and droughts are creeping hydro-meteorological events that may bring societies and natural systems to their limits by inducing large famines, increasing health risks to the population, creating drinking and irrigation water shortfalls, inducing natural fires and degradation of soil and water quality, and in many cases causing large socio-economic losses. Europe, in particular, has endured large scale drought-heat-wave events during the recent past (e.g., the 2003 European drought), which have induced enormous socio-economic losses as well as casualties. Recent studies showed that the prediction of droughts and heatwaves is subject to large-scale forcing and parametric uncertainties that lead to considerable uncertainties in the projections of extreme characteristics such as drought magnitude/duration and area under drought, among others. Future projections are also heavily influenced by the RCP scenario uncertainty as well as the coarser spatial resolution of the models. The EDgE project funded by the Copernicus programme (C3S) provides a unique opportunity to investigate the evolution of droughts and heatwaves from 1950 until 2099 over the Pan-EU domain at a scale of 5x5 km2. In this project, high-resolution multi-model hydrologic simulations with the mHM (www.ufz.de/mhm), Noah-MP, VIC and PCR-GLOBWB have been completed for the historical period 1955-2015. Climate projections have been carried out with five CMIP-5 GCMs: GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM, NorESM1-M from 2006 to 2099 under RCP2.6 and RCP8.5. Using these unprecedented multi-model simulations, daily soil moisture index and temperature anomalies from 1955 to 2099 will be estimated. Using the procedure proposed by Samaniego et al. (2013), the probabilities of exceeding the benchmark events in the reference period 1980-2010 will be estimated for each RCP scenario. References: http://climate.copernicus.eu/edge-end-end-demonstrator-improved-decision-making-water-sector-europe Samaniego, L., R. Kumar, and M. Zink, 2013: Implications of parameter uncertainty on soil moisture drought analysis in Germany. J. Hydrometeor., 14, 47-68, doi:10.1175/JHM-D-12-075.1. Samaniego, L., et al. 2016: Propagation of forcing and model uncertainties on to hydrological drought characteristics in a multi-model century-long experiment in large river basins. Climatic Change, 1-15.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data is presented using the sole assumption that non-Gaussian systematic noise is statistically independent of the desired light-curve signals. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
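A toy version of such blind detrending can be sketched with FastICA from scikit-learn: a synthetic transit-like dip is mixed with non-Gaussian systematic trends and the components are recovered without prior information. The source shapes and mixing matrix are assumptions for demonstration, not the Hubble/NICMOS pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n = 500
t = np.linspace(0.0, 1.0, n)

transit = np.where((t > 0.45) & (t < 0.55), -1.0, 0.0)        # box-shaped dip
systematic1 = np.sign(np.sin(2 * np.pi * 7 * t))              # assumed instrument cycling
systematic2 = rng.laplace(size=n)                             # non-Gaussian jitter
sources = np.column_stack([transit, systematic1, systematic2])

mixing = rng.uniform(0.5, 1.5, size=(3, 3))                   # unknown mixing matrix
observations = sources @ mixing.T                             # "raw light curves"

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observations)
# Identify the recovered component most correlated with the true transit shape
corr = [abs(np.corrcoef(recovered[:, k], transit)[0, 1]) for k in range(3)]
print("best match correlation with transit:", round(max(corr), 3))
```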
Predicting climate change: Uncertainties and prospects for surmounting them
NASA Astrophysics Data System (ADS)
Ghil, Michael
2008-03-01
General circulation models (GCMs) are among the most detailed and sophisticated models of natural phenomena in existence. Still, the lack of robust and efficient subgrid-scale parametrizations for GCMs, along with the inherent sensitivity to initial data and the complex nonlinearities involved, present a major and persistent obstacle to narrowing the range of estimates for end-of-century warming. Estimating future changes in the distribution of climatic extrema is even more difficult. Brute-force tuning the large number of GCM parameters does not appear to help reduce the uncertainties. Andronov and Pontryagin (1937) proposed structural stability as a way to evaluate model robustness. Unfortunately, many real-world systems proved to be structurally unstable. We illustrate these concepts with a very simple model for the El Niño--Southern Oscillation (ENSO). Our model is governed by a differential delay equation with a single delay and periodic (seasonal) forcing. Like many of its more or less detailed and realistic precursors, this model exhibits a Devil's staircase. We study the model's structural stability, describe the mechanisms of the observed instabilities, and connect our findings to ENSO phenomenology. In the model's phase-parameter space, regions of smooth dependence on parameters alternate with rough, fractal ones. We then apply the tools of random dynamical systems and stochastic structural stability to the circle map and a torus map. The effect of noise with compact support on these maps is fairly intuitive: it is the most robust structures in phase-parameter space that survive the smoothing introduced by the noise. The nature of the stochastic forcing matters, thus suggesting that certain types of stochastic parametrizations might be better than others in achieving GCM robustness. This talk represents joint work with M. Chekroun, E. Simonnet and I. Zaliapin.
On the Way to Appropriate Model Complexity
NASA Astrophysics Data System (ADS)
Höge, M.
2016-12-01
When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition for model complexity is missing; assuming a definition is accepted, how can model complexity be quantified? And how can we use a quantified complexity to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as the "degrees of freedom" of a model, e.g., the effective number of parameters; (2) model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher its complexity (e.g., in bits); (3) complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of this non-static concept of model complexity, such as data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they make it possible to compare alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that is available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score (CRPS) for a variety of distributions F that come up in applied work. Two main classes are parametric distributions, like the normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; for example, Bayesian forecasts produced via Markov chain Monte Carlo take this form. In this way, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
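For a sample-based forecast, the CRPS can be estimated directly from predictive draws; the small numpy sketch below compares that estimator with the closed-form expression for a Gaussian forecast. This is an illustration of the scoring concept only, not the scoringRules R package itself.

```python
import numpy as np
from scipy.stats import norm

def crps_sample(draws, y):
    """Sample-based CRPS estimate: mean|X - y| - 0.5 * mean|X - X'|."""
    draws = np.asarray(draws)
    term1 = np.mean(np.abs(draws - y))
    term2 = 0.5 * np.mean(np.abs(draws[:, None] - draws[None, :]))
    return term1 - term2

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

rng = np.random.default_rng(6)
mu, sigma, y = 0.0, 1.0, 0.7
draws = rng.normal(mu, sigma, size=1000)          # e.g. MCMC predictive draws
print("sample CRPS:     ", crps_sample(draws, y))
print("closed-form CRPS:", crps_normal(mu, sigma, y))
```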
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators
2016-01-01
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and which also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and the degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which nothing is gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
A program for the Bayesian Neural Network in the ROOT framework
NASA Astrophysics Data System (ADS)
Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang
2011-12-01
We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a neural network as a discriminator, this new implementation has more advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29. Program summary: Program title: TMVA-BNN. Catalogue identifier: AEJX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: BSD license. No. of lines in distributed program, including test data, etc.: 5094. No. of bytes in distributed program, including test data, etc.: 1,320,987. Distribution format: tar.gz. Programming language: C++. Computer: Any computer system or cluster with a C++ compiler and a UNIX-like operating system. Operating system: Most UNIX/Linux systems; the application programs were thoroughly tested under Fedora and Scientific Linux CERN. Classification: 11.9. External routines: ROOT package version 5.29 or higher (http://root.cern.ch). Nature of problem: Non-parametric fitting of multivariate distributions. Solution method: An implementation of a neural network following the Bayesian statistical interpretation; uses the Laplace approximation for the Bayesian marginalizations; provides automatic complexity control and uncertainty estimation. Running time: Time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
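The magnitude of the extrapolation risk can be illustrated with a small numerical experiment: fit a normal distribution to small samples from a heavier-tailed truth and extrapolate the 99.9th percentile. The sample size and the lognormal "truth" are assumptions chosen only to make the point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps = 15, 2000
true_q999 = stats.lognorm.ppf(0.999, s=0.6)          # heavier-tailed "truth"

errors = []
for _ in range(reps):
    sample = stats.lognorm.rvs(s=0.6, size=n, random_state=rng)
    mu, sd = sample.mean(), sample.std(ddof=1)
    q999_hat = stats.norm.ppf(0.999, loc=mu, scale=sd)   # parametric tail extrapolation
    errors.append((q999_hat - true_q999) / true_q999)

errors = np.array(errors)
print("true 99.9th percentile:", round(true_q999, 2))
print("median relative error of the extrapolation:", round(np.median(errors), 2))
print("fraction of replicates underestimating the tail:", np.mean(errors < 0.0))
```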
Decorrelated jet substructure tagging using adversarial neural networks
NASA Astrophysics Data System (ADS)
Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas
2017-10-01
We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used are bias, mean square error, and the linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
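A short numerical check of this claim is sketched below: for the additive linear error model y = a + b*x + eps, the bias, mean square error, and correlation computed from data agree with the expressions implied by the model parameters. The parameter values are assumptions for illustration.

```python
import numpy as np

# Additive linear error model: y = a + b * x + eps, eps ~ N(0, sigma^2).
# The metrics follow from (a, b, sigma) and the moments of the "truth" x:
#   bias = a + (b - 1) * E[x]
#   MSE  = bias^2 + (b - 1)^2 * Var[x] + sigma^2
#   corr = b * sqrt(Var[x]) / sqrt(b^2 * Var[x] + sigma^2)
rng = np.random.default_rng(8)
a, b, sigma = 0.5, 0.9, 1.2                            # assumed error-model parameters
x = rng.normal(10.0, 3.0, size=200000)                 # "truth"
y = a + b * x + rng.normal(0.0, sigma, size=x.size)    # "measurement"

m, v = x.mean(), x.var()
bias_model = a + (b - 1.0) * m
mse_model = bias_model ** 2 + (b - 1.0) ** 2 * v + sigma ** 2
corr_model = b * np.sqrt(v) / np.sqrt(b ** 2 * v + sigma ** 2)

print("bias:", y.mean() - x.mean(), "vs model", bias_model)
print("MSE :", np.mean((y - x) ** 2), "vs model", mse_model)
print("corr:", np.corrcoef(x, y)[0, 1], "vs model", corr_model)
```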