Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while simultaneously taking into account input saturation and uncertain parameters with known bounds. First, to deal with the underactuated property of the QAV model, we decouple and reconstruct it as a cascaded structure consisting of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of a smooth saturation function and a smooth projection operator in the control design. Third, to ensure the stability of the overall QAV system, we develop a stability-analysis technique for cascaded systems in the presence of both input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures asymptotic convergence of the tracking errors to the origin. Simulation results are provided to illustrate the effectiveness of the proposed method.
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single-loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain plant parameters. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Xu, Shidong; Sun, Guanghui; Sun, Weichao
2017-01-01
In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Different from most existing strategies, a T-S fuzzy approximation approach is used to model the nonlinear dynamics of flexible spacecraft. Simultaneously, the physical constraints of the system, such as input delay, input saturation, and parameter uncertainties, are also accounted for in the fuzzy model. By employing the Lyapunov-Krasovskii method and convex optimization techniques, a novel robust controller is proposed to implement rest-to-rest attitude maneuvers for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example integrated with simulation results is provided to confirm the applicability and merits of the developed control strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
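The distinction between the two criteria can be made concrete with a toy simulation (hypothetical one-parameter linear "crop model" and synthetic data, not the authors' setup): MSEP_fixed pins the parameter at a point estimate, while MSEP_uncertain(X) averages the squared error over a parameter distribution and therefore picks up an additional model-variance term.

```python
import random
import statistics

random.seed(0)

def crop_model(x, theta):
    # hypothetical one-parameter "model": predicted yield vs. input x
    return theta * x

true_theta = 1.0
data = [(x, true_theta * x + random.gauss(0, 0.5)) for x in range(1, 21)]

# MSEP_fixed: structure, parameters and inputs all fixed (theta pinned at 0.9)
theta0 = 0.9
msep_fixed = statistics.mean((y - crop_model(x, theta0)) ** 2 for x, y in data)

# MSEP_uncertain(X): squared error averaged over the parameter distribution,
# estimated by simulation; it decomposes into a squared-bias term plus a
# model-variance term induced by parameter uncertainty
theta_draws = [random.gauss(0.9, 0.1) for _ in range(2000)]
msep_uncertain = statistics.mean(
    statistics.mean((y - crop_model(x, t)) ** 2 for t in theta_draws)
    for x, y in data
)

# the variance term alone: spread of predictions across parameter draws
var_term = statistics.mean(
    statistics.pvariance([crop_model(x, t) for t in theta_draws]) for x, _ in data
)
```

Because the extra variance term is nonnegative, MSEP_uncertain(X) is never smaller than the bias contribution alone; here it visibly exceeds MSEP_fixed.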
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This informs the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem: finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis covers the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Khazaee, Mostafa; Markazi, Amir H D; Omidi, Ehsan
2015-11-01
In this paper, a new Adaptive Fuzzy Predictive Sliding Mode Control (AFP-SMC) is presented for nonlinear systems with uncertain dynamics and unknown input delay. The control unit consists of a fuzzy inference system to approximate the ideal linearization control, together with a switching strategy to compensate for the estimation errors. Also, an adaptive fuzzy predictor is used to estimate the future values of the system states to compensate for the time delay. The adaptation laws are used to tune the controller and predictor parameters, which guarantee the stability based on a Lyapunov-Krasovskii functional. To evaluate the method effectiveness, the simulation and experiment on an overhead crane system are presented. According to the obtained results, AFP-SMC can effectively control the uncertain nonlinear systems, subject to input delays of known bound. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte Carlo approaches to exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters, non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction, with the posterior uncertainty due to insufficient data quantified. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
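The idea of a sparse PC surrogate can be sketched in miniature (pure Python, Hermite basis in two standard-normal inputs). Here the coefficients are estimated by plain Monte Carlo projection and then thresholded to zero, a crude stand-in for the weighted iterative Bayesian compressive sensing of the abstract; the model, sample count, and threshold are all illustrative assumptions.

```python
import random

random.seed(1)

def model(x1, x2):
    # hypothetical "expensive" model: depends strongly on x1, not at all on x2
    return 2.0 + 3.0 * x1

# probabilists' Hermite basis, orthogonal under the standard normal:
# psi = 1, x1, x2, x1^2 - 1, x2^2 - 1
basis = [lambda a, b: 1.0,
         lambda a, b: a,
         lambda a, b: b,
         lambda a, b: a * a - 1.0,
         lambda a, b: b * b - 1.0]
norms = [1.0, 1.0, 1.0, 2.0, 2.0]        # E[psi_k^2]

samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20000)]
coeffs = []
for psi, nrm in zip(basis, norms):
    # projection: c_k = E[f * psi_k] / E[psi_k^2], estimated by Monte Carlo
    c = sum(model(a, b) * psi(a, b) for a, b in samples) / len(samples) / nrm
    coeffs.append(c)

# enforce sparsity: drop terms whose coefficients are indistinguishable from zero
sparse = [c if abs(c) > 0.3 else 0.0 for c in coeffs]
```

With 50-100 parameters the basis explodes combinatorially, which is why the paper replaces this brute-force projection with a compressive-sensing fit that selects the few important terms directly.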
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1990-01-01
A game theoretic controller is developed for a linear time-invariant system with parameter uncertainties in the system and input matrices. The input-output decomposition modeling of the plant uncertainty is adopted. The uncertain dynamic system is represented as an internal feedback loop in which the system is assumed to be forced by a fictitious disturbance caused by the parameter uncertainty. By considering the input and the fictitious disturbance as two noncooperative players, a differential game problem is constructed. It is shown that the resulting time-invariant controller stabilizes the uncertain system for a prescribed uncertainty bound. This game theoretic controller is applied to the momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Inclusion of the external disturbance torque in the design procedure results in a dynamical feedback controller which consists of conventional PID control and a cyclic disturbance rejection filter. It is shown that the game theoretic design, compared with LQR or pole-placement designs, improves the stability robustness with respect to inertia variations.
Biehler, J; Wall, W A
2018-02-01
If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we compare different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
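A standard way to propagate a fuzzy kinetic parameter, far simpler than but in the same spirit as the FSPN machinery above, is alpha-cut interval propagation: for a model that is monotone in the parameter, each cut's output interval comes directly from the cut's endpoints. The exponential-decay model and triangular fuzzy number below are illustrative assumptions.

```python
import math

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (lo, peak, hi) at membership level alpha."""
    lo, peak, hi = tri
    return (lo + alpha * (peak - lo), hi - alpha * (hi - peak))

def decay_output(k, t=1.0, x0=10.0):
    # hypothetical monotone model: exponential decay of x0 at uncertain rate k
    return x0 * math.exp(-k * t)

rate = (0.5, 1.0, 1.5)          # fuzzy kinetic parameter: "about 1.0"
bands = {}
for alpha in (0.0, 0.5, 1.0):
    k_lo, k_hi = alpha_cut(rate, alpha)
    # the output decreases in k, so the interval endpoints swap through the model
    bands[alpha] = (decay_output(k_hi), decay_output(k_lo))
```

The resulting output bands are nested as alpha increases, collapsing to the crisp value at alpha = 1; in an FSPN the same cuts would bound the stochastic simulation outputs rather than a deterministic response.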
Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang
2017-07-01
This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of the extrema of the bounds of the dynamic response are determined by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
Feedforward/feedback control synthesis for performance and robustness
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang
1990-01-01
Both feedforward and feedback control approaches for uncertain dynamical systems are investigated. The control design objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant modeling uncertainty. Preshaping of an ideal, time-optimal control input using a 'tapped-delay' filter is shown to provide a rapid maneuver with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. The proposed feedforward/feedback control approach is robust for a certain class of uncertain dynamical systems, since the control input command computed for a given desired output does not depend on the plant parameters.
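The 'tapped-delay' preshaping idea can be sketched with a two-impulse zero-vibration (ZV) shaper acting on an undamped oscillator: half the step command is issued immediately and half is delayed by one half-period of the flexible mode, so the two excited oscillations cancel. This is a standard construction assumed here for illustration; the mode frequency and step sizes are arbitrary.

```python
import math

def simulate(omega, u_of_t, t_end=10.0, dt=0.001):
    """Integrate x'' + omega^2 x = omega^2 u(t) with semi-implicit Euler."""
    x, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        v += omega * omega * (u_of_t(t) - x) * dt
        x += v * dt
        t += dt
    return x, v

omega = 2.0
period = 2.0 * math.pi / omega

def step(t):
    return 1.0

def shaped_step(t):
    # ZV two-impulse 'tapped-delay' filter applied to the step command:
    # half now, half delayed by a half-period of the flexible mode
    return 0.5 + (0.5 if t >= period / 2 else 0.0)

x1, v1 = simulate(omega, step)          # unshaped command
x2, v2 = simulate(omega, shaped_step)   # preshaped command

def residual(x, v, omega):
    # amplitude of residual vibration about the setpoint u = 1
    return math.hypot(x - 1.0, v / omega)
```

For the unshaped step the residual amplitude stays near 1, while the shaped command leaves only numerical-integration residue; the cancellation degrades gracefully when omega is uncertain, which is the robustness trade-off the abstract refers to.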
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
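The core idea of treating input error as extra uncertain parameters can be sketched with a toy Metropolis sampler (hypothetical linear "transport model"; an input multiplier m with a Gaussian prior plays the role of the uncertain input error, loosely analogous to, but far simpler than, the SRH-1D setup; all values are illustrative).

```python
import math
import random

random.seed(4)

# hypothetical setup: the true inflow q_true drives a linear model y = k * q,
# but only a biased measurement q_obs of the inflow is available
q_true, k_true = 10.0, 2.0
q_obs = 12.0                                    # biased input measurement
y_obs = [k_true * q_true + random.gauss(0, 1.0) for _ in range(30)]

def log_post(k, m):
    """Posterior over the model parameter k AND an input-error multiplier m,
    with effective input q = m * q_obs; m has a Gaussian prior centred on 1."""
    q = m * q_obs
    ll = sum(-0.5 * (y - k * q) ** 2 for y in y_obs)      # unit-variance likelihood
    return ll - 0.5 * ((m - 1.0) / 0.3) ** 2              # prior on the multiplier

# random-walk Metropolis over (k, m)
k, m = 1.0, 1.0
lp = log_post(k, m)
chain = []
for it in range(20000):
    k2, m2 = k + random.gauss(0, 0.05), m + random.gauss(0, 0.02)
    lp2 = log_post(k2, m2)
    if math.log(random.random()) < lp2 - lp:              # accept/reject
        k, m, lp = k2, m2, lp2
    if it >= 5000:                                        # discard burn-in
        chain.append((k, m))
```

The data only constrain the product k*m*q_obs, so the posterior forms a ridge; the prior on m resolves the non-identifiability, which is exactly why input-error multipliers need informative priors in practice.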
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Rodney O.; Passalacqua, Alberto
2016-02-01
Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models have been proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter.
The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effect of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed is examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, the particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of the solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of the solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
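The contrast between the direct perturbation approach and Monte Carlo can be shown on a scalar toy problem (a hypothetical 1/k "pressure" response to a permeability-like parameter k, not the reservoir equations themselves): the second-order mean correction needs one nominal evaluation plus derivatives, versus many sampled solutions.

```python
import random
import statistics

def pressure(k):
    # hypothetical reservoir response: nonlinear in the permeability-like parameter k
    return 1.0 / k

mu, sigma = 2.0, 0.2

# direct (perturbation) estimate: expand about the mean of the input,
# E[p] ~ p(mu) + 0.5 * p''(mu) * sigma^2   -- one model evaluation plus derivatives
p2 = 2.0 / mu ** 3                 # second derivative of 1/k at mu
mean_direct = pressure(mu) + 0.5 * p2 * sigma ** 2

# Monte Carlo reference: many model evaluations of the same response
random.seed(2)
draws = [pressure(random.gauss(mu, sigma)) for _ in range(200000)]
mean_mc = statistics.mean(draws)
```

For a full PDE model each "evaluation" is a reservoir simulation, which is exactly why a single hierarchical solve is attractive compared with thousands of Monte Carlo runs.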
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
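Stochastic collocation of the kind described can be sketched with a probabilists' three-point Gauss-Hermite rule, which recovers the moments of a polynomial response exactly from three deterministic model runs. The polynomial "dose" response below is a hypothetical stand-in for the radiation-diffusion model.

```python
import math

# probabilists' 3-point Gauss-Hermite rule: exact for polynomials up to degree 5
nodes = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))
weights = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)

def collocate_moments(f):
    """Mean and variance of f(xi), xi ~ N(0,1), from three deterministic runs."""
    mean = sum(w * f(x) for w, x in zip(weights, nodes))
    second = sum(w * f(x) ** 2 for w, x in zip(weights, nodes))
    return mean, second - mean ** 2

# hypothetical deposited-energy response, polynomial in the uncertain input
dose = lambda xi: 1.0 + 2.0 * xi + 3.0 * xi ** 2
m, v = collocate_moments(dose)
```

With N uncertain inputs a tensor grid needs 3^N runs, which is the growth behind the abstract's observation that the relative efficiency of SC over Monte Carlo shrinks as the parameter count rises from one to three.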
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
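The MOAT screening described above can be illustrated with a small sketch: random one-at-a-time trajectories through parameter space, with the mean absolute elementary effect (often written mu*) used to rank parameters. The toy three-parameter model below is purely illustrative, not CAM; note the cost is n_traj*(N+1) evaluations, linear in N.

```python
import numpy as np

def morris_mu_star(f, n_params, n_traj=20, delta=0.5, seed=0):
    """Estimate mu* (mean absolute elementary effect) for each parameter.
    Inputs are assumed scaled to [0, 1]; each trajectory steps every
    parameter once, in random order, reusing the previous evaluation."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 0.5, n_params)   # leave room to add delta
        fx = f(x)
        for i in rng.permutation(n_params):   # one-at-a-time steps
            x_new = x.copy()
            x_new[i] += delta
            f_new = f(x_new)
            effects[i].append(abs(f_new - fx) / delta)
            x, fx = x_new, f_new
    return np.array([np.mean(e) for e in effects])

# toy model: a strong parameter, a weak one interacting with it, and an inert one
def model(x):
    return 10.0 * x[0] + 1.0 * x[1] + 5.0 * x[0] * x[1]

mu_star = morris_mu_star(model, 3)
```

On this toy model mu* correctly ranks the dominant parameter first and assigns the inert parameter an elementary effect of zero, while the interaction term is still felt because steps are taken at many points of the volume.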
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but such studies are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. A key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
Orbit control of a stratospheric satellite with parameter uncertainties
NASA Astrophysics Data System (ADS)
Xu, Ming; Huo, Wei
2016-12-01
When a stratospheric satellite travels by prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to keep a constant-latitude orbital flight. To design the orbit control system, a 6 degree-of-freedom (DOF) model of the satellite is established based on the second Lagrangian formulation. It is proven that input/output feedback linearization theory cannot be directly applied to the orbit control with this model; thus, three subsystem models are deduced from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies high precision requirements for the orbital flight of the satellite.
Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas
NASA Astrophysics Data System (ADS)
Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.
In the framework of the IRMA-Sponge program the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas besides rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters and calculates the flood extent as well as the spatial distribution of flood depths. Of course the simulated model results are affected by errors and uncertainties. Possible sources of uncertainties are the model structure, model parameters and input data. Thus after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures) the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning a minimum resolution of a DEM required for flood simulation and concerning the best aggregation procedure of a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainties. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
Assessing risk based on uncertain avalanche activity patterns
NASA Astrophysics Data System (ADS)
Zeidler, Antonia; Fromm, Reinhard
2015-04-01
Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructures, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that there will be a change of the risk pattern in the future. Decision makers need to understand what the future might bring to best formulate their mitigation strategies. Therefore, we explore commercial risk software to calculate risk for the coming years, which might help in decision processes. The software @RISK is known to many larger companies, and we therefore explore its capabilities to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in the future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected costs for repairing the object and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables, it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or not available the software allows selection from over 30 different distribution types.
The Monte Carlo simulation draws from the probability distributions of the uncertain variables, combining valid values of the input variables to simulate the range of possible outcomes. In our case the output is the expected risk (Euro/year) for each object (e.g. water intake) considered and for the entire hydropower generation system. The output is again a distribution to be interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end user, which may be driven by personal preferences. In this presentation, we will show how we incorporated uncertain information on future avalanche activity into commercial risk software, thereby bringing the knowledge of natural-hazard experts to decision makers.
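A Monte Carlo risk calculation of the kind described above can be sketched in plain NumPy rather than @RISK. The objects, hit probabilities, and cost distributions below are illustrative placeholders, not fitted avalanche data; the point is the structure: sample uncertain inputs, combine them per object, and summarize the resulting annual-loss distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# hypothetical objects of a hydropower system; (mean, std) pairs for costs
# and the hit probabilities are illustrative placeholders only
objects = {
    "water_intake": dict(p_hit=0.05, repair=(2e5, 5e4), interrupt=(1e5, 3e4)),
    "penstock":     dict(p_hit=0.02, repair=(5e5, 1e5), interrupt=(2e5, 5e4)),
}

total = np.zeros(n)
for name, o in objects.items():
    hit = rng.random(n) < o["p_hit"]                   # avalanche reaches object?
    vuln = rng.beta(2, 5, n)                           # fraction of value damaged
    repair = rng.normal(*o["repair"], n).clip(min=0)   # repair cost draw
    interrupt = rng.normal(*o["interrupt"], n).clip(min=0)  # interruption cost draw
    total += hit * vuln * (repair + interrupt)

expected_risk = total.mean()      # expected loss, Euro/year
var95 = np.quantile(total, 0.95)  # tail measure for decision makers
```

Climate scenarios (likely, worst case, baseline) would enter by swapping in different p_hit values or distributions and re-running the same simulation.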
Observer-based state tracking control of uncertain stochastic systems via repetitive controller
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Susana Ramya, L.; Selvaraj, P.
2017-08-01
This paper develops a repetitive control scheme for state tracking control of uncertain stochastic time-varying delay systems via an equivalent-input-disturbance approach. The main purpose of this work is to design a repetitive controller that guarantees the tracking performance under the effects of unknown disturbances with bounded frequency and parameter variations. Specifically, a new set of linear matrix inequality (LMI)-based conditions is derived, based on a suitable Lyapunov-Krasovskii functional, for designing a repetitive controller which guarantees stability and the desired tracking performance. More precisely, an equivalent-input-disturbance estimator is incorporated into the control design to reduce the effect of external disturbances. Simulation results are provided to demonstrate the stability and tracking performance of the resulting control system. A practical stream water quality preserving system is also provided to show the effectiveness and advantage of the proposed approach.
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices are dependent on some parameters which are real-time measurable. The case of inexact parameter measurements is considered which is close to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and uncertainty induced by inexact measured parameters. Error dynamics and the original system constitute an uncertain system due to inconsistencies between real and measured values of the parameters. Then, the robust estimation of the system states and the faults are achieved with H∞ performance and formulated with a set of linear matrix inequalities (LMIs). The designed UIO is also applicable for fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
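The first-order Sobol' indices mentioned above can be estimated with the pick-freeze (Saltelli) estimator. This is a minimal sketch on a cheap stand-in function; in the study the expensive dispersion model is replaced by a Gaussian-process emulator, which is not reproduced here.

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=1):
    """First-order Sobol' indices via the pick-freeze estimator:
    S_i = Cov(f(A), f(C_i)) / Var(f(A)), where C_i equals sample B
    except that column i is 'frozen' to A's values."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]  # freeze input i
        S[i] = (np.mean(fA * f(C)) - np.mean(fA) * np.mean(fB)) / var
    return S

# toy emulator whose output variance splits 1:4 between the two inputs,
# so the exact first-order indices are 0.2 and 0.8
g = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = sobol_first_order(g, 2)
```

The large number of evaluations this estimator needs (n*(d+2) here) is exactly why the study substitutes a Gaussian-process emulator for Polyphemus/Polair3D.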
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
Transport-phenomena-based heat transfer and fluid flow calculations in the weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, whose values are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.
2013-08-01
Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
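The surrogate workflow above — sample the expensive model sparsely, fit a cheap approximation, then optimize the approximation densely — can be sketched in one dimension. The objective function and the polynomial fit below are illustrative assumptions; the study compares several fitting methods against a high-fidelity fracturing simulator.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a high-fidelity fracturing simulation: maps a scaled
    design variable (e.g. injection rate on [0, 1]) to an objective such as
    the fractal dimension of the stimulated network (illustrative only)."""
    return -(x - 0.6) ** 2 + 0.05 * np.sin(25 * x)

# 1) small training set from the "expensive" model
x_train = np.linspace(0.0, 1.0, 12)
y_train = expensive_model(x_train)

# 2) fit a cheap surrogate (here a cubic polynomial)
coeffs = np.polyfit(x_train, y_train, 3)
surrogate = np.poly1d(coeffs)

# 3) optimize the surrogate on a dense grid: thousands of evaluations
#    that would be prohibitive with the full model
x_dense = np.linspace(0.0, 1.0, 10_001)
x_best = x_dense[np.argmax(surrogate(x_dense))]
```

In practice the surrogate would be validated on held-out high-fidelity runs, and uncertain inputs (pre-existing fracture statistics, rock properties) would enter as additional surrogate dimensions screened by the global sensitivity analysis.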
Dealing with uncertainty in modeling intermittent water supply
NASA Astrophysics Data System (ADS)
Lieb, A. M.; Rycroft, C.; Wilkening, J.
2015-12-01
Intermittency in urban water supply affects hundreds of millions of people in cities around the world, impacting water quality and infrastructure. Building on previous work to dynamically model the transient flows in water distribution networks undergoing frequent filling and emptying, we now consider the hydraulic implications of uncertain input data. Water distribution networks undergoing intermittent supply are often poorly mapped, and household metering frequently ranges from patchy to nonexistent. In the face of uncertain pipe material, pipe slope, network connectivity, and outflow, we investigate how uncertainty affects dynamical modeling results. We furthermore identify which parameters exert the greatest influence on uncertainty, helping to prioritize data collection.
Yuan Fang; Ge Sun; Peter Caldwell; Steven G. McNulty; Asko Noormets; Jean-Christophe Domec; John King; Zhiqiang Zhang; Xudong Zhang; Guanghui Lin; Guangsheng Zhou; Jingfeng Xiao; Jiquan Chen
2015-01-01
Evapotranspiration (ET) is arguably the most uncertain ecohydrologic variable for quantifying watershed water budgets. Although numerous ET and hydrological models exist, accurately predicting the effects of global change on water use and availability remains challenging because of model deficiency and/or a lack of input parameters. The objective of this study was to...
Robust H∞ Control for Spacecraft Rendezvous with a Noncooperative Target
Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang
2013-01-01
The robust H∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of the chaser and the noncooperative target is first modeled as an uncertain system, which contains uncertain orbit parameters and mass. Then the H∞ performance and finite-time performance are proposed, and a robust H∞ controller is developed to drive the chaser to rendezvous with the noncooperative target in the presence of control input saturation, measurement error, and thrust error. The linear matrix inequality technique is used to derive a sufficient condition for the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller. PMID:24027446
A robust momentum management and attitude control system for the space station
NASA Technical Reports Server (NTRS)
Speyer, J. L.; Rhee, Ihnseok
1991-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Robust momentum management and attitude control system for the Space Station
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1992-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
NASA Astrophysics Data System (ADS)
Bates, Matthew E.; Keisler, Jeffrey M.; Zussblatt, Niels P.; Plourde, Kenton J.; Wender, Ben A.; Linkov, Igor
2016-02-01
Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis—methods commonly applied in financial and operations management—to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios—combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.
Bates, Matthew E; Keisler, Jeffrey M; Zussblatt, Niels P; Plourde, Kenton J; Wender, Ben A; Linkov, Igor
2016-02-01
Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis-methods commonly applied in financial and operations management-to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios-combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost was predicted, and the relative importance of the uncertainties related to monetary valuation was compared with that of the health-effect uncertainties. Results The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties.
Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost was predicted, and the relative importance of the uncertainties related to monetary valuation was compared with that of the health-effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties.
When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
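The rank-order correlation sensitivity measure used in the study above is the Spearman correlation between sampled inputs and the model output: the Pearson correlation of the ranks. A self-contained sketch follows; the input distributions and the toy cost model are illustrative stand-ins, not the fitted Helsinki life-table model.

```python
import numpy as np

def rank_correlation(u, v):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    ru = np.argsort(np.argsort(u)).astype(float)
    rv = np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(ru, rv)[0, 1]

rng = np.random.default_rng(7)
n = 5_000

# illustrative stand-ins for uncertain inputs (not fitted values)
coeff = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)  # exposure-response
discount = rng.uniform(0.00, 0.05, n)                         # discount rate
lag = rng.uniform(0, 5, n)                                    # lag (years)

# toy cost output: dominated by the coefficient, discounted, lag nearly inert
cost = coeff * 1e6 * np.exp(-3.0 * discount) + 0.1 * lag

sens = {name: rank_correlation(x, cost)
        for name, x in [("coeff", coeff), ("discount", discount), ("lag", lag)]}
```

The resulting correlations reproduce the qualitative pattern reported: the exposure-response coefficient dominates, the discount rate has a clear negative influence, and the lag variable is negligible.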
Song, Qi; Song, Yong-Duan
2011-12-01
This paper investigates the position and velocity tracking control problem of high-speed trains with multiple vehicles connected through couplers. A dynamic model reflecting nonlinear and elastic impacts between adjacent vehicles as well as traction/braking nonlinearities and actuation faults is derived. Neuroadaptive fault-tolerant control algorithms are developed to account for various factors such as input nonlinearities, actuator failures, and uncertain impacts of in-train forces in the system simultaneously. The resultant control scheme is essentially independent of the system model and is primarily data-driven, because with the appropriate input-output data, the proposed control algorithms are capable of automatically generating the intermediate control parameters, neuro-weights, and the compensation signals, literally producing the traction/braking force based upon input and response data only; the whole process requires neither precise information on the system model or its parameters nor human intervention. The effectiveness of the proposed approach is also confirmed through numerical simulations.
Tahoun, A H
2017-01-01
In this paper, the stabilization problem of actuator saturation in uncertain chaotic systems is investigated via an adaptive PID control method. The PID control parameters are auto-tuned adaptively via adaptive control laws. A multi-level augmented error is designed to account for the extra terms appearing due to the use of PID and saturation. The proposed control technique uses both the state-feedback and the output-feedback methodologies. Based on Lyapunov's stability theory, new anti-windup adaptive controllers are proposed. Demonstrative examples with MATLAB simulations are studied. The simulation results show the efficiency of the proposed adaptive PID controllers.
NASA Astrophysics Data System (ADS)
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of parameters updated online for each subsystem is reduced to two. Semi-global uniform ultimate boundedness (SGUUB) of all closed-loop signals, with appropriate tracking-error convergence, is thereby guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems define the uncertain parameters by interval bounds. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of the responses' cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
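The contrast drawn above between interval bounds and probabilistic analysis can be sketched on a toy first-order plant. The plant, parameter box, and sampling distributions below are hypothetical, not taken from the study:

```python
import math
import random

def response_at(t, K, a):
    # step response of the first-order plant G(s) = K*a/(s + a): y(t) = K*(1 - exp(-a*t))
    return K * (1.0 - math.exp(-a * t))

def interval_bounds(t=1.0):
    # worst-case (interval) analysis: y is monotone in K and a here, so the
    # extremes occur at the corners of the parameter box
    Ks, As = (0.8, 1.2), (0.5, 1.5)
    vals = [response_at(t, K, a) for K in Ks for a in As]
    return min(vals), max(vals)

def probabilistic_summary(n=20000, t=1.0, seed=0):
    # probabilistic analysis: sample the same box and summarize the whole
    # response distribution instead of reporting only its extremes
    rng = random.Random(seed)
    vals = [response_at(t, rng.uniform(0.8, 1.2), rng.uniform(0.5, 1.5))
            for _ in range(n)]
    mean = sum(vals) / n
    std = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    return mean, std

lo, hi = interval_bounds()
mu, sigma = probabilistic_summary()
```

The interval result reports only [lo, hi]; the probabilistic result adds how likely responses near those extremes actually are, which is the extra information the hybrid method exploits.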
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory derived Monod kinetic parameters can adequately describe field scale degradation processes, if all controlling factors are incorporated in the field scale modelling that are not necessarily observed at the lab scale. 
In this way, no scale relationships linking the laboratory and the field scale need to be found. If the additional processes, phenomena and characteristics of the larger scale are accurately incorporated, such as a) advective and dispersive transport of one or more contaminants, b) advective and dispersive transport and availability of electron acceptors, c) mass transfer limitations and d) spatial heterogeneities, then applying well defined lab scale parameters should accurately describe field scale processes.
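The Monod kinetics referred to above can be sketched numerically. The parameter values here are hypothetical, and the yield/growth and electron-acceptor terms handled by BIO3D are omitted:

```python
def monod_decay(c0, mu_max=2.0, Ks=1.0, X=0.5, dt=0.01, t_end=5.0):
    # explicit Euler integration of Monod degradation dC/dt = -mu_max*X*C/(Ks + C)
    c = c0
    for _ in range(round(t_end / dt)):
        c -= dt * mu_max * X * c / (Ks + c)
        c = max(c, 0.0)
    return c
```

At concentrations well above Ks the rate is nearly zero-order (constant), and well below Ks it is nearly first-order, which is why a single zero- or first-order rate fitted at one scale transfers poorly to the other.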
Feedback system design with an uncertain plant
NASA Technical Reports Server (NTRS)
Milich, D.; Valavani, L.; Athans, M.
1986-01-01
A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters is decreased and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
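The Monte Carlo procedure described above (sample the input distributions, rerun the model, stop once the results stabilize) can be sketched as follows. The "model" here is a stand-in toy function, not Delft3D, and the input distributions are invented for the illustration:

```python
import math
import random

def model(depth_offshore, wave_height, friction):
    # stand-in for an expensive solver such as Delft3D: nearshore wave height
    # after shoaling and frictional damping (purely illustrative formula)
    return wave_height * (depth_offshore / 4.0) ** 0.25 * math.exp(-friction)

def mc_until_converged(tol=1e-3, batch=2000, max_batches=50, seed=1):
    # draw inputs from their assumed distributions and rerun the model until
    # the running mean of the output changes by less than `tol` per batch
    rng = random.Random(seed)
    total, n, prev_mean = 0.0, 0, float("inf")
    for _ in range(max_batches):
        for _ in range(batch):
            total += model(rng.uniform(3.0, 5.0),    # offshore depth [m]
                           rng.gauss(1.0, 0.1),      # offshore wave height [m]
                           rng.uniform(0.05, 0.15))  # friction factor
            n += 1
        mean = total / n
        if abs(mean - prev_mean) < tol:
            break
        prev_mean = mean
    return total / n, n

mean_h, n_runs = mc_until_converged()
```

The same loop generalizes to any output statistic; in practice the stopping rule would be applied per output (wave height, velocity, mean sea level) rather than to a single mean.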
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
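The MCMC side of such a framework can be illustrated with a minimal Metropolis sampler. For simplicity this sketch assumes an iid Gaussian error model rather than the autocorrelated, heteroscedastic SEP likelihood of the paper, and a toy exponential-decay model in place of SWAT:

```python
import math
import random

def log_post(k, data, sigma=0.1):
    # log-posterior with a flat prior on k > 0 and iid Gaussian errors
    # (a deliberate simplification of the SEP error model in the paper)
    if k <= 0.0:
        return -math.inf
    return -sum((y - math.exp(-k * t)) ** 2 for t, y in data) / (2.0 * sigma ** 2)

def metropolis(data, n_iter=5000, step=0.05, seed=2):
    # random-walk Metropolis over the single uncertain parameter k
    rng = random.Random(seed)
    k, lp = 1.0, log_post(1.0, data)
    chain = []
    for _ in range(n_iter):
        k_new = k + rng.gauss(0.0, step)
        lp_new = log_post(k_new, data)
        if math.log(rng.random()) < lp_new - lp:  # Metropolis acceptance rule
            k, lp = k_new, lp_new
        chain.append(k)
    return chain

# synthetic observations from a toy decay model with k_true = 0.7
data_rng = random.Random(0)
data = [(t / 4.0, math.exp(-0.7 * t / 4.0) + data_rng.gauss(0.0, 0.05))
        for t in range(20)]
chain = metropolis(data)
k_hat = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

DREAM(ZS) replaces this single-chain random walk with multiple adaptive chains, but the accept/reject core and the posterior summaries are the same.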
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including to case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to the environmental models called from R, or externally.
Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable and able to deal with case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to the environmental models called from R, or externally.
Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...
2018-02-09
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Koo, Min-Sung; Choi, Ho-Lim
2018-01-01
In this paper, we consider a control problem for a class of uncertain nonlinear systems in which there exists an unknown time-varying delay in the input and lower triangular nonlinearities. Usually, in the existing results, input delays have been coupled with feedforward (or upper triangular) nonlinearities; in other words, the combination of lower triangular nonlinearities and input delay has been rare. Motivated by the existing controller for input-delayed chain of integrators with nonlinearity, we show that the control of input-delayed nonlinear systems with two particular types of lower triangular nonlinearities can be done. As a control solution, we propose a newly designed feedback controller whose main features are its dynamic gain and non-predictor approach. Three examples are given for illustration.
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
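The weighting idea behind IUWLS can be illustrated with a minimal sketch: a weighted least-squares slope fit in which the observation with the most uncertain pumping input is down-weighted. All numbers below are made up for the illustration; the actual IUWLS weights are iteratively adjusted during the parameter optimization:

```python
def wls_slope(xs, ys, weights):
    # weighted least-squares estimate of a in y ≈ a*x (no intercept)
    num = sum(w * x * y for w, x, y in zip(weights, xs, ys))
    den = sum(w * x * x for w, x in zip(weights, xs))
    return num / den

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. pumping stress at five observations
ys = [2.1, 3.8, 6.3, 7.6, 12.0]  # observed response; true slope is 2, and the
                                 # last value carries a large input error
a_ols = wls_slope(xs, ys, [1.0] * 5)                      # OLS: equal weights
a_iuwls = wls_slope(xs, ys, [1.0, 1.0, 1.0, 1.0, 0.01])  # down-weight uncertain input
```

Down-weighting the observation whose source/sink input is most uncertain pulls the estimate back toward the true slope; this is the bias-reduction mechanism that IUWLS applies within the generalized least-squares objective.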
Adaptive Control for Uncertain Nonlinear Multi-Input Multi-Output Systems
NASA Technical Reports Server (NTRS)
Cao, Chengyu (Inventor); Hovakimyan, Naira (Inventor); Xargay, Enric (Inventor)
2014-01-01
Systems and methods of adaptive control for uncertain nonlinear multi-input multi-output systems in the presence of significant unmatched uncertainty with assured performance are provided. The need for gain-scheduling is eliminated through the use of bandwidth-limited (low-pass) filtering in the control channel, which appropriately attenuates the high frequencies typically appearing in fast adaptation situations and preserves the robustness margins in the presence of fast adaptation.
Terminal sliding mode tracking control for a class of SISO uncertain nonlinear systems.
Chen, Mou; Wu, Qing-Xian; Cui, Rong-Xin
2013-03-01
In this paper, terminal sliding mode tracking control is proposed for uncertain single-input and single-output (SISO) nonlinear systems with unknown external disturbance. For the unmeasured disturbance of nonlinear systems, a terminal sliding mode disturbance observer is presented. The developed disturbance observer guarantees that the disturbance approximation error converges to zero in finite time. Based on the output of the designed disturbance observer, terminal sliding mode tracking control is presented for uncertain SISO nonlinear systems. Subsequently, terminal sliding mode tracking control is developed using the disturbance observer technique for uncertain SISO nonlinear systems with control singularity and unknown non-symmetric input saturation. The effects of the control singularity and unknown input saturation are combined with the external disturbance, which is approximated using the disturbance observer. Under the proposed terminal sliding mode tracking control techniques, finite-time convergence of all closed-loop signals is guaranteed via Lyapunov analysis. Numerical simulation results are given to illustrate the effectiveness of the proposed terminal sliding mode tracking control.
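The finite-time convergence property that distinguishes terminal sliding mode from conventional (asymptotic) sliding mode can be sketched with the scalar reaching law x' = -c·|x|^α·sign(x) with 0 < α < 1. This is only the reaching law; the disturbance-observer compensation of the paper is omitted, and all constants are illustrative:

```python
import math

def simulate_tsm(x0=1.0, c=2.0, alpha=0.5, dt=1e-4, t_end=2.0):
    # Euler simulation of the terminal reaching law x' = -c*|x|**alpha*sign(x);
    # returns the final state and the first time |x| drops below 1e-3
    x, t_hit = x0, None
    for i in range(round(t_end / dt)):
        if t_hit is None and abs(x) < 1e-3:
            t_hit = i * dt
        x -= dt * c * math.copysign(abs(x) ** alpha, x)
    return x, t_hit

x_final, t_hit = simulate_tsm()
# analytic settling time: |x0|**(1 - alpha) / (c * (1 - alpha)) = 1.0 here
```

Unlike the linear law x' = -c·x, which only decays exponentially, the fractional power drives the state to zero in finite time, matching the closed-form settling-time bound noted in the comment.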
NASA Astrophysics Data System (ADS)
Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter
2016-10-01
Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of the company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, the quality of the input data, the time period and other factors. The input data are usually not deterministic but are often of a random nature, affected by uncertainties of the market environment and many other factors. By taking the input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecast value is simulated in a case study model. Statistical analysis and risk analysis of the forecast results are carried out, including sensitivity analysis and variables impact analysis.
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated online for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set within the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Zhang, Hongbin
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
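The correlation-coefficient screening described above can be sketched as follows. The linear surrogate for MDNBR and the input distributions are hypothetical stand-ins for the VERA-CS model, deliberately chosen so that inlet temperature dominates:

```python
import random

def pearson(xs, ys):
    # sample Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(3)
t_inlet, power, flow, mdnbr = [], [], [], []
for _ in range(500):
    ti = rng.gauss(560.0, 3.0)   # coolant inlet temperature [K]
    q = rng.gauss(1.0, 0.02)     # relative rod power
    w = rng.gauss(1.0, 0.02)     # relative coolant flow
    # hypothetical linear surrogate for MDNBR (not the VERA-CS/W-3 model):
    # hotter inlet and higher power reduce margin; more flow increases it
    m = 2.0 - 0.01 * (ti - 560.0) - 0.5 * (q - 1.0) + 0.3 * (w - 1.0)
    t_inlet.append(ti); power.append(q); flow.append(w); mdnbr.append(m)

r_ti = pearson(t_inlet, mdnbr)
r_q = pearson(power, mdnbr)
r_w = pearson(flow, mdnbr)
```

Ranking the inputs by |r| reproduces the screening step; Spearman coefficients would use the same machinery on ranks, and partial correlations would first regress out the other inputs.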
NASA Astrophysics Data System (ADS)
Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
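The FOSM propagation step described above reduces to one matrix product. The sensitivity matrix and input covariance below are hypothetical numbers, not MODFLOW 2000 output:

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def fosm_head_covariance(J, Sigma_p):
    """First-order second moment: Cov(h) ≈ J · Σ_p · Jᵀ, where J[i][k] is the
    sensitivity of head i to parameter k (e.g. computed by MODFLOW 2000) and
    Σ_p is the covariance of the uncertain inputs (e.g. transmissivity)."""
    Jt = [list(col) for col in zip(*J)]
    return matmul(matmul(J, Sigma_p), Jt)

# Hypothetical numbers: two head locations, two uncertain parameters.
J = [[0.8, 0.1],
     [0.2, 0.5]]
Sigma_p = [[0.04, 0.0],    # independent inputs with std devs 0.2 and 0.3
           [0.0, 0.09]]
Cov_h = fosm_head_covariance(J, Sigma_p)
head_std = [Cov_h[i][i] ** 0.5 for i in range(2)]   # standard deviation in head
```

The diagonal of the resulting variance-covariance matrix yields the head standard deviations used for confidence intervals, exactly as in the abstract.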
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either from a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
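A minimal version of the stratified sampling scheme just described can be written directly from that description (unit hypercube only; the real library also handles 30+ distribution types and correlation control):

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Minimal LHS on the unit hypercube: each variable's [0, 1) range is
    divided into n_samples non-overlapping, equal-probability intervals,
    one point is drawn uniformly inside each interval, and the interval
    orderings are shuffled independently so that values are paired across
    variables in a random manner."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        columns.append(points)
    return [list(row) for row in zip(*columns)]  # n_samples rows x n_vars cols

X = latin_hypercube(10, 3)
# Stratification check: each variable places exactly one sample per decile.
for j in range(3):
    assert sorted(int(x * 10) for x in (row[j] for row in X)) == list(range(10))
```

Uniform samples can then be pushed through an inverse CDF to realize other distribution types, e.g. `statistics.NormalDist(mu, sigma).inv_cdf(u)` from the standard library; restricted pairing for correlation control is beyond this sketch.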
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study aims to develop a method for analysing variability in model inputs together with model structure variability, to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of a stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to identify the uncertainties accurately, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control that is often indispensable in RL, no special requirement is imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
Cluster synchronization transmission of different external signals in discrete uncertain network
NASA Astrophysics Data System (ADS)
Li, Chengren; Lü, Ling; Chen, Liansong; Hong, Yixuan; Zhou, Shuang; Yang, Yiming
2018-07-01
We investigate the cluster synchronization transmission of different external signals in a discrete uncertain network. Based on the Lyapunov theorem, the network controller and the identification law of the uncertain adjustment parameter are designed, and they are used to achieve cluster synchronization and identification of the uncertain adjustment parameter. In our scheme, the network nodes in each cluster and the transmitted external signal can be different, and uncertain parameters are allowed in the network. In particular, we are free to choose the clustering topologies, the number of clusters and the number of nodes in each cluster.
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. 
While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
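The limits-of-acceptability evaluation on the FDC described above can be sketched as follows. The evaluation points and discharge bounds are invented for illustration, not taken from the WASMOD or Dynamic TOPMODEL studies:

```python
def flow_duration_curve(flows, exceedance_probs):
    """Empirical FDC: the discharge exceeded with each given probability."""
    ordered = sorted(flows, reverse=True)
    n = len(ordered)
    return [ordered[min(int(p * n), n - 1)] for p in exceedance_probs]

def within_limits(sim_flows, eps, lower, upper):
    """GLUE-style limits of acceptability: accept the simulation only if its
    FDC lies inside the observed uncertainty bounds at every evaluation point."""
    return all(lo <= q <= hi
               for q, lo, hi in zip(flow_duration_curve(sim_flows, eps), lower, upper))

# Invented example: three EPs spanning high, median and low flows.
obs = list(range(1, 101))                      # toy daily discharge record
eps = [0.05, 0.5, 0.95]                        # exceedance probabilities of the EPs
lower, upper = [76, 40, 4], [114, 60, 6]       # e.g. from rating-curve uncertainty
assert within_limits(obs, eps, lower, upper)                       # behavioural
assert not within_limits([2 * q for q in obs], eps, lower, upper)  # rejected
```

Choosing the EPs by equal discharge or equal volume intervals, as the abstract compares, only changes the `eps` list; the acceptance test itself is unchanged.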
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. 
While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
Control of Uncertain Systems under Constraints: Switching Horizon Predictive Control of Persistently Disturbed…
Mosca, E.
2006-12-01
The work considers switching on, at any time, one from a family of candidate feedback gains so as to control a discrete-time input-saturated LTI system possibly subject to persistent disturbances, using feedback controls u = f(x̂) so as to ensure, under suitable conditions, stability in the noiseless case as well as finite l∞-induced gain.
A probabilistic asteroid impact risk model: assessment of sub-300 m impacts
NASA Astrophysics Data System (ADS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2017-06-01
A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
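The Monte Carlo skeleton of such a risk model (sample uncertain inputs, map each scenario to a consequence, aggregate the outcomes) looks roughly like this. Every distribution and scaling law here is an illustrative placeholder, not the published PAIR formulas:

```python
import random

def sample_impact_scenarios(n, seed=42):
    """Toy Monte Carlo sketch of a PAIR-style loop: draw uncertain impactor
    properties, map each scenario to a damage area, and accumulate the
    affected population.  All distributions and scaling relations are
    invented placeholders, not the published model."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        diameter_m = rng.uniform(20.0, 300.0)          # uncertain size
        density = rng.uniform(1500.0, 3500.0)          # kg/m^3
        speed = rng.uniform(12e3, 30e3)                # m/s
        mass = density * (3.14159265 / 6.0) * diameter_m ** 3
        energy_mt = 0.5 * mass * speed ** 2 / 4.184e15     # kinetic energy, Mt TNT
        damage_radius_km = 2.0 * energy_mt ** (1.0 / 3.0)  # crude cube-root scaling
        pop_density = rng.uniform(0.0, 50.0)               # people per km^2 at site
        outcomes.append(3.14159265 * damage_radius_km ** 2 * pop_density)
    return outcomes

losses = sample_impact_scenarios(5000)
mean_loss = sum(losses) / len(losses)
p95_loss = sorted(losses)[int(0.95 * len(losses))]   # tail statistic for risk posture
```

Comparing a tail statistic such as `p95_loss` against a risk tolerance threshold, swept over a minimum-size cutoff, is the shape of the "what minimum size constitutes a threat" question posed in the abstract.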
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial Finite Element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
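A first-order cut-HDMR surrogate, which exploits exactly the low-order-correlation assumption stated above, can be sketched as:

```python
def hdmr_first_order(f, cut_point):
    """First-order cut-HDMR surrogate: f(x) ≈ f0 + sum_i f_i(x_i), where
    f0 = f(c) and f_i(x_i) = f(c_1, ..., x_i, ..., c_n) - f0.  Building it
    needs only one-dimensional sweeps through the cut point c, so the cost
    grows polynomially (here linearly) with the number of variables."""
    f0 = f(cut_point)

    def surrogate(x):
        total = f0
        for i, xi in enumerate(x):
            point = list(cut_point)
            point[i] = xi          # vary one input, hold the rest at the cut point
            total += f(point) - f0
        return total

    return surrogate

# The surrogate is exact when only first-order effects are present:
f = lambda x: 3.0 * x[0] + x[1] ** 2 - 2.0 * x[2]
s = hdmr_first_order(f, [0.0, 0.0, 0.0])
assert abs(s([1.0, 2.0, -1.0]) - f([1.0, 2.0, -1.0])) < 1e-12
```

In the fuzzy setting of the paper, such a surrogate would be evaluated repeatedly inside each alpha-cut optimization instead of the full finite element model; that coupling is beyond this sketch.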
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentration of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Multiple plausible ANN models therefore generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate into the total estimation variance? To answer these questions, we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of the data, so HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied to estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources.
In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimation using the HBMA method shows better agreement with the observation data in the test step because it is not based on a single model with a non-dominant weight.
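The model-averaging step can be illustrated with a flat (non-hierarchical) BMA sketch. The evidences, means and variances below are made-up numbers, not the Poldasht/Bazargan results:

```python
import math

def bma_weights(log_evidences):
    """Posterior model weights from log evidence, normalised stably."""
    m = max(log_evidences)
    raw = [math.exp(le - m) for le in log_evidences]
    total = sum(raw)
    return [r / total for r in raw]

def bma_mean_and_variance(means, variances, weights):
    """BMA prediction: the total variance splits into a within-model part and
    a between-model part (the contribution of model-choice uncertainty)."""
    mu = sum(w * m for w, m in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (m - mu) ** 2 for w, m in zip(weights, means))
    return mu, within + between

# Made-up example: three plausible ANN models of fluoride concentration.
w = bma_weights([-10.0, -11.0, -13.0])
mu, var = bma_mean_and_variance([1.2, 1.0, 1.5], [0.04, 0.05, 0.09], w)
assert abs(sum(w) - 1.0) < 1e-12 and w[0] > w[1] > w[2]
```

The hierarchical variant in the paper applies this averaging level by level (inputs, parameters, structures), which lets the between-model term be attributed to each uncertainty source separately.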
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
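A Bayesian parameter update of the kind described can be sketched with a simple grid posterior. The one-parameter "energy model" here is a toy stand-in for an EnergyPlus run, and all numbers are invented:

```python
import math

def grid_posterior(prior_grid, simulate, observed, noise_sd):
    """Minimal Bayesian calibration sketch: weight each candidate parameter
    value by a Gaussian likelihood of the measured data, then normalise.
    simulate(theta) stands in for a full EnergyPlus run."""
    scored = []
    for theta, prior in prior_grid:
        resid = observed - simulate(theta)
        scored.append((theta, prior * math.exp(-0.5 * (resid / noise_sd) ** 2)))
    z = sum(p for _, p in scored)
    return [(theta, p / z) for theta, p in scored]

# Hypothetical one-parameter case: calibrate a usage multiplier against a
# measured monthly energy use of 120 kWh.
simulate = lambda theta: 100.0 + 40.0 * theta          # toy surrogate model
grid = [(t / 10.0, 1.0) for t in range(11)]            # uniform prior on [0, 1]
posterior = grid_posterior(grid, simulate, observed=120.0, noise_sd=4.0)
best = max(posterior, key=lambda tp: tp[1])[0]
assert abs(best - 0.5) < 1e-9
```

Practical Bayesian calibration of building models uses MCMC over several parameters rather than a grid, but the likelihood-weighting logic is the same, and the spread of the posterior is what quantifies the remaining parameter uncertainty.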
Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles
NASA Astrophysics Data System (ADS)
Wilcox, Zachary Donald
The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and the structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model, despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time-varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems, and associated control tools have been applied such as gain-scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFTs), and mu-synthesis. However, as flight environments and trajectories become more demanding, traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature-dependent, parameter-varying state-space representation with added disturbances. The model includes an uncertain parameter varying state matrix, an uncertain parameter varying non-square (column deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles, and how the gains can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature-dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examining the interplay between structural/thermal protection design and control development, and has application for future HSV design and control implementation.
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition leads to stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing.
Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem
2015-06-01
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often involve imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies: some electricity distribution companies may not report clear and accurate statistics to the government, so a sound approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when there are uncertain input/output data.
Modeling transport phenomena and uncertainty quantification in solidification processes
NASA Astrophysics Data System (ADS)
Fezi, Kyle S.
Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold, followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied, and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed outs. Numerical models of metal alloy solidification, like the one described above, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and in those insights. An analysis of the effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been undertaken in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model.
The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Li, Dewei; Xi, Yugeng
2013-07-01
This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
NASA Astrophysics Data System (ADS)
Azizi, S.; Torres, L. A. B.; Palhares, R. M.
2018-01-01
The regional robust stabilisation by means of linear time-invariant state feedback control for a class of uncertain MIMO nonlinear systems with parametric uncertainties and control input saturation is investigated. The nonlinear systems are described in a differential algebraic representation and the regional stability is handled considering the largest ellipsoidal domain-of-attraction (DOA) inside a given polytopic region in the state space. A novel set of sufficient Linear Matrix Inequality (LMI) conditions with new auxiliary decision variables are developed aiming to design less conservative linear state feedback controllers with corresponding larger DOAs, by considering the polytopic description of the saturated inputs. A few examples are presented showing favourable comparisons with recently published similar control design methodologies.
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
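The Monte Carlo framework described above can be sketched as follows. All rates and the earthquake-landslide interaction factor are hypothetical placeholders, not the Sichuan case-study values; the point is only to show how varying uncertain hazard parameters yields exceedance probabilities.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples, one simulated year each

# Hypothetical annual occurrence probabilities (illustrative values only)
rate = {"earthquake": 0.05, "landslide": 0.20, "flood": 0.50}

# Sample yearly hazard occurrences; landslides are made more likely in
# earthquake years, a crude stand-in for hazard interaction.
quake = rng.random(N) < rate["earthquake"]
landslide = rng.random(N) < np.where(quake, 3 * rate["landslide"],
                                     rate["landslide"])
flood = rng.random(N) < rate["flood"]

p_any = np.mean(quake | landslide | flood)  # at least one hazard in a year
p_compound = np.mean(quake & landslide)     # interacting earthquake-landslide pair
```

A mitigation intervention would be represented by lowering one of the rates (or the interaction factor) and re-running the simulation to compare risk estimates.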
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
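Probabilistic sampling modules of this kind commonly draw inputs by Latin hypercube sampling. The sketch below is a generic, hand-rolled version with hypothetical parameter names and ranges; it is not the RESRAD implementation or its default distributions.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Stratified unit-cube sampling: exactly one point per
    equal-probability stratum in each dimension, with the strata
    randomly permuted across samples."""
    perms = np.column_stack([rng.permutation(n_samples)
                             for _ in range(n_params)])
    return (perms + rng.random((n_samples, n_params))) / n_samples

rng = np.random.default_rng(1)
unit = latin_hypercube(1000, 3, rng)

# Map unit-cube samples to hypothetical input distributions
# (names and ranges are illustrative, not RESRAD defaults).
soil_density = 1.2 + unit[:, 0] * (1.8 - 1.2)   # uniform [1.2, 1.8] g/cm^3
kd = 10 ** (unit[:, 1] * 3)                     # log-uniform [1, 1000] mL/g
intake = 0.5 + unit[:, 2]                       # uniform [0.5, 1.5] L/d
```

Each row of sampled parameters would then be fed through the deterministic dose code, and the resulting dose distribution summarized.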
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin hypercube sampling (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest (≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
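The non-intrusive projection step can be illustrated in one dimension. This is a minimal sketch, assuming a standard normal input and using g(x) = exp(x) as a stand-in model (not a hydrologic simulator): the Hermite coefficients are computed by Gauss-Hermite quadrature, and the closed forms E[e^X] = e^{1/2} and Var = e(e-1) provide a check.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite basis
from math import factorial, sqrt, pi, e

def pce_coeffs(g, order, n_quad=40):
    """Project g(X), X ~ N(0, 1), onto He_0..He_order using
    Gauss-Hermite quadrature (weight exp(-x^2/2))."""
    x, w = He.hermegauss(n_quad)
    w = w / sqrt(2 * pi)          # normalise to the standard normal density
    gx = g(x)
    coeffs = []
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0
        hk = He.hermeval(x, ek)   # He_k evaluated at the quadrature nodes
        # <g, He_k> / ||He_k||^2, with ||He_k||^2 = k! under N(0, 1)
        coeffs.append(np.sum(w * gx * hk) / factorial(k))
    return np.array(coeffs)

c = pce_coeffs(np.exp, order=8)
mean = c[0]                                    # approaches E[e^X] = e^{1/2}
var = sum(c[k] ** 2 * factorial(k) for k in range(1, 9))  # -> e(e - 1)
```

With the expansion in hand, moments and Sobol-type sensitivity measures follow directly from the coefficients, which is exactly what makes PCE cheap after the initial model runs.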
Robust synergetic control design under inputs and states constraints
NASA Astrophysics Data System (ADS)
Rastegar, Saeid; Araújo, Rui; Sadati, Jalil
2018-03-01
In this paper, a novel robust-constrained control methodology for discrete-time linear parameter-varying (DT-LPV) systems is proposed based on a synergetic control theory (SCT) approach. It is shown that in DT-LPV systems without uncertainty, and for any unmeasured bounded additive disturbance, the proposed controller accomplishes the goal of stabilising the system by asymptotically driving the error of the controlled variable to a bounded set containing the origin and then maintaining it there. Moreover, given an uncertain DT-LPV system jointly subject to unmeasured and constrained additive disturbances, and constraints in states, input commands and reference signals (set points), then invariant set theory is used to find an appropriate polyhedral robust invariant region in which the proposed control framework is guaranteed to robustly stabilise the closed-loop system. Furthermore, this is achieved even for the case of varying non-zero control set points in such uncertain DT-LPV systems. The controller is characterised to have a simple structure leading to an easy implementation, and a non-complex design process. The effectiveness of the proposed method and the implications of the controller design on feasibility and closed-loop performance are demonstrated through application examples on the temperature control on a continuous-stirred tank reactor plant, on the control of a real-coupled DC motor plant, and on an open-loop unstable system example.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and the relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing a means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty.
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
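The global sensitivity analysis step can be sketched with a Saltelli-type Monte Carlo estimator of first-order Sobol indices. The test function below is a simple linear stand-in (not an HSI model) whose analytic indices are 0.8 and 0.2, which makes the estimator easy to verify.

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y | X_i]) / Var(Y) for d independent N(0, 1) inputs."""
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]       # replace column i of A with column i of B
        # Saltelli (2010) estimator of Var(E[Y | X_i])
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

f = lambda X: 2.0 * X[:, 0] + X[:, 1]   # analytic indices: 0.8 and 0.2
S = first_order_sobol(f, d=2, n=200_000, rng=np.random.default_rng(7))
```

For an actual HSI model, f would wrap the habitat-suitability computation and the input columns would be mapped to the uncertain hydrologic and ecological parameters.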
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model with the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint on the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is realised based on the results.
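A minimal sketch of the possibilistic mean and variance of a triangular fuzzy return rate, computed numerically from its level cuts. The level-cut definitions used here (Carlsson-Fullér style) are an assumption for illustration; the closed forms (l + 4m + r)/6 and (r - l)^2/24 provide a check.

```python
import numpy as np

def possibilistic_mean_var(l, m, r, n=100_000):
    """Possibilistic mean M(A) = int_0^1 gamma*(a1 + a2) d(gamma) and
    variance Var(A) = (1/2) int_0^1 gamma*(a2 - a1)^2 d(gamma) of a
    triangular fuzzy number (l, m, r), by midpoint-rule integration
    over its gamma-level cuts [a1(gamma), a2(gamma)]."""
    gamma = (np.arange(n) + 0.5) / n      # midpoints on (0, 1)
    a1 = l + gamma * (m - l)              # lower end of the gamma-cut
    a2 = r - gamma * (r - m)              # upper end of the gamma-cut
    mean = np.mean(gamma * (a1 + a2))
    var = 0.5 * np.mean(gamma * (a2 - a1) ** 2)
    return mean, var

# Hypothetical fuzzy return rate: pessimistic 0%, most possible 1%, optimistic 2%
mean, var = possibilistic_mean_var(0.0, 1.0, 2.0)
# closed forms: mean = (l + 4m + r)/6 = 1, var = (r - l)^2/24 = 1/6
```

In a portfolio model, each security's fuzzy return would contribute such a mean and standard deviation to the possibilistic objective.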
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
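The probabilistic attainment test can be sketched as follows. The reduced-form model and every number below are illustrative stand-ins, not the Atlanta modeling values: a deterministic bright-line test would declare attainment, while the Monte Carlo estimate reports only a probability of it.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# Hypothetical reduced-form response: the ozone benefit of an emission
# cut scales a nominal model estimate by an uncertain multiplicative
# factor absorbing emission-rate and chemistry uncertainty.
baseline = 78.0         # current design value, ppb
target = 75.0           # standard to attain, ppb
nominal_benefit = 4.0   # deterministic model estimate of the reduction, ppb

phi = rng.lognormal(mean=0.0, sigma=0.3, size=N)  # uncertain response factor
future = baseline - nominal_benefit * phi

p_attain = np.mean(future <= target)
```

Here the deterministic projection (78 - 4 = 74 ppb) passes the bright-line test, yet the attainment probability is well below one, which is the distinction the paper draws between deterministic and probabilistic rankings of control strategies.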
Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment
NASA Astrophysics Data System (ADS)
Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.
2015-01-01
In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal here is to provide initial estimates of major nitrogen inputs of NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed in areas of limited data availability. Despite data limitations, we believe that it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.
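The NANI methodology is at heart a nitrogen budget. The sketch below uses placeholder numbers, not the paper's Indian estimates, and the 25% riverine-export fraction is only the commonly cited rule of thumb, treated here as uncertain.

```python
# Net anthropogenic nitrogen inputs for a hypothetical watershed,
# following the usual NANI bookkeeping (all terms kg N / km^2 / yr;
# placeholder values, not the paper's estimates).
fertilizer = 5200.0      # synthetic N fertilizer applied
fixation = 1800.0        # agricultural biological N fixation (rice, legumes, cane)
deposition = 900.0       # atmospheric oxidized-N deposition
net_food_feed = -400.0   # net import of N in food and feed
                         # (negative: the watershed is a net exporter)

nani = fertilizer + fixation + deposition + net_food_feed   # 7500.0

# Roughly a quarter of NANI is often exported by rivers; this fraction
# is itself uncertain and data-limited, as the abstract notes.
riverine_flux = 0.25 * nani
```

Correlating such watershed-level NANI totals against measured riverine N fluxes is the comparison the paper attempts despite the limited Indian river data.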
NASA Astrophysics Data System (ADS)
Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.
2015-12-01
One of the most uncertain modeling tasks in hydrology is the prediction of ungauged stream sediment load and concentration statistics. This study presents integrated artificial neural networks (ANN) models for prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the input parameters used include: soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on the randomly selected 2/3 of the dataset of 94 gauged streams in Ontario, Canada and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, whereas it is inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curve (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality for ungauged basins.
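Once α and β are predicted for an ungauged basin, sediment load follows directly from the rating curve. A minimal sketch with hypothetical parameter values (the units assumed are concentration in mg/L and discharge in m^3/s):

```python
import numpy as np

def sediment_load(q, alpha, beta):
    """Sediment rating curve C = alpha * Q**beta (mg/L, i.e. g/m^3),
    converted to a daily load in tonnes/day for discharge Q in m^3/s."""
    conc = alpha * q ** beta                 # g/m^3
    return conc * q * 86_400 / 1e6           # g/m^3 * m^3/s * s/day -> t/day

# Hypothetical ANN-predicted rating parameters for an ungauged basin
alpha, beta = 0.05, 1.4
q = np.array([2.0, 10.0, 45.0])              # daily mean discharges, m^3/s
loads = sediment_load(q, alpha, beta)
```

Applying this over a full flow-duration curve is what yields the concentration-duration and concentration-duration-frequency products mentioned above.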
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping
2018-03-01
This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming at reducing the complexity of nonlinear approximations, only two least-square-support-vector-machine-based (LS-SVM-based) approximators with two design parameters are required through norm form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. In this scheme, to avoid the tedious analysis and repeated differentiations of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only the first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, the 'explosion of terms' brought by the recursive virtual controller design, is conquered. Moreover, an auxiliary system is designed to compensate for the control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.
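The prescribed performance idea can be sketched with the classical exponentially decaying envelope and the usual logarithmic error transformation; the paper's time-varying variant differs in how the envelope's design parameters evolve, so the forms below are illustrative only.

```python
import numpy as np

def perf_envelope(t, rho0, rho_inf, decay):
    """Classical performance bound rho(t) = (rho0 - rho_inf)e^{-decay*t}
    + rho_inf: the tracking error must satisfy |e(t)| < rho(t)."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map the constrained error |e| < rho to an unconstrained variable
    via the common logarithmic transformation."""
    z = e / rho
    return 0.5 * np.log((1 + z) / (1 - z))

t = np.linspace(0.0, 10.0, 201)
rho = perf_envelope(t, rho0=2.0, rho_inf=0.1, decay=1.0)
e = 0.5 * rho                      # an error trajectory staying inside the funnel
eps = transformed_error(e, rho)    # finite everywhere since |e/rho| = 0.5 < 1
```

Keeping the transformed variable bounded is what guarantees the original error never leaves the shrinking funnel, both in transient and in steady state.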
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different to that in the past. Possible examples include situations where the accuracy of the catchment averaged rainfall has increased substantially due to an increase in the rain-gauge density, or accuracy of climatic observations (such as sea surface temperatures) increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]) operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. 
Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in the rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
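The SIMEX recipe described above can be sketched for the simplest case, a regression slope attenuated by additive input noise of known variance: refit at increasing noise multiples, then extrapolate the fitted trend back to the notional error-free level λ = -1. The synthetic data below are illustrative, not the Sacramento or SSTA applications.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0),
                n_rep=50, rng=None):
    """SIMEX estimate of a regression slope when x carries additive noise
    of known variance sigma_u**2: for each lambda, refit after adding
    extra noise of variance lambda*sigma_u**2, then extrapolate a
    quadratic in lambda back to lambda = -1 (zero total noise)."""
    if rng is None:
        rng = np.random.default_rng(0)
    mean_slopes = []
    for lam in lambdas:
        slopes = []
        for _ in range(n_rep):
            x_l = x_obs + np.sqrt(lam) * sigma_u * rng.standard_normal(x_obs.size)
            slopes.append(np.polyfit(x_l, y, 1)[0])
        mean_slopes.append(np.mean(slopes))
    quad = np.polyfit(lambdas, mean_slopes, 2)
    return np.polyval(quad, -1.0)

# Synthetic check: true slope 2, inputs measured with noise (sigma_u = 0.5)
rng = np.random.default_rng(0)
n = 20_000
x_true = rng.standard_normal(n)
y = 2.0 * x_true + 0.1 * rng.standard_normal(n)
x_obs = x_true + 0.5 * rng.standard_normal(n)

naive = np.polyfit(x_obs, y, 1)[0]   # attenuated toward zero (about 1.6)
simex = simex_slope(x_obs, y, sigma_u=0.5, rng=rng)
```

The quadratic extrapolant does not remove the attenuation bias exactly, but it recovers most of it, which is the practical appeal of SIMEX when the input-error variance is known.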
Liu, Yan-Jun; Tong, Shaocheng
2015-03-01
In the paper, an adaptive tracking control design is studied for a class of nonlinear discrete-time systems with dead-zone input. The considered systems are of the nonaffine pure-feedback form and the dead-zone input appears nonlinearly in the systems. The contributions of the paper are that: 1) it is the first investigation of the control problem for this class of discrete-time systems with dead-zone; 2) there are major difficulties in stabilizing such systems, and to overcome them the systems are transformed into an n-step-ahead predictor, although the nonaffine function still remains; and 3) an adaptive compensative term is constructed to compensate for the parameters of the dead-zone. Neural networks are used to approximate the unknown functions in the transformed systems. Based on Lyapunov theory, it is proven that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded and the tracking error converges to a small neighborhood of zero. Two simulation examples are provided to verify the effectiveness of the control approach.
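The dead-zone input nonlinearity itself is easy to state explicitly; the sketch below is the standard symmetric-slope actuator model (break-points and slopes here are arbitrary illustrative values, not the paper's simulation settings).

```python
import numpy as np

def dead_zone(v, br, bl, mr=1.0, ml=1.0):
    """Standard dead-zone actuator model: output is zero for inputs in
    [bl, br], and linear with slope mr (right) or ml (left) outside."""
    return np.where(v >= br, mr * (v - br),
           np.where(v <= bl, ml * (v - bl), 0.0))

u = np.linspace(-3.0, 3.0, 7)           # commanded control inputs
y = dead_zone(u, br=0.5, bl=-0.5)       # actual actuator outputs
```

It is precisely the unknown break-points br and bl (and slopes) that the adaptive compensative term in the paper estimates online.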
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
Wang, Yingyang; Hu, Jianbo
2018-05-19
An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Different from the traditional non-affine model, which requires the non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model with non-affine functions that are locally semi-bounded and possibly non-differentiable. A new error transformation combined with novel prescribed performance functions is proposed to bypass complex deductions caused by conventional error constraint approaches and circumvent high-frequency chattering in control inputs. On the basis of the backstepping technique, the improved prescribed performance controller with low structural and computational complexity is designed. The methodology guarantees that the altitude and velocity tracking errors remain within transient and steady-state performance envelopes and presents excellent robustness against uncertain dynamics and dead-zone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants in the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of the release of radionuclides from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. The paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters of the corresponding models. Soft computing in this domain basically means the use of fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
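Epistemic uncertainty propagation with triangular membership functions typically proceeds by interval arithmetic on alpha-cuts. The sketch below uses a toy ratio C = Q/u (release rate over wind speed) as a stand-in for a dispersion model; the membership functions and the model are illustrative only.

```python
def triangular_cut(l, m, r, alpha):
    """Alpha-cut interval of a triangular membership function (l, m, r)."""
    return l + alpha * (m - l), r - alpha * (r - m)

def propagate(alpha, wind=(1.0, 3.0, 5.0), release=(8.0, 10.0, 12.0)):
    """Interval arithmetic on the alpha-cuts of two fuzzy inputs of a toy
    concentration model C = Q / u (not a full dispersion model): since C
    increases in Q and decreases in u, the extremes occur at the
    interval endpoints."""
    u_lo, u_hi = triangular_cut(*wind, alpha)
    q_lo, q_hi = triangular_cut(*release, alpha)
    return q_lo / u_hi, q_hi / u_lo

c_support = propagate(0.0)   # widest interval, membership level 0
c_core = propagate(1.0)      # the single most-possible value
```

Sweeping alpha from 0 to 1 stacks these intervals into the fuzzy membership function of the output concentration, which is how the epistemic input uncertainty reaches the risk estimate.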
Anticipatory Emotions in Decision Tasks: Covert Markers of Value or Attentional Processes?
Davis, Tyler; Love, Bradley C.; Maddox, Todd
2009-01-01
Anticipatory emotions precede behavioral outcomes and provide a means to infer interactions between emotional and cognitive processes. A number of theories hold that anticipatory emotions serve as inputs to the decision process and code the value or risk associated with a stimulus. We argue that current data do not unequivocally support this theory. We present an alternative theory whereby anticipatory emotions reflect the outcome of a decision process and serve to ready the subject for new information when making an uncertain response. We test these two accounts, which we refer to as emotions-as-input and emotions-as-outcome, in a task that allows risky stimuli to be dissociated from uncertain responses. We find that emotions are associated with responses as opposed to stimuli. This finding is contrary to the emotions-as-input perspective as it shows that emotions arise from decision processes. PMID:19428002
Robust control synthesis for uncertain dynamical systems
NASA Technical Reports Server (NTRS)
Byun, Kuk-Whan; Wie, Bong; Sunkel, John
1989-01-01
This paper presents robust control synthesis techniques for uncertain dynamical systems subject to structured parameter perturbation. Both QFT (quantitative feedback theory) and H-infinity control synthesis techniques are investigated. Although most H-infinity-related control techniques are not concerned with the structured parameter perturbation, a new way of incorporating the parameter uncertainty in the robust H-infinity control design is presented. A generic model of uncertain dynamical systems is used to illustrate the design methodologies investigated in this paper. It is shown that, for a certain noncolocated structural control problem, use of both techniques results in nonminimum phase compensation.
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. To automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. We repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
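The optimization step described above reduces, per candidate subrange, to scoring the resulting impact indicator index (III) pattern against an observed impact map via the AUROC. A minimal sketch of that scoring, with invented III scores and observations (not r.ranger's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed impact map (1 = pixel affected by the documented event)
observed = rng.integers(0, 2, size=200)

# Hypothetical impact indicator index (III): fraction of parameter
# combinations within a subrange predicting an impact on each pixel,
# here loosely correlated with the observation
iii = 0.35 * observed + 0.65 * rng.random(200)

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

score = auroc(iii, observed)
print(f"AUROC = {score:.3f}")  # a skilful subrange scores well above 0.5
```

The factor of conservativeness could be scored analogously, e.g. as a ratio of predicted to observed impact areas.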
Choi, Yun Ho; Yoo, Sung Jin
2018-06-01
This paper investigates the event-triggered decentralized adaptive tracking problem of a class of uncertain interconnected nonlinear systems with unexpected actuator failures. It is assumed that local control signals are transmitted to local actuators with time-varying faults whenever predefined conditions for triggering events are satisfied. Compared with the existing control-input-based event-triggering strategy for adaptive control of uncertain nonlinear systems, the aim of this paper is to propose a tracking-error-based event-triggering strategy in the decentralized adaptive fault-tolerant tracking framework. The proposed approach can relax drastic changes in control inputs caused by actuator faults in the existing triggering strategy. The stability of the proposed event-triggering control system is analyzed in the Lyapunov sense. Finally, simulation comparisons of the proposed and existing approaches are provided to show the effectiveness of the proposed theoretical result in the presence of actuator faults. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
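As a rough, generic illustration of a tracking-error-based triggering rule (a scalar first-order plant with made-up gains and threshold, not the paper's interconnected systems or actuator fault models): the control signal is recomputed and transmitted only when the tracking error has drifted sufficiently since the last event.

```python
import numpy as np

# Scalar plant x' = -x + u tracking r(t) = sin(0.5 t); the control is
# updated only at triggering instants (hypothetical gains and threshold).
dt, T, eps = 0.01, 10.0, 0.05
steps = int(T / dt)
x, u, events = 0.0, 0.0, 0
e_last = None
for k in range(steps):
    t = k * dt
    r = np.sin(0.5 * t)                          # reference trajectory
    e = x - r                                    # tracking error
    if e_last is None or abs(e - e_last) > eps:  # event-triggering condition
        u = -2.0 * e + r                         # simple stabilizing feedback
        e_last = e
        events += 1
    x += dt * (-x + u)                           # Euler step of the plant

print(f"{events} transmissions out of {steps} samples")
```

Only a fraction of the samples triggers a transmission, at the price of an additional tracking error bounded on the order of eps.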
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
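The laboratory-to-field argument turns on Monod kinetics saturating and on electron-acceptor limitation. A toy dual-Monod batch calculation (illustrative parameter values, not the BIO3D/Borden ones) shows how degradation stalls once oxygen is exhausted:

```python
# Dual-Monod degradation of contaminant C limited by electron acceptor O
# (oxygen), integrated with forward Euler; all parameters are illustrative.
k_max, K_C, K_O, Y = 1.0, 0.5, 0.1, 0.3   # max rate, half-saturations, yield
C, O = 10.0, 8.0                           # initial concentrations, mg/L
dt = 0.01
for _ in range(2000):                      # simulate t = 0 .. 20
    rate = k_max * C / (K_C + C) * O / (K_O + O)
    C = max(C - dt * rate, 0.0)            # contaminant degraded
    O = max(O - dt * rate / Y, 0.0)        # oxygen consumed faster (1/Y)

print(f"final C = {C:.2f} mg/L, final O = {O:.2f} mg/L")
# degradation stalls near C = 10 - Y*8 = 7.6 once oxygen runs out
```

A first-order rate fitted to the early-time lab data would instead predict C decaying toward zero, which is exactly the overprediction the abstract describes.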
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, even when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
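The two analysis building blocks named above, Monte Carlo propagation of input uncertainty and standardized regression coefficients (SRC) for ranking inputs, can be sketched generically as follows. The parameter names, distributions, and the stand-in response below are invented, not the KDP kinetics:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000

# Hypothetical uncertain inputs (e.g. nucleation/growth rate constants)
kb = rng.normal(1.0, 0.10, N)
kg = rng.normal(0.5, 0.05, N)
g  = rng.normal(1.5, 0.15, N)

# Stand-in model for a predicted output (e.g. a mean crystal size)
y = 2.0 * kg - 0.4 * kb + 0.1 * g**2 + rng.normal(0, 0.01, N)

# SRC: regress the standardized output on the standardized inputs;
# |SRC| ranks importance (meaningful when the model is close to linear)
X = np.column_stack([kb, kg, g])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, c in zip(["kb", "kg", "g"], src):
    print(f"SRC({name}) = {c:+.3f}")
```

The sum of squared SRCs approximates the model's R²; when it falls well below one, the linearity assumption is violated and a screening method such as Morris is the safer ranking tool.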
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
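The decomposition idea can be sketched in the crisp (non-fuzzy) setting with a first-order cut-HDMR: component functions are built from model evaluations along lines through a reference (cut) point, so the number of evaluations grows linearly rather than exponentially with dimension. The test function and cut point below are made up:

```python
import numpy as np

def f(x):
    # weakly coupled test function (the interaction term is small)
    return x[0]**2 + 2.0 * x[1] + np.sin(x[2]) + 0.01 * x[0] * x[1]

cut = np.zeros(3)        # reference (cut) point
f0 = f(cut)              # zeroth-order term

def f_i(i, xi):
    """First-order component: vary x_i only, others held at the cut point."""
    z = cut.copy()
    z[i] = xi
    return f(z) - f0

def hdmr1(x):
    """First-order HDMR approximation: f0 + sum_i f_i(x_i)."""
    return f0 + sum(f_i(i, x[i]) for i in range(3))

x = np.array([0.3, -0.2, 0.5])
exact, approx = f(x), hdmr1(x)
print(f"exact = {exact:.4f}, HDMR-1 = {approx:.4f}")
```

The approximation error here is exactly the neglected interaction term 0.01·x0·x1; when higher-order correlations are weak, as the HDMR assumption requires, the first-order expansion suffices.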
Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions
NASA Astrophysics Data System (ADS)
Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.
2017-12-01
Ice sheet and glacier model outputs depend on imperfectly known initial and boundary conditions, as well as other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore the sensitivities of the Greenland Ice Sheet's entire and regional volumes to initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as the one described here are now accessible to the modeling community, based on the latest version of SICOPOLIS that has been adapted for OpenAD to generate correct and efficient adjoint code.
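The cost argument for the adjoint can be illustrated on a toy linear time-stepping model (generic numpy, not SICOPOLIS/OpenAD code): a single backward sweep with the transposed step operator yields the gradient of a scalar objective with respect to the entire initial state, where finite differences would need one perturbed forward run per state component.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 40, 25

# Linear forward model u_{k+1} = A u_k (a stand-in for one model time step)
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
u0 = rng.standard_normal(n)
c = np.ones(n)                 # objective J(u0) = sum of final state

u = u0.copy()
for _ in range(N):             # forward run
    u = A @ u
J = c @ u

lam = c.copy()
for _ in range(N):             # adjoint run: one backward (transpose) sweep
    lam = A.T @ lam
grad_adjoint = lam             # full gradient dJ/du0 from a single sweep

# Spot-check one component with a finite difference (exact here: J is linear)
eps = 1e-3
u0p = u0.copy()
u0p[0] += eps
up = u0p.copy()
for _ in range(N):
    up = A @ up
fd = (c @ up - J) / eps
print(f"adjoint dJ/du0[0] = {grad_adjoint[0]:.6f}, finite diff = {fd:.6f}")
```

One adjoint sweep costs about one forward run; the finite-difference route would cost n forward runs here, which is the gap the abstract refers to.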
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
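The parameter-estimation step amounts to nonlinear least squares between simulator output and observations. A generic Gauss-Newton sketch on a toy forward model (the model, data, and starting point are invented; ITOUGH2's minimization algorithms are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inverse problem: estimate p = (a, b) in a forward model from noisy
# observations by Gauss-Newton nonlinear least squares.
def forward(p, t):
    a, b = p
    return a * np.exp(-b * t)          # stand-in for a simulator output

t_obs = np.linspace(0.0, 5.0, 30)
p_true = np.array([2.0, 0.7])
obs = forward(p_true, t_obs) + rng.normal(0.0, 0.01, t_obs.size)

p = np.array([1.0, 0.3])               # initial guess
for _ in range(20):
    r = obs - forward(p, t_obs)        # residual vector
    J = np.column_stack([              # Jacobian w.r.t. (a, b)
        np.exp(-p[1] * t_obs),
        -p[0] * t_obs * np.exp(-p[1] * t_obs),
    ])
    dp, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton step
    p = p + dp

print(f"estimated a = {p[0]:.3f}, b = {p[1]:.3f}  (true: 2.0, 0.7)")
```

The residual analysis mentioned in the abstract would then examine r at the optimum, e.g. for systematic structure indicating model error rather than noise.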
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions is computed, along with a probabilistic spatial-temporal estimate of ash presence.
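A one-parameter sketch of the surrogate idea: run a small ensemble at quadrature nodes, project onto a Hermite polynomial chaos, then sample the cheap expansion instead of the model. Plain Gauss-Hermite nodes stand in for the CUT points here, and the model response is invented:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# One standardized uncertain source parameter xi ~ N(0, 1); model(xi)
# stands in for a full puff dispersion run at that parameter value.
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2   # hypothetical smooth response

# Small ensemble at probabilists' Gauss-Hermite nodes (the CUT method
# chooses its ensemble differently; this is the classical rule)
nodes, weights = He.hermegauss(7)

# Projection onto Hermite polynomials: c_k = E[model(xi) * He_k(xi)] / k!
deg = 4
coeffs = []
for k in range(deg + 1):
    Hk = He.hermeval(nodes, [0.0] * k + [1.0])
    ck = np.sum(weights * model(nodes) * Hk) / (math.sqrt(2 * math.pi)
                                                * math.factorial(k))
    coeffs.append(ck)

# The chaos expansion is now a cheap surrogate for the model
def pce(xi):
    return He.hermeval(xi, coeffs)

xi_test = np.array([-2.0, -0.5, 0.0, 1.0, 2.0])
err = np.max(np.abs(pce(xi_test) - model(xi_test)))
print(f"max surrogate error on test points: {err:.2e}")
```

Thousands of Monte Carlo samples of `pce` then cost microseconds each, which is what makes the probabilistic forecast affordable.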
Robust on-off pulse control of flexible space vehicles
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi
1993-01-01
The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as the orbital transfer vehicles, orbital maneuvering vehicles, and space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is used for a three-mass spring model of flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuver of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among the maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed, robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. A benchmark flexible-structure problem is used as an example for feedback sliding mode controller design, and robustness to parameter variations under bounded control inputs is investigated.
Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector
NASA Astrophysics Data System (ADS)
Lenel, U. R.; Davies, D. G. S.; Moore, M. A.
An analytical technique (Analysis with Uncertain Qualities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
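A two-parameter sketch of the first-order Taylor interval step and the subinterval refinement (a generic response function and made-up bounds, not the structural-acoustic system matrices):

```python
import numpy as np

def y(b):
    # stand-in response with uncertain-but-bounded parameters b
    return b[0]**2 + 3.0 * b[0] * b[1] + np.sin(b[1])

def taylor_interval(b_lo, b_hi):
    """First-order Taylor enclosure of y over the box [b_lo, b_hi]."""
    bc = 0.5 * (b_lo + b_hi)               # midpoint
    br = 0.5 * (b_hi - b_lo)               # radii
    eps = 1e-6
    g = np.array([(y(bc + eps * e) - y(bc - eps * e)) / (2 * eps)
                  for e in np.eye(len(bc))])
    r = np.sum(np.abs(g) * br)             # linearized interval radius
    return y(bc) - r, y(bc) + r

b_lo, b_hi = np.array([0.9, 0.4]), np.array([1.1, 0.6])
lo1, hi1 = taylor_interval(b_lo, b_hi)

# Subinterval technique: split each range in two, enclose each sub-box,
# and take the union of the four linearized enclosures
bounds = []
for i in range(2):
    for j in range(2):
        lo = np.array([b_lo[0] + 0.1 * i, b_lo[1] + 0.1 * j])
        bounds.append(taylor_interval(lo, lo + 0.1))
lo2 = min(b[0] for b in bounds)
hi2 = max(b[1] for b in bounds)
print(f"single interval:    [{lo1:.3f}, {hi1:.3f}]")
print(f"subinterval union:  [{lo2:.3f}, {hi2:.3f}]")
```

Splitting shrinks the linearization error of each piece, which is why the paper introduces the subinterval technique for higher accuracy.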
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
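The grouped first-order index admits a compact pick-freeze Monte Carlo estimator: keep the group of interest fixed, resample everything else, and correlate the paired model outputs. A sketch on an additive stand-in model (the groups, coefficients, and distributions are invented; the analytic group index is 5/5.5, about 0.909):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Stand-in model: a "boundary condition" group (b1, b2) and a
# "permeability" group (p1, p2), all standard normal, additive response
def model(b1, b2, p1, p2):
    return 2.0 * b1 + 1.0 * b2 + 0.5 * p1 + 0.5 * p2

b1, b2, p1, p2 = rng.standard_normal((4, N))
_, _, q1, q2 = rng.standard_normal((4, N))   # fresh draws for the other group

yA = model(b1, b2, p1, p2)
yB = model(b1, b2, q1, q2)                   # boundary group "frozen"

S_boundary = np.cov(yA, yB)[0, 1] / np.var(yA)
print(f"S(boundary group) = {S_boundary:.3f}  (analytic: {5 / 5.5:.3f})")
```

Grouping spatially correlated inputs this way is what keeps the number of indices, and hence the number of model runs, manageable for high-dimensional fields.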
An imprecise probability approach for squeal instability analysis based on evidence theory
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-01-01
An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. All these uncertain parameters typically involve imprecise data such as incomplete or conflicting information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples, and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to squeal problems without too many investigated parameters. It can be considered as a potential method for squeal instability analysis, which will act as the first step to reduce squeal noise of uncertain brakes with imprecise information.
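The belief and plausibility bookkeeping reduces to interval checks against the focal elements of the basic probability assignment (BPA). A toy example with invented intervals for the real part of the dominant eigenvalue (a positive real part meaning squeal instability):

```python
# Focal elements: (interval for Re(lambda), BPA mass); numbers are invented
focal = [((-0.5, 0.2), 0.3),
         (( 0.1, 0.6), 0.4),
         (( 0.3, 0.9), 0.3)]

def bel_pl(focal, threshold=0.0):
    """Belief/plausibility of the proposition Re(lambda) > threshold."""
    bel = sum(m for (lo, hi), m in focal if lo > threshold)  # entirely inside
    pl  = sum(m for (lo, hi), m in focal if hi > threshold)  # intersects
    return bel, pl

bel, pl = bel_pl(focal)
print(f"Belief = {bel:.2f}, Plausibility = {pl:.2f}")
```

Belief and plausibility bracket the unknown probability of instability; a precise probability model would collapse the two bounds into a single number.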
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of the total number of surrogate models which satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violation increases as the reliability level is reduced. Thus, ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
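The reliability measure used above (the fraction of ensemble surrogates whose constraint is satisfied by a candidate pumping solution) can be sketched with trivial linear stand-in surrogates; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Ensemble of 40 stand-in surrogates: each maps a pumping rate to a
# predicted salinity via its own slope (bootstrap-trained models would
# differ similarly); a candidate is acceptable under a surrogate if the
# predicted salinity respects the limit.
n_models = 40
slopes = rng.normal(0.8, 0.08, n_models)
salinity_limit = 1.2

def reliability(pumping):
    predictions = slopes * pumping
    return float(np.mean(predictions <= salinity_limit))

for q in (1.0, 1.4, 1.8):
    print(f"pumping {q:.1f}: reliability {reliability(q):.2f}")
```

Constraining the optimizer to solutions with reliability of at least 0.99 then trades pumping benefit against robustness, as in the study.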
Energy balance for uranium recovery from seawater
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, E.; Lindner, H.
The energy return on investment (EROI) of an energy resource is the ratio of the energy it ultimately produces to the energy used to recover it. EROI is a key viability measure for a new recovery technology, particularly in its early stages of development when financial cost assessment would be premature or highly uncertain. This paper estimates the EROI of uranium recovery from seawater via a braid adsorbent technology. In this paper, the energy cost of obtaining uranium from seawater is assessed by breaking the production chain into three processes: adsorbent production, adsorbent deployment and mooring, and uranium elution and purification. Both direct and embodied energy inputs are considered. Direct energy is the energy used by the processes themselves, while embodied energy is used to fabricate their material, equipment or chemical inputs. If the uranium is used in a once-through fuel cycle, the braid adsorbent technology EROI ranges from 12 to 27, depending on still-uncertain performance and system design parameters. It is highly sensitive to the adsorbent capacity in grams of U captured per kg of adsorbent as well as to potential economies in chemical use. This compares to an EROI of ca. 300 for contemporary terrestrial mining. It is important to note that these figures only consider the mineral extraction step in the fuel cycle. At a reference performance level of 2.76 g U recovered per kg adsorbent immersed, the largest energy consumers are the chemicals used in adsorbent production (63%), anchor chain mooring system fabrication and operations (17%), and unit processes in the adsorbent production step (12%).
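The EROI itself is simple arithmetic once the direct-plus-embodied energy inventory is assembled; the figures below are placeholders only (the abstract's actual inventory yields an EROI between 12 and 27):

```python
# EROI = energy ultimately produced / energy invested in recovery.
# All numbers below are illustrative placeholders, not the paper's data.
energy_out_per_kgU = 500.0e3       # MJ delivered per kg U, once-through cycle
energy_in = {                      # MJ invested per kg U recovered
    "adsorbent production": 20.0e3,
    "deployment and mooring": 5.0e3,
    "elution and purification": 3.0e3,
}

total_in = sum(energy_in.values())
eroi = energy_out_per_kgU / total_in
shares = {k: v / total_in for k, v in energy_in.items()}

print(f"EROI = {eroi:.1f}")
for k, s in shares.items():
    print(f"  {k}: {100 * s:.0f}% of energy input")
```

The share breakdown is what identifies the dominant consumer (chemicals in adsorbent production, in the paper's inventory) and hence where adsorbent capacity or chemical-use economies move the EROI most.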
NASA Astrophysics Data System (ADS)
Taha, Ahmad Fayez
Transportation networks, wearable devices, energy systems, and the book you are reading now are all ubiquitous cyber-physical systems (CPS). These inherently uncertain systems combine physical phenomena with communication, data processing, control and optimization. Many CPSs are controlled and monitored by real-time control systems that use communication networks to transmit and receive data from systems modeled by physical processes. Existing studies have addressed a breadth of challenges related to the design of CPSs. However, there is a lack of studies on uncertain CPSs subject to dynamic unknown inputs and cyber-attacks, an artifact of the insertion of communication networks and the growing complexity of CPSs. The objective of this dissertation is to create secure, computational foundations for uncertain CPSs by establishing a framework to control, estimate and optimize the operation of these systems. With major emphasis on power networks, the dissertation deals with the design of secure computational methods for uncertain CPSs, focusing on three crucial issues: (1) cyber-security and risk mitigation, (2) network-induced time-delays and perturbations, and (3) the encompassed extreme time-scales. The dissertation consists of four parts. In the first part, we investigate dynamic state estimation (DSE) methods and rigorously examine the strengths and weaknesses of the proposed routines under dynamic attack-vectors and unknown inputs. In the second part, utilizing high-frequency measurements in smart grids and the DSE methods developed in the first part, we present a risk mitigation strategy that minimizes the encountered threat levels, while ensuring the continual observability of the system through available, safe measurements. The methods developed in the first two parts rely on the assumption that the uncertain CPS is not experiencing time-delays, an assumption that might fail under certain conditions. 
To overcome this challenge, networked unknown input observers (observers/estimators for uncertain CPSs) are designed such that the effects of time-delays and cyber-induced perturbations are minimized, enabling the secure DSE and risk mitigation of the first two parts. The final part deals with the extreme time-scales encompassed in CPSs generally and smart grids specifically. Operational decisions for long time-scales can adversely affect the security of CPSs on faster time-scales. We present a model that jointly describes steady-state operation and transient stability by combining convex optimal power flow with semidefinite programming formulations of an optimal control problem. This approach can be utilized jointly with the aforementioned parts of the dissertation work, considering time-delays and DSE. The research contributions of this dissertation furnish CPS stakeholders with insights on the design and operation of uncertain CPSs, whilst guaranteeing the system's real-time safety. Finally, although many of the results of this dissertation are tailored to power systems, the results are general enough to be applied to a variety of uncertain CPSs.
NASA Astrophysics Data System (ADS)
Chen, Liang-Ming; Lv, Yue-Yong; Li, Chuan-Jiang; Ma, Guang-Fu
2016-12-01
In this paper, we investigate cooperatively surrounding control (CSC) of multi-agent systems modeled by Euler-Lagrange (EL) equations under a directed graph. With the consideration of the uncertain dynamics in an EL system, a backstepping CSC algorithm combined with neural-networks is proposed first such that the agents can move cooperatively to surround the stationary target. Then, a command filtered backstepping CSC algorithm is further proposed to deal with the constraints on control input and the absence of neighbors’ velocity information. Numerical examples of eight satellites surrounding one space target illustrate the effectiveness of the theoretical results. Project supported by the National Basic Research Program of China (Grant No. 2012CB720000) and the National Natural Science Foundation of China (Grant Nos. 61304005 and 61403103).
Effects of modeling errors on trajectory predictions in air traffic control automation
NASA Technical Reports Server (NTRS)
Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda
1996-01-01
Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performances and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.
The hydraulic capacity of deteriorating sewer systems.
Pollert, J; Ugarelli, R; Saegrov, S; Schilling, W; Di Federico, V
2005-01-01
Sewer and wastewater systems suffer from insufficient capacity, construction flaws and pipe deterioration. The consequences are structural failures, local floods, surface erosion and pollution of receiving water bodies. European cities spend on the order of five billion Euro per year on wastewater network rehabilitation, an amount that is expected to increase due to network ageing. The project CARE-S (Computer Aided REhabilitation of Sewer Networks) deals with sewer and storm water networks. The final project goal is to develop integrated software that provides the most cost-efficient system of maintenance, repair and rehabilitation of sewer networks. Decisions on investments in rehabilitation often have to be made with uncertain information about the structural condition and the hydraulic performance of a sewer system, so decision-making involves considerable risks. This paper presents the results of research focused on the hydraulic effects caused by failures due to the temporal decline of sewer systems. Hydraulic simulations are usually carried out by running commercial models that apply, as input, default values of parameters that strongly influence the results. Using CCTV inspection information as a dataset to catalogue the principal types of failures affecting pipes, a 3D model was used to evaluate their hydraulic consequences. The translation of failure effects into parameter values producing the same hydraulic conditions caused by the failures was carried out by comparing laboratory experiments with 3D simulation results. These parameters could then be used as input to 1D commercial models in place of the commonly inserted default values.
Robust fuel- and time-optimal control of uncertain flexible space structures
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken
1993-01-01
The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
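The Monte Carlo sampling-and-aggregation loop described above can be sketched as follows. The distributions, the cube-root damage-radius scaling, and all constants are invented stand-ins, not the PAIR model's actual entry and consequence tools.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of sampled impact scenarios

# Sample uncertain impactor properties (all distributions assumed here).
diameter_m = rng.uniform(20.0, 300.0, N)      # asteroid diameter, m
density = rng.uniform(1500.0, 3500.0, N)      # bulk density, kg/m^3
velocity = rng.uniform(12e3, 30e3, N)         # entry speed, m/s

mass = density * (np.pi / 6.0) * diameter_m**3
energy_mt = 0.5 * mass * velocity**2 / 4.184e15   # kinetic energy, Mt TNT

# Toy consequence model: ground-damage radius scaling with the cube root
# of energy (stands in for the energy-deposition and blast modeling).
damage_radius_km = 2.0 * energy_mt ** (1.0 / 3.0)

# Sampled population density at the (random) impact location.
pop_density = rng.lognormal(mean=2.0, sigma=1.5, size=N)  # people / km^2
affected = pop_density * np.pi * damage_radius_km**2

# Aggregate the per-scenario results into a distribution of outcomes.
mean_affected = np.mean(affected)
p95_affected = np.percentile(affected, 95)
```

A risk-tolerance question such as "what minimum size constitutes a threat" would then be answered by conditioning this output distribution on `diameter_m`.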
2013-04-22
Related publications include "…Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots," Journal of Guidance, Control, and Dynamics (2010), and Naira Hovakimyan, "L1 Adaptive Controller for MIMO Systems with Unmatched Uncertainties using Modified Piecewise Constant Adaptation Law," IEEE 51st… This L1 adaptive control architecture uses data from the reference model.
Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty
NASA Astrophysics Data System (ADS)
Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs, due to increased variability in the hydrological regimes, are a major challenge to the optimal operation of these systems. Such uncertainty, boosted by the projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time-persisting statistical characteristics of a given variable as inferred from historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic, set-membership-based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated with any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated to ensure reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. 
Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design, thus alleviating the risk of mis-adaptation, namely the design of a solution fully adapted to a scenario that is different from the one that will actually realize.
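The state-augmentation idea behind sFQI can be illustrated with a toy tabular Q-iteration in which the scenario index is simply part of the state, so one learning run yields a policy for every scenario. The reservoir, inflows, and reward below are invented, and the sketch omits the function approximation of true fitted Q-iteration.

```python
import numpy as np

# Toy reservoir: discrete storage levels, two inflow scenarios (dry/wet).
storages = np.arange(0, 11)          # 0..10 units of stored water
actions = np.arange(0, 4)            # release 0..3 units per step
inflow = {0: 1, 1: 3}                # scenario index -> deterministic inflow
demand = 2                           # downstream demand per step
gamma = 0.95                         # discount factor

# Q is indexed by the AUGMENTED state (scenario, storage) and the action.
Q = np.zeros((2, storages.size, actions.size))
for _ in range(200):                 # value-iteration sweeps to convergence
    V = Q.max(axis=2)
    for s in (0, 1):                 # scenario is part of the state
        for i, st in enumerate(storages):
            for j, a in enumerate(actions):
                release = min(a, st)                  # cannot release more than stored
                nxt = min(st - release + inflow[s], storages[-1])
                reward = -abs(demand - release)       # penalize deficit or excess
                Q[s, i, j] = reward + gamma * V[s, nxt]

# One learning process, one policy table covering every scenario.
policy = Q.argmax(axis=2)
```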
NASA Astrophysics Data System (ADS)
Marzbanrad, Javad; Tahbaz-zadeh Moghaddam, Iman
2016-09-01
The main purpose of this paper is to design a self-tuning control algorithm for an adaptive cruise control (ACC) system that can adapt its behaviour to variations of vehicle dynamics and uncertain road grade. To this aim, a short-time linear quadratic form (STLQF) estimation technique is developed to simultaneously track the trends of the time-varying parameters of vehicle longitudinal dynamics with a small delay. These parameters are vehicle mass, road grade and aerodynamic drag-area coefficient. Next, the values of the estimated parameters are used to tune the throttle and brake control inputs and to regulate the throttle/brake switching logic. The performance of the designed STLQF-based self-tuning control (STLQF-STC) algorithm for the ACC system is compared with a conventional method based on a fixed control structure in both the speed- and distance-tracking control modes. Simulation results show that the proposed control algorithm improves the performance of the throttle and brake controllers, providing more comfort while travelling, enhancing driving safety and giving satisfactory performance in the presence of different payloads and road grade variations.
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.
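The global criterion method listed first can be sketched on a toy two-objective problem: each objective is first minimized individually, then a single scalar criterion of squared relative deviations from those individual optima is minimized. The objective functions below are invented stand-ins for the beam's mass and (negative) natural frequency, not Rao's actual model.

```python
import numpy as np

# Design variable x (e.g. a cross-section dimension) on a dense grid.
x = np.linspace(0.5, 5.0, 500)
f1 = x          # toy objective 1: structural mass (increases with x)
f2 = 1.0 / x    # toy objective 2: negative-frequency surrogate (decreases with x)

# Step 1: individual optima of each objective taken separately.
f1_star, f2_star = f1.min(), f2.min()

# Step 2: global criterion = sum of squared relative deviations from the
# individual optima; its minimizer is the compromise design.
G = ((f1 - f1_star) / f1_star) ** 2 + ((f2 - f2_star) / f2_star) ** 2
x_opt = x[np.argmin(G)]
```

Game theory, goal programming, and the other listed methods differ only in how this scalarization step is constructed.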
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
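The MLMC telescoping estimator can be sketched on a toy problem where the "solver" is a midpoint quadrature of a random integrand on grids of increasing resolution. The sample allocation and problem are assumptions; the paper's MLSMC posterior machinery is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def solve(a, level):
    """Midpoint-rule approximation of the integral of exp(a*x) over [0,1]
    on 2**level cells -- a stand-in for an approximate PDE solve at `level`."""
    mid = (np.arange(2**level) + 0.5) / 2**level
    return np.exp(np.outer(a, mid)).mean(axis=1)

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with many
# cheap coarse samples and progressively fewer expensive fine samples.
samples_per_level = [4000, 1000, 250]
a0 = rng.normal(size=samples_per_level[0])
estimate = solve(a0, 0).mean()                        # E[P_0]
for level in (1, 2):
    a = rng.normal(size=samples_per_level[level])     # SAME draws on both
    estimate += (solve(a, level) - solve(a, level - 1)).mean()

# The exact value of E[int_0^1 exp(a*x) dx] for a ~ N(0,1) is about 1.195.
```

The coupling (evaluating both levels on the same random draws) is what makes the correction terms low-variance and the method cheaper than plain Monte Carlo at the finest level.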
Uncertainty in BMP evaluation and optimization for watershed management
NASA Astrophysics Data System (ADS)
Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.
2012-12-01
The use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate the potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, the use of simulation models in optimizing the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, and land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We have used a watershed model (Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwestern USA. The SWAT model was also used to represent the various BMPs needed to improve water quality in the watershed. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. 
In addition, climate change scenarios also affected uncertainties in SWAT simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted due to uncertainties in land use, climate change, and model parameter values.
NASA Technical Reports Server (NTRS)
Brophy, J. R., Jr.; Wilbur, P. J.
1980-01-01
A simple theoretical model was developed that can be used as an aid in the design of the baffle aperture region of a hollow cathode equipped ion thruster. An analysis of the ion and electron currents in both the main and cathode discharge chambers is presented. From this analysis a model of current flow through the aperture, which is required as an input to the design model, was developed and verified experimentally. The dominant force driving electrons through the aperture was found to be the force due to the electrical potential gradient. The diffusion process was modeled according to Bohm diffusion theory. A number of simplifications were made to limit the amount of detailed plasma information required as input, to facilitate the use of the model in thruster design. This simplified model gave results remarkably consistent with experiments performed with a given thruster geometry over substantial changes in operating conditions, but was uncertain to about a factor of two across different thruster cathode region geometries. The model's usefulness for design was limited by this factor-of-two uncertainty and by the accuracy to which the plasma parameters required as inputs could be specified.
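Bohm diffusion gives the anomalous cross-field diffusion coefficient D_B = k*T_e / (16 e B). The electron temperature and magnetic field below are assumed illustrative values for a discharge-chamber plasma, not measurements from the thruster experiments.

```python
# Bohm diffusion coefficient D_B = k*T_e / (16 e B).
e = 1.602e-19          # elementary charge, C
Te_eV = 5.0            # electron temperature in eV (assumed)
B = 5e-3               # magnetic flux density in tesla (assumed)

# With the temperature expressed in eV, k*T_e = Te_eV * e joules, so the
# elementary charges cancel and D_B reduces to Te_eV / (16 * B).
D_bohm = Te_eV / (16.0 * B)   # m^2 / s
print(D_bohm)  # 62.5 m^2/s
```

Note the 1/B scaling: Bohm diffusion falls off much more slowly with magnetic field than classical diffusion (which scales as 1/B^2), which is why it dominates electron transport through the baffle aperture region.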
Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.
Chang, Yeong-Chan
2009-02-01
This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller is developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables and not on the tuning approximation parameters. Furthermore, we extend the design to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.
Synchronization transmission of laser pattern signal within uncertain switched network
NASA Astrophysics Data System (ADS)
Lü, Ling; Li, Chengren; Li, Gang; Sun, Ao; Yan, Zhe; Rong, Tingting; Gao, Yan
2017-06-01
We propose a new technique for the synchronization transmission of a laser pattern signal within an uncertain network with controllable topology. During the synchronization process, the connections of the dynamic network can vary at any time according to different demands. In particular, we construct the Lyapunov function of the network by designing a special semi-positive definite function, so that the synchronization transmission of the laser pattern signal within the uncertain network with controllable topology can be realized; this effectively avoids the complicated calculation of the second-largest eigenvalue of the coupling matrix of the dynamic network, which is otherwise needed to obtain the network synchronization condition. At the same time, the uncertain parameters in the dynamic equations of the network nodes can be identified accurately by designing identification laws for the uncertain parameters. In addition, the new technique places no limitations on the synchronization target of the network; in other words, the target can be either a state variable signal of an arbitrary node within the network or an exterior signal.
Robust adaptive sliding mode control for uncertain systems with unknown time-varying delay input.
Benamor, Anouar; Messaoud, Hassani
2018-05-02
This article focuses on robust adaptive sliding mode control laws for uncertain discrete systems with unknown time-varying delay input, where the uncertainty is assumed unknown. The main results of this paper are divided into three phases. In the first phase, a new sliding surface is derived using Linear Matrix Inequalities (LMIs). In the second phase, using the new sliding surface, a novel Robust Sliding Mode Control (RSMC) is proposed in which the upper bound of the uncertainty is supposed known. Finally, a novel Robust Adaptive Sliding Mode Control (RASMC) approach is defined for this type of system, in which the upper bound of the uncertainty is assumed unknown. In this new approach, we estimate the upper bound of the uncertainties and determine the control law based on a sliding surface that converges to zero. These novel control laws have been validated in simulation on an uncertain numerical system, with good results and a comparative study. Their efficiency is further emphasized through the application of the new controls to two physical systems: the PT326 process trainer and a two-tank hydraulic system. Published by Elsevier Ltd.
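The core idea, adapting the switching gain when the uncertainty bound is unknown, can be sketched on a continuous-time double integrator. This toy omits the paper's discrete-time setting, LMI surface design, and input delay; the surface s = v + lam*x, the adaptation law, and all gains are assumptions.

```python
import numpy as np

dt, T = 1e-3, 5.0
x, v = 1.0, 0.0          # states of a double integrator with disturbance
k_hat = 0.0              # adaptive estimate of the uncertainty bound
lam, gamma = 2.0, 5.0    # sliding-surface slope, adaptation rate

for i in range(int(T / dt)):
    t = i * dt
    d = 0.8 * np.sin(2.0 * t)        # unknown bounded disturbance, |d| <= 0.8
    s = v + lam * x                  # sliding surface
    # Equivalent control plus switching term with the ADAPTED gain: the
    # controller never needs to know the true bound 0.8 in advance.
    u = -lam * v - (k_hat + 0.5) * np.sign(s)
    k_hat += gamma * abs(s) * dt     # adaptation law: k_hat' = gamma * |s|
    a = u + d                        # actual acceleration
    x += v * dt                      # explicit Euler integration
    v += a * dt

# The states should be driven near the origin despite the unknown d.
```

The gain `k_hat` grows only while `|s|` is large, so it automatically settles once it dominates the true disturbance bound and sliding begins.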
Effective techniques for the identification and accommodation of disturbances
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1989-01-01
The successful control of dynamic systems such as space stations or launch vehicles requires a controller design methodology that acknowledges and addresses the disruptive effects caused by external and internal disturbances that inevitably act on such systems. These disturbances, technically defined as uncontrollable inputs, typically vary with time in an uncertain manner and usually cannot be directly measured in real time. A relatively new non-statistical technique for modeling, and (on-line) identification of, those complex uncertain disturbances that are not as erratic and capricious as random noise is described. This technique applies to multi-input cases and to many of the practical disturbances associated with the control of space stations or launch vehicles. Then, a collection of smart controller design techniques is described that allows controlled dynamic systems, possibly with multi-input controls, to accommodate (cope with) such disturbances with extraordinary effectiveness. These new smart controllers are designed by non-statistical techniques and typically turn out to be unconventional forms of dynamic linear controllers (compensators) with constant coefficients. The simplicity and reliability of linear, constant-coefficient controllers is well known in the aerospace field.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Zheng, Mingwen; Li, Shudong; Wang, Weiping
2018-03-01
Some existing papers have focused on finite-time parameter identification and synchronization but provided incomplete theoretical analyses. Such works incorporated conflicting constraints for parameter identification; therefore, the practical significance could not be fully demonstrated. To overcome these limitations, this paper presents new results on parameter identification and synchronization for uncertain complex dynamical networks with impulsive effects and stochastic perturbation, based on finite-time stability theory. Novel finite-time parameter identification and synchronization control criteria are obtained by utilizing Lyapunov functions and linear matrix inequalities, respectively. Finally, numerical examples are presented to illustrate the effectiveness of our theoretical results.
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference.
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model of chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. 
Having data and profile likelihood samples at hand, the here proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions, where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
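A one-parameter profile likelihood can be sketched as follows: for each fixed value of one parameter, the remaining parameter is optimized out (here analytically), and the width of the resulting profile indicates identifiability. The exponential decay model, noise level, and cutoff are invented for illustration; the paper's PLS index and PLS entropy are not reproduced.

```python
import numpy as np

# Synthetic data from y = a * exp(-b * t) with a = 2.0, b = 1.5.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.5 * t) + rng.normal(0.0, 0.05, t.size)

# Profile over b: at each grid point, the amplitude a enters linearly,
# so its optimum is the least-squares projection coefficient.
b_grid = np.linspace(0.5, 3.0, 200)
profile = []
for b in b_grid:
    basis = np.exp(-b * t)
    a_hat = (basis @ y) / (basis @ basis)          # optimal a for fixed b
    profile.append(np.sum((y - a_hat * basis) ** 2))
profile = np.array(profile)

b_best = b_grid[profile.argmin()]
# Chi-square-style cutoff (sigma assumed known): the set of b values whose
# profiled residual stays within the threshold is the confidence region.
inside = b_grid[profile <= profile.min() + 3.84 * 0.05**2]
```

A narrow `inside` interval means b is practically identifiable from these readouts; a flat profile (interval spanning the whole grid) would flag non-identifiability and motivate new readouts.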
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Niemeyer, Frank; Simon, Ulrich; Wehner, Tim
2013-09-06
Numerical models of secondary fracture healing are based on mechanoregulatory algorithms that use distortional strain alone or in combination with either dilatational strain or fluid velocity as determining stimuli for tissue differentiation and development. Comparison of these algorithms has previously suggested that healing processes under torsional rotational loading can only be properly simulated by considering fluid velocity and deviatoric strain as the regulatory stimuli. We hypothesize that sufficient calibration on uncertain input parameters will enhance our existing model, which uses distortional and dilatational strains as determining stimuli, to properly simulate fracture healing under various loading conditions including also torsional rotation. Therefore, we minimized the difference between numerically simulated and experimentally measured courses of interfragmentary movements of two axial compressive cases and two shear load cases (torsional and translational) by varying several input parameter values within their predefined bounds. The calibrated model was then qualitatively evaluated on the ability to predict physiological changes of spatial and temporal tissue distributions, based on respective in vivo data. Finally, we corroborated the model on five additional axial compressive and one asymmetrical bending load case. We conclude that our model, using distortional and dilatational strains as determining stimuli, is able to simulate fracture-healing processes not only under axial compression and torsional rotation but also under translational shear and asymmetrical bending loading conditions.
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, which reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed when results were plotted against any of the uncertain parameters, with no parameter manifesting a dominant effect on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reducing the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point-value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
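The stratified sampling step described above can be sketched in a few lines. This is an illustrative Python version of plain Latin Hypercube Sampling over bounded parameters, not the actual MELCOR/LHS toolchain; the unit bounds and the 40-by-10 design size are placeholders echoing the study's realization count:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw one stratified sample per equal-probability bin for each parameter."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)      # shape (n_params, 2)
    n_params = bounds.shape[0]
    # Stratified uniform draws on [0, 1): one point in each of n_samples bins.
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle the bin order independently for each parameter.
    for j in range(n_params):
        rng.shuffle(u[:, j])
    # Scale to the physical parameter ranges.
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Ten uncertain parameters, forty realizations (hypothetical unit bounds).
samples = latin_hypercube(40, [(0.0, 1.0)] * 10, rng=0)
print(samples.shape)
```

Each column then covers its parameter range evenly, so far fewer code runs are needed than with simple random sampling to characterize the output range.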
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the probability distribution function shape (empirical distribution functions and fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. For the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is here described using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
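The nested structure of such a second-order (two-loop) Monte Carlo analysis can be sketched as follows. The dose model, the moment ranges and all numbers here are hypothetical stand-ins, chosen only to show how an outer epistemic loop over uncertain lognormal moments wraps around an inner aleatory loop over the parameter itself:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_bdcf(transfer_factor):
    # Hypothetical stand-in for a biosphere dose conversion factor model.
    return 2.5e-8 * transfer_factor

n_outer, n_inner = 200, 500
outer_p95 = []
for _ in range(n_outer):                      # epistemic loop: uncertain moments
    mu = rng.normal(0.0, 0.3)                 # uncertain log-mean
    sigma = rng.uniform(0.4, 0.8)             # uncertain log-std
    tf = rng.lognormal(mu, sigma, n_inner)    # aleatory loop: parameter variability
    outer_p95.append(np.percentile(toy_bdcf(tf), 95))

outer_p95 = np.array(outer_p95)
print(f"spread of the 95th percentile across epistemic draws: "
      f"[{outer_p95.min():.2e}, {outer_p95.max():.2e}]")
```

The spread of any inner-loop statistic across the outer loop is exactly the enlarged "solution space" attributed to epistemic uncertainty in the abstract; a first-order analysis would collapse the outer loop to a single fixed pair of moments.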
Wang, Jianhui; Liu, Zhi; Chen, C L Philip; Zhang, Yun
2017-10-12
Hysteresis is ubiquitous in physical actuators. In addition, actuator failures or faults may occur in practice. Both effects deteriorate the transient tracking performance and can even trigger instability. In this paper, we consider the problem of compensating for actuator failures and input hysteresis by proposing a fuzzy control scheme for stochastic nonlinear systems. Compared with the existing research on stochastic nonlinear uncertain systems, the question of how to guarantee a prescribed transient tracking performance while simultaneously accounting for actuator failures and hysteresis remains open. Our proposed control scheme is designed on the basis of the fuzzy logic system and backstepping techniques for this purpose. It is proven that all the signals remain bounded and the tracking error is ensured to be within a preestablished bound in the presence of hysteretic actuator failures. Finally, simulations are provided to illustrate the effectiveness of the obtained theoretical results.
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has gathered more and more attention in the past decades. An assessment of groundwater contamination risk is desired to provide a sound basis for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support for identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
NASA Astrophysics Data System (ADS)
Léchappé, V.; Moulay, E.; Plestan, F.
2018-06-01
The stability of a prediction-based controller for linear time-invariant (LTI) systems is studied in the presence of time-varying input and output delays. The uncertain delay case is treated as well as the partial state knowledge case. The reduction method is used in order to prove the convergence of the closed-loop system including the state observer, the predictor and the plant. Explicit conditions that guarantee the closed-loop stability are given, thanks to a Lyapunov-Razumikhin analysis. Simulations illustrate the theoretical results.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbance. Both the H2 performance and the H∞ performance are important for real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint. With such designs, the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive character of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. Then, the proposed RMPC is designed to optimise both performances by a weighting method under satisfaction of the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and input-to-state practical stability of the closed-loop system is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yanlong
2017-09-01
Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with a periodically arched shape are studied by Monte Carlo Simulation (MCS) and by interval analysis based on the Taylor series. Given random variations of the uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of the uncertainty in the geometrical parameter on the band gap of the undulated beam is stronger than that of the material parameter. This conclusion is also confirmed by the interval analysis based on the Taylor series. Our methodology provides a strategy for reducing the errors between the designed and practical values of the band gaps by improving the accuracy of specially selected uncertain design variables of the periodical structures.
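A minimal sketch of this comparison, using a toy band-gap formula with made-up nominal values and interval half-widths (the study itself uses a periodically arched beam model), shows how a first-order Taylor interval is assembled from sensitivity terms and cross-checked against MCS. With these illustrative numbers the geometric term dominates the interval radius, echoing the study's qualitative finding:

```python
import numpy as np

def gap_freq(h, E):
    """Toy band-gap edge frequency of an undulated beam (illustrative only)."""
    return 250.0 * np.sqrt(E / 210e9) * (h / 0.01)

h0, E0 = 0.01, 210e9            # nominal arch height [m] and Young's modulus [Pa]
dh, dE = 0.05 * h0, 0.02 * E0   # interval half-widths: 5% geometric, 2% material

# First-order Taylor interval: f0 +/- sum_i |df/dx_i| * dx_i
eps = 1e-6
f0 = gap_freq(h0, E0)
dfdh = (gap_freq(h0 + eps * h0, E0) - f0) / (eps * h0)
dfdE = (gap_freq(h0, E0 + eps * E0) - f0) / (eps * E0)
radius = abs(dfdh) * dh + abs(dfdE) * dE
print(f"Taylor interval: [{f0 - radius:.2f}, {f0 + radius:.2f}] Hz")

# Monte Carlo check with uniform draws inside the parameter intervals
rng = np.random.default_rng(1)
f_mc = gap_freq(rng.uniform(h0 - dh, h0 + dh, 20000),
                rng.uniform(E0 - dE, E0 + dE, 20000))
print(f"MCS range:       [{f_mc.min():.2f}, {f_mc.max():.2f}] Hz")
```

Comparing the two sensitivity terms (|df/dh|·dh versus |df/dE|·dE) directly identifies which uncertain design variable is worth controlling more tightly, which is the design strategy the abstract describes.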
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
NASA Technical Reports Server (NTRS)
Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her
1990-01-01
Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first and second order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, which is defined as inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller is also built for general serial robot arms and is adaptive in nature; it can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking properties. The relation between manipulator motion tracking capability and convergence of parameter estimation properties is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.
Command Filtering-Based Fuzzy Control for Nonlinear Systems With Saturation Input.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Lin, Chong
2017-09-01
In this paper, command filtering-based fuzzy control is designed for uncertain multi-input multi-output (MIMO) nonlinear systems with saturation nonlinearity in the input. First, the command filtering method is employed to deal with the explosion of complexity caused by the derivative of virtual controllers. Then, fuzzy logic systems are utilized to approximate the nonlinear functions of the MIMO systems. Furthermore, an error compensation mechanism is introduced to overcome the drawback of the dynamic surface approach. The developed method guarantees that all signals of the systems are bounded. The effectiveness and advantages of the theoretical result are demonstrated by a simulation example.
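A command filter of the kind referred to above is typically a stable second-order filter that returns both a smoothed copy of the virtual control and its derivative, so no analytic differentiation is needed in the backstepping recursion. A minimal sketch follows; the bandwidth, damping, and test signal are arbitrary illustrative choices, and the magnitude/rate limits used in full designs are omitted:

```python
import numpy as np

def command_filter(alpha, dt, wn=50.0, zeta=0.9):
    """Second-order command filter: returns the filtered signal and its
    derivative, avoiding analytic differentiation of the virtual control."""
    q = np.zeros(2)                      # [filtered value, filtered derivative]
    out, dout = [], []
    for a in alpha:
        q1_dot = q[1]
        q2_dot = wn**2 * (a - q[0]) - 2.0 * zeta * wn * q[1]
        q = q + dt * np.array([q1_dot, q2_dot])   # forward Euler step
        out.append(q[0]); dout.append(q[1])
    return np.array(out), np.array(dout)

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
alpha = np.sin(2 * np.pi * t)                     # virtual control to be filtered
xf, xf_dot = command_filter(alpha, dt)
err = np.max(np.abs(xf[1000:] - alpha[1000:]))    # tracking after transients
print(f"max steady filtering error: {err:.3e}")
```

The residual filtering error seen here is exactly what the error compensation mechanism in such schemes is designed to absorb, so that it does not degrade the closed-loop tracking bound.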
Effects of uncertain topographic input data on two-dimensional flow modeling in a gravel-bed river
Legleiter, C.J.; Kyriakidis, P.C.; McDonald, R.R.; Nelson, J.M.
2011-01-01
Many applications in river research and management rely upon two-dimensional (2D) numerical models to characterize flow fields, assess habitat conditions, and evaluate channel stability. Predictions from such models are potentially highly uncertain due to the uncertainty associated with the topographic data provided as input. This study used a spatial stochastic simulation strategy to examine the effects of topographic uncertainty on flow modeling. Many, equally likely bed elevation realizations for a simple meander bend were generated and propagated through a typical 2D model to produce distributions of water-surface elevation, depth, velocity, and boundary shear stress at each node of the model's computational grid. Ensemble summary statistics were used to characterize the uncertainty associated with these predictions and to examine the spatial structure of this uncertainty in relation to channel morphology. Simulations conditioned to different data configurations indicated that model predictions became increasingly uncertain as the spacing between surveyed cross sections increased. Model sensitivity to topographic uncertainty was greater for base flow conditions than for a higher, subbankfull flow (75% of bankfull discharge). The degree of sensitivity also varied spatially throughout the bend, with the greatest uncertainty occurring over the point bar where the flow field was influenced by topographic steering effects. Uncertain topography can therefore introduce significant uncertainty to analyses of habitat suitability and bed mobility based on flow model output. In the presence of such uncertainty, the results of these studies are most appropriately represented in probabilistic terms using distributions of model predictions derived from a series of topographic realizations. Copyright 2011 by the American Geophysical Union.
The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, Martin M.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
NASA Astrophysics Data System (ADS)
Akram, Muhammad Farooq Bin
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that, in the case of a lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, where experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process.
The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSM). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty on a large-scale problem, a combined cycle power generation system, was selected. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence theory based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled the capture of higher-order technology interactions and an improvement in predicted system performance.
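The evidence-theoretic aggregation of expert opinions mentioned above rests on Dempster's rule of combination, which multiplies the masses of intersecting focal elements and renormalises away the conflicting mass. A small self-contained sketch with hypothetical expert mass assignments over performance-impact states:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments whose focal
    elements are frozensets of states; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}, conflict

LOW, MED, HIGH = "low", "medium", "high"
# Hypothetical expert opinions on a technology's performance impact
m1 = {frozenset({LOW, MED}): 0.6, frozenset({HIGH}): 0.4}
m2 = {frozenset({MED}): 0.5, frozenset({MED, HIGH}): 0.5}
m12, conflict = dempster_combine(m1, m2)
for focal, mass in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```

Because mass can sit on sets of states rather than single states, the experts are not forced to commit to full probability distributions, which is the advantage over probabilistic elicitation that the abstract emphasizes.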
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of uncertain variables to the first buckling load is calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of fiber modulus (design parameter) in longitudinal direction.
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
A probabilistic approach to emissions from transportation sector in the coming decades
NASA Astrophysics Data System (ADS)
Yan, F.; Winijkul, E.; Bond, T. C.; Streets, D. G.
2010-12-01
Future emission estimates are necessary for understanding climate change, designing national and international strategies for air quality control and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so. Most current emission projection models are deterministic; in other words, there is only a single answer for each scenario. As a result, uncertainties have not been included in the estimation of climate forcing or other environmental effects, but it is important to quantify the uncertainty inherent in emission projections. We explore uncertainties of emission projections from the transportation sector in the coming decades by sensitivity analysis and Monte Carlo simulations. These projections are based on a technology-driven model, the Speciated Pollutants Emission Wizard (SPEW)-Trend, which responds to socioeconomic conditions in different economic and mitigation scenarios. The model contains detail about technology stock, including consumption growth rates, retirement rates, timing of emission standards, deterioration rates and transition rates from normal vehicles to vehicles with extremely high emission factors (termed "superemitters"). However, understanding of these parameters, as well as of their relationships with socioeconomic conditions, is uncertain. We project emissions from transportation sectors under four different IPCC scenarios (A1B, A2, B1, and B2). Due to the later implementation of advanced emission standards, Africa has the highest annual growth rate (1.2-3.1%) from 2010 to 2050. Superemitters begin producing more than 50% of global emissions around year 2020. We estimate uncertainties from the relationships between technological change and socioeconomic conditions and examine their impact on future emissions. Sensitivities to parameters governing retirement rates are highest, causing changes in global emissions from -26% to +55% on average from 2010 to 2050.
We perform Monte Carlo simulations to examine how these uncertainties affect total emissions when each input parameter with inherent uncertainty is replaced by a probability distribution of values and all parameters are varied at the same time; the resulting 95% confidence interval of the global emission annual growth rate is -1.9% to +0.2% per year.
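The Monte Carlo step can be sketched generically: sample the uncertain parameters from assumed distributions, run the projection model for each draw, and take percentiles of the derived growth rate. The toy stock-turnover model and all distributions below are illustrative placeholders, not the SPEW-Trend model itself:

```python
import numpy as np

rng = np.random.default_rng(2010)
years = np.arange(2010, 2051)

def project_emissions(growth, retirement, super_frac):
    """Toy stock-turnover projection (illustrative stand-in for SPEW-Trend)."""
    activity = (1.0 + growth) ** (years - 2010)       # fuel consumption index
    ef = 1.0 * np.exp(-retirement * (years - 2010))   # fleet-average EF decline
    ef_super = 10.0 * super_frac                      # superemitter contribution
    return activity * (ef + ef_super)

n = 5000
growth = rng.normal(0.02, 0.005, n)          # assumed consumption growth spread
retirement = rng.uniform(0.01, 0.05, n)      # assumed retirement-rate spread
super_frac = rng.uniform(0.0, 0.05, n)       # assumed superemitter fraction
sims = np.array([project_emissions(g, r, s)
                 for g, r, s in zip(growth, retirement, super_frac)])

# Annualised growth rate of emissions 2010-2050 for each realization
rate = (sims[:, -1] / sims[:, 0]) ** (1.0 / 40.0) - 1.0
lo, hi = np.percentile(rate, [2.5, 97.5])
print(f"95% interval of annual emission growth: {100*lo:+.1f}% to {100*hi:+.1f}%")
```

Reporting a percentile interval of the growth rate, rather than a single deterministic trajectory, is exactly the probabilistic output format the abstract argues for.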
Modeling Vegetation Growth Impact on Groundwater Recharge
NASA Astrophysics Data System (ADS)
Anurag, H.; Ng, G. H. C.; Tipping, R.
2017-12-01
Vegetation growth is affected by variability in climate and land-cover / land-use over a range of temporal and spatial scales. Vegetation also modifies the water budget through interception and evapotranspiration and thus has a significant impact on groundwater recharge. Most groundwater recharge assessments represent vegetation using specified, static parameters, such as leaf-area index, but this neglects the effect of vegetation dynamics on recharge estimates. Our study addresses this gap by including vegetation growth in model simulations of recharge. We use NCAR's Community Land Model v4.5 with its BGC (biogeochemistry) module. It integrates prognostic vegetation growth with land-surface and subsurface hydrological processes and can thus capture the effect of vegetation on groundwater. A challenge, however, is the need to resolve uncertainties in model inputs ranging from vegetation growth parameters all the way down to the water table. We have compiled diverse data spanning meteorological inputs to subsurface geology and use these to implement ensemble model simulations to evaluate the possible effects of dynamic vegetation growth (versus specified, static vegetation parameterizations) on estimating groundwater recharge. We present preliminary results for select data-intensive test locations throughout the state of Minnesota (USA), which has a sharp east-west precipitation gradient that makes it an apt testbed for examining ecohydrologic relationships across different temperate climatic settings and ecosystems. Using the ensemble simulations, we examine the effect of seasonal to interannual variability of vegetation growth on recharge and water table depths, which has implications for predicting the combined impact of climate, vegetation, and geology on groundwater resources.
Future work will include distributed model simulations over the entire state, as well as conditioning uncertain vegetation and subsurface parameters on remote sensing data and statewide water table records using data assimilation.
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.
2008-12-01
The structure of a hydrological model at the macro scale (e.g., a watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and the geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs, in the form of pre-assumed model equations, with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a conceptual model structure with a three-dimensional state (soil moisture storage, and fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between the prior and posterior mathematical structures, and provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, input, and mathematical structure.
Switching State-Feedback LPV Control with Uncertain Scheduling Parameters
NASA Technical Reports Server (NTRS)
He, Tianyi; Al-Jiboory, Ali Khudhair; Swei, Sean Shan-Min; Zhu, Guoming G.
2017-01-01
This paper presents a new method to design Robust Switching State-Feedback Gain-Scheduling (RSSFGS) controllers for Linear Parameter Varying (LPV) systems with uncertain scheduling parameters. The domain of the scheduling parameters is divided into several overlapping subregions, with hysteresis switching among a family of simultaneously designed LPV controllers, each valid over its corresponding subregion with guaranteed H-infinity performance. The synthesis conditions are given in terms of Parameterized Linear Matrix Inequalities that guarantee both stability and performance in each subregion and on the associated switching surfaces. Switching stability is ensured by a decreasing parameter-dependent Lyapunov function on the switching surfaces. By solving the optimization problem, an RSSFGS controller can be obtained for each subregion. A numerical example is given to illustrate the effectiveness of the proposed approach over non-switching controllers.
QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices
NASA Technical Reports Server (NTRS)
Hess, R. A.; Henderson, D. K.
1996-01-01
A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.
Utilizing a suite of satellite missions to address poorly constrained hydrological fluxes
NASA Astrophysics Data System (ADS)
Singh, A.; Behrangi, A.; Fisher, J.; Reager, J. T., II; Gardner, A. S.
2017-12-01
The amount of water stored in a given region (total water storage) changes in response to changes in the hydrologic balance (inputs minus outputs). Closing this balance is exceedingly difficult due to the sparsity of field observations, large uncertainties in satellite-derived estimates, and model limitations. The reliability of individual hydrological parameters also differs by region: for example, at high latitudes precipitation is more uncertain than evapotranspiration (ET), while at low and middle latitudes the opposite is true. This study explores alternative estimates of regional hydrological fluxes by integrating the total water storage estimated from the GRACE gravity fields with improved estimates of lake storage variation from Landsat-based land-water classification and satellite-altimetry-based water height measurements. In particular, an alternative ET estimate is generated for the Aral Sea region by integrating multi-sensor remote sensing data. In an endorheic lake like the Aral Sea, volumetric variations are predominantly governed by changes in inflow, evaporation from the water body, and precipitation on the lake. The Aral Sea water volume is estimated at a monthly time step by combining Landsat land-water classification and ocean radar altimetry (Jason-1 and Jason-2) observations using the truncated pyramid method. Treating gauge-based river runoff as a true observation, and given that there is little variability between multiple precipitation datasets (TRMM, GPCP, GPCC, and ERA), ET can be considered the most uncertain parameter in this region. The estimated lake volume then acts as the controlling factor for estimating ET as the water-balance residual of inflow plus precipitation minus the change in TWS. The estimated ET is compared with MODIS-based evaporation observations.
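The residual computation described above can be sketched as a one-line water balance, with invented numbers standing in for the satellite-derived storage change, gauged inflow, and precipitation:

```python
# Water-balance residual for an endorheic lake: with inflow and
# precipitation on the lake known and the storage change measured,
# evaporation (ET) follows as the residual. All numbers are
# illustrative, not Aral Sea observations.
def et_residual(d_storage, inflow, precip):
    # dS = inflow + precip - ET  =>  ET = inflow + precip - dS
    return inflow + precip - d_storage

# monthly (d_storage, inflow, precip) in km^3
months = [(-1.2, 2.0, 0.4), (-0.8, 1.5, 0.3), (0.2, 3.1, 0.6)]
et = [et_residual(*m) for m in months]
print([round(v, 2) for v in et])
```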
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where the input variables are only expected to belong to certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and is derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables
NASA Astrophysics Data System (ADS)
Bubnicki, Z.
2006-06-01
The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem consisting in finding the decision maximizing the certainty index that the requirement given by a user is satisfied. The main part is devoted to the description of the optimization problem with the given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.
Finite-time master-slave synchronization and parameter identification for uncertain Lurie systems.
Wang, Tianbo; Zhao, Shouwei; Zhou, Wuneng; Yu, Weiqin
2014-07-01
This paper investigates the finite-time master-slave synchronization and parameter identification problem for uncertain Lurie systems based on finite-time stability theory and the adaptive control method. Finite-time master-slave synchronization means that the state of a slave system follows that of a master system in finite time, which is more reasonable in applications than asymptotic synchronization. The uncertainties include unknown parameters and noise disturbances. An adaptive controller and update laws that ensure synchronization and parameter identification in finite time are constructed. Finally, two numerical examples are given to show the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
How uncertain is model-based prediction of copper loads in stormwater runoff?
Lindblom, E; Ahlman, S; Mikkelsen, P S
2007-01-01
In this paper, we conduct a systematic analysis of the uncertainty associated with estimating the total load of a pollutant (copper) from a separate stormwater drainage system, conditioned on a specific combination of input data, a dynamic conceptual pollutant accumulation-washout model, and measurements (runoff volumes and pollutant masses). We use the generalized likelihood uncertainty estimation (GLUE) methodology and generate posterior parameter distributions that result in model outputs encompassing a significant number of the highly variable measurements. Given the applied pollutant accumulation-washout model and a total of 57 measurements during one month, the total copper mass can be predicted within a range of +/-50% of the median value. The message is that this relatively large uncertainty should be acknowledged when making statements about micropollutant loads estimated from dynamic models, even when they are calibrated with on-site concentration data.
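A minimal GLUE sketch, assuming a toy buildup-washout model and an invented 10% acceptance threshold (the study's actual model, data, and likelihood measure differ):

```python
import math
import random

# Hypothetical toy accumulation-washout model: linear dry-weather
# buildup and exponential washout during each storm. Parameter names
# and values are illustrative, not those of the study.
def washout_model(k_acc, k_wash, rain):
    mass, total = 5.0, 0.0
    for r in rain:
        mass += k_acc                                  # buildup
        washed = mass * (1.0 - math.exp(-k_wash * r))  # washout
        mass -= washed
        total += washed
    return total

random.seed(1)
rain = [random.uniform(0.0, 10.0) for _ in range(30)]
observed = washout_model(2.0, 0.05, rain)   # synthetic "measurement"

# GLUE: sample the prior, keep "behavioral" parameter sets whose
# prediction error is below a tolerance, and report their spread.
behavioral = []
for _ in range(5000):
    k_acc = random.uniform(0.5, 4.0)
    k_wash = random.uniform(0.01, 0.10)
    pred = washout_model(k_acc, k_wash, rain)
    if abs(pred - observed) / observed < 0.10:
        behavioral.append(pred)

behavioral.sort()
lo = behavioral[int(0.05 * len(behavioral))]
hi = behavioral[int(0.95 * len(behavioral))]
print(f"{len(behavioral)} behavioral sets, 90% band [{lo:.1f}, {hi:.1f}]")
```

The spread of the behavioral predictions is the GLUE analogue of the prediction band reported in the abstract.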
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
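Second-order uncertainty can be illustrated with a load whose mean is only known to lie in an interval. For this monotone scalar case, sweeping the interval endpoints yields the minimum and maximum reliability; the paper's single-loop KKT method handles the general, non-monotone problem without such an outer sweep. All numbers are invented:

```python
import random

random.seed(13)

# Reliability of a component with deterministic capacity and a normally
# distributed load whose mean is interval-valued (second order
# uncertainty). Monte Carlo stands in for FORM to keep the sketch short.
def reliability(mu_load, n=20000):
    capacity = 10.0
    safe = sum(random.gauss(mu_load, 1.0) < capacity for _ in range(n))
    return safe / n

r_min = reliability(8.0)   # worst interval endpoint for the mean
r_max = reliability(6.0)   # best interval endpoint
print(round(r_min, 3), round(r_max, 3))
```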
Synchronization between uncertain nonidentical networks with quantum chaotic behavior
NASA Astrophysics Data System (ADS)
Li, Wenlin; Li, Chong; Song, Heshan
2016-11-01
Synchronization between uncertain nonidentical networks with quantum chaotic behavior is investigated. The identification laws for the unknown parameters in the state equations of the network nodes, and the adaptive laws for the configuration matrix elements and outer coupling strengths, are determined based on the Lyapunov theorem. The conditions for realizing synchronization between uncertain nonidentical networks are discussed and obtained. Further, the Jaynes-Cummings model from physics is taken as the node dynamics of the two networks, and simulation results show that the synchronization performance between the networks is very stable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977--1991, in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically those parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed them to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
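The rank-correlation step of such a screening can be sketched as follows; the toy model, parameter names, and ranges are invented, not taken from PCHEPM:

```python
import random

random.seed(5)

# Spearman rank correlation between each sampled input and the output,
# used to rank parameters by influence on the prediction.
def spearman(xs, ys):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = (sum((a - m) ** 2 for a in rx) * sum((b - m) ** 2 for b in ry)) ** 0.5
    return num / den

n = 500
k_part = [random.uniform(0.1, 1.0) for _ in range(n)]   # influential input
k_degr = [random.uniform(0.0, 0.05) for _ in range(n)]  # weak input
conc = [10 * a - 2 * b + random.gauss(0, 0.5) for a, b in zip(k_part, k_degr)]

for name, xs in [("k_part", k_part), ("k_degr", k_degr)]:
    print(name, round(spearman(xs, conc), 2))
```

Parameters with near-zero rank correlation are candidates for being held at nominal values, which is the spirit of the 16-to-5 reduction above.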
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
Single-axis gyroscopic motion with uncertain angular velocity about spin axis
NASA Technical Reports Server (NTRS)
Singh, S. N.
1977-01-01
A differential game approach is presented for studying the response of a gyro by treating the controlled angular velocity about the input axis as the evader and the bounded but uncertain angular velocity about the spin axis as the pursuer. When the uncertain angular velocity about the spin axis tends to force the gyro to saturation, a differential game problem with two terminal surfaces results, whereas when the evader desires to attain the equilibrium state, the usual game with a single terminal manifold arises. A barrier is obtained delineating the capture zone (CZ), in which the gyro can attain saturation, from the escape zone (EZ), in which the evader avoids saturation. The CZ is further delineated into two subregions such that the states in each subregion can be forced onto a definite target manifold. The application of the game-theoretic approach to a Control Moment Gyro is briefly discussed.
NASA Astrophysics Data System (ADS)
Liu, Zhengmin; Liu, Peide
2017-04-01
The Bonferroni mean (BM) was originally introduced by Bonferroni and has been generalised by many other researchers due to its capacity to capture the interrelationship between input arguments. Nevertheless, in many situations interrelationships do not exist between all of the attributes: the attributes can be partitioned into several different categories such that members of the same partition are interrelated while no interrelationship exists between attributes of different partitions. In this paper, as a complement to the existing generalisations of the BM, we investigate the partitioned Bonferroni mean (PBM) under intuitionistic uncertain linguistic environments and develop two linguistic aggregation operators: the intuitionistic uncertain linguistic partitioned Bonferroni mean (IULPBM) and its weighted form (WIULPBM). Then, motivated by the idea of the geometric mean and the PBM, we further present the partitioned geometric Bonferroni mean (PGBM) and develop two linguistic geometric aggregation operators: the intuitionistic uncertain linguistic partitioned geometric Bonferroni mean (IULPGBM) and its weighted form (WIULPGBM). Some properties and special cases of these proposed operators are also investigated and discussed in detail. Based on these operators, an approach for multiple attribute decision-making problems with intuitionistic uncertain linguistic information is developed. Finally, a practical example is presented to illustrate the developed approach, and comparison analyses are conducted with other representative methods to verify its effectiveness and feasibility.
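The crisp (real-valued) PBM underlying these operators can be sketched directly from its definition; the scores and partition structure below are illustrative, and this is not the intuitionistic uncertain linguistic extension developed in the paper:

```python
# Partitioned Bonferroni mean: attributes inside a partition interact
# through the BM cross-terms; partitions are then averaged independently.
def pbm(values, partitions, p=1.0, q=1.0):
    parts = []
    for idx in partitions:
        n = len(idx)  # each partition needs at least two members
        inner = 0.0
        for i in idx:
            cross = sum(values[j] ** q for j in idx if j != i) / (n - 1)
            inner += values[i] ** p * cross
        parts.append((inner / n) ** (1.0 / (p + q)))
    return sum(parts) / len(parts)

scores = [0.6, 0.8, 0.7, 0.5, 0.9]
groups = [[0, 1, 2], [3, 4]]           # two independent attribute classes
print(round(pbm(scores, groups), 4))
```

Idempotency holds as expected: aggregating identical scores returns that score.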
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
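The simultaneous treatment of the three uncertainty types can be sketched in a single Monte Carlo loop; the toy impact function, parameter values, and scenarios below are invented for illustration, not taken from the case study:

```python
import random

random.seed(42)

# Each draw samples a parameter value (parameter uncertainty), a
# decision scenario (scenario uncertainty), and a model formulation
# (model uncertainty), so all three propagate into the score together.
def impact(thickness, conductivity, demand_factor, model):
    heat_loss = conductivity / thickness        # toy heat-loss proxy
    if model == "linear":
        return heat_loss * demand_factor
    return heat_loss ** 0.9 * demand_factor     # alternative model form

scores = []
for _ in range(10000):
    cond = random.gauss(0.040, 0.004)           # parameter uncertainty
    demand = random.choice([1.0, 1.2])          # scenario uncertainty
    model = random.choice(["linear", "power"])  # model uncertainty
    scores.append(impact(0.10, cond, demand, model))

scores.sort()
p5, median, p95 = scores[500], scores[5000], scores[9500]
print(round(p5, 3), round(median, 3), round(p95, 3))
```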
Regional and national significance of biological nitrogen fixation by crops in the United States
Background/Questions/Methods Biological nitrogen fixation by crops (C-BNF) represents one of the largest anthropogenic inputs of reactive nitrogen (N) to land surfaces around the world. In the United States (US), existing estimates of C-BNF are uncertain because of incomplete o...
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of the Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
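A minimal sketch of the multiple-data-assimilation update, reduced to one invented parameter and one observation so it stays self-contained (the study applies the scheme to a full distributed model with many parameters):

```python
import math
import random

random.seed(0)

# Toy forward model standing in for the hydrological model.
def forward(k):
    return 10.0 * math.sqrt(k)      # e.g., discharge vs. conductivity

true_k, obs_std = 4.0, 0.5
d_obs = forward(true_k)

# Multiple data assimilation: the same datum is assimilated Na times
# with the error variance inflated by alpha_i, where sum(1/alpha_i) = 1.
Na, N = 4, 200                      # assimilation steps, ensemble size
ens = [random.uniform(1.0, 9.0) for _ in range(N)]   # prior ensemble
for _ in range(Na):
    alpha = float(Na)               # constant inflation satisfies the sum rule
    preds = [forward(k) for k in ens]
    m_bar, d_bar = sum(ens) / N, sum(preds) / N
    c_md = sum((m - m_bar) * (p - d_bar) for m, p in zip(ens, preds)) / (N - 1)
    c_dd = sum((p - d_bar) ** 2 for p in preds) / (N - 1)
    gain = c_md / (c_dd + alpha * obs_std ** 2)
    # perturbed-observation update, clipped to keep the parameter physical
    ens = [max(0.01, m + gain * (d_obs + math.sqrt(alpha) * random.gauss(0.0, obs_std) - p))
           for m, p in zip(ens, preds)]

post_mean = sum(ens) / N
post_var = sum((k - post_mean) ** 2 for k in ens) / (N - 1)
print(round(post_mean, 2), round(post_var, 3))
```

The shrinking ensemble variance mirrors the parameter-uncertainty reduction reported in the abstract.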
Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol
2008-01-01
This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.
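The Monte Carlo portion of such a robustness assessment can be sketched as follows, with an invented second-order closed-loop model standing in for the Ares-I dynamics:

```python
import random

random.seed(9)

# Sample the uncertain parameters and test closed-loop stability for
# each draw. A characteristic polynomial s^2 + a*s + b is stable iff
# a > 0 and b > 0; the parameter distributions are invented.
def stable(a, b):
    return a > 0.0 and b > 0.0

trials = 10000
n_stable = sum(stable(random.gauss(2.0, 1.0), random.gauss(3.0, 1.5))
               for _ in range(trials))
print(f"stable fraction: {n_stable / trials:.3f}")
```

Mu analysis and characteristic frequency response analysis replace this sampling with deterministic bounds, which is why the paper can compare their tightness.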
Robust stabilization of the Space Station in the presence of inertia matrix uncertainty
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang; Sunkel, John
1993-01-01
This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 not equal to 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in the uncertain parameter space, while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 not equal to 0 is demonstrated.
Laboratory Simulations of Micrometeoroid Ablation
NASA Astrophysics Data System (ADS)
Thomas, Evan Williamson
Each day, several tons of meteoric material enters Earth's atmosphere, the majority of which consists of small dust particles (micrometeoroids) that completely ablate at high altitudes. The dust input has been suggested to play a role in a variety of phenomena, including layers of metal atoms and ions, nucleation of noctilucent clouds, effects on stratospheric aerosols and ozone chemistry, and the fertilization of the ocean with bio-available iron. Furthermore, a correct understanding of the dust input to the Earth provides constraints on inner solar system dust models. Various methods are used to measure the dust input to the Earth, including satellite detectors, radar, lidar, rocket-borne detectors, and ice core and deep-sea sediment analysis. However, the best way to interpret each of these measurements is uncertain, which leads to large uncertainties in the total dust input. To better understand the ablation process, and thereby reduce uncertainties in micrometeoroid ablation measurements, a facility has been developed to simulate the ablation of micrometeoroids under laboratory conditions. An electrostatic dust accelerator is used to accelerate iron particles to relevant meteoric velocities (10-70 km/s). The particles are then introduced into a chamber pressurized with a target gas, where they partially or completely ablate over a short distance. An array of diagnostics then measures, with timing and spatial resolution, the charge and light generated in the ablation process. In this thesis, we present results from the newly developed ablation facility. The ionization coefficient, an important parameter for interpreting meteor radar measurements, is measured for various target gases. Furthermore, experimental ablation measurements are compared to predictions from commonly used ablation models. In light of these measurements, implications for the broader context of meteor ablation are discussed.
Parameter identification for structural dynamics based on interval analysis algorithm
NASA Astrophysics Data System (ADS)
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertainty identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical descriptions of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, the algorithm can obtain not only the center estimates of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving recursive formulation is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are evaluated against three identification criteria.
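The set-membership idea can be sketched in its simplest static form, assuming a scalar parameter and bounded noise; the paper's recursive, dynamic formulation generalizes this:

```python
import random

random.seed(7)

# With non-probabilistic noise |e| <= eps, each measurement
# y = theta * x + e confines theta to an interval; the identified set
# is the intersection, giving a center estimate and guaranteed bounds.
def interval_identify(xs, ys, eps):
    lo, hi = float("-inf"), float("inf")
    for x, y in zip(xs, ys):
        a, b = (y - eps) / x, (y + eps) / x
        if a > b:
            a, b = b, a            # a negative regressor flips the interval
        lo, hi = max(lo, a), min(hi, b)
    return lo, hi, 0.5 * (lo + hi)

theta_true, eps = 2.5, 0.2
xs = [random.uniform(0.5, 3.0) for _ in range(50)]
ys = [theta_true * x + random.uniform(-eps, eps) for x in xs]
lo, hi, center = interval_identify(xs, ys, eps)
print(round(lo, 3), round(hi, 3), round(center, 3))
```

As long as the noise bound holds, the true parameter is guaranteed to lie in [lo, hi], which is the "bounds of the errors" property the abstract refers to.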
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, R.; Hong, Seungkyu K.; Kwon, Hyoung-Ahn
We used a 3-D regional atmospheric chemistry transport model (WRF-Chem) to examine processes that determine O3 in East Asia; in particular, we focused on O3 dry deposition, which remains an uncertain research area due to insufficient observational and numerical studies in East Asia. Here, we compare two widely used dry deposition parameterization schemes, Wesely and M3DRY, which are used in the WRF-Chem and CMAQ models, respectively. The O3 dry deposition velocities simulated using the two aforementioned schemes under identical meteorological conditions show considerable differences (a factor of 2) due to discrepancies in the surface resistance parameterizations. The resulting O3 concentrations differed by up to 10 ppbv in the monthly mean. The simulated and observed dry deposition velocities were compared, which showed that the Wesely scheme is consistent with the observations and successfully reproduces the observed diurnal variation. We conducted several sensitivity simulations, changing the land use data, the surface resistance of water, and the model's spatial resolution, to examine the factors that affect O3 concentrations in East Asia. The model was considerably sensitive to these input parameters, which indicates a high uncertainty for such O3 dry deposition simulations. Observations are necessary to constrain the dry deposition parameterization and input data to improve East Asia air quality models.
NASA Astrophysics Data System (ADS)
Truong, Bui Ngoc Minh; Nam, Doan Ngoc Chi; Ahn, Kyoung Kwan
2013-09-01
Dielectric electro-active polymer (DEAP) materials are attractive since they are low cost, lightweight and have a large deformation capability. They have no operating noise, very low electric power consumption, and higher performance and efficiency than competing technologies. However, DEAP materials generally exhibit strong hysteresis as well as uncertain and nonlinear characteristics. These disadvantages can limit the efficiency of DEAP materials in use. To address these limitations, this research presents the combination of the Preisach model and a dynamic nonlinear autoregressive exogenous (NARX) fuzzy model, identified by an adaptive particle swarm optimization (APSO) algorithm, for modeling and identification of the nonlinear behavior of one typical type of DEAP actuator. First, open-loop input signals are applied to obtain the nonlinear features and to investigate the responses of the DEAP actuator system. Then, the Preisach model is combined with a dynamic NARX fuzzy structure to estimate the tip displacement of the DEAP actuator. To optimize all unknown parameters of the designed combination, an identification scheme based on a least squares method and the APSO algorithm is carried out. Finally, experimental validation is carefully performed, and the effectiveness of the proposed model is evaluated using various input signals.
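The swarm-optimization step of such an identification can be sketched with a plain (non-adaptive) PSO fitting an invented two-parameter displacement model by least squares; the actual paper optimizes a Preisach/NARX fuzzy structure:

```python
import random

random.seed(11)

# "Measured" tip displacement from an invented quadratic voltage model.
volts = [i / 10 for i in range(11)]
true_a, true_b = 1.8, 0.6
disp = [true_a * v + true_b * v * v for v in volts]

def cost(p):
    # sum of squared output errors for candidate parameters (a, b)
    a, b = p
    return sum((a * v + b * v * v - d) ** 2 for v, d in zip(volts, disp))

# Standard PSO: inertia 0.7, cognitive/social weights 1.5.
swarm = [[random.uniform(0.0, 3.0), random.uniform(0.0, 3.0)] for _ in range(30)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=cost)[:]
for _ in range(100):
    for i, p in enumerate(swarm):
        for k in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][k] = (0.7 * vel[i][k]
                         + 1.5 * r1 * (pbest[i][k] - p[k])
                         + 1.5 * r2 * (gbest[k] - p[k]))
            p[k] += vel[i][k]
        if cost(p) < cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=cost)[:]

print([round(x, 3) for x in gbest])
```

The adaptive variant in the paper additionally tunes the inertia and acceleration weights during the run.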
Dynamic learning from adaptive neural network control of a class of nonaffine nonlinear systems.
Dai, Shi-Lu; Wang, Cong; Wang, Min
2014-01-01
This paper studies the problem of learning from adaptive neural network (NN) control of a class of nonaffine nonlinear systems in uncertain dynamic environments. In the control design process, a stable adaptive NN tracking control design technique is proposed for the nonaffine nonlinear systems with a mild assumption by combining a filtered tracking error with the implicit function theorem, input-to-state stability, and the small-gain theorem. The proposed stable control design technique not only overcomes the difficulty in controlling nonaffine nonlinear systems but also relaxes constraint conditions of the considered systems. In the learning process, the partial persistent excitation (PE) condition of radial basis function NNs is satisfied during tracking control to a recurrent reference trajectory. Under the PE condition and an appropriate state transformation, the proposed adaptive NN control is shown to be capable of acquiring knowledge on the implicit desired control input dynamics in the stable control process and of storing the learned knowledge in memory. Subsequently, an NN learning control design technique that effectively exploits the learned knowledge without re-adapting to the controller parameters is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate the effectiveness of the proposed design techniques.
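The learning mechanism described above, a radial basis function network adapting along a recurrent (hence partially persistently exciting) trajectory, can be illustrated with a simple gradient weight update; the target function, trajectory, RBF grid, and gains are hypothetical stand-ins, not the paper's nonaffine plant.

```python
import numpy as np

f = np.sin                            # unknown nonlinearity to be learned

centers = np.linspace(-2.0, 2.0, 21)  # RBF centers covering the trajectory
width = 0.4
W = np.zeros(len(centers))            # NN weight estimates

def phi(x):
    """Gaussian RBF regressor vector."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

gamma, dt = 5.0, 0.01                 # adaptation gain and step size
for t in np.arange(0.0, 200.0, dt):
    x = 1.5 * np.sin(0.5 * t)         # recurrent trajectory -> partial PE
    e = f(x) - W @ phi(x)             # approximation error drives adaptation
    W += dt * gamma * phi(x) * e      # gradient weight update

# After adaptation, the stored weights W reproduce f along the visited
# region -- the "learned knowledge" that can be reused without re-adaptation.
xs = np.linspace(-1.4, 1.4, 50)
err = max(abs(f(x) - W @ phi(x)) for x in xs)
print(err)   # small residual approximation error
```

Freezing `W` after this phase and reusing it corresponds to the paper's learning controller that exploits stored knowledge without re-adapting.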
Estimates of galactic cosmic ray shielding requirements during solar minimum
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.
1990-01-01
Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.
Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic
NASA Astrophysics Data System (ADS)
Haag, T.; Herrmann, J.; Hanss, M.
2010-10-01
For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
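The feedforward propagation that underlies the transformation method can be sketched with alpha-cuts of triangular fuzzy parameters. This reduced variant evaluates the model only at the corners of the parameter box at each membership level, which yields correct output bounds for monotone models; the fuzzy stiffness/mass values are illustrative, not taken from the paper.

```python
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number (a, m, b)."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def propagate(model, fuzzy_params, levels=5):
    """Reduced transformation method: at each alpha level, evaluate the
    model at all corners of the parameter box and take min/max of the
    outputs (valid bounds when the model is monotone in each parameter)."""
    out = []
    for alpha in np.linspace(0.0, 1.0, levels):
        boxes = [alpha_cut(p, alpha) for p in fuzzy_params]
        vals = [model(np.array(c)) for c in itertools.product(*boxes)]
        out.append((alpha, min(vals), max(vals)))
    return out

# Fuzzy stiffness and mass -> fuzzy natural frequency w = sqrt(k/m)
model = lambda p: np.sqrt(p[0] / p[1])
res = propagate(model, [(90.0, 100.0, 110.0), (0.9, 1.0, 1.1)])
print(res[-1])   # at alpha = 1 the interval collapses to the crisp value 10
```

The inverse procedure of the abstract runs this forward map repeatedly and adjusts the input fuzzy numbers until the fuzzy output covers the reference measurement.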
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since the adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
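A minimal version of such an input-output metric might compare the recorded closed-loop output against the reference-model response with a normalized L2 gap; the signals below are synthetic stand-ins, not flight data.

```python
import numpy as np

def behavior_metric(y, y_ref, dt):
    """Normalized L2 gap between the closed-loop output under an adaptive
    controller and the reference-model output.  Model independent: it uses
    only recorded input-output signals."""
    gap = np.sqrt(dt * np.sum((y - y_ref) ** 2))
    scale = np.sqrt(dt * np.sum(y_ref ** 2))
    return gap / scale

dt = 0.01
t = np.arange(0.0, 5.0, dt)
y_ref = 1.0 - np.exp(-t)                               # reference step response
y = y_ref + 0.05 * np.sin(5 * t) * np.exp(-0.5 * t)    # adaptive-loop output
m = behavior_metric(y, y_ref, dt)
print(m)   # small value: behavior stays close to the reference model
```

A verification procedure would threshold such a metric per maneuver rather than inspect the controller's internal (nonlinear, uncertain) structure.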
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
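For a constraint that is affine in the uncertain parameters, hard-constraint satisfaction over a hyper-rectangle can be certified exactly at the vertices, with no sampling of the interior; the constraint and box below are hypothetical illustrations, not the paper's formulation.

```python
import itertools
import numpy as np

def worst_case_over_box(g, lower, upper):
    """Worst case (maximum) of constraint g(p) over a hyper-rectangle.

    For g affine in p the maximum is attained at a vertex, so checking the
    2^n corners certifies the hard constraint g(p) <= 0 analytically."""
    corners = itertools.product(*zip(lower, upper))
    return max(g(np.array(c)) for c in corners)

# Hypothetical hard constraint g(p) <= 0 with p in [-1, 1] x [0, 2]
g = lambda p: 0.5 * p[0] - 0.3 * p[1] - 0.2
worst = worst_case_over_box(g, [-1.0, 0.0], [1.0, 2.0])
print(worst <= 0)   # False: the critical corner p = (1, 0) gives g = 0.3
```

The maximizing corner plays the role of the "critical parameter combination" in the abstract; for nonlinear g, the paper's optimization-based formulations replace this enumeration.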
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
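A one-dimensional sketch of building a PCE surrogate by regression on an ensemble of runs; the response function and Gaussian germ are invented stand-ins for the atomistic component, not the paper's MD model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)

def g(xi):
    """Stand-in for the long-term output of an atomistic run at input xi."""
    return np.exp(0.3 * xi)

xi = rng.standard_normal(400)         # ensemble of samples of the germ
y = g(xi)

deg = 4
# Probabilists' Hermite polynomials He_k form the 1-D PC basis for a
# Gaussian germ; fit the PCE coefficients by least-squares regression.
V = np.column_stack([He.hermeval(xi, [0.0] * k + [1.0])
                     for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# The PCE mean is the 0th coefficient; analytically
# E[exp(0.3*xi)] = exp(0.3**2 / 2) = exp(0.045).
print(coef[0], np.exp(0.045))
```

In the coupled setting of the abstract, such expansions represent the interface variables, so the continuum component sees the atomistic output together with its sampling uncertainty.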
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
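The Cramer-Rao lower bound used above to compare input forms can be computed from the Fisher information matrix; this sketch uses a simple linear-in-parameters output model with Gaussian noise, not the airplane's stability-and-control dynamics.

```python
import numpy as np

def crlb(u, sigma=0.1):
    """Cramer-Rao lower bounds for theta in y(t) = theta1*u(t) + theta2
    with additive Gaussian noise of standard deviation sigma.  The
    sensitivity of y to theta is s(t) = [u(t), 1]."""
    S = np.column_stack([u, np.ones_like(u)])
    F = S.T @ S / sigma**2               # Fisher information matrix
    return np.diag(np.linalg.inv(F))     # lower bounds on estimator variances

t = np.linspace(0.0, 10.0, 501)
bound_sine = crlb(np.sin(t))
bound_square = crlb(np.sign(np.sin(t)))  # square wave of the same amplitude
print(bound_square[0] < bound_sine[0])   # square wave excites theta1 better
```

The square wave carries roughly twice the input energy of the sine at equal amplitude, which lowers the achievable parameter variance, consistent with the abstract's observation that square-wave inputs gave slightly more consistent estimates.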
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, which comprises an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve the performance along the iteration direction, and the feedback controllers are used to improve the performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed. One ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in the form of a linear matrix inequality (LMI). Moreover, the LMI can be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
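The two-direction (time axis within a trial, iteration axis across trials) structure can be illustrated with a scalar P-type ILC sketch; the plant, gains, and horizon are illustrative, and the paper's scheme additionally uses current-iteration feedback controllers and LMI-based gain design.

```python
import numpy as np

# Plant along the time axis: y(t+1) = a*y(t) + b*u(t)
a, b = 0.3, 1.0
T = 50
y_d = np.sin(np.linspace(0.0, 2 * np.pi, T + 1))  # finite-horizon reference

u = np.zeros(T)       # feed-forward input, refined along the iteration axis
L = 0.5 / b           # learning gain
errors = []
for k in range(30):   # iteration (trial) direction
    y = np.zeros(T + 1)
    for t in range(T):                 # time direction within one trial
        y[t + 1] = a * y[t] + b * u[t]
    e = y_d[1:] - y[1:]
    errors.append(float(np.max(np.abs(e))))
    u = u + L * e     # P-type iterative learning update
print(errors[0], errors[-1])   # tracking error contracts over the trials
```

Each trial replays the same finite horizon; the learning update transfers the previous trial's error into the next trial's feed-forward input, which is exactly the iteration-direction convergence the Roesser-model analysis formalizes.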
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
Sarhadi, Pouria; Noei, Abolfazl Ranjbar; Khosravi, Alireza
2016-11-01
Input saturation and uncertain dynamics are among the practical challenges in the control of autonomous vehicles. Adaptive control is known as a proper method for dealing with the uncertain dynamics of these systems; therefore, equipping adaptive controllers with the ability to handle input saturation is valuable. In this paper, an adaptive autopilot is presented for the pitch and yaw channels of an autonomous underwater vehicle (AUV) in the presence of input saturation. This is achieved by combining model reference adaptive control (MRAC) with integral state feedback and a modern anti-windup (AW) compensator. MRAC with integral state feedback is commonly used in autonomous vehicles, but proper modifications are needed to cope with the saturation problem. To this end, a Riccati-based AW compensator is employed. The presented technique is applied to a non-linear six-degrees-of-freedom (DOF) model of an AUV, and the results are compared with those of the baseline method. Several simulation scenarios are executed in the pitch and yaw channels to evaluate the controller performance. Moreover, the effectiveness of the proposed adaptive controller is comprehensively investigated through Monte Carlo simulations. The obtained results verify the performance of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. First, a model transformation is employed for an uncertain discrete-time system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This transforms the tracking problem into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed, based on the scaled small gain theorem and the linear matrix inequality (LMI) technique. The method proposed in this paper not only resolves the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes
NASA Astrophysics Data System (ADS)
Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.
2017-11-01
Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
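The propagation of random input errors onto a derived quantity, as performed above for pCO2, can be sketched by Monte Carlo sampling; the formula and constants below are illustrative stand-ins, not the real carbonate-equilibrium relations.

```python
import numpy as np

rng = np.random.default_rng(42)

def derived(p1, p2):
    """Hypothetical derived quantity from two measured parameters (a
    stand-in for pCO2 computed from a parameter pair); the constants are
    illustrative, not real carbonate-equilibrium constants."""
    return p2 * 10.0 ** (-p1) * 1.0e8

p1, p2 = 7.0, 2000.0   # nominal values (think pH and DIC in umol/kg)
s1, s2 = 0.02, 20.0    # standard deviations of the random input errors

n = 100_000            # Monte Carlo propagation of the input errors
q = derived(p1 + s1 * rng.standard_normal(n),
            p2 + s2 * rng.standard_normal(n))
rel_err = q.std() / q.mean()
print(rel_err)   # relative random uncertainty of the derived quantity
```

Note how the logarithmic (pH-like) parameter dominates: a 0.02 error in `p1` contributes about ln(10) x 0.02 = 4.6% relative error, dwarfing the 1% from `p2`. This mirrors the abstract's finding that small input errors inflate into large pCO2 errors, and that systematic bias must be assessed separately against direct measurements.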
Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.
Mazandarani, Mehran; Pariz, Naser
2018-05-01
This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative; and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as the fuzzy state variables feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain some eigenvalues sets that might be different from the inherent eigenvalues set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in longitudinal direction with uncertain initial conditions and parameters is gained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
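A toy version of the emulator idea: fit a response surface to a small ensemble of "expensive" model runs, then drive Metropolis MCMC with emulator evaluations only. The quadratic model, noise level, and prior are invented for the sketch, far simpler than the density functional theory application.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(theta):
    """Stand-in for the expensive physics model; in practice each
    evaluation might take hours, so it is run only on a small design."""
    return theta**2 + 0.5 * theta

design = np.linspace(0.0, 2.0, 9)             # small ensemble of input settings
runs = np.array([eta(t) for t in design])     # the "expensive" model runs
coef = np.polyfit(design, runs, 2)            # cheap response-surface emulator
emulator = lambda th: np.polyval(coef, th)

theta_true, sigma = 0.8, 0.05
y = eta(theta_true)                           # measurement (noise-free sketch)

def log_post(th):                             # flat prior on [0, 2]
    if not 0.0 <= th <= 2.0:
        return -np.inf
    return -0.5 * ((y - emulator(th)) / sigma) ** 2

# Random-walk Metropolis driven entirely by emulator evaluations
th = 1.0
lp = log_post(th)
chain = []
for _ in range(20000):
    prop = th + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        th, lp = prop, lp_prop
    chain.append(th)
post = np.array(chain[5000:])                 # discard burn-in
print(post.mean())   # posterior concentrates near theta_true = 0.8
```

The 20,000 posterior evaluations here cost nine model runs instead of 20,000, which is the whole point of the emulator; a full calibration would also propagate the emulator's own uncertainty.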
Echolalic Responses by a Child with Autism to Four Experimental Conditions of Sociolinguistic Input.
ERIC Educational Resources Information Center
Violette, Joseph; Swisher, Linda
1992-01-01
The immediate verbal imitations (IVIs) of a boy (age five) with autism and echolalia were studied, with variables of linguistic familiarity and instructor's style of directiveness being manipulated. The occurrence of IVIs was related to uncertain or informative events, and was significantly greater when lexical stimuli were unknown and presented…
Development System for Flexible Assembly System.
1986-02-01
[garbled OCR] …that is, the estimated values show extreme sensitivity to errors in the input angles in the vicinity of a pole. One approach under investigation is to prerotate the world frame so that none of the uncertain transformations have nominal angles in the vicinity of a pole.
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on interval analysis. First, intervals are used to quantify the friction coefficient in the boundary condition. Then, the inverse problem of material parameter identification under an uncertain friction coefficient is transformed into two types of deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of the material parameters can be acquired without the need for a large number of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
Robust adaptive uniform exact tracking control for uncertain Euler-Lagrange system
NASA Astrophysics Data System (ADS)
Yang, Yana; Hua, Changchun; Li, Junpeng; Guan, Xinping
2017-12-01
This paper offers a solution to the robust adaptive uniform exact tracking control of an uncertain nonlinear Euler-Lagrange (EL) system. An adaptive finite-time tracking control algorithm is designed by proposing a novel nonsingular integral terminal sliding-mode surface. Moreover, a new adaptive parameter tuning law is developed by making good use of the system tracking errors and the adaptive parameter estimation errors. Thus, both trajectory tracking and parameter estimation are achieved simultaneously within a guaranteed time that can be adjusted arbitrarily according to practical demands. Additionally, the control result for the EL system proposed in this paper can be extended easily to high-order nonlinear systems. Finally, a test-bed 2-DOF robot arm is set up to demonstrate the performance of the new control algorithm.
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has knowledge of all reference model states. The mismatch is estimated online for each subsystem, and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.
On the robust optimization to the uncertain vaccination strategy problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id
2014-02-21
In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1. In other words, the vaccination coverage should be as small as possible while still confining the epidemic to the small number of people who are already infected. In this paper, we discuss the case of a vaccination strategy, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). When parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, such that the obtained result is achieved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
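For a single linear constraint with ellipsoidal parameter uncertainty, the robust counterpart has the standard closed form a0ᵀx + ‖Pᵀx‖₂ ≤ b; the numbers below are hypothetical, not the vaccination model's data.

```python
import numpy as np

def robust_feasible(x, a0, P, b):
    """Robust counterpart of a^T x <= b under a = a0 + P u, ||u||_2 <= 1:
    the worst case over the ellipsoid is a0^T x + ||P^T x||_2 <= b."""
    return a0 @ x + np.linalg.norm(P.T @ x) <= b

a0 = np.array([1.0, 2.0])
P = 0.1 * np.eye(2)          # ellipsoidal uncertainty set around a0
b = 3.0

x = np.array([1.0, 0.8])     # nominal value 2.6 leaves margin for uncertainty
print(robust_feasible(x, a0, P, b))    # worst case 2.6 + 0.128 <= 3: feasible

x2 = np.array([1.0, 1.0])    # nominal value exactly 3.0, no margin left
print(robust_feasible(x2, a0, P, b))   # worst case 3.141 > 3: infeasible
```

Because the counterpart is a second-order cone constraint, the robust vaccination problem stays solvable in polynomial time, which is the tractability claim of the RO methodology cited in the abstract.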
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.
2016-03-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.
2015-11-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that is best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In this paper, Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool widely used in the design of new engines. Here the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge of multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
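The core idea of generating Wishart-distributed random variations of a system matrix can be sketched directly from the definition: a Wishart sample with n degrees of freedom and scale S is a sum of n outer products of N(0, S) vectors, so choosing S = M/n gives random matrices whose expectation is the nominal matrix M. The nominal matrix and degrees of freedom below are illustrative, not values from the paper:

```python
import numpy as np

def sample_wishart(rng, mean_matrix, dof):
    """Draw one Wishart-distributed random matrix with the given mean.

    Uses the constructive definition: G has dof columns drawn from
    N(0, mean_matrix/dof), and G @ G.T is the Wishart sample.
    """
    scale = mean_matrix / dof
    chol = np.linalg.cholesky(scale)
    g = chol @ rng.standard_normal((mean_matrix.shape[0], dof))
    return g @ g.T

rng = np.random.default_rng(0)
nominal = np.array([[2.0, 0.5],          # nominal (toy) system matrix,
                    [0.5, 1.0]])         # e.g. a stiffness-like property
samples = [sample_wishart(rng, nominal, dof=50) for _ in range(200)]
mean_est = np.mean(samples, axis=0)      # should approach the nominal matrix
```

Every sample is symmetric positive definite by construction, so the randomized component properties remain physically admissible, which is the main attraction of the Wishart model for this kind of study.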
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system, in a feedback control loop, is considered globally uncertain. Parametric uncertainty is modeled in interval form. The system parameter undergoing degradation (the prognostic candidate) has a degradation model that is assumed to be known a priori. The detection of degradation commencement is done in a passive manner, using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs). The latter form an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the estimation of the state of health (state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters is achieved in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties, which arise from noisy measurements, the parametric degradation process, environmental conditions, etc., are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
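The joint state-parameter estimation step can be sketched with a bootstrap particle filter on a toy degradation model: each particle carries both a health state and a decay-rate hypothesis, and resampling concentrates the particles on rates consistent with the observations. The model, noise levels, and gains below are invented for illustration and have nothing to do with the paper's BG-derived observation equation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy degradation: health x decays at unknown rate b; we observe x noisily.
b_true, x = 0.05, 1.0
ys = []
for _ in range(60):
    x = x * (1.0 - b_true) + rng.normal(0.0, 0.002)
    ys.append(x + rng.normal(0.0, 0.01))

# Bootstrap particle filter for joint state-parameter estimation:
# each particle carries (health state px, decay-rate hypothesis pb).
n = 2000
px = np.full(n, 1.0)                       # particle health states
pb = rng.uniform(0.0, 0.2, n)              # particle decay-rate hypotheses
for y in ys:
    pb = pb + rng.normal(0.0, 1e-3, n)     # artificial parameter evolution
    px = px * (1.0 - pb) + rng.normal(0.0, 0.002, n)
    w = np.exp(-0.5 * ((y - px) / 0.01) ** 2)   # Gaussian likelihood
    w = w / w.sum()
    idx = rng.choice(n, size=n, p=w)       # multinomial resampling
    px, pb = px[idx], pb[idx]

b_est = pb.mean()                          # posterior mean of the decay rate
```

With the degradation rate estimated in this probabilistic sense, extrapolating the fitted model until the health state crosses a failure threshold yields the remaining-useful-life prediction with confidence bounds that the abstract describes.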
Hypersonic Vehicle Trajectory Optimization and Control
NASA Technical Reports Server (NTRS)
Balakrishnan, S. N.; Shen, J.; Grohs, J. R.
1997-01-01
Two classes of neural networks have been developed for the study of hypersonic vehicle trajectory optimization and control. The first one is called an 'adaptive critic'. The uniqueness and main features of this approach are that: (1) it needs no external training; (2) it allows variability of initial conditions; and (3) it can serve as feedback control. This is used to solve a 'free final time' two-point boundary value problem that maximizes the mass at rocket burn-out while satisfying the pre-specified burn-out conditions in velocity, flightpath angle, and altitude. The second neural network is a recurrent network. An interesting feature of this network formulation is that when its inputs are the coefficients of the dynamics and control matrices, the network outputs are the Kalman sequences (with a quadratic cost function); the same network is also used for identifying the coefficients of the dynamics and control matrices. Consequently, we can use it to control a system whose parameters are uncertain. Numerical results are presented which illustrate the potential of these methods.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
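The ensemble Kalman filter update mentioned at the end can be sketched in its stochastic (perturbed-observation) form: the gain is built from ensemble covariances, and each member assimilates a noisy copy of the observation. The prior, observation operator, and noise level below are toy choices for illustration only:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, h, rng):
    """Stochastic ensemble Kalman filter analysis step (toy sketch).

    ensemble: (n_members, n_state) prior samples
    h:        observation operator mapping one state vector to a scalar
    Each member assimilates obs plus independent observation noise.
    """
    hx = np.array([h(m) for m in ensemble])          # predicted observations
    x_mean, hx_mean = ensemble.mean(0), hx.mean(0)
    x_anom = ensemble - x_mean
    hx_anom = hx - hx_mean
    cov_xy = x_anom.T @ hx_anom / (len(ensemble) - 1)
    var_y = hx_anom @ hx_anom / (len(ensemble) - 1) + obs_std ** 2
    gain = cov_xy / var_y                            # Kalman gain (vector)
    perturbed = obs + rng.normal(0.0, obs_std, len(ensemble))
    return ensemble + np.outer(perturbed - hx, gain)

rng = np.random.default_rng(2)
prior = rng.normal([0.0, 0.0], 1.0, size=(500, 2))   # uncertain inputs
posterior = enkf_update(prior, obs=1.0, obs_std=0.1,
                        h=lambda x: x[0], rng=rng)
```

Only the observed component is pulled strongly toward the datum; unobserved components move only through their sample correlation with it, which is exactly how the ensemble spreads experimental information across the calibrated inputs.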
Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.
Wang, Wei; Tong, Shaocheng
2018-02-01
This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by the uncertain nonlinear strict-feedback system. Combining the fuzzy approximation with the dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from the existing results, the bounds of the control inputs are known as a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniformly ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. The simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.
2015-07-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of the relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced diurnal cycles of O3 dry deposition velocity (Vd(O3)) similar to those of the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM methods being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions only caused an uncertainty of a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
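The arithmetic the gradient methods build on can be illustrated with the modified-Bowen-ratio-style scaling: a measured tracer flux is scaled by the ratio of concentration gradients to estimate the pollutant flux, and dividing by the concentration gives a deposition velocity. Every number below is invented for illustration and carries arbitrary units; this is not the paper's MGM formulation:

```python
# Hedged numerical sketch of gradient-ratio flux scaling (toy values).
f_tracer = 0.15        # measured flux of a tracer (e.g. sensible heat)
d_tracer = -0.12       # tracer gradient between the two measurement levels
d_o3 = 4.0             # O3 gradient between the same levels (ppb)
c_o3_ref = 40.0        # O3 concentration at a reference height (ppb)

f_o3 = f_tracer * d_o3 / d_tracer   # flux scaled by the gradient ratio
vd_o3 = -f_o3 / c_o3_ref            # deposition velocity (positive downward)
```

The abstract's key point maps directly onto this arithmetic: when `d_o3` is small (levels both above the canopy), small absolute errors in it dominate the result, whereas an above/below-canopy pair makes `d_o3` large and the estimate far less error-prone.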
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS), and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices.
The results of the present research show that the brute force method is best suited for wind assessment purposes, and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime, and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale parameter causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of (a) choosing an accurate distribution to model the wind regime at a site and (b) estimating the probability distribution parameters accurately. This can be labeled the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
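A brute-force first-order Sobol index can be sketched as the ratio Var(E[Y | X_i]) / Var(Y), estimated by freezing one input on a grid and averaging over the other. The toy model below stands in for lifetime energy production and is not the case-study model; its inputs loosely mirror a Weibull shape and scale parameter:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(shape, scale):
    """Toy stand-in for a lifetime-energy-production model (illustrative)."""
    return scale * np.exp(-0.5 / shape)

# Brute-force first-order Sobol index: S_i = Var(E[Y | X_i]) / Var(Y),
# estimated by conditioning on one input and averaging over the other.
n_outer, n_inner = 200, 200
shape = rng.uniform(1.5, 2.5, (n_outer, 1))    # "Weibull shape"-like input
scale = rng.uniform(5.0, 9.0, (1, n_inner))    # "Weibull scale"-like input
y = model(shape, scale)                        # broadcast: (n_outer, n_inner)

var_total = y.var()
s_shape = y.mean(axis=1).var() / var_total     # vary shape, average over scale
s_scale = y.mean(axis=0).var() / var_total     # vary scale, average over shape
```

Because the toy output is nearly proportional to the scale input but only weakly dependent on the shape input, the scale index dominates, mirroring the ranking reported in the case study.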
NASA Astrophysics Data System (ADS)
Hassan Asemani, Mohammad; Johari Majd, Vahid
2015-12-01
This paper addresses a robust H∞ fuzzy observer-based tracking design problem for uncertain Takagi-Sugeno fuzzy systems with external disturbances. To have a practical observer-based controller, the premise variables of the system are assumed to be not measurable in general, which leads to a more complex design process. The tracker is synthesised based on a fuzzy Lyapunov function approach and non-parallel distributed compensation (non-PDC) scheme. Using the descriptor redundancy approach, the robust stability conditions are derived in the form of strict linear matrix inequalities (LMIs) even in the presence of uncertainties in the system, input, and output matrices simultaneously. Numerical simulations are provided to show the effectiveness of the proposed method.
Transient Stability Assessment of Power Systems With Uncertain Renewable Generation: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villegas Pico, Hugo Nestor; Aliprantis, Dionysios C.; Lin, Xiaojun
2017-08-09
The transient stability of a power system depends heavily on its operational state at the moment of a fault. In systems where the penetration of renewable generation is significant, the dispatch of the conventional fleet of synchronous generators is uncertain at the time of dynamic security analysis. Hence, the assessment of transient stability requires the solution of a system of nonlinear ordinary differential equations with unknown initial conditions and inputs. To this end, we set forth a computational framework that relies on Taylor polynomials, where variables are associated with the level of renewable generation. This paper describes the details of the method and illustrates its application on a nine-bus test system.
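A much simpler cousin of the polynomial idea can be sketched numerically: integrate a toy swing-type ODE for a grid of renewable-generation levels and fit a polynomial surrogate of the settled rotor angle in that level. The dynamics, coefficients, and grid below are invented for illustration; the paper's Taylor-polynomial machinery propagates the polynomial through the ODE itself rather than fitting after the fact:

```python
import numpy as np

def final_angle(p, steps=1500, dt=0.01):
    """Settled rotor angle for renewable level p (toy swing-type ODE)."""
    delta, omega = 0.1, 0.0
    for _ in range(steps):
        # toy dynamics; mechanical power shrinks as renewables displace it
        domega = (1.0 - 0.5 * p) - 1.2 * np.sin(delta) - 0.8 * omega
        delta, omega = delta + dt * omega, omega + dt * domega
    return delta

levels = np.linspace(0.0, 0.8, 9)              # renewable-generation grid
angles = np.array([final_angle(p) for p in levels])
coeffs = np.polyfit(levels, angles, deg=3)     # cubic surrogate in p
predict = np.poly1d(coeffs)
```

Once the surrogate is built, evaluating it is essentially free, so the stability margin can be screened over the whole uncertain range of renewable output without re-integrating the ODE for every dispatch scenario.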
Efficient Portfolios of the Energy Technologies
NASA Astrophysics Data System (ADS)
Nikonov, Oleg I.; Medvedeva, Marina A.
2011-09-01
The goal of the research is to apply the methods of Portfolio Theory to a set of technologies instead of a set of securities on a stock market (as is the case in the original model). Assets on the stock market are objects that have risk and return, parameters that depend on uncertain factors and thus are themselves uncertain. The returns from the use of technologies also depend on uncertain factors, and thus each technology carries a certain amount of risk. The simultaneous use of technologies can diversify the associated risks in the same way that diversification works on the stock market.
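The diversification argument can be made concrete with the classic minimum-variance portfolio: for a covariance matrix Σ of technology returns, the weights minimising w'Σw subject to sum(w) = 1 have the closed form w = Σ⁻¹1 / (1'Σ⁻¹1). The covariance numbers below are invented for illustration, not estimates for real technologies:

```python
import numpy as np

# Markowitz-style minimum-variance mix of three energy technologies
# (toy covariance matrix of their returns; values are illustrative).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w = w / (ones @ w)                  # minimum-variance weights, sum to 1
port_var = w @ cov @ w              # portfolio variance
```

The resulting portfolio variance is strictly below the variance of the safest single technology (0.04 here), which is precisely the diversification benefit the abstract carries over from securities to technologies.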
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
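The hard/soft distinction can be sketched with plain Monte Carlo over a hyper-rectangular uncertainty set: the soft-sense question is the probability that an inequality g(p) ≤ 0 is violated, while the hard-sense question is whether any sampled realization violates it at all. The constraint function and bounds below are toy choices, and sampling is only a stand-in for the paper's closed-form bounds and conditional sampling:

```python
import numpy as np

rng = np.random.default_rng(4)

def g(p):
    """Toy inequality constraint on two uncertain parameters (g <= 0 ok)."""
    return p[:, 0] ** 2 + 0.5 * p[:, 1] - 1.0

# Componentwise-bounded uncertainty set: each parameter in [-1, 1].
p = rng.uniform([-1.0, -1.0], [1.0, 1.0], size=(100_000, 2))
violations = g(p) > 0.0
p_fail = violations.mean()           # soft sense: failure probability
hard_ok = not violations.any()       # hard sense: satisfied for all samples?
```

Note the asymmetry: sampling can refute a hard constraint (one violating sample suffices) but can never prove it, which is why the paper's formally verifiable assessments matter for the hard-sense case.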
Sensation, mechanoreceptor, and nerve fiber function after nerve regeneration.
Krarup, Christian; Rosén, Birgitta; Boeckstyns, Michel; Ibsen Sørensen, Allan; Lundborg, Göran; Moldovan, Mihai; Archibald, Simon J
2017-12-01
Sensation is essential for recovery after peripheral nerve injury. However, the relationship between sensory modalities and function of regenerated fibers is uncertain. We have investigated the relationships between touch threshold, tactile gnosis, and mechanoreceptor and sensory fiber function after nerve regeneration. Twenty-one median or ulnar nerve lesions were repaired by a collagen nerve conduit or direct suture. Quantitative sensory hand function and sensory conduction studies by near-nerve technique, including tactile stimulation of mechanoreceptors, were followed for 2 years, and results were compared to noninjured hands. With both repair methods, touch thresholds at the finger tips recovered to 81 ± 3% and tactile gnosis only to 20 ± 4% (p < 0.001) of control. The sensory nerve action potentials (SNAPs) remained dispersed; their areas recovered to 23 ± 2% and their amplitudes only to 7 ± 1% (p < 0.001). The areas of SNAPs after tactile stimulation recovered to 61 ± 11% and remained slowed. Touch sensation correlated with SNAP areas (p < 0.005) and was negatively related to the prolongation of tactile latencies (p < 0.01); tactile gnosis was not related to electrophysiological parameters. The recovered function of regenerated peripheral nerve fibers and reinnervated mechanoreceptors may differentially influence recovery of sensory modalities. Touch was affected by the number and function of regenerated fibers and mechanoreceptors. In contrast, tactile gnosis depends on the input and plasticity of the central nervous system (CNS), which may explain the absence of a direct relation between electrophysiological parameters and poor recovery. Dispersed maturation of sensory nerve fibers with desynchronized inputs to the CNS also contributes to the poor recovery of tactile gnosis. Ann Neurol 2017;82:940-950. © 2017 American Neurological Association.
Global direct radiative forcing by process-parameterized aerosol optical properties
NASA Astrophysics Data System (ADS)
Kirkevåg, Alf; Iversen, Trond
2002-10-01
A parameterization of aerosol optical parameters is developed and implemented in an extended version of the community climate model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, assumed distribution of cloud-droplet residuals from aqueous phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated at -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.
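The look-up-table strategy at the heart of the parameterization can be sketched generically: precompute an expensive property on a coarse grid of aerosol inputs, then interpolate bilinearly at run time instead of repeating the full calculation. The "property" below is a toy analytic function standing in for a Mie calculation, and the axes are illustrative, not the scheme's actual input dimensions:

```python
import numpy as np

rh_grid = np.linspace(0.0, 0.9, 10)       # relative-humidity axis (toy)
bc_grid = np.linspace(0.0, 1.0, 11)       # BC mass-fraction axis (toy)

def expensive_property(rh, bc):
    """Stand-in for a full Mie calculation of one optical property."""
    return (1.0 - 0.4 * bc) / (1.0 - 0.5 * rh)

# Precompute once on the coarse grid.
table = expensive_property(rh_grid[:, None], bc_grid[None, :])

def interp2(rh, bc):
    """Bilinear table lookup replacing the expensive calculation."""
    i = np.clip(np.searchsorted(rh_grid, rh) - 1, 0, len(rh_grid) - 2)
    j = np.clip(np.searchsorted(bc_grid, bc) - 1, 0, len(bc_grid) - 2)
    tr = (rh - rh_grid[i]) / (rh_grid[i + 1] - rh_grid[i])
    tb = (bc - bc_grid[j]) / (bc_grid[j + 1] - bc_grid[j])
    return ((1 - tr) * (1 - tb) * table[i, j]
            + tr * (1 - tb) * table[i + 1, j]
            + (1 - tr) * tb * table[i, j + 1]
            + tr * tb * table[i + 1, j + 1])
```

Because the tabulated property varies smoothly in its inputs, the interpolation error is second order in the grid spacing, which is why the abstract can report excellent agreement with the nonparameterized calculations.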
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Jason; Winkler, Jon
2018-01-31
Moisture buffering of building materials has a significant impact on the building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand the sensitivity of this model to uncertain inputs. In this paper, we use the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, and a set of simple inputs, can give reasonable prediction of the indoor humidity.
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure that this type of CVCS is asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and the back-stepping technique, sufficient criteria on the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431
Impedance learning for robotic contact tasks using natural actor-critic algorithm.
Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul
2010-04-01
Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a learning strategy of motor skill for robotic contact tasks based on a human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory and reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-square filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of the contact tasks in uncertain conditions of the environment.
Bagherpoor, H M; Salmasi, Farzad R
2015-07-01
In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances, and actuator faults. Two new lemmas are proposed for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering suitable Lyapunov function candidates. It is shown that the proposed control approach can assure that all signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty, and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
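The conventional dead-zone rule can be sketched on a scalar model reference adaptive controller: the plant is y' = a·y + u with unknown a, the reference model is ym' = -ym + r, and the gradient adaptation of the estimate is simply switched off whenever the tracking error is inside the zone. The plant, gains, and zone width are illustrative, and the sketch shows only the conventional frozen-adaptation behaviour, not the paper's reinforced control term:

```python
# Scalar MRAC sketch with an optional dead-zone on the adaptation law.
# Plant: y' = a*y + u (a unknown); reference model: ym' = -ym + r.
def simulate(dead_zone, gamma=10.0, dt=0.002, steps=10_000):
    a_true, y, ym, a_hat, r = 1.5, 0.0, 0.0, 0.0, 1.0
    for _ in range(steps):
        u = -(a_hat + 1.0) * y + r        # certainty-equivalence control
        e = y - ym                        # tracking error
        if abs(e) > dead_zone:            # conventional rule: adaptation
            a_hat += dt * gamma * e * y   # is frozen inside the zone
        y += dt * (a_true * y + u)        # explicit Euler integration
        ym += dt * (-ym + r)
    return a_hat, abs(y - ym)

a_hat_c, err_c = simulate(dead_zone=0.0)    # continuous adaptation
a_hat_d, err_d = simulate(dead_zone=0.05)   # dead-zone modification
```

With the zone disabled the error converges toward zero and the parameter estimate toward its true value, while with the zone active the estimate can freeze at a wrong value and the error merely hovers near the zone boundary, which is exactly the residual tracking error the paper's additive term is designed to remove.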
NASA Astrophysics Data System (ADS)
Gassmann, Matthias; Olsson, Oliver; Höper, Heinrich; Hamscher, Gerd; Kümmerer, Klaus
2016-04-01
The simulation of reactive transport in the aquatic environment is hampered by the ambiguity of environmental fate process conceptualizations for a specific substance in the literature. Concepts are usually identified by experimental studies and inverse modelling under controlled lab conditions in order to reduce environmental uncertainties such as uncertain boundary conditions and input data. However, since environmental conditions affect substance behaviour, a re-evaluation might be necessary under environmental conditions, which might in turn be affected by uncertainties. Using a combination of experimental data and simulations of the leaching behaviour of the veterinary antibiotic Sulfamethazine (SMZ; synonym: sulfadimidine) and the hydrological tracer Bromide (Br) in a field lysimeter, we re-evaluated the sorption concepts of both substances under uncertain field conditions. Sampling data from a field lysimeter experiment, in which both substances were applied twice a year with manure and sampled at the bottom of two lysimeters during three subsequent years, were used for model set-up and evaluation. The total amounts of leached SMZ and Br were 22 μg and 129 mg, respectively. A reactive transport model was parameterized to the conditions of the two lysimeters, filled with monoliths (depth 2 m, area 1 m²) of a sandy soil with a low pH value, under which Bromide is sorptive. We used different sorption concepts such as constant and organic-carbon-dependent sorption coefficients and instantaneous and kinetic sorption equilibrium. Combining the sorption concepts resulted in four scenarios per substance with different equations for sorption equilibrium and sorption kinetics. The GLUE (Generalized Likelihood Uncertainty Estimation) method was applied to each scenario using parameter ranges found in experimental and modelling studies. The parameter spaces for each scenario were sampled using a Latin hypercube method which was refined around local model efficiency maxima.
Results of the cumulative SMZ leaching simulations suggest a best conceptualization combination of instantaneous sorption to organic carbon, which is consistent with the literature. The best Nash-Sutcliffe efficiency (Neff) was 0.96, and the 5th and 95th percentiles of the uncertainty estimation were 18 and 27 μg. In contrast, both scenarios of kinetic Br sorption had similar results (Neff = 0.99, uncertainty bounds 110-176 mg and 112-176 mg) but were clearly better than the instantaneous sorption scenarios. Therefore, only the concept of sorption kinetics could be identified for Br modelling, whereas both tested sorption equilibrium coefficient concepts performed equally well. The reasons for this specific case of equifinality may be uncertainties of model input data under field conditions or an insensitivity of the sorption equilibrium method due to the relatively low adsorption of Br. Our results show that it may be possible to identify, or at least falsify, specific sorption concepts under uncertain field conditions using a long-term leaching experiment and modelling methods. Cases of environmental fate concept equifinality raise the possibility of future model structure uncertainty analysis using an ensemble of models with different environmental fate concepts.
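The GLUE workflow can be sketched end to end on a toy one-parameter leaching model: stratified (Latin-hypercube-style) sampling of the uncertain parameter, Nash-Sutcliffe efficiency as the informal likelihood, a behavioural threshold, and likelihood-weighted percentiles as uncertainty bounds. The model, observations, threshold, and parameter range below are invented stand-ins, not the lysimeter model:

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.arange(1.0, 11.0)
obs = 10.0 * np.exp(-0.3 * t)                  # synthetic "observed" series

def model(kd):
    """Hypothetical one-parameter leaching model (illustrative only)."""
    return 10.0 * np.exp(-kd * t)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency used as the GLUE likelihood measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Stratified (Latin-hypercube-like) sample of kd in [0.1, 0.6].
strata = (np.arange(500) + rng.random(500)) / 500.0
kd = 0.1 + 0.5 * rng.permutation(strata)
eff = np.array([nse(model(k), obs) for k in kd])

behavioural = eff > 0.5                         # GLUE acceptance threshold
w = eff[behavioural] - 0.5                      # rescaled likelihood weights
w = w / w.sum()
kd_b = kd[behavioural]
order = np.argsort(kd_b)
cdf = np.cumsum(w[order])
lo = kd_b[order][np.searchsorted(cdf, 0.05)]    # weighted 5th percentile
hi = kd_b[order][np.searchsorted(cdf, 0.95)]    # weighted 95th percentile
```

The width of the resulting [lo, hi] band is the GLUE analogue of the abstract's uncertainty bounds: a narrow band means the data identify the sorption parameter, while a band spanning most of the prior range signals the kind of equifinality the abstract discusses.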
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to heterogeneity of the soil properties, which leads to randomness of the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Modeling the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe obtained with the three analogy methods are the same, while the standard deviations differ. The distributions of the standard deviation differ greatly across radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The computed data with the random variable and stochastic process methods differ considerably from the measured data, while the computed data with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
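The random-variable branch of such a Monte Carlo study can be sketched with a minimal 1-D transient conduction model: sample an uncertain thermal parameter, solve the heat equation per sample, and collect the mean and standard deviation of the temperature field. The geometry, property values, and boundary temperatures below are illustrative, with no freezing-pipe geometry or phase change included:

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo over an uncertain diffusivity in a 1-D explicit
# finite-difference conduction model (toy stand-in for the AGF problem).
nx, dx, dt, steps = 21, 0.05, 0.5, 200
temps = []
for _ in range(300):
    alpha = rng.normal(1.0e-3, 1.5e-4)        # random diffusivity sample
    T = np.full(nx, 10.0)                     # initial ground temperature
    for _ in range(steps):
        T[0] = -20.0                          # chilled boundary ("pipe wall")
        T[-1] = 10.0                          # far-field boundary
        # explicit scheme; alpha*dt/dx**2 = 0.2 keeps it stable (< 0.5)
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    temps.append(T.copy())

temps = np.array(temps)
mean_T = temps.mean(axis=0)                   # mean thermal regime
std_T = temps.std(axis=0)                     # uncertainty band per node
```

The standard deviation vanishes at the fixed boundaries and peaks in the interior where the moving thermal front is most sensitive to the sampled diffusivity, a simplified analogue of the abstract's finding that the largest spread sits in the phase change area.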
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once released, a DNAPL serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, forms a complex pattern of immobile DNAPL saturation, dissolves into the groundwater and forms a contaminant plume, and slowly depletes and biodegrades in the long term. In industrialized countries the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions exhibit strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian
2018-01-01
A new efficient adaptive optimal control approach is presented in this paper based on the indirect model reference adaptive control (MRAC) architecture for improvement of adaptation and tracking performance of the uncertain system. The system accounts here for both matched and unmatched unknown uncertainties that can act as plant as well as input effectiveness failures or damages. For adaptation of the unknown parameters of these uncertainties, the frequency selective learning approach is used. Its idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is used for augmentation of the update law. It is capable of adjusting a sudden change in system dynamics without depending on high adaptation gains and can satisfy exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties as well. Additionally, the controller of the MRAC system is designed using a new optimal control method. This method is a new linear quadratic regulator-based optimal control formulation for both output regulation and command tracking problems. It provides a closed-form control solution. The proposed overall approach is applied in a control of lateral dynamics of an unmanned aircraft problem to show its effectiveness.
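The indirect MRAC scheme with frequency-selective learning described above is far richer than can be shown briefly, but the underlying reference-model-tracking idea can be sketched for a scalar plant with a standard Lyapunov-rule gain update (all plant numbers, gains, and the direct-MRAC simplification are our illustrative assumptions, not the paper's method):

```python
# Direct MRAC for a scalar plant xdot = a*x + b*u with a, b unknown
# to the controller (sign(b) > 0 assumed). Reference model:
# xmdot = am*xm + bm*r. Gains adapt by a Lyapunov-rule gradient law.
a, b = 1.0, 1.0         # "true" plant parameters, hidden from the controller
am, bm = -2.0, 2.0      # stable reference model
gamma, dt = 5.0, 0.001  # adaptation gain, Euler step
kx, kr = 0.0, 0.0       # adaptive feedback / feedforward gains
x, xm = 0.0, 0.0
for _ in range(20000):  # 20 s of simulated time
    r = 1.0                    # constant reference command
    u = kx * x + kr * r
    e = x - xm                 # model-tracking error
    kx -= dt * gamma * e * x   # gradient-type update laws
    kr -= dt * gamma * e * r
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
```

The ideal gains here would be kx = (am - a)/b and kr = bm/b; with a constant reference the tracking error converges even though the gains themselves need not, which is the classical persistency-of-excitation caveat the paper's frequency-selective learning is designed to address.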
Robust control of seismically excited cable stayed bridges with MR dampers
NASA Astrophysics Data System (ADS)
YeganehFallah, Arash; Khajeh Ahmad Attari, Nader
2017-03-01
In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructures subjected to seismic and wind loads. However, reliable active and semi-active control requires that information about uncertainties be included in the design of the controller. In the real world, parameters of civil structures such as loading locations, stiffness, mass and damping are time-variant and uncertain. These uncertainties are in many cases modeled as parametric uncertainties. The motivation of this research is to design a robust controller for attenuating the vibrational responses of civil infrastructures with regard to their dynamical uncertainties. Uncertainties in structural dynamics parameters are modeled as affine uncertainties in the state space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and are assumed to be unknown but norm-bounded inputs to the system. A robust H∞ controller is designed for the decoupled system to regulate the evaluation outputs, and it is robust to the effects of uncertainties, disturbances and sensor noise. The cable-stayed bridge benchmark, which is equipped with MR dampers, is considered for the numerical simulation. The simulated results show that the proposed robust controller can effectively mitigate the undesired effects of uncertainties on the system's response under seismic loading.
Soil warming response: field experiments to Earth system models
NASA Astrophysics Data System (ADS)
Todd-Brown, K. E.; Bradford, M.; Wieder, W. R.; Crowther, T. W.
2017-12-01
The soil carbon response to climate change is extremely uncertain at the global scale, in part because of uncertainty in the magnitude of the temperature response. To address this uncertainty we collected data from 48 soil warming manipulation studies and examined the temperature response using two different methods. First, we constructed a mixed effects model and extrapolated the effect of soil warming on soil carbon stocks under anticipated shifts in surface temperature during the 21st century. We saw significant vulnerability of soil carbon stocks, especially in high-carbon soils. To place this effect in the context of anticipated changes in carbon inputs and moisture shifts, we applied a one-pool decay model with temperature sensitivities to the field data and imposed a post-hoc correction on the Earth system model simulations to integrate the field data with the simulated temperature response. We found a slight increase in the overall soil carbon losses, but the field uncertainty of the temperature sensitivity parameter was as large as the among-model variation in soil carbon projections. This implies that model-data integration is unlikely to constrain soil carbon simulations and highlights the importance of representing parameter uncertainty in these Earth system models to inform emissions targets.
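The one-pool decay model with temperature sensitivity mentioned above can be sketched as follows (the Q10 form and all parameter values are our illustrative assumptions, not the study's fitted values):

```python
def one_pool_step(carbon, inputs, k_ref, q10, temp, temp_ref=15.0, dt=1.0):
    """Euler step of a one-pool soil carbon model dC/dt = I - k(T)*C,
    where a Q10 modifier scales the base decay rate k_ref with
    temperature: decay doubles per 10 C warming when q10 = 2."""
    k = k_ref * q10 ** ((temp - temp_ref) / 10.0)
    return carbon + dt * (inputs - k * carbon)

# Warming raises the decay rate, so the equilibrium stock I/k falls.
c_cool, c_warm = 100.0, 100.0
for _ in range(500):
    c_cool = one_pool_step(c_cool, inputs=2.0, k_ref=0.02, q10=2.0, temp=15.0)
    c_warm = one_pool_step(c_warm, inputs=2.0, k_ref=0.02, q10=2.0, temp=20.0)
```

The uncertainty discussed in the abstract enters through q10 (or an equivalent sensitivity parameter): spread in that single number propagates directly into spread in projected equilibrium stocks.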
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of the temporal evolution of two key risk indices, the volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
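The second, polynomial model above maps random inputs to the quantities of interest at negligible cost. As a toy illustration of the idea, here is a one-dimensional least-squares quadratic surrogate trained on a handful of "expensive" model runs (the paper's surrogate is a multivariate polynomial of random inputs; this sketch and its stand-in model are ours):

```python
def polyfit2(xs, ys):
    """Least-squares quadratic surrogate y ~ c0 + c1*x + c2*x^2,
    via the 3x3 normal equations solved by Gaussian elimination."""
    pw = [sum(x ** k for x in xs) for k in range(5)]          # moment sums
    M = [[pw[i + j] for j in range(3)] for i in range(3)]
    v = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                                      # elimination
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                       # back-substitution
        coeffs[i] = (v[i] - sum(M[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / M[i][i]
    return coeffs

def model(x):            # stand-in for the expensive physics-based simulator
    return 1.0 + 0.5 * x + 0.25 * x * x

train_x = [0.0, 0.5, 1.0, 1.5, 2.0]                  # a few "expensive" runs
coeffs = polyfit2(train_x, [model(x) for x in train_x])
surrogate = lambda x: coeffs[0] + coeffs[1] * x + coeffs[2] * x * x
```

Once fitted, the surrogate can be evaluated millions of times over sampled inputs, which is what makes exploring "any number of risk-related contamination scenarios" affordable.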
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method in which the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
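For the purely parametric part of such an analysis, the classical Saltelli-type Monte Carlo estimator of first-order Sobol' indices can be sketched as follows (this does not implement the paper's extension to stochastic reaction channels; the test function is ours):

```python
import random

def sobol_first_order(f, dim, n=20000, seed=0):
    """Saltelli-type Monte Carlo estimator of first-order Sobol' indices
    for a function f of dim independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # Evaluate f on A with column i taken from B; the covariance-style
        # average below estimates the partial variance V_i.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        vi = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        indices.append(vi / var)
    return indices

# Additive test function: exact variance shares are 1/5 and 4/5.
s = sobol_first_order(lambda x: x[0] + 2.0 * x[1], dim=2)
```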
Adaptive identifier for uncertain complex nonlinear systems based on continuous neural networks.
Alfaro-Ponce, Mariel; Cruz, Amadeo Argüelles; Chairez, Isaac
2014-03-01
This paper presents the design of a complex-valued differential neural network identifier for uncertain nonlinear systems defined in the complex domain. This design includes the construction of an adaptive algorithm to adjust the parameters included in the identifier. The algorithm is obtained based on a special class of controlled Lyapunov functions. The quality of the identification process is characterized using the practical stability framework. Indeed, the region where the identification error converges is derived by the same Lyapunov method. This zone is defined by the power of uncertainties and perturbations affecting the complex-valued uncertain dynamics. Moreover, this convergence zone is reduced to its lowest possible value using ideas related to the so-called ellipsoid methodology. Two simple but informative numerical examples are developed to show how the identifier proposed in this paper can be used to approximate uncertain nonlinear systems valued in the complex domain.
Multilateral Telecoordinated Control of Multiple Robots With Uncertain Kinematics.
Zhai, Di-Hua; Xia, Yuanqing
2017-06-06
This paper addresses the telecoordinated control of multiple robots in the simultaneous presence of asymmetric time-varying delays, nonpassive external forces, and uncertain kinematics/dynamics. To achieve the control objective, a neuroadaptive controller utilizing prescribed performance control and a switching control technique is developed, where the basic idea is to employ the concept of motion synchronization in each pair of master-slave robots and among all slave robots. By using the multiple Lyapunov-Krasovskii functionals method, the state-independent input-to-output practical stability of the closed-loop system is established. Compared with previous approaches, the new design is straightforward, easier to implement, and applicable to a wider range of problems. Simulation results on three pairs of three-degrees-of-freedom robots confirm the theoretical findings.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This selection is therefore critical for robustness in updating the structural model, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction error variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined in the study.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
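Treatment 3 above, updating the prediction error variance as an uncertain parameter alongside the model parameters, can be illustrated with a toy random-walk Metropolis sampler on a scalar model (the paper uses Transitional MCMC on a shear-building model; everything here, flat priors included, is a simplified assumption of ours):

```python
import math
import random

def log_posterior(theta, log_sigma, data):
    """Gaussian likelihood; the prediction-error std is itself uncertain
    and sampled on a log scale. Priors are taken (improperly) flat."""
    sigma = math.exp(log_sigma)
    return sum(-0.5 * ((d - theta) / sigma) ** 2 - math.log(sigma)
               for d in data)

def metropolis(data, n_steps=20000, seed=1):
    """Random-walk Metropolis over the pair (theta, log_sigma)."""
    rng = random.Random(seed)
    theta, log_sigma = 0.0, 0.0
    lp = log_posterior(theta, log_sigma, data)
    samples = []
    for _ in range(n_steps):
        t_new = theta + rng.gauss(0.0, 0.3)
        s_new = log_sigma + rng.gauss(0.0, 0.3)
        lp_new = log_posterior(t_new, s_new, data)
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            theta, log_sigma, lp = t_new, s_new, lp_new
        samples.append((theta, math.exp(log_sigma)))
    return samples

# Synthetic data: true parameter 2.0, true error std 0.5.
data_rng = random.Random(0)
data = [2.0 + data_rng.gauss(0.0, 0.5) for _ in range(50)]
samples = metropolis(data)
burned = samples[5000:]   # discard burn-in
theta_mean = sum(t for t, _ in burned) / len(burned)
sigma_mean = sum(s for _, s in burned) / len(burned)
```

The posterior then jointly reflects parameter uncertainty and the inferred prediction-error level, which is exactly what makes treatment 3 robust when the variance is not known a priori.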
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
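A genetic algorithm over candidate-input subsets, in the spirit described above, can be sketched as bit-mask evolution (the fitness function below is a synthetic stand-in, not the SSME neural network objective):

```python
import random

def ga_select_inputs(score, n_inputs, pop_size=30, generations=40, seed=2):
    """Toy genetic algorithm: each individual is a bit mask selecting a
    subset of candidate inputs, scored by a user-supplied fitness.
    Elitist survival, one-point crossover, one-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_inputs)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_inputs)      # one-point crossover
            child = a[:cut] + b[cut:]
            j = rng.randrange(n_inputs)           # one-bit mutation
            child[j] = not child[j]
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

# Synthetic fitness: rewards picking inputs {0, 2, 5}, penalizes extras.
useful = {0, 2, 5}
def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & useful) - 0.4 * len(chosen - useful)

best = ga_select_inputs(fitness, n_inputs=10)
```

In the application above, the fitness would instead be a (costly) measure of how well a network trained on the selected inputs predicts the target parameter.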
Ecosystem carbon storage and flux in upland/peatland watersheds in northern Minnesota. Chapter 9.
David F. Grigal; Peter C. Bates; Randall K. Kolka
2011-01-01
Carbon (C) storage and fluxes (inputs and outputs of C per unit time) are central issues in global change. Spatial patterns of C storage on the landscape, both that in soil and in biomass, are important from an inventory perspective and for understanding the biophysical processes that affect C fluxes. Regional and national estimates of C storage are uncertain because...
NASA Astrophysics Data System (ADS)
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to determine which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource available to spend on modelling flood inundations that are 'fit for purpose' for the modelling objectives. A balance therefore needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine-scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs.
These outputs include whole-domain maximum inundation indicators and flood wave travel time, in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output and the stage of the flood hydrograph at which it is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can significantly influence the implied sensitivities.
High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.
Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie
2011-01-01
The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. This is a difficult task due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the large number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research, a High-Order Sliding-Mode Control is proposed. It is applied to two well-known models, the Bergman Minimal Model and the Sorensen Model, to test its robustness with respect to uncertain dynamics and patients' parameter variability. The controller designed based on the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.
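The Bergman Minimal Model used as one of the test plants can be sketched with a simple Euler integration (the parameter values below are illustrative placeholders, not the identified patient parameters, and the sliding-mode controller itself is not shown):

```python
def bergman_step(G, X, I, u, dt=0.1,
                 p1=0.03, p2=0.02, p3=1.3e-5, n=0.1, Gb=80.0, Ib=7.0):
    """One Euler step of the Bergman minimal model.
    G: plasma glucose, X: remote insulin action, I: plasma insulin,
    u: exogenous insulin infusion (the control input).
    Gb, Ib are basal glucose and insulin levels."""
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n * (I - Ib) + u
    return G + dt * dG, X + dt * dX, I + dt * dI

# With no insulin infusion, glucose relaxes toward its basal level Gb.
G, X, I = 120.0, 0.0, 7.0
for _ in range(2000):
    G, X, I = bergman_step(G, X, I, u=0.0)
```

A controller such as the high-order sliding-mode scheme above would compute u at each step from the measured glucose and the target profile instead of holding it at zero.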
Water and solute mass balance of five small, relatively undisturbed watersheds in the U.S.
Peters, N.E.; Shanley, J.B.; Aulenbach, Brent T.; Webb, R.M.; Campbell, D.H.; Hunt, R.; Larsen, M.C.; Stallard, R.F.; Troester, J.; Walker, J.F.
2006-01-01
Geochemical mass balances were computed for water years 1992-1997 (October 1991 through September 1997) for the five watersheds of the U.S. Geological Survey Water, Energy, and Biogeochemical Budgets (WEBB) Program to determine the primary regional controls on yields of the major dissolved inorganic solutes. The sites, which vary markedly with respect to climate, geology, physiography, and ecology, are: Allequash Creek, Wisconsin (low-relief, humid continental forest); Andrews Creek, Colorado (cold alpine, taiga/tundra, and subalpine boreal forest); Río Icacos, Puerto Rico (lower montane, wet tropical forest); Panola Mountain, Georgia (humid subtropical piedmont forest); and Sleepers River, Vermont (humid northern hardwood forest). Streamwater output fluxes were determined by constructing empirical multivariate concentration models including discharge and seasonal components. Input fluxes were computed from weekly wet-only or bulk precipitation sampling. Despite uncertainties in input fluxes arising from poorly defined elevation gradients, lack of dry-deposition and occult-deposition measurements, and uncertain sea-salt contributions, the following was concluded: (1) for solutes derived primarily from rock weathering (Ca, Mg, Na, K, and H4SiO4), net fluxes (outputs in streamflow minus inputs in deposition) varied by two orders of magnitude, which is attributed to a large gradient in rock weathering rates controlled by climate and geologic parent material; (2) the net flux of atmospherically derived solutes (NH4, NO3, SO4, and Cl) was similar among sites, with SO4 being the most variable and NH4 and NO3 generally retained (except for NO3 at Andrews); and (3) relations among monthly solute fluxes and differences among solute concentration model parameters yielded additional insights into comparative biogeochemical processes at the sites. © 2005 Elsevier B.V. All rights reserved.
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
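The core ingredient of such correlated Monte Carlo rate calculations, drawing lognormal inputs whose underlying normals share a common systematic, can be sketched with a 2x2 Cholesky factor (all numbers below are illustrative, not taken from the paper):

```python
import math
import random

def correlated_lognormal_pair(mu1, mu2, sigma1, sigma2, rho, rng):
    """Draw two lognormal factors whose underlying normal variables
    have correlation rho, via the 2x2 Cholesky factor of the
    correlation matrix [[1, rho], [rho, 1]]."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x1 = sigma1 * z1
    x2 = sigma2 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return math.exp(mu1 + x1), math.exp(mu2 + x2)

# Two resonance strengths normalized to the same reference share a
# systematic: sample with rho = 0.7, then check the realized
# correlation of the log-samples.
rng = random.Random(3)
pairs = [correlated_lognormal_pair(0.0, 0.0, 0.1, 0.1, 0.7, rng)
         for _ in range(50000)]
logs = [(math.log(a), math.log(b)) for a, b in pairs]
ma = sum(a for a, _ in logs) / len(logs)
mb = sum(b for _, b in logs) / len(logs)
cov = sum((a - ma) * (b - mb) for a, b in logs) / len(logs)
va = sum((a - ma) ** 2 for a, _ in logs) / len(logs)
vb = sum((b - mb) ** 2 for _, b in logs) / len(logs)
rho_hat = cov / math.sqrt(va * vb)
```

When many resonance contributions are summed with such common systematics, the correlated draws widen the resulting rate uncertainty relative to independent sampling, which is the effect quantified in the abstract.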
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in the caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of the reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to identify the parameters to which the risk profiles are most sensitive. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to preevent water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
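The Bayesian updating in step 2 can be illustrated with a toy example. The linear-reservoir model, noise level, and uniform prior below are illustrative assumptions, not the study's distributed hydrological model; a random-walk Metropolis sampler shows how conditioning on discharge observations narrows a parameter distribution:

```python
import numpy as np

# Hypothetical sketch of Bayesian parameter updating against discharge data,
# using a toy linear-reservoir model Q = k * S. All names and values are
# illustrative, not taken from the paper's model.
rng = np.random.default_rng(0)

def simulate_discharge(k, storage):
    return k * storage  # toy model: discharge proportional to storage

storage = np.linspace(1.0, 10.0, 50)
k_true = 0.3
observed = simulate_discharge(k_true, storage) + rng.normal(0, 0.1, storage.size)

def log_posterior(k):
    if k <= 0 or k >= 1:  # prior: k ~ Uniform(0, 1)
        return -np.inf
    resid = observed - simulate_discharge(k, storage)
    return -0.5 * np.sum((resid / 0.1) ** 2)

# Random-walk Metropolis: the posterior over k narrows relative to the prior.
k, samples = 0.5, []
logp = log_posterior(k)
for _ in range(5000):
    k_new = k + rng.normal(0, 0.05)
    logp_new = log_posterior(k_new)
    if np.log(rng.uniform()) < logp_new - logp:
        k, logp = k_new, logp_new
    samples.append(k)

posterior = np.array(samples[1000:])  # discard burn-in
print(posterior.mean(), posterior.std())
```

The posterior mean lands near the value that generated the data, and its spread is far narrower than the uniform prior, which is the "width reduction" the abstract describes.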
Pérez-López, Paula; Montazeri, Mahdokht; Feijoo, Gumersindo; Moreira, María Teresa; Eckelman, Matthew J
2018-06-01
The economic and environmental performance of microalgal processes has been widely analyzed in recent years. However, few studies propose an integrated process-based approach to evaluate economic and environmental indicators simultaneously. Biodiesel is usually the single product and the effect of environmental benefits of co-products obtained in the process is rarely discussed. In addition, there is wide variation of the results due to inherent variability of some parameters as well as different assumptions in the models and limited knowledge about the processes. In this study, two standardized models were combined to provide an integrated simulation tool allowing the simultaneous estimation of economic and environmental indicators from a unique set of input parameters. First, a harmonized scenario was assessed to validate the joint environmental and techno-economic model. The findings were consistent with previous assessments. In a second stage, a Monte Carlo simulation was applied to evaluate the influence of variable and uncertain parameters in the model output, as well as the correlations between the different outputs. The simulation showed a high probability of achieving favorable environmental performance for the evaluated categories and a minimum selling price ranging from $11 gal⁻¹ to $106 gal⁻¹. Greenhouse gas emissions and minimum selling price were found to have the strongest positive linear relationship, whereas eutrophication showed weak correlations with the other indicators (namely greenhouse gas emissions, cumulative energy demand and minimum selling price). Process parameters (especially biomass productivity and lipid content) were the main source of variation, whereas uncertainties linked to the characterization methods and economic parameters had limited effect on the results. Copyright © 2018 Elsevier B.V. All rights reserved.
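The Monte Carlo propagation described above can be sketched as follows. The parameter ranges and the toy yield, greenhouse-gas, and selling-price relations are assumptions made for illustration, not the paper's integrated model:

```python
import numpy as np

# Illustrative Monte Carlo propagation: uncertain process parameters are
# sampled, pushed through toy economic and environmental models, and the
# correlations between outputs are inspected.
rng = np.random.default_rng(42)
n = 10_000

biomass_productivity = rng.uniform(15.0, 35.0, n)   # g m^-2 d^-1 (assumed)
lipid_content = rng.uniform(0.15, 0.45, n)          # fraction (assumed)

fuel_yield = biomass_productivity * lipid_content    # toy yield proxy
ghg = 120.0 / fuel_yield                             # toy GHG intensity
msp = 300.0 / fuel_yield + rng.normal(0, 0.5, n)     # toy min. selling price

# GHG and MSP share the same yield driver, so they correlate strongly,
# echoing the strong positive linear relationship reported above.
corr = np.corrcoef(ghg, msp)[0, 1]
print(round(corr, 3))
```

Because both outputs are driven by the same sampled yield, the correlation between them emerges directly from the shared input uncertainty, which is exactly what the output-correlation analysis in the study measures.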
Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.
Singh, Raj Mohan
2008-04-01
Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides and herbicides into nearby streams, a matter of serious concern for water managers and environmental researchers. The application of chemicals in agricultural fields and the transport of these chemicals into streams are uncertain, which complicates reliable stream quality prediction. The characteristics of the applied chemical and the percentage of area under chemical application are among the main inputs that determine pollutant concentrations in streams, and each of these inputs and outputs may contain measurement errors. A fuzzy rule based model is well suited to addressing uncertainty in the inputs by incorporating overlapping membership functions for each input, even when data availability is limited. In this study, the ability of fuzzy sets to address uncertainty in the input-output relationship is used to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi River system, are used to develop the fuzzy rule based models. The performance of the developed methodology is found to be encouraging.
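A minimal sketch of fuzzy rule based estimation in this spirit; the membership function breakpoints, rule consequents, and the single input (fraction of area treated) are illustrative assumptions, not the study's calibrated rule base:

```python
import numpy as np

# Toy fuzzy rule-based estimator: one input (fraction of catchment area under
# chemical application), three overlapping fuzzy sets, three rules.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def estimate_concentration(area_fraction):
    # Fuzzify the input: LOW / MEDIUM / HIGH treated-area fraction.
    mu = {
        "low": tri(area_fraction, -0.4, 0.0, 0.4),
        "med": tri(area_fraction, 0.1, 0.5, 0.9),
        "high": tri(area_fraction, 0.6, 1.0, 1.4),
    }
    # Rule consequents: representative stream concentrations (ug/L, assumed).
    conc = {"low": 0.5, "med": 2.0, "high": 5.0}
    # Weighted-average (Sugeno-style) defuzzification.
    num = sum(mu[k] * conc[k] for k in mu)
    den = sum(mu[k] for k in mu)
    return num / den

print(estimate_concentration(0.0))   # fully LOW -> 0.5
print(estimate_concentration(0.5))   # fully MEDIUM -> 2.0
```

Inputs between the breakpoints activate two overlapping sets at once, so the output interpolates smoothly between rule consequents, which is how the overlapping membership functions absorb input uncertainty.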
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. 
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
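The core SCgPC idea can be illustrated in one dimension: project a model response onto probabilists' Hermite polynomials using Gauss-Hermite quadrature nodes as the collocation points. The toy model and polynomial order below are assumptions for the sketch; RAVEN's implementation adds multiple dimensions, anisotropy, and adaptive basis construction on top of this:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# 1-D stochastic collocation for gPC: a response to a standard-normal input is
# expanded in probabilists' Hermite polynomials He_k, with coefficients
# computed by Gauss-Hermite quadrature at the collocation nodes.
def model(x):            # toy response to a standard-normal input (assumed)
    return np.exp(0.3 * x)

order = 6
nodes, weights = He.hermegauss(order + 1)     # weight function exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)        # normalize to the N(0,1) pdf

# PCE coefficients: c_k = E[f(X) He_k(X)] / k!  (since E[He_k^2] = k!)
coeffs = np.array([
    np.sum(weights * model(nodes) * He.hermeval(nodes, [0] * k + [1])) / factorial(k)
    for k in range(order + 1)
])

# The PCE mean is c_0 and the variance is sum_{k>=1} k! c_k^2.
pce_mean = coeffs[0]
pce_var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
exact_mean = np.exp(0.3 ** 2 / 2)             # E[exp(0.3 X)] for X ~ N(0,1)
print(pce_mean, exact_mean)
```

With only seven model evaluations, the expansion reproduces the exact mean and variance of this smooth response to high accuracy, which is the efficiency argument for SCgPC over random sampling on well-behaved problems.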
Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2000-01-01
Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal to metal and metal to rock friction coefficient for analysis of waste package and drip shield damage to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations.
The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the Total System Performance Assessment for the License Application (TSPA-LA). The results from this scientific analysis also address project requirements related to parameter uncertainty, as specified in the acceptance criteria in ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]). This document was prepared under the direction of ''Technical Work Plan for: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 170528]) which directed the work identified in work package ARTM05. This document was prepared under procedure AP-SIII.9Q, ''Scientific Analyses''. There are no specific known limitations to this analysis.
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of the process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC, referred to as BBGC MuSo, is used). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data).
In our research different likelihood function formulations were used in order to examine the effect of different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
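The likelihood variants can be sketched as functions of the measurement-uncertainty-weighted RMSE. The exact functional forms below are plausible paraphrases for illustration, not the study's definitions:

```python
import numpy as np

# Sketch of likelihood formulations built from a weighted RMSE; the specific
# forms (exp(-e), 1-e, 1-e^2) are assumptions in the spirit of the abstract.
def weighted_rmse(sim, obs, sigma):
    return np.sqrt(np.mean(((sim - obs) / sigma) ** 2))

def likelihood(sim, obs, sigma, kind="exponential"):
    e = weighted_rmse(sim, obs, sigma)
    if kind == "exponential":
        return np.exp(-e)
    if kind == "linear":
        return max(0.0, 1.0 - e)
    if kind == "quadratic":
        return max(0.0, 1.0 - e ** 2)
    raise ValueError(kind)

obs = np.array([1.0, 2.0, 3.0])
sigma = np.full(3, 0.5)
perfect = likelihood(obs, obs, sigma)          # e = 0 -> exp(0) = 1
off = likelihood(obs + 0.5, obs, sigma)        # e = 1 -> exp(-1)
print(perfect, off)
```

The choice of form changes how sharply the likelihood penalizes misfit, which is why the study compares calibrations under each variant.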
Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.
Zheng, Song
2015-09-01
In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered with parameters perturbation and delayed impulses. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties together with the state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Sometimes lifetime testing is performed on an actual COPV in service in an effort to validate the reliability model that is the basis for certifying the continued flight worthiness of its sisters. Currently, testing of such a Kevlar 49®/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure, and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the data base and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine", that is, reducing the probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty on the Weibull shape parameter for lifetime since testing several would be necessary.
Otitis Media with Effusion: Its Significance in the Deaf Student.
1982-06-01
Otitis media with effusion currently ranks as the most common cause of hearing loss in children of preschool and school age. Otitis media with...makes the difference between usable auditory input and useless noise. The etiology of otitis media with effusion is uncertain. Its educational...paper explores the extent of otitis media with effusion, its effects, what methods are available for detection, current and future methods of medical
Optimal second order sliding mode control for linear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-11-01
In this paper an optimal second order sliding mode controller (OSOSMC) is proposed to track a linear uncertain system. The optimal controller based on the linear quadratic regulator method is designed for the nominal system. An integral sliding mode controller is combined with the optimal controller to ensure robustness of the linear system which is affected by parametric uncertainties and external disturbances. To achieve finite time convergence of the sliding mode, a nonsingular terminal sliding surface is combined with the integral sliding surface, giving rise to a second order sliding mode controller. The main advantage of the proposed OSOSMC is that the control input is substantially reduced and it becomes chattering free. Simulation results confirm the superiority of the proposed OSOSMC over some existing methods. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.
Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem
2015-03-01
This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be on-line estimated and compensated via robust adaptive schemes. The stability analysis of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Big bang nucleosynthesis revisited via Trojan Horse method measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pizzone, R. G.; Spartá, R.; Spitaleri, C.
Nuclear reaction rates are among the most important inputs for understanding primordial nucleosynthesis and, therefore, for a quantitative description of the early universe. An up-to-date compilation of direct cross-sections of the ²H(d, p)³H, ²H(d, n)³He, ⁷Li(p, α)⁴He, and ³He(d, p)⁴He reactions is given. These are among the most uncertain cross-sections used as input for big bang nucleosynthesis calculations. Their measurements through the Trojan Horse method are also reviewed and compared with direct data. The reaction rates and the corresponding recommended errors in this work were used as input for primordial nucleosynthesis calculations to evaluate their impact on the ²H, ³,⁴He, and ⁷Li primordial abundances, which are then compared with observations.
Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints.
Chen, Ziting; Li, Zhijun; Chen, C L Philip
2017-06-01
An adaptive neural control strategy for multiple input multiple output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, radial basis function neural network, barrier Lyapunov function (BLF), and disturbance observer. By ensuring the boundedness of the BLF of the closed-loop system, it is demonstrated that the output tracking is achieved with all states remaining in the constraint sets, and the general assumption on nonsingularity of unknown control coefficient matrices has been eliminated. It is rigorously proved that the constructed adaptive neural control guarantees the semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, the simulation studies on a 2-DOF robotic manipulator system indicate that the designed adaptive control is effective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distribution for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Second-order sliding mode control with experimental application.
Eker, Ilyas
2010-07-01
In this article, a second-order sliding mode control (2-SMC) is proposed for second-order uncertain plants using an equivalent control approach to improve the performance of control systems. A Proportional + Integral + Derivative (PID) sliding surface is used for the sliding mode. The sliding mode control law is derived using a direct Lyapunov stability approach and asymptotic stability is proved theoretically. The performance of the closed-loop system is analysed through an experimental application to an electromechanical plant to show the feasibility and effectiveness of the proposed second-order sliding mode control and factors involved in the design. The second-order plant parameters are experimentally determined using input-output measured data. The results of the experimental application are presented to make a quantitative comparison with the traditional (first-order) sliding mode control (SMC) and PID control. It is demonstrated that the proposed 2-SMC system improves the performance of the closed-loop system with better tracking specifications in the case of external disturbances, better behavior of the output and faster convergence of the sliding surface while maintaining stability. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
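A minimal simulation of a sliding mode controller with a PID sliding surface on a second-order plant illustrates the idea. The plant parameters, gains, and tanh boundary-layer smoothing below are assumptions chosen for the sketch, not the experimentally identified values from the article:

```python
import numpy as np

# Toy SMC with a PID sliding surface tracking a step reference on a
# second-order plant x'' = u - c*x' + disturbance (c = 4, assumed).
dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
x, xd, integ = 0.0, 0.0, 0.0        # position, velocity, integral of error
ref = 1.0                            # step reference
lam, ki, K = 4.0, 4.0, 10.0          # surface and switching gains (assumed)

pos = []
for tk in t:
    e = ref - x
    integ += e * dt
    s = -xd + lam * e + ki * integ   # PID sliding surface (ref is constant)
    u = K * np.tanh(s / 0.05)        # smoothed switching control
    xdd = u - 4.0 * xd + 0.2 * np.sin(10.0 * tk)   # plant + disturbance
    xd += xdd * dt
    x += xd * dt
    pos.append(x)

print(abs(pos[-1] - ref))            # small steady-state tracking error
```

The tanh term stands in for the discontinuous switching law to avoid chattering in this discrete-time sketch; despite the sinusoidal disturbance, the state reaches the surface and the tracking error decays.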
Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.
Pan, Yongping; Yu, Haoyong
2017-06-01
This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza
2015-12-01
In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
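The Sobol' indices used for the global sensitivity analysis can be illustrated with a brute-force Monte Carlo estimator (the paper instead computes them cheaply from the sparse PCE surrogate). The toy model and its coefficients are assumptions for the sketch:

```python
import numpy as np

# First-order Sobol' indices by the Saltelli (2010) pick-and-freeze estimator:
# S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), where AB_i takes column i from B.
rng = np.random.default_rng(1)
n, d = 100_000, 3

def model(x):
    # toy response with one dominant input, echoing how a single factor
    # (e.g. discharge rate or fault permeability) can dominate the variance
    return 5.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2] ** 2

A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # resample only input i
    S.append(np.mean(fB * (model(ABi) - fA)) / var)

print(np.round(S, 3))                # largest index belongs to input 0
```

For this nearly additive model the indices sum to about one, and the dominant input carries almost all of the output variance, which is the kind of ranking the sensitivity analysis delivers.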
Uncertainty in simulating wheat yields under climate change
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P. J.; Rötter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P. K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.; Izaurralde, R. C.; Kersebaum, K. C.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Osborne, T. M.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M. A.; Shcherbak, I.; Steduto, P.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J. W.; Williams, J. R.; Wolf, J.
2013-09-01
Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
NASA Astrophysics Data System (ADS)
Guymon, Gary L.; Yen, Chung-Cheng
1990-07-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
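The cascaded two-point probability step can be sketched with Rosenblueth's point-estimate method: the deterministic model is evaluated at every (mean ± standard deviation) combination of the lumped uncertain parameters, and the evaluations are averaged to approximate the output mean and coefficient of variation. The drawdown model and parameter values below are made-up illustrations, not the study's groundwater model.

```python
import itertools
import math

def two_point_estimate(model, means, stds):
    """Rosenblueth's two-point estimate method: evaluate the model at
    every combination of (mean - std, mean + std) for each uncertain
    parameter, then average to approximate the output mean and std."""
    n = len(means)
    outputs = []
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        params = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        outputs.append(model(*params))
    mean = sum(outputs) / len(outputs)
    var = sum((y - mean) ** 2 for y in outputs) / len(outputs)
    return mean, math.sqrt(var)

# Hypothetical drawdown model with three lumped uncertain parameters:
# hydraulic conductivity K, storage coefficient S, and a source-sink term Q.
def drawdown(K, S, Q):
    return Q / (K * S)

mean, std = two_point_estimate(drawdown, means=[10.0, 0.2, 5.0], stds=[2.0, 0.05, 1.0])
cv = std / mean  # an analogue of the simulated coefficient of variation
```

With three uncertain parameters, only 2^3 = 8 deterministic model runs are needed, which is the computational saving the lumping strategy aims for.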
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which helps us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking events are treated as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
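The two-step pipeline above can be sketched in miniature. Step 1 here uses a method-of-moments fit of the Gamma shape and scale from inter-spike intervals (a simplified stand-in for the paper's state-space estimator); step 2 maps those spiking characteristics to a hypothetical LIF input parameter. The conversion formula and all numbers are illustrative, not the paper's.

```python
import random
import statistics

random.seed(42)

# Step 1: treat spiking as a Gamma process and estimate its shape (k) and
# scale (theta) from inter-spike intervals (ISIs) by the method of moments.
true_shape, true_scale = 4.0, 2.5  # hypothetical ground truth, ms
isis = [random.gammavariate(true_shape, true_scale) for _ in range(5000)]

mean_isi = statistics.fmean(isis)
var_isi = statistics.variance(isis)
shape_hat = mean_isi ** 2 / var_isi   # k = mean^2 / var
scale_hat = var_isi / mean_isi        # theta = var / mean

# Step 2: convert the spiking characteristics into a temporal input
# parameter of a leaky integrate-and-fire (LIF) model. This conversion
# (a mean drive mu) is a made-up placeholder for the paper's formulas;
# the second input parameter would be derived analogously.
tau_m, v_thresh = 20.0, 1.0           # membrane time constant (ms), threshold
rate = 1.0 / (shape_hat * scale_hat)  # mean firing rate from Gamma moments
mu = v_thresh * (1.0 + tau_m * rate)  # hypothetical mean-input conversion
```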
Robust linear quadratic designs with respect to parameter uncertainty
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1992-01-01
The authors derive a linear quadratic regulator (LQR) which is robust to parametric uncertainty by using the overbounding method of I. R. Petersen and C. V. Hollot (1986). The resulting controller is determined from the solution of a single modified Riccati equation. It is shown that, when applied to a structural system, the controller gains add robustness by minimizing the potential energy of uncertain stiffness elements, and minimizing the rate of dissipation of energy through uncertain damping elements. A worst-case disturbance in the direction of the uncertainty is also considered. It is proved that performance robustness has been increased with the robust LQR when compared to a mismatched LQR design where the controller is designed on the nominal system, but applied to the actual uncertain system.
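The scalar case makes the modified-Riccati idea concrete. The sketch below is schematic, not the Petersen-Hollot overbound itself: for the uncertain plant x' = (a + da)x + bu with |da| <= da_max, the state weight is inflated to absorb the uncertainty, and the scalar continuous algebraic Riccati equation is solved in closed form.

```python
import math

# Scalar illustration of a robust LQR via a modified Riccati equation.
# Plant: x' = (a + da) x + b u with |da| <= da_max (uncertain parameter).
def lqr_scalar(a, b, q, r):
    """Solve the scalar CARE  2 a p - (b**2 / r) p**2 + q = 0  for p > 0
    and return the state-feedback gain k = b p / r."""
    p = r * (a + math.sqrt(a * a + (b * b) * q / r)) / (b * b)
    return b * p / r

a, b, q, r, da_max = 1.0, 1.0, 1.0, 1.0, 0.5
k_nominal = lqr_scalar(a, b, q, r)
# Schematic "robust" design: inflate q by a term dominating the uncertainty.
k_robust = lqr_scalar(a, b, q + da_max ** 2, r)

# The robust gain is larger and stabilizes every plant in the uncertainty set:
for da in (-da_max, 0.0, da_max):
    assert (a + da) - b * k_robust < 0  # closed-loop pole in the left half-plane
```

A mismatched design (gain computed from the nominal a but applied to a + da) is exactly the comparison the abstract describes; here the inflated-weight gain retains a stability margin that the nominal gain may lose.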
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
NASA Astrophysics Data System (ADS)
Shishebori, Davood; Babadi, Abolghasem Yousefi
2018-03-01
This study investigates the reliable multi-configuration capacitated logistics network design problem (RMCLNDP) under system disturbances, which involves locating facilities, establishing transportation links, and allocating their limited capacities to customers so as to satisfy demand at the minimum expected total cost (including location costs, link construction costs, and expected costs under normal and disturbance conditions). In addition, two types of risks are considered: (I) an uncertain environment and (II) system disturbances. A two-level mathematical model is proposed for formulating the problem. Because of the uncertain parameters of the model, an efficacious possibilistic robust optimization approach is utilized. To evaluate the model, a drug supply chain network (SCN) design is studied. Finally, an extensive sensitivity analysis is performed on the critical parameters. The results show that the proposed approach is efficient and worthwhile for analyzing real practical problems.
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.
2010-12-01
Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. 
The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
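A tiny 1-D analogue of the two-step procedure: locate the discontinuity in a model response from a few sparse runs, then fit a separate surrogate on each side instead of one global expansion. The step response, threshold, and linear (rather than polynomial chaos) surrogates are synthetic stand-ins chosen so the sketch stays self-contained.

```python
# Sparse "model runs" of a response with a jump (the 1-D analogue of the
# AMOC stream-function discontinuity across the uncertain-parameter space).
xs = [0.05 * i for i in range(21)]
ys = [x if x < 0.6 else x + 2.0 for x in xs]

# Detect the largest jump between consecutive samples (a crude stand-in
# for the Bayesian discontinuity inference in the abstract).
jumps = [(abs(ys[i + 1] - ys[i]), i) for i in range(len(ys) - 1)]
_, i_star = max(jumps)
x_disc = 0.5 * (xs[i_star] + xs[i_star + 1])  # estimated discontinuity location

# Fit a separate surrogate on each side of the estimated discontinuity.
left = [(x, y) for x, y in zip(xs, ys) if x <= xs[i_star]]
right = [(x, y) for x, y in zip(xs, ys) if x >= xs[i_star + 1]]

def fit_line(pts):
    """Ordinary least-squares line fit, returning (intercept, slope)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

(aL, bL), (aR, bR) = fit_line(left), fit_line(right)
```

Per-side surrogates recover each smooth branch exactly here, which is the point of splitting the domain at the discontinuity before building spectral representations.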
Probabilistic Radiological Performance Assessment Modeling and Uncertainty
NASA Astrophysics Data System (ADS)
Tauxe, J.
2004-12-01
A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of potential future risk to human receptors from the disposal of low-level radioactive waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating these through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that support their decision-making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious. It does not represent any particular site and is meant to be a generic example.
A practitioner could, however, start with this model as a GoldSim template and, by adding site specific features and parameter values (distributions), use this model as a starting point for a real model to be used in real decision making.
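The Monte Carlo propagation described above reduces to a short loop: draw each uncertain input from its distribution, run the (here, deliberately toy) transport/dose model, and summarize the output distribution. The model form, distributions, and parameter values below are entirely fictitious, in the same spirit as the generic PA example.

```python
import random
import statistics

random.seed(0)

# Toy contaminant-transport/dose model: dose scales with release rate,
# inversely with dilution, times a dose conversion factor.
def annual_dose(release_rate, dilution, dose_factor):
    return release_rate / dilution * dose_factor

doses = []
for _ in range(10_000):                         # one realization per iteration
    release = random.lognormvariate(0.0, 0.5)   # uncertain release rate
    dilution = random.uniform(50.0, 150.0)      # dilution in environmental media
    dcf = random.normalvariate(1.0e-4, 1.0e-5)  # dose conversion factor
    doses.append(annual_dose(release, dilution, dcf))

mean_dose = statistics.fmean(doses)
p95_dose = sorted(doses)[int(0.95 * len(doses))]  # compare with the objective
```

The 95th-percentile dose, not just the mean, is what would be compared against a regulatory performance objective; that the tail exceeds the mean is exactly the uncertainty a deterministic analysis would hide.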
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mink, S. E. de; Belczynski, K., E-mail: S.E.deMink@uva.nl, E-mail: kbelczyn@astrouw.edu.pl
The initial mass function (IMF), binary fraction, and distributions of binary parameters (mass ratios, separations, and eccentricities) are indispensable inputs for simulations of stellar populations. It is often claimed that these are poorly constrained, significantly affecting evolutionary predictions. Recently, dedicated observing campaigns have provided new constraints on the initial conditions for massive stars. Findings include a larger close binary fraction and a stronger preference for very tight systems. We investigate the impact on the predicted merger rates of neutron stars and black holes. Despite the changes from previous assumptions, we only find an increase of less than a factor of 2 (insignificant compared with evolutionary uncertainties of typically a factor of 10–100). We further show that the uncertainties in the new initial binary properties do not significantly affect (within a factor of 2) our predictions of double compact object merger rates. An exception is the uncertainty in the IMF (variations by a factor of 6 up and down). No significant changes in the distributions of final component masses, mass ratios, chirp masses, and delay times are found. We conclude that the predictions are, for practical purposes, robust against uncertainties in the initial conditions concerning binary parameters, with the exception of the IMF. This eliminates an important layer of the many uncertain assumptions affecting the predictions of merger detection rates with the gravitational wave detectors aLIGO/aVirgo.
Probabilistic margin evaluation on accidental transients for the ASTRID reactor project
NASA Astrophysics Data System (ADS)
Marquès, Michel
2014-06-01
ASTRID is a technological demonstrator of the Sodium-cooled Fast Reactor (SFR) under development. The conceptual design studies are being conducted in accordance with the Generation IV reactor objectives, particularly in terms of improving safety. For the hypothetical events belonging to the accidental category "severe accident prevention situations", which have a very low frequency of occurrence, the safety demonstration is no longer based on a deterministic demonstration with conservative assumptions on models and parameters, but on a "Best-Estimate Plus Uncertainty" (BEPU) approach. This BEPU approach is presented in this paper for an Unprotected Loss-of-Flow (ULOF) event. The Best-Estimate (BE) analysis of this ULOF transient is performed with the CATHARE2 code, which is the French reference system code for SFR applications. The objective of the BEPU analysis is twofold: first, to evaluate the safety margin to sodium boiling while taking into account the uncertainties on the input parameters of the CATHARE2 code (twenty-two uncertain input parameters have been identified, which can be classified into five groups: reactor power, accident management, pump characteristics, reactivity coefficients, and thermal parameters and head losses); second, to quantify the contribution of each input uncertainty to the overall uncertainty of the safety margins, in order to refocus R&D efforts on the most influential factors. This paper focuses on the methodological aspects of the evaluation of the safety margin. At least for the preliminary phase of the project (conceptual design), a probabilistic criterion has been fixed in the context of this BEPU analysis; this criterion is the value of the margin to sodium boiling which has a 95% probability of being exceeded, obtained with a confidence level of 95% (i.e., the M5,95 percentile of the margin distribution).
This paper presents two methods used to assess this percentile: the Wilks method and the Bootstrap method; the effectiveness of the two methods is compared on the basis of 500 simulations performed with the CATHARE2 code. We conclude that, with only 100 simulations performed with the CATHARE2 code, which is a workable number in the conceptual design phase of the ASTRID project where the models and hypotheses are often modified, it is best to evaluate the M5,95 percentile of the margin to sodium boiling with the bootstrap method, which provides a slightly conservative result. On the other hand, in order to obtain an accurate estimation of the M5,95 percentile, for the safety report for example, it will be necessary to perform at least 300 simulations with the CATHARE2 code. In this case, both methods (Wilks and Bootstrap) give equivalent results.
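Both percentile-estimation routes can be sketched on synthetic margins. The Gaussian "margin to sodium boiling" samples below are stand-ins for CATHARE2 output; the order-statistic choice and bootstrap settings are illustrative.

```python
import random

random.seed(1)

# Synthetic stand-ins for n = 100 code runs of the margin to sodium boiling.
margins = [random.gauss(100.0, 15.0) for _ in range(100)]

# Wilks (order-statistic) route: with n = 59 runs, the sample minimum is
# already a 95%-confidence lower bound on the 5th percentile, since
# 1 - 0.95**59 >= 0.95. With n = 100, the second-smallest value suffices
# because P(Binomial(100, 0.05) >= 2) is about 0.96 >= 0.95.
wilks_m5_95 = sorted(margins)[1]

# Bootstrap route: resample the runs with replacement, record each
# replicate's empirical 5th percentile, and take the lower 5% tail of
# the replicate distribution as the 95%-confidence value.
reps = []
for _ in range(2000):
    sample = sorted(random.choices(margins, k=len(margins)))
    reps.append(sample[4])                  # empirical 5th percentile
boot_m5_95 = sorted(reps)[int(0.05 * len(reps))]
```

Both estimates sit well below the sample mean, as conservative lower bounds on the margin should; comparing their spread over many synthetic datasets is essentially the 500-run comparison the paper performs.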
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both the ANN (0.982, 95% CI: 0.966-0.999) and the SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANNs and SVMs were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
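The evaluation metric used throughout, AROC, can be computed directly from classifier output scores via the rank-sum (Mann-Whitney) identity AUC = P(score_case > score_control). The scores below are made up; in the study they would come from an ANN or SVM fed with RNFLT-derived input parameters.

```python
# Hypothetical classifier outputs for healthy and glaucoma eyes.
healthy_scores = [0.05, 0.10, 0.20, 0.35, 0.40]
glaucoma_scores = [0.30, 0.55, 0.70, 0.85, 0.95]

def aroc(neg, pos):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score_pos > score_neg), counting ties as 1/2."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

auc = aroc(healthy_scores, glaucoma_scores)  # 23 of 25 pairs ordered correctly
```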
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry, along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times.
For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. Multiple discrete solutions can exist for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
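The core inverse-mixing calculation with Monte Carlo uncertainty can be sketched for the simplest case: two upgradient waters mixing to match one downgradient concentration. Each realization perturbs the uncertain inputs (here, a 10% coefficient of variation) and solves f1*c1 + (1 - f1)*c2 = c_down for the mixing fraction. The tracer and concentrations are illustrative, not from the study.

```python
import random
import statistics

random.seed(7)

# Illustrative chloride concentrations (mg/L): two upgradient end members
# and the observed downgradient target water.
c1_mean, c2_mean, c_down = 10.0, 40.0, 22.0

fractions = []
for _ in range(5000):
    c1 = random.gauss(c1_mean, 0.1 * c1_mean)  # CV = 10% on each input
    c2 = random.gauss(c2_mean, 0.1 * c2_mean)
    if abs(c2 - c1) < 1e-9:
        continue  # degenerate realization: end members indistinguishable
    f1 = (c_down - c2) / (c1 - c2)             # mass-balance mixing fraction
    fractions.append(f1)

f1_mean = statistics.fmean(fractions)          # deterministic answer is 0.6
f1_sd = statistics.stdev(fractions)            # spread induced by input CVs
```

The spread in f1 is what then propagates into the carbon-14 travel-time uncertainty; a real NETPATH run adds many constituents and mineral-phase constraints, which is where the discrete, nonunique solution groups arise.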
Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4
NASA Astrophysics Data System (ADS)
Gasore, J.; Prinn, R. G.
2012-12-01
The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997, Cohen & Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost as the dimension of the uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations; urban versus rural, for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution is assumed with a priori fixed scale and location parameters, to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of Black Carbon, Organic Carbon, and Sulfate. We have carried out Monte Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model.
The global mean concentration of major tropospheric oxidants did not show a significant variation in response to the variation in input parameters. However, a substantial variation at regional and temporal scales has been found. Tatang M. A., Pan W., Prinn R. G., McRae G. J., An efficient method for parametric uncertainty analysis of numerical geophysical models, J. Geophys. Res., 102, 21925-21932, 1997. Cohen, J. B., and R. G. Prinn, Development of a fast, urban chemistry metamodel for inclusion in global models, Atmos. Chem. Phys., 11, 7629-7656, doi:10.5194/acp-11-7629-2011, 2011. Emmons L. K., Walters S., Hess P. G., Lamarque J.-F., Pfister G. G., Fillmore D., Granier C., Guenther A., Kinnison D., Laepple T., Orlando J., Tie X., Tyndall G., Wiedinmyer C., Baughcum S. L., Kloster S., Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4), Geosci. Model Dev., 3, 43-67, 2010.
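The PCM/PCE machinery can be shown in one dimension: collocate an (expensive) model response at Gauss-Hermite nodes, project onto Hermite polynomials, and read the response mean and variance off the expansion coefficients. The toy response f below stands in for a MOZART-4 run over one uncertain lognormal-distribution parameter; everything here is a minimal sketch, not the six-parameter setup of the study.

```python
import math

# Toy model response to a standard normal uncertain parameter x.
def f(x):
    return math.exp(0.4 * x)

# 3-point Gauss-Hermite collocation (probabilists' convention).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
He = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]  # Hermite basis
norms = [1.0, 1.0, 2.0]                                   # <He_k^2> = k!

# PCE coefficients by discrete orthogonal projection:
# c_k = E[f(x) He_k(x)] / <He_k^2>, with the expectation quadratured.
coeffs = [
    sum(w * f(x) * He[k](x) for x, w in zip(nodes, weights)) / norms[k]
    for k in range(3)
]

pce_mean = coeffs[0]                 # response mean = 0th coefficient
pce_var = sum(c * c * n for c, n in zip(coeffs[1:], norms[1:]))
exact_mean = math.exp(0.4 ** 2 / 2)  # E[exp(0.4 x)] = exp(0.08), for checking
```

Three model runs recover the response mean to about four decimal places here; the curse the abstract describes is that the number of such runs grows rapidly with the number of uncertain parameters, hence the dimensional reduction of the output.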
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
Robust optimization based energy dispatch in smart grids considering demand uncertainty
NASA Astrophysics Data System (ADS)
Nassourou, M.; Puig, V.; Blesa, J.
2017-01-01
In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust optimization based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of the standard Economic MPC over MPC tracking, a comparison (e.g., average daily cost) between standard MPC tracking, standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties in order to minimise economic costs and guarantee service reliability of the system.
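A single-step toy of the robust economic dispatch idea: choose grid purchase and local generation to cover every demand in an uncertainty interval at minimum cost. Prices, capacities, and the interval are made up, and the brute-force grid search replaces the MPC optimization of the paper.

```python
# Illustrative one-period robust dispatch: meet worst-case demand at least cost.
price_grid, price_gen = 0.20, 0.35      # cost per kWh of grid energy vs. local generation
demand_nominal, demand_dev = 100.0, 15.0  # demand lies in [85, 115] kWh
gen_capacity, grid_capacity = 80.0, 60.0

best = None
for u_gen in range(0, int(gen_capacity) + 1):
    for u_grid in range(0, int(grid_capacity) + 1):
        # robust feasibility: cover the worst-case (maximum) demand
        if u_gen + u_grid >= demand_nominal + demand_dev:
            cost = price_grid * u_grid + price_gen * u_gen
            if best is None or cost < best[0]:
                best = (cost, u_gen, u_grid)

cost_star, gen_star, grid_star = best   # cheap grid power is used first
```

The robust solution buys all available cheap grid power and tops up with local generation to cover the +15 kWh worst case; an Economic MPC would repeat such an optimization over a receding horizon with the affine disturbance dependence described above.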
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modelling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and the integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step, and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework.
In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather formulate the problem as a distributionally robust optimization problem in which the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
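The min-max structure can be shown on a toy finite problem: the outer loop minimizes over a scalar design, the inner loop maximizes expected loss over a small ambiguity set of discrete probability measures. The discretized measures stand in for the measure-approximation step; the loss, inputs, and weights are illustrative only.

```python
# Possible values of the uncertain model input and an ambiguity set of
# candidate probability measures over them (each a tuple of weights).
inputs = [0.5, 1.0, 2.0]
ambiguity_set = [
    (0.6, 0.3, 0.1),
    (1 / 3, 1 / 3, 1 / 3),
    (0.1, 0.3, 0.6),
]

def loss(z, xi):
    return (z - xi) ** 2                 # misfit of design z under input xi

def worst_case(z):
    """Inner maximization: worst expected loss over the ambiguity set."""
    return max(
        sum(p * loss(z, xi) for p, xi in zip(probs, inputs))
        for probs in ambiguity_set
    )

# Outer minimization by grid search over candidate designs.
designs = [i / 100.0 for i in range(0, 301)]
z_star = min(designs, key=worst_case)    # robust design balances the extremes
```

The robust design lands where the two extreme measures' expected losses cross, rather than at either measure's own optimum, which is the qualitative behavior of the PDE-constrained min-max problem as well.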
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively).
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual input extended Tofts model, ve was significantly less than that in the dual input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
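A minimal numerical sketch of the dual-input extended Tofts model discussed above, using the standard form of that model from the DCE-MRI literature. The input functions and parameter values below are purely illustrative stand-ins, not patient data or the study's fitted values.

```python
import numpy as np

def dual_input_extended_tofts(t, Ca, Cpv, Ktrans, kep, vp, hpi):
    """Tissue concentration for a dual-input extended Tofts model:

        C_t(t) = vp*C_in(t) + Ktrans * int_0^t C_in(tau) exp(-kep*(t - tau)) dtau

    with the dual input C_in = HPI*Ca + (1 - HPI)*Cpv mixing the arterial (Ca)
    and portal-venous (Cpv) supplies via the hepatic perfusion index HPI.
    """
    dt = t[1] - t[0]
    Cin = hpi * Ca + (1.0 - hpi) * Cpv
    kernel = np.exp(-kep * t)
    conv = np.convolve(Cin, kernel)[: len(t)] * dt   # discrete convolution
    return vp * Cin + Ktrans * conv

# Illustrative gamma-variate-like input functions (hypothetical, not measured)
t = np.linspace(0.0, 5.0, 501)                  # minutes
Ca = t * np.exp(-t / 0.4)                       # arterial input
Cpv = 0.6 * t * np.exp(-t / 0.8)                # slower, damped portal input
Ct = dual_input_extended_tofts(t, Ca, Cpv, Ktrans=0.3, kep=0.8, vp=0.05, hpi=0.66)
```

Fitting Ktrans, kep, vp, and HPI of such a forward model to measured concentration curves is what yields the pharmacokinetic parameters compared in the study.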
NASA Astrophysics Data System (ADS)
Li, Jian; Zhang, Qingling; Ren, Junchao; Zhang, Yanhao
2017-10-01
This paper studies the problem of robust stability and stabilisation for uncertain large-scale interconnected nonlinear descriptor systems via proportional plus derivative state feedback or proportional plus derivative output feedback. The basic idea of this work is to use the well-known differential mean value theorem to handle the nonlinearity, so that the considered nonlinear descriptor systems can be transformed into linear parameter varying systems. By using a parameter-dependent Lyapunov function, a decentralised proportional plus derivative state feedback controller and a decentralised proportional plus derivative output feedback controller are designed, respectively, such that the closed-loop system is quadratically normal and quadratically stable. Finally, a practical hypersonic vehicle simulation example and a numerical example are given to illustrate the effectiveness of the results obtained in this paper.
Robust passivity analysis for discrete-time recurrent neural networks with mixed delays
NASA Astrophysics Data System (ADS)
Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu
2015-02-01
This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays consist of both discrete time-varying and distributed time-delays in a given range, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on a new bounding technique and an appropriate Lyapunov functional, a sufficient condition is derived that guarantees the desired robust passivity of the DRNNs, expressed in terms of a family of linear matrix inequalities (LMIs). Some free-weighting matrices are introduced to reduce the conservatism of the criterion. A numerical example is given to illustrate the effectiveness and applicability of the result.
Improving the Effect and Efficiency of FMD Control by Enlarging Protection or Surveillance Zones
Halasa, Tariq; Toft, Nils; Boklund, Anette
2015-01-01
An epidemic of foot-and-mouth disease (FMD) in a FMD-free country with large exports of livestock and livestock products would result in profound economic damage. This could be reduced by rapid and efficient control of the disease spread. The objectives of this study were to estimate the economic impact of a hypothetical FMD outbreak in Denmark based on changes to the economic assumptions of the model, and to investigate whether the control of an FMD epidemic can be improved by combining the enlargement of protection or surveillance zones with pre-emptive depopulation or emergency vaccination. The stochastic spatial simulation model DTU-DADS was used to simulate the spread of FMD in Denmark. The control strategies were the basic EU and Danish strategy, pre-emptive depopulation, suppressive or protective vaccination, enlarging protection or surveillance zones, and a combination of pre-emptive depopulation or emergency vaccination with enlarged protection or surveillance zones. Herds are detected either based on basic detection through the appearance of clinical signs, or as a result of surveillance in the control zones. The economic analyses consisted of direct costs and export losses. Sensitivity analysis was performed on uncertain and potentially influential input parameters. Enlarging the surveillance zones from 10 to 15 km, combined with pre-emptive depopulation over a 1-km radius around detected herds resulted in the lowest total costs. This was still the case even when the different input parameters were changed in the sensitivity analysis. Changing the resources for clinical surveillance did not affect the epidemic consequences. In conclusion, an FMD epidemic in Denmark would have a larger economic impact on the agricultural sector than previously anticipated. Furthermore, the control of a potential FMD outbreak in Denmark may be improved by combining pre-emptive depopulation with an enlarged protection or surveillance zone. PMID:26664996
Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios
NASA Astrophysics Data System (ADS)
McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.
2008-12-01
Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. 
Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
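The "ensemble importance sampling" step above, in which prior parameter samples are reweighted by the likelihood of current observations, can be sketched in miniature. The scalar parameter, Gaussian prior, and single observation below are illustrative stand-ins for the study's soil/vegetation parameters and soil-moisture/chloride data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble for one uncertain land-surface parameter (standard normal prior)
n = 50_000
theta = rng.standard_normal(n)

# One observation assumed to satisfy y = theta + noise, noise ~ N(0, sigma^2)
y, sigma = 1.4, 0.5
log_w = -0.5 * ((y - theta) / sigma) ** 2      # unnormalized log-likelihood
w = np.exp(log_w - log_w.max())
w /= w.sum()                                   # normalized importance weights

# Data-conditioned (posterior) estimate of the parameter
posterior_mean = np.sum(w * theta)
# Conjugate-Gaussian reference for this toy setup: E[theta | y] = y / (1 + sigma^2)
```

Recharge predictions are then made by running the model over the weighted ensemble, so poorly supported parameter values contribute little to the predictive distribution.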
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
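A minimal sketch of the kind of network and input encoding described above: a one-hidden-layer neural network trained on synthetic hourly data whose load depends on temperature and day of week. The data-generating relation, network size, and learning rate are all assumptions for illustration, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly data: load driven by temperature and day-of-week (illustrative)
n = 2000
temp = 15 + 10 * np.sin(np.linspace(0, 40 * np.pi, n)) + rng.normal(0, 1, n)
dow = np.arange(n) // 24 % 7
load = 500 + 8 * np.abs(temp - 18) + 30 * (dow < 5) + rng.normal(0, 5, n)

# Feature matrix: temperature plus one-hot day-of-week, standardized
X = np.column_stack([temp, np.eye(7)[dow]])
X = (X - X.mean(0)) / (X.std(0) + 1e-9)
y = (load - load.mean()) / load.std()

# One-hidden-layer tanh network trained by full-batch gradient descent
W1 = rng.normal(0, 0.3, (X.shape[1], 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.3, 16); b2 = 0.0
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)          # error before training
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / n; gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / n; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
_, pred1 = forward(X)
mse1 = np.mean((pred1 - y) ** 2)          # error after training
```

The parametric study in the abstract amounts to repeating such a fit with candidate input columns (dew point, forecast wind, daylight length, ...) added to the basic temperature/load set and comparing held-out forecast error.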
Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan
2017-11-01
A neural network (NN) adaptive control design problem is addressed for a class of uncertain multi-input multi-output (MIMO) nonlinear systems in block-triangular form. The considered systems contain uncertain dynamics, their states are subject to bounded constraints, and couplings among the various inputs and outputs appear in each subsystem. To stabilize this class of systems, a novel adaptive control strategy is constructed using the backstepping design technique and NNs. Novel integral barrier Lyapunov functionals (BLFs) are employed to prevent violation of the full state constraints. The proposed strategy not only guarantees the boundedness of the closed-loop system and drives the outputs to follow the reference signals, but also ensures that all states remain in the predefined compact sets. Moreover, previous BLF-based designs work with transformed constraints on the errors and accordingly require the bounds of the virtual controllers to be determined explicitly; the proposed design relaxes this conservative limitation of traditional BLF-based controls for full state constraints, and is the first to control this class of MIMO systems with full state constraints. The performance of the proposed control strategy is verified through a simulation example.
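For reference, a log-type barrier Lyapunov function of the kind commonly used in this literature (a generic form; the paper's integral BLFs differ in construction) for a tracking error e subject to the constraint |e| < k_b is

```latex
V(e) \;=\; \frac{1}{2}\,\ln\!\left(\frac{k_b^{2}}{k_b^{2}-e^{2}}\right),
\qquad |e| < k_b .
```

Since V(e) grows without bound as |e| approaches k_b, keeping V bounded along closed-loop trajectories guarantees the constraint is never violated; near the origin V(e) is approximately e^2/(2 k_b^2), recovering a standard quadratic Lyapunov term.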
Benefits estimates of highway capital improvements with uncertain parameters.
DOT National Transportation Integrated Search
2006-01-01
This report warrants consideration in the development of goals, performance measures, and standard cost-benefit methodology required of transportation agencies by the Virginia 2006 Appropriations Act. The Virginia Department of Transportation has beg...
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
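The marginalization approach above integrates the posterior with Markov-chain Monte Carlo. A toy one-dimensional sketch of that machinery, with a Gaussian pseudo-likelihood over a source "range" standing in for the matched-field posterior probability density (the true-range, noise, and prior values are assumptions for illustration):

```python
import math
import random

random.seed(0)

# Toy log-posterior for a source range r (km): Gaussian likelihood around the
# true range, uniform prior on [0, 10] km
r_true, sigma = 6.2, 0.8
def log_post(r):
    if not 0.0 <= r <= 10.0:
        return -math.inf
    return -0.5 * ((r - r_true) / sigma) ** 2

# Metropolis-Hastings random walk over the range parameter
r, samples = 5.0, []
for i in range(20_000):
    prop = r + random.gauss(0.0, 0.5)
    d = log_post(prop) - log_post(r)
    if d >= 0 or random.random() < math.exp(d):
        r = prop
    if i >= 2000:                    # discard burn-in
        samples.append(r)

post_mean = sum(samples) / len(samples)
```

In the paper's setting the chain runs over environment, noise, and multiple source-location parameters jointly, and histograms of the retained samples give the joint marginal distributions for source range and depth.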
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification is dependent on the reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed and is impractical given the cost and limited flight hardware for certification testing purposes. One approach to confirm the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar49 (Dupont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure, and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty on the Weibull shape parameter for lifetime since testing several vessels would be necessary.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
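The influence-coefficient idea described above (percent change in calculated net thrust per 1.0-percent change in each input) can be sketched with a generic gross-thrust-minus-ram-drag relation. The formula and nominal values below are illustrative, not the F404 in-flight thrust deck.

```python
def net_thrust(params):
    """Illustrative net thrust: momentum flux + pressure thrust - ram drag."""
    return (params["mdot"] * params["v_exit"]
            + (params["p_exit"] - params["p_amb"]) * params["a_exit"]
            - params["mdot"] * params["v_flight"])

nominal = {"mdot": 70.0, "v_exit": 600.0, "p_exit": 1.05e5,
           "p_amb": 1.0e5, "a_exit": 0.3, "v_flight": 250.0}

f0 = net_thrust(nominal)
influence = {}
for name in nominal:
    p = dict(nominal)
    p[name] *= 1.01                   # +1.0 percent perturbation of one input
    influence[name] = 100.0 * (net_thrust(p) - f0) / f0  # percent thrust change
```

Combining such influence coefficients with the estimated measurement accuracy of each input gives the overall net-thrust accuracy estimate mentioned in the abstract.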
Flight control application of new stability robustness bounds for linear uncertain systems
NASA Technical Reports Server (NTRS)
Yedavalli, Rama K.
1993-01-01
This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than the existing bounds since the technique is meant for only real parameter perturbations (in contrast to specializing complex variation case to real parameter case). The proposed theory is illustrated with application to several flight control examples.
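A small sketch of the Kronecker-algebra machinery involved: the Lyapunov equation is solved by Kronecker vectorization, and a classical Patel-Toda-type sufficient perturbation bound is computed from it. This is a textbook bound of the same family, not the paper's sharper Kronecker-based result; the state matrix is an arbitrary stable example.

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])            # nominal stable (Hurwitz) state matrix
n = A.shape[0]

# Solve A^T P + P A = -2 I via Kronecker vectorization:
# (I (x) A^T + A^T (x) I) vec(P) = vec(-2 I)
I = np.eye(n)
K = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(K, (-2.0 * I).flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                    # symmetrize against round-off

# Sufficient robustness bound: A + E remains Hurwitz for any real
# perturbation matrix E with ||E||_2 < 1 / sigma_max(P)
bound = 1.0 / np.linalg.norm(P, 2)
```

The bound follows from d/dt (x^T P x) = -2||x||^2 + 2 x^T E^T P x < 0 whenever ||E||_2 ||P||_2 < 1; sharper bounds of the kind derived in the paper exploit the real, structured nature of the perturbations.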
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
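The Fisher-information metric at the heart of the dissertation can be illustrated on a deliberately simple first-order model (a hypothetical stand-in for an equivalent-circuit branch; the model, noise level, and inputs are assumptions, not the dissertation's). The Fisher information matrix is built from output sensitivities to the parameters, here obtained by finite differences, and different input trajectories yield different amounts of information.

```python
import numpy as np

def simulate(theta, u):
    """First-order model y_k = x_k with x_{k+1} = a x_k + b u_k; theta = (a, b)."""
    a, b = theta
    x, ys = 0.0, []
    for uk in u:
        x = a * x + b * uk
        ys.append(x)
    return np.array(ys)

def fisher_information(theta, u, sigma=0.01, eps=1e-6):
    """FIM = (1/sigma^2) * S^T S with finite-difference output sensitivities S."""
    y0 = simulate(theta, u)
    S = np.empty((len(u), len(theta)))
    for j in range(len(theta)):
        tp = np.array(theta, float); tp[j] += eps
        S[:, j] = (simulate(tp, u) - y0) / eps
    return S.T @ S / sigma ** 2

theta = (0.95, 0.1)
rng = np.random.default_rng(2)
u_const = np.ones(200)                  # weakly exciting constant input
u_rich = rng.choice([-1.0, 1.0], 200)   # PRBS-like input, richer excitation

F_const = fisher_information(theta, u_const)
F_rich = fisher_information(theta, u_rich)
```

Input-shaping optimization as studied in the dissertation maximizes a scalarization of such a matrix (e.g., its determinant) over admissible current trajectories; richer excitation typically yields a better-conditioned FIM and hence faster, more accurate parameter estimation.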
Niu, Ben; Li, Lu
2018-06-01
This brief proposes a new neural-network (NN)-based adaptive output tracking control scheme for a class of disturbed multiple-input multiple-output uncertain nonlinear switched systems with input delays. By combining the universal approximation ability of radial basis function NNs and adaptive backstepping recursive design with an improved multiple Lyapunov function (MLF) scheme, a novel adaptive neural output tracking controller design method is presented for the switched system. The feature of the developed design is that different coordinate transformations are adopted to overcome the conservativeness caused by adopting a common coordinate transformation for all subsystems. It is shown that all the variables of the resulting closed-loop system are semiglobally uniformly ultimately bounded under a class of switching signals in the presence of MLF and that the system output can follow the desired reference signal. To demonstrate the practicability of the obtained result, an adaptive neural output tracking controller is designed for a mass-spring-damper system.
NASA Astrophysics Data System (ADS)
Hagemann, M.; Jeznach, L. C.; Park, M. H.; Tobiason, J. E.
2016-12-01
Extreme precipitation events such as tropical storms and hurricanes are by their nature rare, yet have disproportionate and adverse effects on surface water quality. In the context of drinking water reservoirs, common concerns of such events include increased erosion and sediment transport and influx of natural organic matter and nutrients. As part of an effort to model the effects of an extreme precipitation event on water quality at the reservoir intake of a major municipal water system, this study sought to estimate extreme-event watershed responses including streamflow and exports of nutrients and organic matter for use as inputs to a 2-D hydrodynamic and water quality reservoir model. Since extreme-event watershed exports are highly uncertain, we characterized and propagated predictive uncertainty using a quasi-Monte Carlo approach to generate reservoir model inputs. Three storm precipitation depths, corresponding to recurrence intervals of 5, 50, and 100 years, were converted to streamflow in each of 9 tributaries by volumetrically scaling 2 storm hydrographs from the historical record. Rating-curve models for concentration, calibrated using 10 years of data for each of 5 constituents, were then used to estimate the parameters of a multivariate lognormal probability model of constituent concentrations, conditional on each scenario's storm date and streamflow. A quasi-random Halton sequence (n = 100) was drawn from the conditional distribution for each event scenario, and used to generate input files to a calibrated CE-QUAL-W2 reservoir model. The resulting simulated concentrations at the reservoir's drinking water intake constitute a low-discrepancy sample from the estimated uncertainty space of extreme-event source water-quality.
Limiting factors to the suitability of this approach include poorly constrained relationships between hydrology and constituent concentrations, a high-dimensional space from which to generate inputs, and relatively long run-time for the reservoir model. This approach proved useful in probing a water supply's resilience to extreme events, and to inform management responses, particularly in a region such as the American Northeast where climate change is expected to bring such events with higher frequency and intensity than have occurred in the past.
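The quasi-random Halton draw used above can be sketched directly: each dimension is a radical-inverse (van der Corput) sequence in a distinct prime base, and the resulting points in the unit square are mapped through an inverse normal CDF to lognormal samples. The lognormal means and standard deviations below are illustrative placeholders, not the study's calibrated rating-curve parameters.

```python
import math
import statistics

def van_der_corput(i, base):
    """Radical inverse of integer i >= 1 in the given base (one Halton dimension)."""
    v, denom = 0.0, 1.0
    while i > 0:
        i, rem = divmod(i, base)
        denom *= base
        v += rem / denom
    return v

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence in the unit hypercube."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n + 1)]

# Map the low-discrepancy points to two lognormal "constituent concentrations"
points = halton(100)
norm = statistics.NormalDist()
samples = [[math.exp(0.5 + 1.0 * norm.inv_cdf(u)),    # constituent 1 (assumed)
            math.exp(-1.0 + 0.5 * norm.inv_cdf(v))]   # constituent 2 (assumed)
           for u, v in points]
```

Because the points fill the unit square far more evenly than pseudo-random draws, a modest n = 100 sample covers the estimated uncertainty space well enough to drive the expensive reservoir model.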
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.
2014-10-01
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
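A scalar sketch of the flow-map-composition idea, under simplifying assumptions: the dynamics dx/dt = -x^3 admit an exact short-time flow map, which is approximated by a low-degree polynomial over the range of uncertain initial conditions and then composed to reach long times. (The paper composes gPC expansions; this one-dimensional Chebyshev version only illustrates the composition mechanism.)

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Dynamics dx/dt = -x^3; exact time-dt flow map serves as the "short-time" map
dt = 0.1
def flow_dt(x):
    return x / np.sqrt(1.0 + 2.0 * dt * x ** 2)

# Approximate the short-time map by a low-degree Chebyshev fit over the
# range of uncertain initial conditions
lo, hi = 0.0, 1.6
xs = np.linspace(lo, hi, 200)
p = C.Chebyshev.fit(xs, flow_dt(xs), deg=10, domain=[lo, hi])

# Long-time map by composition: 50 short steps -> t = 5
def composed(x0, steps=50):
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = p(x)
    return x

x0 = np.array([0.5, 1.0, 1.5])     # samples of the uncertain initial condition
t_final = 50 * dt
exact = x0 / np.sqrt(1.0 + 2.0 * t_final * x0 ** 2)   # exact solution at t_final
approx = composed(x0)
```

Because each short-time map involves little stretching or folding, degree 10 suffices, while the composed map is effectively a polynomial whose degree grows exponentially in the number of steps, exactly the trade-off the abstract describes.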
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
The Importance of Studying Past Extreme Floods to Prepare for Uncertain Future Extremes
NASA Astrophysics Data System (ADS)
Burges, S. J.
2016-12-01
Hoyt and Langbein (1955), in their book "Floods", wrote: "...meteorologic and hydrologic conditions will combine to produce superfloods of unprecedented magnitude. We have every reason to believe that in most rivers past floods may not be an accurate measure of ultimate flood potentialities. It is this superflood with which we are always most concerned." I provide several examples to offer some historical perspective on assessing extreme floods. In one example, flooding in the Miami Valley, OH in 1913 claimed 350 lives. The engineering and socio-economic challenges facing the Morgan Engineering Co in how to mitigate against future flood damage and loss of life when limited information was available provide guidance about ways to face an uncertain hydroclimate future, particularly one of a changed climate. A second example forces us to examine mixed flood populations and illustrates the huge uncertainty in assigning flood magnitude and exceedance probability to extreme floods in such cases. There is large uncertainty in flood frequency estimates; knowledge of the total flood hydrograph, not the peak flood flow rate alone, is what is needed for hazard mitigation assessment or design. Some challenges in estimating the complete flood hydrograph in an uncertain future climate, including demands on hydrologic models and their inputs, are addressed.
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
Polynomial chaos expansion with random and fuzzy variables
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.
2016-06-01
A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
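The Legendre basis mentioned above is straightforward to construct via the standard three-term recurrence. The snippet below (an illustrative sketch, not the authors' code) builds P_n and numerically checks the orthogonality on [-1, 1] that makes this basis suitable for uniform random or fuzzy variables:

```python
def legendre(n, x):
    # three-term recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def inner(m, n, steps=20000):
    # midpoint rule for the inner product  integral_{-1}^{1} P_m(x) P_n(x) dx
    h = 2.0 / steps
    return sum(
        legendre(m, -1.0 + (i + 0.5) * h) * legendre(n, -1.0 + (i + 0.5) * h)
        for i in range(steps)
    ) * h
```

Orthogonality (inner(m, n) = 0 for m != n, and 2/(2n+1) for m = n) is what lets the PCE coefficients be computed independently of whether the underlying variable is interpreted as random or fuzzy.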
Uncertainty Quantification in Aeroelasticity
NASA Astrophysics Data System (ADS)
Beran, Philip; Stanford, Bret; Schrock, Christopher
2017-01-01
Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
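The control-variates idea can be illustrated in a few lines: estimate E[Y] while subtracting a correlated quantity whose mean is known exactly. The integrand below is a toy stand-in for a simulated PQI, not the porous-media model:

```python
import math
import random

def cv_estimate(n=10000, seed=1):
    rng = random.Random(seed)
    ys, cs = [], []
    for _ in range(n):
        u = rng.random()
        ys.append(math.exp(u))  # quantity of interest; true E[Y] = e - 1
        cs.append(u)            # control variate with known mean E[C] = 0.5
    my, mc = sum(ys) / n, sum(cs) / n
    cov = sum((y - my) * (c - mc) for y, c in zip(ys, cs)) / n
    varc = sum((c - mc) ** 2 for c in cs) / n
    beta = cov / varc           # optimal control-variate coefficient
    return my - beta * (mc - 0.5)

estimate = cv_estimate()
```

Because the control is strongly correlated with the integrand, the corrected estimator's variance is only the residual (1 - rho^2) fraction of the plain Monte Carlo variance, which is the mechanism behind needing fewer simulations for the same accuracy.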
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinear and uncertain dynamics, so a single conventional model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable set. When new test data arrive, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
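The final combination step, weighting local model predictions by posterior probabilities from Bayes' rule, can be sketched as follows. This is a minimal sketch assuming a Gaussian likelihood and a uniform prior; the weighting details of the actual soft sensor may differ:

```python
import math

def posterior_weights(errors, sigma=1.0):
    # Gaussian likelihood of each local model's recent prediction error,
    # uniform prior; normalizing gives posterior model probabilities
    like = [math.exp(-0.5 * (e / sigma) ** 2) for e in errors]
    total = sum(like)
    return [l / total for l in like]

def combine(predictions, errors):
    # posterior-probability-weighted combination of local predictions
    return sum(w * p for w, p in zip(posterior_weights(errors), predictions))
```

A local model that has recently predicted well (small error) dominates the ensemble output, while poorly performing local models are suppressed smoothly rather than discarded.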
Li, Yongming; Tong, Shaocheng
2017-12-01
In this paper, an adaptive fuzzy output-constrained control design approach is addressed for multi-input multi-output uncertain stochastic nonlinear systems in nonstrict-feedback form. The nonlinear systems addressed in this paper possess unstructured uncertainties, unknown gain functions and unknown stochastic disturbances. Fuzzy logic systems are utilized to tackle the problem of unknown nonlinear uncertainties. The barrier Lyapunov function technique is employed to solve the output-constrained problem. In the framework of backstepping design, an adaptive fuzzy control scheme is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained to a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
NASA Astrophysics Data System (ADS)
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters, using 1 year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites 600 km away from the original sites. Latent variables (multipliers) were used to explicitly treat uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs and, contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
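DREAM(zs) is a multi-chain MCMC sampler; its core accept/reject logic is that of Metropolis sampling, sketched below on a toy one-parameter problem. This illustrates only the inference machinery, not the CLM setup; the Gaussian "posterior" and all values are assumptions for the example:

```python
import math
import random

def metropolis(loglike, x0, n=5000, step=0.5, seed=3):
    rng = random.Random(seed)
    x, lx = x0, loglike(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)         # random-walk proposal
        lp = loglike(xp)
        if math.log(rng.random()) < lp - lx:  # Metropolis accept/reject
            x, lx = xp, lp
        chain.append(x)
    return chain

# Toy Gaussian "posterior" centered on the true parameter value 1.5
chain = metropolis(lambda th: -0.5 * ((th - 1.5) / 0.2) ** 2, x0=0.0)
```

After burn-in, the chain's histogram approximates the posterior distribution; DREAM(zs) accelerates this by running multiple chains and proposing moves from an archive of past states.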
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length exceeded the true value by 4 cm, dose errors of up to 160% were found. The results indicate the level of accuracy to which each input parameter must be determined in order to obtain accurate organ dose results.
Statistical Performances of Resistive Active Power Splitter
NASA Astrophysics Data System (ADS)
Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul
2016-03-01
In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) is proposed. It is based on an active cell composed of a field-effect transistor in cascade with a shunted resistor at the input and the output (resistive amplifier topology). The PWS uncertainty versus resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, the device gain can easily be controlled by varying a resistance. This provides a useful tool to analyse the statistical sensitivity of the system in an uncertain environment.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Inevitable uncertainties in the model parameters may heavily degrade the quality of the estimate. It is therefore desirable to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to the current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance over the allowable range of the uncertain model parameter(s), and this determines its guaranteed cost. In the worst case it outperforms the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
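The worst-case comparison at the heart of this abstract can be illustrated with a scalar example: evaluate a filter's steady-state error variance across the allowable range of an uncertain noise parameter and take the maximum. The scalar model below is an illustrative stand-in, not the paper's phase-estimation dynamics:

```python
def steady_state_variance(q, r=1.0, a=1.0, iters=500):
    # iterate the scalar Riccati recursion of a Kalman filter to convergence;
    # q = process-noise intensity (the uncertain parameter), r = measurement noise
    p = 1.0
    for _ in range(iters):
        pp = a * a * p + q      # predict
        k = pp / (pp + r)       # Kalman gain
        p = (1.0 - k) * pp      # update
    return p

# worst-case (guaranteed-cost style) evaluation over the uncertain range of q
worst = max(steady_state_variance(q) for q in (0.5, 1.0, 1.5))
```

A guaranteed-cost design would choose the filter gains to minimize this worst-case value, rather than optimizing only for the nominal q at the center of the range.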
Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.
Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu
2015-01-01
This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piecewise-linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement, and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning vehicle safety and mechanical constraints. An augmented neural network is developed to compensate online for the unknown nonlinearities, and a novel adaptive law is developed to estimate both NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability of the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
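The prescribed performance function (PPF) mentioned above is commonly taken as an exponentially decaying envelope; the following minimal sketch (with illustrative default values, not the paper's tuning) shows the usual form and the constraint it imposes on the tracking error:

```python
import math

def ppf(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    # envelope decays exponentially from rho0 to the steady-state bound rho_inf;
    # decay sets the guaranteed convergence rate
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def within_envelope(t, error):
    # the tracking error must satisfy -ppf(t) < error < ppf(t) at every time t
    return -ppf(t) < error < ppf(t)
```

Keeping the error inside this shrinking funnel is what enforces the prescribed convergence rate, overshoot bound, and steady-state accuracy simultaneously.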
NASA Astrophysics Data System (ADS)
Mishra, H.; Karmakar, S.; Kumar, R.
2016-12-01
Risk assessment is no longer simple when it involves multiple uncertain variables. Uncertainties in risk assessment result mainly from (1) lack of knowledge of the input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for the simultaneous treatment of random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the activities of the Turbhe landfill (Navi Mumbai, India) for various time horizons. The LandSim-simulated concentrations of six heavy metals in ground water have then been used in the health risk model. The water intake, exposure duration, exposure frequency, bioavailability and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and the uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the summation of hazard quotients, of the heavy metals Co, Cu, Mn, Ni, Zn and Fe for the male and female populations has been quantified and found to be high (HI > 1) for all the considered time horizons, which indicates the possibility of adverse health effects on the population residing near the Turbhe landfill.
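The probabilistic half of this hybrid approach can be sketched as plain Monte Carlo propagation of a hazard quotient HQ = (C · IR · EF · ED) / (BW · AT · RfD); summing HQs over metals gives the Hazard Index. All distributions and parameter values below are illustrative assumptions, not the study's calibrated inputs, and the fuzzy alpha-cut treatment is omitted:

```python
import random

def hazard_quotient_samples(n=5000, seed=7):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        C = rng.lognormvariate(0.0, 0.3)    # metal concentration in water, mg/L
        IR = rng.uniform(1.5, 2.5)          # water intake rate, L/day
        EF, ED = 350.0, 30.0                # exposure frequency (d/yr), duration (yr)
        BW = rng.normalvariate(65.0, 8.0)   # body weight, kg
        AT = 365.0 * ED                     # averaging time, days
        RfD = 0.02                          # reference dose, mg/(kg*day)
        samples.append((C * IR * EF * ED) / (BW * AT * RfD))
    return samples

hq = hazard_quotient_samples()
```

The empirical distribution of the samples then yields the non-exceedance probability of HQ (or of the summed HI) at any threshold of interest, such as HI = 1.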
Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study
NASA Astrophysics Data System (ADS)
Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2013-04-01
The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties on the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and on the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, first, we identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Second, we assigned a rainfall multiplier parameter for each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, due to the high number of parameters of the SWAT model, first, we screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique. 
Subsequently, we only considered the most sensitive parameters for parameter optimization and UA. To explicitly account for the stream flow uncertainty, we assumed that the stream flow measurement error increases linearly with the stream flow value. To assess the uncertainty and infer posterior distributions of the parameters, we used a Markov Chain Monte Carlo (MCMC) sampler, differential evolution adaptive metropolis (DREAM), that generates candidate points in each individual chain by sampling from an archive of past states. It is shown that the marginal posterior distributions of the rainfall multipliers vary widely between individual events, as a consequence of rainfall measurement errors and the spatial variability of the rain. Only a few of the rainfall events are well defined. The marginal posterior distributions of the SWAT model parameter values are well defined and identified by DREAM, within their prior ranges. The posterior distributions of the output uncertainty parameter values also show that the stream flow data are highly uncertain. The approach of using rainfall multipliers to treat rainfall uncertainty for a complex model has an impact on the model parameter marginal posterior distributions and on the model results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Cox, Louis Anthony Tony
2006-12-01
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is determining what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters related to inputs to the human (such as the error signal, its derivative and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. The experiment uses a function generator to provide the input signal and an oscilloscope to record input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
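The least-squares step described above, recovering amplitude and phase from sampled sine-wave response data, reduces to a 2x2 linear fit of y(t) ≈ a·sin(wt) + b·cos(wt). A self-contained sketch on synthetic data (not the experiment's measurements):

```python
import math

def fit_sine(ts, ys, w):
    # least-squares fit of y(t) = a*sin(w t) + b*cos(w t) via 2x2 normal equations
    s11 = sum(math.sin(w * t) ** 2 for t in ts)
    s22 = sum(math.cos(w * t) ** 2 for t in ts)
    s12 = sum(math.sin(w * t) * math.cos(w * t) for t in ts)
    b1 = sum(y * math.sin(w * t) for t, y in zip(ts, ys))
    b2 = sum(y * math.cos(w * t) for t, y in zip(ts, ys))
    det = s11 * s22 - s12 * s12
    a = (b1 * s22 - b2 * s12) / det
    b = (b2 * s11 - b1 * s12) / det
    return math.hypot(a, b), math.atan2(b, a)   # amplitude, phase

# synthetic response: amplitude 2.0, phase 0.3 rad at excitation frequency 5 rad/s
ts = [0.01 * i for i in range(1000)]
ys = [2.0 * math.sin(5.0 * t + 0.3) for t in ts]
amp, phase = fit_sine(ts, ys, 5.0)
```

Repeating the fit at several excitation frequencies near resonance traces out the modal frequency response, from which natural frequency and damping can be extracted.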
Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development
1986-10-01
parameter, sample size and fatigue test duration. The required inputs are: 1. Residual strength Weibull shape parameter (ALPR); 2. Fatigue life Weibull shape parameter (ALPL); 3. Sample size (N); 4. Test duration (T). The report's interactive FORTRAN input routine prompts for these values (listing truncated in the source):

      WRITE(*,1)
    1 FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
      READ(*,*) ALPR
      ALPRI = 1.0/ALPR
      WRITE(*,2)
    2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
      READ(*,*) ALPL
      ALPLI = 1.0/ALPL
      WRITE(*,3)
    3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
      READ(*,*) N
      AN = N
      WRITE(*,4)
    4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
      READ(*,*) T
      RALP = ALPL/ALPR
      ARGR = 1 ...
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloy has a unique capability to return to its original shape after physical deformation upon applying heat, thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
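Taguchi's signal-to-noise ratios referenced above have standard forms: "larger-the-better" (appropriate for cutting speed) and "smaller-the-better" (kerf width, surface roughness). A minimal sketch with illustrative values, not the experiment's data:

```python
import math

def sn_larger_better(ys):
    # Taguchi S/N for responses where larger is better (e.g. cutting speed)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_better(ys):
    # Taguchi S/N for responses where smaller is better (e.g. kerf width)
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))
```

Parameter levels are ranked by their mean S/N: the level with the highest ratio for a given response is taken as optimal for that response.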
Optimal quantum cloning based on the maximin principle by using a priori information
NASA Astrophysics Data System (ADS)
Kang, Peng; Dai, Hong-Yi; Wei, Jia-Hua; Zhang, Ming
2016-10-01
We propose an optimal 1 → 2 quantum cloning method based on the maximin principle by making full use of a priori information of amplitude and phase about the general cloned qubit input set, which is a simply connected region enclosed by a "longitude-latitude grid" on the Bloch sphere. Theoretically, the fidelity of the optimal quantum cloning machine derived from this method is the largest in terms of the maximin principle compared with that of any other machine. The problem solving is an optimization process that involves six unknown complex variables, six vectors in an uncertain-dimensional complex vector space, and four equality constraints. Moreover, by restricting the structure of the quantum cloning machine, the optimization problem is simplified as a three-real-parameter suboptimization problem with only one equality constraint. We obtain the explicit formula for a suboptimal quantum cloning machine. Additionally, the fidelity of our suboptimal quantum cloning machine is higher than or at least equal to that of universal quantum cloning machines and phase-covariant quantum cloning machines. It is also underlined that the suboptimal cloning machine outperforms the "belt quantum cloning machine" for some cases.
Effective production control in an automotive industry: MRP vs. demand-driven MRP
NASA Astrophysics Data System (ADS)
Shofa, Mohamad Jihan; Widyarto, Wahyu Oktri
2017-06-01
Material Requirements Planning (MRP) has deficiencies when dealing with current business environments, marked by more complex networks, a huge variety of products with longer lead times, and uncertain demands. This drives the Demand-Driven MRP (DDMRP) approach to deal with those challenges. DDMRP is designed to connect the availability of materials and supplies directly to actual conditions using bills of materials (BOMs). Nevertheless, only a few studies have scientifically demonstrated the performance of DDMRP over MRP for production and inventory control. This research therefore fills the gap by evaluating and comparing the performance of DDMRP and MRP in terms of the level of effective inventory in the system. The evaluation was conducted through a simulation using data from an automotive company in Indonesia. Input parameters for several scenarios were given for running the simulation. For the observed critical parts, DDMRP gave better results than MRP in terms of lead time and inventory level: DDMRP compressed the part lead time from 52 to 3 days (a 94% reduction) and, overall, the inventory level was in an effective condition. This suggests that DDMRP is more effective than MRP for production-inventory control.
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining the weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
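The weighted least-squares idea behind LSPG, choosing the coefficients that minimize a weighted ℓ2-norm of the residual rather than enforcing Galerkin orthogonality, can be sketched for a small dense system. This is a toy stand-in for the parameterized system, with the 2-unknown normal equations solved in closed form:

```python
def weighted_lsq(A, b, w):
    # solve  min_x || diag(w) (b - A x) ||_2  for x in R^2 via normal equations
    g11 = sum(wi * wi * r[0] * r[0] for r, wi in zip(A, w))
    g12 = sum(wi * wi * r[0] * r[1] for r, wi in zip(A, w))
    g22 = sum(wi * wi * r[1] * r[1] for r, wi in zip(A, w))
    c1 = sum(wi * wi * r[0] * bi for r, bi, wi in zip(A, b, w))
    c2 = sum(wi * wi * r[1] * bi for r, bi, wi in zip(A, b, w))
    det = g11 * g22 - g12 * g12
    return [(c1 * g22 - c2 * g12) / det, (c2 * g11 - c1 * g12) / det]

# consistent demo: b lies in the range of A, so x = [1, 2] for any weights
x = weighted_lsq([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                 [1.0, 2.0, 3.0], [1.0, 1.0, 10.0])
```

Changing the weight vector redirects which residual components (or, with a suitably chosen weighting, which output functional) the solution prioritizes, which is the mechanism the abstract describes for goal-oriented minimization.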
Uncertainty in Simulating Wheat Yields Under Climate Change
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.;
2013-01-01
Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
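The mechanism behind the second error term, compensation leaking into uninformed null-space components, can be illustrated with a toy linear history-matching problem. Everything below is a random stand-in, not a groundwater model: J plays the role of the Jacobian of observations with respect to parameters, and s the sensitivity of a prediction.

```python
import numpy as np

rng = np.random.default_rng(5)

# Underdetermined history matching: J @ p fits the observations, but J has a
# null space; components of p in that null space are uninformed by the data.
J = rng.standard_normal((3, 6))            # 3 observations, 6 parameters
p_true = rng.standard_normal(6)
obs = J @ p_true

# Any null-space perturbation leaves the fit unchanged ...
_, _, Vt = np.linalg.svd(J)
null_dir = Vt[-1]                          # a null-space basis vector of J
p_alt = p_true + 2.0 * null_dir
fit_unchanged = np.allclose(J @ p_alt, obs)

# ... but can shift a prediction that is sensitive to that direction.
s = rng.standard_normal(6)                 # prediction sensitivity vector
pred_shift = abs(s @ p_alt - s @ p_true)   # = 2*|s @ null_dir|, generally nonzero
```

This is exactly the bias mechanism described above: the calibrated model fits the data equally well, yet the prediction has moved.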
Uncertainty Quantification in Tsunami Early Warning Calculations
NASA Astrophysics Data System (ADS)
Anunziato, Alessandro
2016-04-01
The objective of tsunami calculations is to estimate the impact of waves caused by large seismic events on the coasts and to determine potential inundation areas. In the case of Early Warning Systems, i.e. systems that should make it possible to anticipate the possible effects and react accordingly (e.g. by ordering the evacuation of areas at risk), this must be done in a very short time (minutes) to be effective. In reality, this estimation involves several uncertainty factors that make the prediction extremely difficult. The very first estimates of the seismic parameters are not very precise: the uncertainty in the seismic components (location, magnitude and depth) decreases with time, because as time passes more and more seismic signals can be used and the event characterization becomes more precise. On the other hand, other parameters needed to perform a calculation (e.g. the fault mechanism) are difficult to estimate accurately even after hours (and in some cases remain unknown), so this uncertainty persists in the estimated impact evaluations; when a quick tsunami calculation is necessary (early warning systems), the ability to account for possible future variations of the conditions and establish the "worst case scenario" is particularly important. The consequence is that the number of uncertain parameters is so large that it is not easy to assess the relative importance of each of them and their effect on the predicted results. In general, the complexity of system computer codes stems from the multitude of different models that are assembled into a single program to give the global response for a particular phenomenon. Each of these models has an associated uncertainty arising from its application to single cases and/or separate-effect test cases.
The difficulty of predicting a tsunami calculation's response is further increased by imperfect knowledge of the initial and boundary conditions, so that the response can change even with small variations of the input. The paper analyses a number of potential events in the Mediterranean Sea and in the Atlantic Ocean; for each of them, a large number of calculations is performed (Monte Carlo simulation) in order to identify the relative importance of each of the adopted uncertain parameters. It is shown that although the variation in the estimate is reduced after several hours, it still remains and in some cases can lead to different conclusions if this information is used as an alerting method. The cases considered are: a mild event in the Hellenic arc (Mag. 6.9), a medium event in Algeria (Mag. 7.2) and a quite significant event in the Gulf of Cadiz (Mag. 8.2).
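A Monte Carlo screening of this kind can be sketched as follows. The response function, parameter names, and distributions below are all invented stand-ins for an actual tsunami simulation; the point is only the workflow: sample the uncertain source parameters, run the (here, trivially cheap) forward model, and rank inputs by how strongly they co-vary with the output.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000

# Hypothetical uncertain source parameters (illustrative ranges, not real values).
mag   = rng.normal(7.2, 0.3, N)     # moment magnitude
depth = rng.uniform(5.0, 40.0, N)   # focal depth, km
slip  = rng.uniform(0.5, 1.5, N)    # fault-mechanism factor

# Stand-in for an expensive tsunami simulation: coastal wave height, metres.
def wave_height(m, d, s):
    return s * 10 ** (0.5 * (m - 7.0)) * np.exp(-d / 50.0)

h = wave_height(mag, depth, slip)

# Crude importance ranking: squared correlation of each input with the output.
def r2(x):
    return np.corrcoef(x, h)[0, 1] ** 2

ranking = sorted([("magnitude", r2(mag)), ("depth", r2(depth)),
                  ("slip", r2(slip))], key=lambda t: -t[1])
```

In a real study the correlation-based ranking would typically be replaced by variance-based sensitivity indices, but the sampling loop is the same.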
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
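The notion of an identifiable parameter combination can be illustrated without the Gröbner-basis machinery the paper uses. In the toy model below (an invented example, not from the paper), only the product a·b enters the input-output map, so a and b are individually unidentifiable while the combination p = a·b is identifiable: any two parameter sets with the same product produce identical output data.

```python
import numpy as np

# Toy unidentifiable model: dx/dt = -a*b*x, y = x. Only the product a*b
# appears in the solution, so it is the identifiable combination.
def simulate(a, b, x0=1.0, t=np.linspace(0.0, 2.0, 21)):
    return x0 * np.exp(-a * b * t)

y1 = simulate(a=2.0, b=3.0)   # a*b = 6
y2 = simulate(a=1.5, b=4.0)   # a*b = 6, different individual values
y3 = simulate(a=2.0, b=2.0)   # a*b = 4

same = np.allclose(y1, y2)    # identical outputs for equal a*b
diff = not np.allclose(y1, y3)  # distinct outputs otherwise
```

Reparameterizing the model in terms of p = a·b, as the algorithm in the paper does systematically via Gröbner bases, removes the degeneracy.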
Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5
Wang, Yong; Zhang, Guang J.
2016-09-29
In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to -47.16 W/m² in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Flach, G. P.
2012-12-01
The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Baier-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
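The combination of aleatory sampling and a multi-model envelope can be sketched briefly. The two PET "models" below are deliberately simplified stand-ins, not the published Hargreaves-Samani or Thornthwaite formulas, and the temperature distribution is illustrative: the point is that sampling inputs captures parameter (aleatory) uncertainty, while the pointwise envelope over models is a crude p-box for model (epistemic) uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2000

# Aleatory uncertainty: monthly mean temperature (degC), illustrative distribution.
T = rng.normal(18.0, 2.0, N)

# Two simplified stand-in PET models (illustrative coefficients only).
pet_a = np.maximum(0.0, 0.8 * T + 2.0)    # "model A": linear in temperature
pet_b = np.maximum(0.0, 0.05 * T ** 2)    # "model B": convex in temperature

# Epistemic (model) uncertainty: pointwise envelope over models, a crude p-box.
lower = np.minimum(pet_a, pet_b)
upper = np.maximum(pet_a, pet_b)
```

With the thirteen PET models of the study, `lower` and `upper` would be taken over all member models, and their empirical CDFs bound the true distribution.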
NASA Astrophysics Data System (ADS)
Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan
2017-04-01
More accurate and reliable hydrologic simulations are important for many applications such as water resource management, future water availability projections and predictions of extreme events. However, simulation of spatial and temporal variations in the critical water budget components such as precipitation, snow, evaporation and runoff is highly uncertain, due to errors in e.g. model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques to improve the predictability of continental-scale water fluxes using in-situ measurements along with remotely sensed information to improve hydrologic predications for water resource systems. The Community Land Model, version 3.5 (CLM) integrated with the Parallel Data Assimilation Framework (PDAF) was implemented at spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with a high-resolution reanalysis system COSMO-REA6 from Hans-Ertel Centre for Weather Research (HErZ) and ERA-Interim datasets for time period of 1994-2014. A series of data assimilation experiments were conducted to assess the efficiency of assimilation of various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements into the CLM-PDAF at regional to continental scales. This setup not only allows to quantify uncertainties, but also improves streamflow predictions by updating simultaneously model states and parameters utilizing observational information. The results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.
NASA Astrophysics Data System (ADS)
Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen
2018-01-01
Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010 there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development and observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. 
Furthermore, it can be used to identify the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
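The multi-level idea, many cheap runs plus a few expensive ones, can be sketched with polynomial surrogates in place of the Bayesian linear emulators used in the study. The two "simulators" below are invented stand-ins for the two NAME configurations: the cheap configuration is emulated from many runs, and only the smooth fast-to-slow discrepancy is fitted from the few expensive runs.

```python
import numpy as np

# Hypothetical simulators of ash column load vs. a source parameter x (stand-ins).
def fast_sim(x):            # cheap, biased configuration (few model particles)
    return np.sin(x) + 0.3
def slow_sim(x):            # expensive, accurate configuration
    return np.sin(x)

# Many cheap runs: emulate the fast simulator with a cubic polynomial.
xf = np.linspace(0.0, 3.0, 40)
coef_fast = np.polyfit(xf, fast_sim(xf), 3)

# Few expensive runs: emulate only the (smooth) fast-to-slow discrepancy.
xs = np.linspace(0.0, 3.0, 5)
disc = slow_sim(xs) - np.polyval(coef_fast, xs)
coef_disc = np.polyfit(xs, disc, 1)

def emulator(x):            # multi-level prediction without running slow_sim
    return np.polyval(coef_fast, x) + np.polyval(coef_disc, x)

xq = np.linspace(0.2, 2.8, 50)
err = np.max(np.abs(emulator(xq) - slow_sim(xq)))
```

Five accurate runs plus forty cheap ones suffice here because the discrepancy between configurations is simpler than either response surface, which is the premise of multi-level emulation.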
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. 
We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
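The efficiency gap between stochastic collocation and Monte Carlo that motivates the framework can be shown on a one-dimensional toy. The "simulator" below is an invented polynomial stand-in for a G&R output: five Gauss-Hermite collocation points recover its mean under a Gaussian input essentially exactly, while Monte Carlo needs thousands of samples for a rougher answer.

```python
import numpy as np

# Stochastic collocation: estimate E[f(X)], X ~ N(0,1), from a handful of
# deterministic "simulator" runs at Gauss-Hermite collocation points.
def f(x):                           # stand-in for an expensive simulation output
    return x ** 2 + 2.0 * x + 1.0   # E[f(X)] = 1 + 0 + 1 = 2 exactly

# hermegauss uses weight exp(-x^2/2); normalize by sqrt(2*pi) for the N(0,1) mean.
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
mean_colloc = (weights @ f(nodes)) / np.sqrt(2.0 * np.pi)

# Monte Carlo needs far more samples for comparable accuracy.
rng = np.random.default_rng(3)
mean_mc = f(rng.standard_normal(10_000)).mean()
```

Sparse-grid collocation extends the same quadrature idea to many uncertain parameters, with the adaptive variant in the paper concentrating points where the response varies most.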
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose input parameters that preserve the physics being simulated. To effectively simulate real-world processes, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulated outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and Dakota. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using just Dakota.
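The calibration loop, forward model, misfit objective, and parameter-space search, can be sketched without Dakota. The two-layer "heat flow" model and all numbers below are invented for illustration; the gradient-free search is a simple iteratively refined grid, standing in for Dakota's more sophisticated methods.

```python
import numpy as np

# Toy "heat flow" forward model: temperature rise at depth z for a two-layer
# column with conductivities k1, k2 and fixed heat flux q (illustrative only).
q, z1 = 0.06, 10.0
def forward(k1, k2, z):
    # accumulate the temperature increase across layer 1, then layer 2
    return np.where(z <= z1, q * z / k1, q * z1 / k1 + q * (z - z1) / k2)

z_obs = np.array([2.0, 6.0, 12.0, 18.0])
t_obs = forward(1.2, 2.4, z_obs)          # synthetic "measurements", k known

def objective(k):
    return np.sum((forward(k[0], k[1], z_obs) - t_obs) ** 2)

# Gradient-free parameter-space exploration: iteratively refined grid search.
lo, hi = np.array([0.5, 0.5]), np.array([4.0, 4.0])
for _ in range(12):
    g1 = np.linspace(lo[0], hi[0], 9)
    g2 = np.linspace(lo[1], hi[1], 9)
    K1, K2 = np.meshgrid(g1, g2)
    sse = np.sum((forward(K1, K2, z_obs[:, None, None])
                  - t_obs[:, None, None]) ** 2, axis=0)
    j, i = np.unravel_index(np.argmin(sse), sse.shape)
    best = np.array([K1[j, i], K2[j, i]])
    span = (hi - lo) / 4.0                # halve the search box around the best point
    lo, hi = best - span, best + span
```

The grid refinement recovers the two known conductivities; for a multi-minimum objective, as the abstract notes, a global method would replace the local refinement.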
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute to most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs.
However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
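A local sensitivity coefficient of this kind, percent change in output per percent change in input, is a normalized finite difference. The forward model and parameter names below are invented stand-ins (not the FutureGen model); the sketch shows the LSC of one output with respect to each input and the composite sensitivity obtained by summing them.

```python
import numpy as np

# Illustrative forward model: "plume area" as a function of three inputs
# (permeability k, initial pressure p0, salt concentration c); stand-in only.
def plume_area(x):
    k, p0, c = x
    return 50.0 * k ** 0.3 * (p0 / 10.0) ** 0.8 / (1.0 + 0.5 * c)

x0 = np.array([2.0, 12.0, 0.4])      # base-case inputs

# Local sensitivity coefficient: % change in output per 1% change in input i.
def lsc(f, x, i, h=0.01):
    xp = x.copy()
    xp[i] *= 1.0 + h
    return (f(xp) - f(x)) / f(x) / h

coeffs = np.array([lsc(plume_area, x0, i) for i in range(3)])
# Composite sensitivity of the output to a subset of inputs: sum of |LSC| values.
composite = np.abs(coeffs).sum()
```

For the power-law stand-in the LSCs approximate the exponents (0.3 and 0.8 here), which is why a single base-case run plus one perturbed run per input suffices to rank all 348 inputs.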
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and unknown input excitations.
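The deterministic-sampling idea behind the unscented Kalman filter can be sketched on a scalar toy problem, far simpler than the nonlinear FE models above. Here an unknown constant parameter a is appended to the state and estimated jointly with it; all dynamics, gains, and noise levels are invented for illustration.

```python
import numpy as np

def sigma_points(m, P, kappa=1.0):
    # deterministic samples capturing mean m and covariance P
    n = len(m)
    P = P + 1e-9 * np.eye(n)                      # jitter for Cholesky safety
    S = np.linalg.cholesky((n + kappa) * P)
    pts = np.vstack([m, m + S.T, m - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_step(m, P, u, y, f, h, Q, R):
    # predict through the (augmented) dynamics via the sigma points
    X, w = sigma_points(m, P)
    Xp = np.array([f(x, u) for x in X])
    mp = w @ Xp
    Pp = Q + (w[:, None] * (Xp - mp)).T @ (Xp - mp)
    # update with the scalar measurement y = h(state)
    X2, w2 = sigma_points(mp, Pp)
    Y = np.array([h(x) for x in X2])
    my = w2 @ Y
    Pyy = R + w2 @ (Y - my) ** 2
    Pxy = (w2 * (Y - my)) @ (X2 - mp)
    K = Pxy / Pyy
    return mp + K * (y - my), Pp - np.outer(K, K) * Pyy

# Joint state/parameter estimation: x_{k+1} = a x_k + u_k with unknown a.
a_true = 0.8
f = lambda z, u: np.array([z[1] * z[0] + u, z[1]])   # augmented state [x, a]
h = lambda z: z[0]

Q = np.diag([1e-6, 1e-5])                            # small drift on the parameter
R = 1e-4
m = np.array([0.0, 0.4])                             # poor initial guess of a
P = np.diag([0.5, 0.5])

x = 0.0
for k in range(200):
    u = np.sin(0.1 * k)                              # persistently exciting input
    x = a_true * x + u                               # "measured" true system
    m, P = ukf_step(m, P, u, x, f, h, Q, R)
```

No derivatives of f with respect to a are ever formed, which is exactly the property that lets the paper's method avoid FE response sensitivities.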
Understanding the Yellowstone magmatic system using 3D geodynamic inverse models
NASA Astrophysics Data System (ADS)
Kaus, B. J. P.; Reuber, G. S.; Popov, A.; Baumann, T.
2017-12-01
The Yellowstone magmatic system is one of the largest magmatic systems on Earth. Recent seismic tomography suggests that two distinct magma chambers exist: a shallow, presumably felsic chamber and a deeper, much larger, partially molten chamber above the Moho. Why melt stalls at different depth levels above the Yellowstone plume, whereas dikes cross-cut the whole lithosphere in the nearby Snake River Plain, is unclear. Partly this is caused by our incomplete understanding of lithospheric-scale melt ascent processes from the upper mantle to the shallow crust, which requires better constraints on the mechanics and material properties of the lithosphere. Here, we employ lithospheric-scale 2D and 3D geodynamic models adapted to Yellowstone to better understand magmatic processes in active arcs. The models have a number of (uncertain) input parameters, such as the temperature and viscosity structure of the lithosphere and the geometry and melt fraction of the magmatic system, while the melt content and rock densities are obtained by consistent thermodynamic modelling of whole-rock data of the Yellowstone stratigraphy. As all of these parameters affect the dynamics of the lithosphere, we use the simulations to derive testable model predictions such as gravity anomalies, surface deformation rates and lithospheric stresses, and compare them with observations. We incorporate these simulations within an inversion method and perform 3D geodynamic inverse models of the Yellowstone magmatic system. An adjoint-based method is used to derive the key model parameters and the factors that affect the stress field around the Yellowstone plume, the locations of enhanced diking, and melt accumulations. Results suggest that the plume and the magma chambers are connected with each other and that magma chamber overpressure is required to explain the surface displacement in phases of high activity above the Yellowstone magmatic system.
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES in correctly matching early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, to 80%. We also can evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
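The two posterior metrics reduce to cell-counting on boolean inundation grids. The grids below are synthetic (random footprints with injected model error, not MOLASSES output): A is the simulated footprint, B the observed one, and the metrics are the conditional fractions described above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Boolean inundation grids: B = observed lava footprint, A = simulated one.
observed = rng.random((50, 50)) < 0.2
simulated = observed.copy()
flip = rng.random((50, 50)) < 0.05          # inject some model error
simulated ^= flip                           # XOR: flip ~5% of cells

# P(A|B): fraction of observed-inundated cells the model also inundates.
p_hit = (simulated & observed).sum() / observed.sum()
# P(not A | not B): fraction of observed-dry cells the model keeps dry.
p_dry = (~simulated & ~observed).sum() / (~observed).sum()
```

Unlike the Jaccard coefficient, the pair (P(A|B), P(¬A|¬B)) separates the cost of missed inundation from the cost of false alarms, which is what a hazard forecast needs.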
Tian, Jiayi; Zhang, Shifeng; Zhang, Yinhui; Li, Tong
2018-03-01
Since the motion-control plant y^(n) = f(·) + d was repeatedly used to exemplify how active disturbance rejection control (ADRC) works when it was proposed, the integral-chain system subject to matched disturbances is often regarded as a canonical form, and even misconstrued as the only form to which ADRC is applicable. In this paper, a systematic approach is first presented to apply ADRC to a generic nonlinear uncertain system with mismatched disturbances, and a robust output feedback autopilot for an airbreathing hypersonic vehicle (AHV) is devised based on it. The key idea is to employ feedback linearization (FL) and the equivalent input disturbance (EID) technique to decouple the nonlinear uncertain system into several subsystems in canonical form, so that it becomes much easier to directly design classical/improved linear/nonlinear ADRC controllers for each subsystem. Notably, all disturbances are taken into account when implementing FL, rather than being omitted as in previous research, which greatly enhances the controllers' robustness against external disturbances. For autopilot design, the ADRC strategy enables precise tracking of velocity and altitude reference commands in the presence of severe parametric perturbations and atmospheric disturbances, using only measurable output information. Bounded-input bounded-output (BIBO) stability is analyzed for the closed-loop system. To illustrate the feasibility and superiority of this novel design, a series of comparative simulations with some prominent and representative methods are carried out on a benchmark longitudinal AHV model. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
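The component that makes ADRC work on the canonical plant y'' = f(·) + b·u is the extended state observer (ESO), which estimates the "total disturbance" f from input/output data alone. The sketch below is a minimal linear ESO on an invented second-order plant with an unknown disturbance; the bandwidth, gains, and disturbance are all illustrative, and the controller itself is omitted (open loop, observation only).

```python
import numpy as np

# Linear ESO for y'' = f(x,t) + b0*u: augment the state with z3 ~ f (the
# "total disturbance") and estimate it from u and y alone.
b0 = 1.0
wo = 20.0                                  # observer bandwidth (illustrative)
L = np.array([3 * wo, 3 * wo ** 2, wo ** 3])   # gains placing all poles at -wo

def eso_step(z, u, y, dt):
    e = y - z[0]
    dz = np.array([z[1] + L[0] * e,
                   z[2] + b0 * u + L[1] * e,
                   L[2] * e])
    return z + dt * dz

# Simulate the true plant y'' = d(t) + u with an unknown disturbance d.
dt, T = 1e-3, 4.0
d = lambda t: 1.5 + 0.5 * np.sin(2.0 * t)
y = yd = 0.0
z = np.zeros(3)
for k in range(int(T / dt)):
    t = k * dt
    u = 0.0                                # open loop: just observe
    ydd = d(t) + u
    y, yd = y + dt * yd, yd + dt * ydd     # Euler integration of the plant
    z = eso_step(z, u, y, dt)

est_err = abs(z[2] - d(T))                 # how well z3 tracks the disturbance
```

In closed-loop ADRC the control would then cancel the estimate, u = (v − z₃)/b₀, reducing the plant to a pure integral chain; the FL/EID step in the paper is what brings a mismatched, non-canonical system into this form first.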
NASA Astrophysics Data System (ADS)
Yi, Bowen; Lin, Shuyi; Yang, Bo; Zhang, Weidong
2018-02-01
This paper presents an output feedback indirect dynamic inversion (IDI) approach for a class of uncertain nonaffine systems with input unmodelled dynamics. Compared with previous approaches to achieving performance recovery, the proposed method deals with a broader class of nonaffine-in-control systems with triangular structure. An IDI state feedback law is designed first, in which less knowledge of the plant model is needed compared to earlier approximate dynamic inversion methods, thus yielding more robust performance. After that, an extended high-gain observer is designed to accomplish the task with output feedback. Finally, we prove that the designed IDI controller is equivalent to an adaptive proportional-integral (PI) controller, with respect to both time-response equivalence and robustness equivalence. The conclusion implies that for the studied strict-feedback nonaffine systems with unmodelled dynamics, there always exists a PI controller that stabilises the system. The effectiveness and benefits of the designed approach are verified by three examples.
Spatial planning using probabilistic flood maps
NASA Astrophysics Data System (ADS)
Alfonso, Leonardo; Mukolwe, Micah; Di Baldassarre, Giuliano
2015-04-01
Probabilistic flood maps account for uncertainty in flood inundation modelling and convey a degree of confidence in the outputs. Major sources of uncertainty include input data, topographic data, model structure, observation data and parametric uncertainty. Decision makers prefer less ambiguous information from modellers; this implies that uncertainty is suppressed to yield binary flood maps. However, suppressing information may lead to surprises or misleading decisions. Including uncertain information in the decision-making process is therefore desirable and makes the process transparent. To this end, we utilise Prospect theory and information from a probabilistic flood map to evaluate potential decisions. Consequences related to the decisions were evaluated using flood risk analysis. Prospect theory explains how choices are made among options whose probabilities of occurrence are known, and accounts for decision makers' characteristics such as loss aversion and risk seeking. Our results show that decision making is most pronounced when there are high gains and losses, implying higher payoffs and penalties and therefore a higher gamble. The methodology may thus be appropriately considered when making decisions based on uncertain information.
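The Prospect-theory valuation described above can be sketched as follows. This is a simple separable-weighting version (not full cumulative prospect theory), and all outcome numbers are hypothetical; the default parameter values are the well-known Tversky-Kahneman estimates, used purely for illustration.

```python
def prospect_value(outcomes, alpha=0.88, beta=0.88, lam=2.25,
                   gamma=0.61, delta=0.69):
    """Prospect-theory valuation of a list of (probability, outcome) pairs:
    an S-shaped value function with loss aversion lam, and an inverse-S
    probability weighting applied to each outcome's probability."""
    def v(x):  # risk-averse for gains, loss-averse for losses
        return x ** alpha if x >= 0 else -lam * (-x) ** beta
    def w(p, g):  # overweights small p, underweights large p
        return p ** g / (p ** g + (1.0 - p) ** g) ** (1.0 / g)
    return sum(w(p, gamma if x >= 0 else delta) * v(x) for p, x in outcomes)

# Two hypothetical flood-zoning options (numbers purely illustrative):
# develop the floodplain (likely gain, small chance of a large loss)
# versus restrict development (certain modest gain).
develop = prospect_value([(0.9, 100.0), (0.1, -300.0)])
restrict = prospect_value([(1.0, 60.0)])
print(develop, restrict)  # loss aversion makes the safe option preferred
```

Even though the developed option has the higher expected monetary value, loss aversion (lam > 1) and the overweighting of the small flood probability push its prospect value below the certain alternative.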
Optimal Regulation of Structural Systems with Uncertain Parameters.
1981-02-02
been addressed, in part, by Statistical Energy Analysis. Motivated by a concern with high-frequency vibration and acoustical-structural… "Parameter Systems," AFOSR-TR-79-0753 (May 1979). 25. R. H. Lyon, Statistical Energy Analysis of Dynamical Systems: Theory and Applications (M.I.T. Press, Cambridge, Mass., 1975). 26. E. E. Ungar, "Statistical Energy Analysis of Vibrating Systems," Trans. ASME, J. Eng. Ind. 89, 626 (1967). 27. …
Robust interval-based regulation for anaerobic digestion processes.
Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C
2005-01-01
A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known, highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. This approach is experimentally applied and validated on a pilot-scale (1 m³) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater, where the objective is the regulation of the chemical oxygen demand (COD) using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performance, leading the output COD towards its set-point and keeping it inside a pre-specified interval.
Malahias, M; Gardner, H; Hindocha, S; Juma, A; Khan, W
2012-01-01
Rheumatoid arthritis is a systemic autoimmune disease of uncertain aetiology, characterized primarily by synovial inflammation with secondary skeletal destruction. Rheumatoid arthritis is diagnosed by the presence of four of the seven diagnostic criteria defined by the American College of Rheumatology. Approximately half a million adults in the United Kingdom suffer from rheumatoid arthritis, with a peak age prevalence between the second and fourth decades of life; approximately 20,000 new cases are diagnosed annually. The management of rheumatoid arthritis is complex; in the initial phase of the disease it depends primarily on pharmacological management. With disease progression, surgical input to correct deformity plays an increasingly important role. Treatment of this condition is also intimately coupled with input from both occupational therapists and physiotherapists. PMID:22423304
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for incorporating these changes into the model input parameters, using a preprocessor program run on a programmed data processor, was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison with meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model predicts surface hydrogen chloride concentrations exceeding public safety levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
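The surrogate-plus-Monte-Carlo workflow can be sketched on a toy two-parameter problem. The greedy term selection below is a cheap stand-in for the paper's MIP best-subset step, and the "simulator" is an arbitrary smooth function chosen for illustration only.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

def simulator(x1, x2):
    # Stand-in for an expensive numerical simulator (illustrative only)
    return np.exp(0.3 * x1) + 0.5 * x1 * x2

# Simulator runs at sampled values of the uncertain inputs, scaled to [-1, 1]
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = simulator(X[:, 0], X[:, 1])

# Tensor-product Legendre basis up to total degree 3
terms = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]

def basis(X, subset):
    cols = []
    for i, j in subset:
        ci = np.zeros(i + 1); ci[i] = 1.0
        cj = np.zeros(j + 1); cj[j] = 1.0
        cols.append(L.legval(X[:, 0], ci) * L.legval(X[:, 1], cj))
    return np.column_stack(cols)

# Greedy best-subset selection: a cheap stand-in for the MIP formulation
selected, remaining = [], list(terms)
for _ in range(5):
    errs = []
    for t in remaining:
        A = basis(X, selected + [t])
        c = np.linalg.lstsq(A, y, rcond=None)[0]
        errs.append(float(np.mean((A @ c - y) ** 2)))
    selected.append(remaining.pop(int(np.argmin(errs))))
coef = np.linalg.lstsq(basis(X, selected), y, rcond=None)[0]

# Monte Carlo on the closed-form surrogate instead of the simulator
Xmc = rng.uniform(-1.0, 1.0, size=(50_000, 2))
surrogate_mean = float(np.mean(basis(Xmc, selected) @ coef))
print(surrogate_mean)  # analytic mean is sinh(0.3)/0.3 ≈ 1.015
```

Once the five-term surrogate is fitted, the 50,000 Monte Carlo evaluations are cheap matrix products rather than simulator runs, which is the speed-up the abstract describes.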
Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity
NASA Astrophysics Data System (ADS)
Li, Dunzhu; Gurnis, Michael; Stadler, Georg
2017-04-01
We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft-Windows environment facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft-Windows graphical users interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered as the first step in the fitting procedure upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The required biological parameters, as well as their dependence on ion species and ion energy, are typically subject to large relative uncertainties of up to 20-40% or more. It is therefore necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters randomly varied according to their assigned distributions. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps showing the dependence of RBE and EQD2 on different input uncertainties. The high number of runs allows the interplay between different input uncertainties to be displayed. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
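The variance-based sensitivity index S can be illustrated with a brute-force double-loop estimator on a toy linear-quadratic dose model; the parameter distributions below are illustrative assumptions, not clinical values.

```python
import numpy as np

rng = np.random.default_rng(1)

def effect(alpha, beta, d=2.0):
    # Linear-quadratic cell-kill exponent E = alpha*d + beta*d^2
    return alpha * d + beta * d ** 2

# Uncertain inputs with ~20-30% relative spread (illustrative normals)
def draw_alpha(n): return rng.normal(0.15, 0.03, n)    # Gy^-1
def draw_beta(n): return rng.normal(0.05, 0.015, n)    # Gy^-2

# Double-loop estimate of the first-order variance-based indices
# S_i = Var(E[Y | X_i]) / Var(Y), each between 0 and 1
N, M = 400, 400
cond_mean_a = [effect(a, draw_beta(M)).mean() for a in draw_alpha(N)]
cond_mean_b = [effect(draw_alpha(M), b).mean() for b in draw_beta(N)]
var_y = effect(draw_alpha(N * M), draw_beta(N * M)).var()
S_alpha = np.var(cond_mean_a) / var_y
S_beta = np.var(cond_mean_b) / var_y
print(S_alpha, S_beta)  # here each input explains about half the variance
```

The double loop is the conceptually simplest estimator; production SA codes typically use pick-freeze (Saltelli-type) sampling schemes to reduce the cost from N·M to a few N runs per index.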
Water Footprint and Water Consumption for the Main Crops and Biofuels Produced in Brazil
NASA Astrophysics Data System (ADS)
Uncertainty Quantification and Risk Mitigation of CO2 Leakage in Groundwater Aquifers
NASA Astrophysics Data System (ADS)
Sun, Y.; Tong, C.; Mansoor, K.; Carroll, S.
2013-12-01
The risk of CO2 leakage into shallow aquifers through various pathways such as faults and abandoned wells is a concern of CO2 geological sequestration. If a leak is detected in an aquifer system, a contingency plan is required to manage the CO2 storage and to protect the groundwater source. Among many remediation and mitigation strategies, the simplest is to stop CO2 leakage at a wellbore. Therefore, it is necessary to address whether and when the CO2 leaks should be sealed, and how much risk can be mitigated. In the presence of various uncertainties, including geological-structure uncertainty and parametric uncertainty, the risk of CO2 leakage into an aquifer needs to be assessed with probabilistic distributions of uncertain parameters. In this study, we developed an integrated model to simulate multiphase flow of CO2 and brine in a deep storage reservoir, through a leaky well at an uncertain location, and subsequently multicomponent reactive transport in a shallow aquifer. Each sub-model covers its domain-specific physics. Uncertainties of geological structure and parameters are considered together with decision variables (CO2 injection rate and mitigation time) for risk assessment of leakage-impacted aquifer volume. High-resolution and less-expensive reduced-order models (ROMs) of risk profiles are approximated as polynomial functions of decision variables and all uncertain parameters. These reduced-order models are then used in the place of computationally-expensive numerical models for future decision-making on if and when the leaky well is sealed. The tradeoff between CO2 storage capacity in the reservoir and the leakage-induced risk in the aquifer is evaluated. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have moved from covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, so that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. We then suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, in which more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook
1980-04-01
[Table-of-contents and figure-list excerpt: 13. Quincke Tube (p. 85); 14. Heat Exchanger (p. 90); figures on Quincke tube input parameters with hole locations, prototype Quincke tube data, and HSFR input data for a Pulsco-type acoustic filter. The Quincke tube is introduced as a means to dampen acoustic noise at resonance.]
Agriculture-driven deforestation in the tropics from 1990-2015: emissions, trends and uncertainties
NASA Astrophysics Data System (ADS)
Carter, Sarah; Herold, Martin; Avitabile, Valerio; de Bruin, Sytze; De Sy, Veronique; Kooistra, Lammert; Rufino, Mariana C.
2018-01-01
Limited data exist on emissions from agriculture-driven deforestation, and the available data are typically uncertain. In this paper, we provide comparable estimates of emissions from both all deforestation and agriculture-driven deforestation, with uncertainties, for 91 countries across the tropics between 1990 and 2015. Uncertainties associated with the input datasets (activity data and emission factors) were used to combine the datasets, so that the most certain datasets contribute the most. This method utilizes all the input data while minimizing the uncertainty of the emissions estimate. The uncertainty of the input datasets was influenced by the quality of the data, the sample size (for sample-based datasets), and the extent to which the timeframe of the data matches the period of interest. The area of deforestation and the agriculture-driver factor (the extent to which agriculture drives deforestation) were the most uncertain components of the emissions estimates, so improvements in these estimates will yield the greatest reductions in the uncertainties of the emissions estimates. Over the study period, Latin America had the highest proportion of deforestation driven by agriculture (78%) and Africa the lowest (62%). Latin America had the highest emissions from agriculture-driven deforestation, peaking at 974 ± 148 Mt CO2 yr-1 in 2000-2005. Africa saw a continuous increase in emissions between 1990 and 2015 (from 154 ± 21 to 412 ± 75 Mt CO2 yr-1), so mitigation initiatives could be prioritized there. Uncertainties for emissions from agriculture-driven deforestation average ±62.4% over 1990-2015, and were highest in Asia and lowest in Latin America. Uncertainty information is crucial for transparency when reporting and gives credibility to related mitigation initiatives. We demonstrate that uncertainty data can also be useful when combining multiple open datasets, and we recommend that new data providers include this information.
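The principle that the most certain datasets contribute the most is the classic inverse-variance weighting rule, sketched here with hypothetical numbers (the paper's actual combination scheme may differ in detail).

```python
def combine(estimates):
    """Inverse-variance weighted combination of independent estimates.
    estimates: list of (value, standard_error) pairs. The combined
    standard error is never larger than that of the best single input."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    wsum = sum(weights)
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / wsum
    return value, (1.0 / wsum) ** 0.5

# Hypothetical deforestation-area estimates (Mha) from three datasets,
# each with its own standard error
combined, se = combine([(10.0, 2.0), (12.0, 1.0), (11.0, 4.0)])
print(combined, se)  # pulled toward the most certain estimate (12.0)
```

The middle dataset, with the smallest standard error, dominates the weighted value, and the combined uncertainty drops below the best individual one, which is exactly the behavior the abstract describes.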
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
Robust root clustering for linear uncertain systems using generalized Lyapunov theory
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1993-01-01
Consideration is given to the problem of matrix root clustering in subregions of the complex plane for linear state-space models with real parameter uncertainty. The nominal matrix root-clustering theory of Gutman and Jury (1981), based on the generalized Lyapunov equation, is extended to the perturbed-matrix case, and bounds are derived on the perturbation that maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root-clustering region and the uncertainty range of the parameter space.
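The nominal building block of such root-clustering tests can be sketched for a circular region: the eigenvalues of A lie in the disk D(c, r) exactly when (A - cI)/r is Schur stable, which a discrete Lyapunov equation certifies. This is a nominal-matrix sketch only, not the paper's perturbation bounds.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def in_disk(A, center, radius, tol=1e-9):
    """Lyapunov-type test that all eigenvalues of A lie inside the disk
    D(center, radius): M = (A - center*I)/radius must be Schur stable,
    i.e. M P M^T - P + I = 0 must have a positive definite solution P."""
    n = A.shape[0]
    M = (A - center * np.eye(n)) / radius
    # Guard: the Lyapunov equation has no PD solution if M is unstable
    if np.any(np.abs(np.linalg.eigvals(M)) >= 1.0):
        return False
    P = solve_discrete_lyapunov(M, np.eye(n))
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > tol))

A = np.array([[-3.0, 1.0], [0.0, -2.0]])     # eigenvalues -3 and -2
print(in_disk(A, center=-2.5, radius=1.0))   # both inside D(-2.5, 1)
print(in_disk(A, center=-2.5, radius=0.4))   # -3 lies outside D(-2.5, 0.4)
```

The perturbation analysis in the paper asks how large an uncertainty ΔA can be before this Lyapunov certificate fails for A + ΔA.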
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development of a four-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes, with either harmonic or transient inputs, is reported. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch-stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch-stirring transients, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting perturbation models from transients are demonstrated through these combined analytical and experimental studies.
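A minimal sketch of identifying a linear perturbation model from transient response data, using least squares on a toy first-order discrete system rather than the rotor flapping model (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# "Measured" transient response of an unknown first-order discrete system
a_true, b_true = 0.9, 0.5
u = rng.normal(size=200)                       # transient excitation
x = np.zeros(201)
for k in range(200):
    x[k + 1] = a_true * x[k] + b_true * u[k]
x_meas = x + rng.normal(scale=0.01, size=201)  # small measurement noise

# Least-squares identification of the perturbation-model parameters:
# x[k+1] ≈ a*x[k] + b*u[k], solved as an overdetermined linear system
A = np.column_stack([x_meas[:-1], u])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, x_meas[1:], rcond=None)
print(a_hat, b_hat)  # close to the true values 0.9 and 0.5
```

Once identified from one transient, the model (a_hat, b_hat) can be used to predict the response to different inputs, mirroring the cross-validation strategy the abstract describes.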
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
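The contrast between focalization (maximizing over the uncertain environment) and marginalization (integrating it out) can be sketched with a one-datum toy model; the forward model, grids and noise levels below are illustrative assumptions, not the ocean-acoustic problem.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward model: one datum depends on the source range r (wanted) and
# an uncertain environmental parameter c (nuisance), e.g. a travel time r/c
def forward(r, c):
    return r / c

r_true, c_true = 3.0, 1.50              # km and km/s, purely illustrative
d_obs = forward(r_true, c_true) + rng.normal(scale=0.01)
sigma = 0.02                            # assumed data error

r_grid = np.linspace(2.0, 4.0, 201)
c_grid = np.linspace(1.45, 1.55, 101)   # prior bounds on the environment
R, C = np.meshgrid(r_grid, c_grid, indexing="ij")
post = np.exp(-0.5 * ((forward(R, C) - d_obs) / sigma) ** 2)  # flat priors

# Focalization: maximize the PPD jointly over source and environment
r_foc = r_grid[np.argmax(post.max(axis=1))]
# Marginalization: integrate the nuisance parameter out
marginal = post.sum(axis=1)
r_marg = r_grid[np.argmax(marginal)]
print(r_foc, r_marg)  # both near the true range of 3.0
```

The marginal distribution over r also directly yields track uncertainties (credibility intervals), which the point estimate from focalization does not provide.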
Iqbal, Muhammad; Rehan, Muhammad; Hong, Keum-Shik
2018-01-01
This paper addresses the dynamical modeling, behavior analysis, and synchronization of a network of four different FitzHugh–Nagumo (FHN) neurons with unknown parameters, linked in a ring configuration under direction-dependent coupling. The main purpose is to investigate a robust adaptive control law for the synchronization of uncertain and perturbed neurons communicating in a medium of bidirectional coupling. The neurons are assumed to be different and interconnected in a ring structure. The strength of the gap junctions is taken to be different for each link in the network, owing to the properties of the inter-neuronal coupling medium. A robust adaptive control mechanism based on Lyapunov stability analysis is employed, and theoretical criteria are derived to realize the synchronization of the four-neuron ring network with unknown parameters under direction-dependent coupling and disturbances. To the best of our knowledge, the proposed scheme, for synchronization of dissimilar neurons under external electrical stimuli, coupled in a ring communication topology, with all parameters unknown and subject to a directional coupling medium and perturbations, is addressed here for the first time. Simulation results are provided to demonstrate the efficacy of the proposed strategy. PMID:29535622
Robust autoassociative memory with coupled networks of Kuramoto-type oscillators
NASA Astrophysics Data System (ADS)
Heger, Daniel; Krischer, Katharina
2016-08-01
Uncertain recognition success, unfavorable scaling of connection complexity, and dependence on complex external input impair the usefulness of current oscillatory neural networks for pattern recognition or restrict technical realizations to small networks. We propose a network architecture of coupled oscillators for pattern recognition which shows none of the mentioned flaws. Furthermore, we illustrate the recognition process with simulation results and analyze the dynamics analytically: possible output patterns are isolated attractors of the system. Additionally, simple criteria for recognition success are derived from a lower bound on the basins of attraction.
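An autoassociative Kuramoto-type network can be sketched with Hebbian couplings storing a single binary pattern. This toy (identical oscillators, one stored pattern, all sizes and noise levels assumed) illustrates the general attractor idea, not the authors' specific architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Store one binary pattern xi in Hebbian couplings J_ij = xi_i * xi_j / N
N = 16
xi = rng.choice([-1.0, 1.0], size=N)
J = np.outer(xi, xi) / N

# Start from a corrupted probe: 3 bits flipped, plus phase noise
probe = xi.copy()
probe[:3] *= -1.0
theta = np.where(probe > 0, 0.0, np.pi) + rng.normal(scale=0.2, size=N)

# Identical oscillators, so only the coupling term drives the phases:
# theta_i' = sum_j J_ij * sin(theta_j - theta_i)
dt = 0.05
for _ in range(2000):
    theta += dt * (J * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

# Overlap |m| with the stored pattern: 1 means perfect recall
overlap = np.abs(np.mean(xi * np.exp(1j * theta)))
print(overlap)  # close to 1: the corrupted bits are pulled back into phase
```

The dynamics is a gradient flow that maximizes the pattern overlap, so the stored pattern (up to a global phase and sign) is an isolated attractor and the corrupted probe falls into its basin.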
UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS
While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...
NASA Astrophysics Data System (ADS)
Simpson, M. J.; Pisani, O.; Lin, L.; Lun, O.; Simpson, A.; Lajtha, K.; Nadelhoffer, K. J.
2015-12-01
The long-term fate of soil carbon reserves under global environmental change remains uncertain. Shifts in moisture, altered nutrient cycles, species composition, or rising temperatures may alter the proportions of above- and belowground biomass entering soil. However, it is unclear how long-term changes in plant inputs may alter the composition of soil organic matter (SOM) and soil carbon storage. Advanced molecular techniques were used to assess SOM composition in mineral soil horizons (0-10 cm) after 20 years of Detrital Input and Removal Treatment (DIRT) at the Harvard Forest. SOM biomarkers (solvent extraction, base hydrolysis and copper(II) oxide oxidation) and both solid-state and solution-state nuclear magnetic resonance (NMR) spectroscopy were used to identify changes in SOM composition and stage of degradation. Microbial activity and community composition were assessed using phospholipid fatty acid (PLFA) analysis. Doubling aboveground litter inputs decreased soil carbon content, increased the degradation of labile SOM and enhanced the sequestration of aliphatic compounds in soil. The exclusion of belowground inputs (No roots and No inputs) resulted in a decrease in root-derived components and enhanced the degradation of leaf-derived aliphatic structures (cutin). Cutin-derived SOM has been hypothesized to be recalcitrant, but our results show that even this complex biopolymer is susceptible to degradation when the inputs entering soil are altered. The PLFA data indicate that changes in soil microbial community structure favored the accelerated processing of specific SOM components under litter manipulation. These results collectively reveal that the quantity and quality of plant litter inputs alter the molecular-level composition of SOM and, in some cases, enhance the degradation of recalcitrant SOM. Our study also suggests that increased litterfall is unlikely to enhance soil carbon storage over the long term in temperate forests.
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
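The idea of combining an analytically tractable bounding set with conditional sampling can be sketched in two dimensions; the failure criterion and the box bound below are illustrative assumptions, not the authors' bounding-set constructions.

```python
import numpy as np

rng = np.random.default_rng(4)

def failed(x1, x2):
    # Illustrative failure criterion on two uniform(0,1) parameters
    return x1 * x2 > 0.9

# Failure requires both x1 > 0.9 and x2 > 0.9, so the box B = [0.9, 1]^2
# bounds the failure set, and its probability is known analytically:
p_box = 0.1 * 0.1

# Conditional sampling: draw only inside the bounding box, so every
# sample is "useful" instead of one in ~200 for crude Monte Carlo
n = 100_000
x1 = rng.uniform(0.9, 1.0, n)
x2 = rng.uniform(0.9, 1.0, n)
p_fail = p_box * failed(x1, x2).mean()
print(p_fail)  # exact value is 0.1 - 0.9*ln(10/9) ≈ 5.18e-3
```

Because roughly half of the conditional samples fail (versus about 0.5% of unconditional ones), the estimator variance for a fixed sample budget drops dramatically, which is the computational saving the abstract describes.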
NASA Astrophysics Data System (ADS)
Ahmadian, A.; Ismail, F.; Salahshour, S.; Baleanu, D.; Ghaemi, F.
2017-12-01
The analysis of the behaviors of physical phenomena is important for discovering significant features of the character and structure of mathematical models. Frequently, the unknown parameters involved in the models are assumed to be unvarying over time. In reality, some of them are uncertain and implicitly depend on several factors. In this study, to account for such uncertainty in the variables of the models, they are characterized using the fuzzy notion. We propose a new model based on fractional calculus to deal with the Kelvin-Voigt (KV) equation and a non-Newtonian fluid behavior model with fuzzy parameters. A new and accurate numerical algorithm, using a spectral tau technique based on generalized fractional Legendre polynomials (GFLPs), is developed to solve these problems under uncertainty. Numerical simulations are carried out, and the analysis of the results highlights the significant features of the new technique in comparison with previous findings. A detailed error analysis is also carried out and discussed.
Application of TREECS Modeling System to Strontium-90 for Borschi Watershed near Chernobyl, Ukraine.
Johnson, Billy E; Dortch, Mark S
2014-05-01
The Training Range Environmental Evaluation and Characterization System (TREECS™) (http://el.erdc.usace.army.mil/treecs/) is being developed by the U.S. Army Engineer Research and Development Center (ERDC) for the U.S. Army to forecast the fate of munitions constituents (MC) (such as high explosives (HE) and metals) found on firing/training ranges, as well as those subsequently transported to surface water and groundwater. The overall purpose of TREECS™ is to provide environmental specialists with tools to assess the potential for MC migration into surface water and groundwater systems and to assess range management strategies to ensure protection of human health and the environment. The multimedia fate/transport models within TREECS™ are mathematical models of reduced form (e.g., reduced dimensionality) that allow rapid application with fewer input data requirements than more complicated models. Although TREECS™ was developed for the fate of MC from military ranges, it has general applicability to many other situations requiring prediction of contaminant (including radionuclide) fate in multi-media environmental systems. TREECS™ was applied to the Borschi watershed near the Chernobyl Nuclear Power Plant, Ukraine. At this site, TREECS™ demonstrated its use as a modeling tool to predict the fate of strontium 90 ((90)Sr). The most sensitive and uncertain input for this application was the soil-water partitioning distribution coefficient (Kd) for (90)Sr. The TREECS™ soil model provided reasonable estimates of the surface water export flux of (90)Sr from the Borschi watershed when using a Kd for (90)Sr of 200 L/kg. The computed export for the year 2000 was 0.18% of the watershed inventory of (90)Sr compared to the estimated export flux of 0.14% based on field data collected during 1999-2001. The model indicated that assumptions regarding the form of the inventory, whether dissolved or in solid phase form, did not appreciably affect export rates. 
Also, the percentage of non-exchangeable adsorbed (90)Sr, which is uncertain and affects the amount of (90)Sr available for export, was fixed at 20% based on field data measurements. A Monte Carlo uncertainty analysis was conducted treating Kd as an uncertain input variable with a range of 100-300 L/kg. This analysis resulted in a range of 0.13-0.27% of inventory exported to surface water compared to 0.14% based on measured field data. Based on this model application, it was concluded that the export of (90)Sr from the Borschi watershed to surface water is predominantly a result of soil pore water containing dissolved (90)Sr being diverted to surface waters that eventually flow out of the watershed. The percentage of non-exchangeable adsorbed (90)Sr and the soil-water Kd are the two most sensitive and uncertain factors affecting the amount of export. The 200-year projections of the model showed an exponential decline in (90)Sr export fluxes from the watershed that should drop by a factor of 10 by the year 2100. This presentation will focus on TREECS capabilities and the case study done for the Borschi Watershed. Published by Elsevier Ltd.
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the means and coefficients of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided the coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
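A minimal sketch of the two-point (Rosenblueth-type) estimate described above, assuming symmetric, uncorrelated inputs; the toy linear "aquifer" model and all parameter values are illustrative, not from the paper:

```python
import itertools
import statistics

def two_point_estimate(model, means, cvs):
    """Rosenblueth two-point estimate (symmetric, uncorrelated inputs).

    Evaluates the deterministic model at every combination of mean +/- one
    standard deviation (2**n points) and returns the mean and coefficient
    of variation of the output. Only means and coefficients of variation
    of the uncertain inputs are required, as the abstract notes.
    """
    sigmas = [m * cv for m, cv in zip(means, cvs)]
    outputs = [
        model([m + s * sgn for m, s, sgn in zip(means, sigmas, signs)])
        for signs in itertools.product((-1.0, 1.0), repeat=len(means))
    ]
    mu = statistics.fmean(outputs)
    sd = statistics.pstdev(outputs)
    return mu, sd / mu if mu else float("inf")

# Toy "aquifer" model: head is linear in storage coefficient and conductivity.
mu, cv = two_point_estimate(lambda p: 2.0 * p[0] + 3.0 * p[1], [1.0, 2.0], [0.1, 0.05])
```

For this linear model the estimated mean (8.0) is exact, consistent with the abstract's finding that the method is valid for linear problems with small coefficients of variation; the cost grows as 2**n, which is why fewer than about eight uncertain variables keeps it cheaper than Monte Carlo.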
Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang
2017-08-28
Specific emitter identification plays an important role in contemporary military affairs. However, most existing specific emitter identification methods do not account for the processing of uncertain information. Therefore, this paper proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which is able to deal with uncertain information in the process of specific emitter identification. In this approach, each radar generates a group of evidence based on the information it obtains, and the main task is to fuse the multiple groups of evidence to reach a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between bodies of evidence, and a quantum mechanical approach, which is based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and reach a reasonable recognition result.
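The fusion step can be illustrated with Dempster's rule of combination, the core operation of Dempster-Shafer evidence theory (a generic sketch; the paper's correlation-coefficient and quantum-mechanical weightings are not reproduced here):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination.

    m1, m2: basic probability assignments as dicts mapping frozenset focal
    elements to mass. Mass assigned to conflicting (empty-intersection)
    pairs is discarded and the remainder renormalized.
    """
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict  # normalization constant
    return {h: w / k for h, w in combined.items()}

# Two radars reporting evidence over hypothetical emitter types {A, B}.
A, B = frozenset("A"), frozenset("B")
m = dempster_combine({A: 0.8, B: 0.2}, {A: 0.6, B: 0.4})
```

Here the two bodies of evidence agree on emitter A, so the fused mass for A (6/7 ≈ 0.857) exceeds either input belief; in the paper this combination is applied recursively across radars and time steps.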
On the formulation of a minimal uncertainty model for robust control with structured uncertainty
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1991-01-01
In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. 
The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo; Aziz, N.
2017-11-01
In this paper, a sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effects of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, consequently, the complexity of the model. It shows that, of the six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
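A two-level factorial main-effects calculation of the kind used for such screening can be sketched as follows (the toy furnace response and its coefficients are invented for illustration; a fractional design would evaluate only a chosen subset of these runs):

```python
import itertools

def main_effects(model, n_factors):
    """Main-effect estimates from a two-level full factorial design.

    Factors are coded -1/+1; the effect of factor i is the mean response
    at its high level minus the mean at its low level. 'model' stands in
    for one simulation run of a hypothetical process.
    """
    runs = list(itertools.product((-1, 1), repeat=n_factors))
    ys = [model(r) for r in runs]
    half = len(runs) // 2
    effects = []
    for i in range(n_factors):
        hi = sum(y for r, y in zip(runs, ys) if r[i] == 1)
        lo = sum(y for r, y in zip(runs, ys) if r[i] == -1)
        effects.append((hi - lo) / half)
    return effects

# Toy furnace response: strong factor-0 effect, weak factor-1 effect,
# plus one two-factor interaction that averages out of the main effects.
effects = main_effects(lambda x: 3.0 * x[0] + 0.1 * x[1] + x[0] * x[2], 3)
```

Factors whose main effect is near zero (here factor 2, whose influence is purely through an interaction) are candidates for elimination in the screening step the abstract describes.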
2017-05-01
ERDC/EL TR-17-7. Environmental Security Technology Certification Program (ESTCP): Evaluation of Uncertainty in Constituent Input...Environmental Security Technology Certification Program (ESTCP), ERDC/EL TR-17-7, May 2017. Evaluation of Uncertainty in Constituent Input Parameters...The Training Range Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. A novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
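Time-correlated input-power noise of the kind described can be modeled as a discretized Ornstein-Uhlenbeck (AR(1)) process, sketched below; the correlation length and noise scale are illustrative assumptions, not values from the study:

```python
import math
import random

def ar1_series(n, dt, tau, sigma, seed=0):
    """Discrete Ornstein-Uhlenbeck (AR(1)) fluctuations with correlation
    length tau and stationary standard deviation sigma, e.g. a noise
    trajectory added to one ensemble member's mechanical input power
    during the forecast step."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)  # lag-1 autocorrelation
    x = [rng.gauss(0.0, sigma)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + math.sqrt(1.0 - phi * phi) * rng.gauss(0.0, sigma))
    return x

# One noise trajectory: 0.1 s steps, 1 s correlation length, sd 2.0 (toy values).
noise = ar1_series(20_000, dt=0.1, tau=1.0, sigma=2.0)
```

Inferring the correlation length tau from PMU time series, as the abstract proposes, amounts to estimating the lag-1 autocorrelation phi = exp(-dt/tau) of such a series.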
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. 
Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
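One step of the Bayesian forecasting idea, updating the posterior over a trait parameter on a discrete grid as each new performance measurement arrives, can be sketched as follows; the linear stand-in for the two-process model and all numbers are hypothetical:

```python
import math

def grid_posterior_update(prior, grid, predict, observation, noise_sd):
    """One Bayesian-forecasting step: reweight a grid of candidate trait
    parameters by the Gaussian likelihood of the new performance sample.
    'predict(theta)' is a hypothetical stand-in for the two-process model's
    performance prediction given trait parameter theta."""
    like = [
        math.exp(-0.5 * ((observation - predict(t)) / noise_sd) ** 2)
        for t in grid
    ]
    post = [p * l for p, l in zip(prior, like)]
    z = sum(post)  # normalize so the posterior sums to one
    return [p / z for p in post]

# Candidate homeostatic build-up rates; the observation was generated by
# the (unknown to the model) true rate 0.6.
grid = [0.2, 0.4, 0.6, 0.8]
prior = [0.25] * 4
post = grid_posterior_update(prior, grid, lambda r: 10.0 * r, 6.0, noise_sd=1.0)
```

Repeating this update as each new measurement arrives is what drives the convergence to subject-specific parameters, and the shrinking 95% confidence intervals, reported in the abstract.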
NASA Langley's Approach to the Sandia's Structural Dynamics Challenge Problem
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kenny, Sean P.; Crespo, Luis G.; Elliott, Kenny B.
2007-01-01
The objective of this challenge is to develop a data-based probabilistic model of uncertainty to predict the behavior of subsystems (payloads) by themselves and while coupled to a primary (target) system. Although this type of analysis is routinely performed and is representative of issues faced in real-world system design and integration, there are still several key technical challenges that must be addressed when analyzing uncertain interconnected systems. For example, one key technical challenge is the limited data available on target configurations. Moreover, it is typical to have multiple data sets from experiments conducted at the subsystem level, but sample sizes are often not sufficient to compute high-confidence statistics. In this challenge problem, additional constraints are placed as ground rules for the participants. One such rule is that mathematical models of the subsystem are limited to linear approximations of the nonlinear physics of the problem at hand. Also, participants are constrained to use these models and the multiple data sets to make predictions about the target system response under completely different input conditions. Our approach initially involved screening several different methods; three of those considered are presented herein. The first is based on the transformation of the modal data to an orthogonal space where the mean and covariance of the data are matched by the model. The other two approaches work in physical space, where the uncertain parameter set consists of masses, stiffnesses, and damping coefficients; one matches confidence intervals of low-order moments of the statistics via optimization, while the second uses a kernel density estimation approach. The paper touches on all the approaches, lessons learned, validation metrics and their comparison, data quantity restrictions, and the assumptions/limitations of each approach. 
Keywords: Probabilistic modeling, model validation, uncertainty quantification, kernel density
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and the minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provide installation instructions as well as a description and user guide for the uncertainty engine.
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
USDA-ARS?s Scientific Manuscript database
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
On Non-Linear Sensitivity of Marine Biological Models to Parameter Variations
2007-01-01
M.B., 2002. Understanding uncertain environmental systems. In: Grasman, J., van Straten, G. (Eds.), Predictability and Nonlinear Modelling in Natural...model evaluations to compute sensitivity indices. Comput. Phys. Commun. 145, 280-297. Saltelli, A., Andres, T.H., Homma, T., 1993. Some new techniques
Stability margin of linear systems with parameters described by fuzzy numbers.
Husek, Petr
2011-10-01
This paper deals with linear systems whose uncertain parameters are described by fuzzy numbers. The problem of determining the stability margin of such systems, with linear affine dependence of the coefficients of the characteristic polynomial on the system parameters, is studied. The fuzzy numbers describing the system parameters are allowed to have arbitrary nonsymmetric membership functions. An elegant solution, graphical in nature, based on a generalization of the Tsypkin-Polyak plot is presented. The advantage of the presented approach over the classical robust concept is demonstrated on control of a Fiat Dedra engine model and of a quarter-car suspension model.
Airborne measurements of organic bromine compounds in the Pacific tropical tropopause layer
Navarro, Maria A.; Atlas, Elliot L.; Saiz-Lopez, Alfonso; Rodriguez-Lloveras, Xavier; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Filus, Michal; Harris, Neil R. P.; Meneguz, Elena; Ashfold, Matthew J.; Manning, Alistair J.; Cuevas, Carlos A.; Schauffler, Sue M.; Donets, Valeria
2015-01-01
Very short-lived brominated substances (VSLBr) are an important source of stratospheric bromine, an effective ozone destruction catalyst. However, the accurate estimation of the organic and inorganic partitioning of bromine and the input to the stratosphere remains uncertain. Here, we report near-tropopause measurements of organic brominated substances found over the tropical Pacific during the NASA Airborne Tropical Tropopause Experiment campaigns. We combine aircraft observations and a chemistry-climate model to quantify the total bromine loading injected into the stratosphere. Surprisingly, despite differences in vertical transport between the Eastern and Western Pacific, VSLBr (organic + inorganic) contribute approximately similar amounts of bromine [∼6 (4-9) parts per trillion] to the stratospheric input at the tropical tropopause. These levels of bromine cause substantial ozone depletion in the lower stratosphere, and any increases in future abundances (e.g., as a result of aquaculture) will lead to larger depletions. PMID:26504212
Gao, Fangzheng; Yuan, Ye; Wu, Yuqiang
2016-09-01
This paper studies the problem of finite-time stabilization by state feedback for a class of uncertain nonholonomic systems in feedforward-like form subject to input saturation. Under a weaker homogeneous condition on system growth, a saturated finite-time control scheme is developed by exploiting the adding-a-power-integrator method, the homogeneous domination approach, and the nested saturation technique. Together with a novel switching control strategy, the designed saturated controller guarantees that the states of the closed-loop system are regulated to zero in finite time without violation of the constraint. As an application of the proposed theoretical results, the problem of saturated finite-time control of a vertical wheel on a rotating table is solved. Simulation results are given to demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
Li, Dong-Juan; Li, Da-Peng
2017-09-14
In this paper, an adaptive output feedback control is developed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multi-output nonaffine nonlinear systems in the nested lower triangular form. Furthermore, the unknown dead-zone inputs are nonlinearly embedded into the systems. These properties make it very difficult and challenging to construct a stable controller. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is then obtained. Based on the transformed model, the implicit function theorem is used to establish the existence of the ideal controllers, and approximators are employed to approximate them. By using the mean value theorem, the nonaffine functions of the systems are converted into an affine structure, although nonaffine terms still remain. Adaptation auxiliary terms are designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all the signals in the closed-loop system is ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is verified by a simulation study.
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or input as the most influential and 2) how consistently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
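As a rough illustration of what a first-order Sobol' index measures, here is a minimal Monte Carlo estimator using the standard pick-and-freeze (Saltelli-type) scheme. The toy model and sample sizes are our own, not the SAC-SMA setup from the study:

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=20000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices
    S_i = Var(E[Y|X_i]) / Var(Y) via the pick-and-freeze scheme.
    Inputs are assumed independent Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_params))
    B = rng.uniform(size=(n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # vary only factor X_i
        yABi = model(ABi)
        indices.append(np.mean(yB * (yABi - yA)) / var_y)
    return np.array(indices)

# Toy model Y = X1 + 0.5*X2: analytically S1 = 0.8, S2 = 0.2
toy = lambda X: X[:, 0] + 0.5 * X[:, 1]
S = sobol_first_order(toy, 2)
```

FAST arrives at comparable main-effect indices by driving each input with a distinct frequency and reading off the output spectrum, which is why the two methods can be cross-checked as in the study.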
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency, and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across variations of the input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
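A minimal sketch of the DistEn calculation (embed the series, take pairwise Chebyshev distances, bin them into an M-bin histogram, and report the normalized Shannon entropy of that histogram). Parameter defaults are illustrative, not recommendations:

```python
import numpy as np

def dist_en(x, m=2, M=64):
    """Distribution entropy sketch: normalized Shannon entropy of the
    empirical distribution of inter-vector Chebyshev distances."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.array([x[i:i + m] for i in range(n)])       # embedding vectors
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev
    dists = d[np.triu_indices(n, k=1)]                   # distinct pairs only
    p, _ = np.histogram(dists, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)          # normalized to [0, 1]
```

Note that, unlike ApEn and SampEn, no tolerance r appears: the full distance distribution is used, which is part of why the measure is reported to be less sensitive to its input parameters.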
Credibility of Uncertainty Analyses for 131-I Pathway Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, F O.; Anspaugh, L. R.; Apostoaei, A. I.
2004-05-01
We would like to make your readers aware of numerous concerns we have with respect to the paper by A. A. Simpkins and D. M. Hamby on "Uncertainty in transport factors used to calculate historic dose from 131I releases at the Savannah River Site." The paper by Simpkins and Hamby concludes by saying their uncertainty analysis would add credibility to current dose reconstruction efforts of public exposures to historic releases of 131I from the operations at the Savannah River Site, yet we have found their paper to be afflicted with numerous errors in assumptions and methodology, which in turn lead to grossly misleading conclusions. Perhaps the most egregious errors are their conclusions, which state that: a. the vegetable pathway, not the ingestion of fresh milk, was the main contributor to thyroid dose for exposure to 131I (even though dietary intake of vegetables was less in the past than at present), and b. the probability distribution assigned to the fraction of iodine released in the elemental form (Uniform 0, 0.6) is responsible for 64.6% of the total uncertainty in thyroid dose, given a unit release of 131I to the atmosphere. The assumptions used in the paper by Simpkins and Hamby lead to a large overestimate of the contamination of vegetables by airborne 131I. The interception by leafy and non-leafy vegetables of freshly deposited 131I is known to be highly dependent on the growth form of the crop and the standing crop biomass of leafy material. Unrealistic assumptions are made for losses of 131I from food processing, preparation, and storage prior to human consumption. These assumptions tend to bias their conclusions toward an overestimate of the amount of 131I retained by vegetation prior to consumption. For example, the generic assumption of a 6-d hold-up time is used for the loss from radioactive decay for the time period from harvest to human consumption of fruits, vegetables, and grains.
We anticipate hold-up times of many weeks, if not months, between harvest and consumption for most grains and non-leafy forms of vegetation. The combined assumptions made by Simpkins and Hamby about the fraction of fresh deposition intercepted by vegetation, and the rather short hold-up time for most vegetables consumed, probably caused the authors to conclude that the consumption of 131I-contaminated vegetables was more important to dose than was the consumption of fresh sources of milk. This conclusion is surprising, given that the consumption rate assumed for whole milk was rather large and that the value of the milk transfer coefficient was also higher and more uncertain than most distributions reported in the literature. In our experience, the parameters contributing most to the uncertainty in dose for the 131I air-deposition-vegetation-milk-human-thyroid pathway are the deposition velocity for elemental iodine, the mass interception factor for pasture vegetation, the milk transfer coefficient, and the thyroid dose conversion factor. In none of our previous investigations has the consumption of fruits, vegetables, and grains been the dominant contributor to the thyroid dose (or the uncertainty in dose) when the individual also was engaged in the consumption of even moderate quantities of fresh milk. The results of the relative contribution of uncertain input parameters to the overall uncertainty in exposure are counterintuitive. We suspect that calculational errors may have occurred in their application of the software that was used to estimate the relative sensitivity for each uncertain input variable. Their claim that the milk transfer coefficient contributed only 4% to the total uncertainty in the aggregated transfer from release to dose, and that the uncertainty in the vegetation interception fraction contributed only 3.3%, despite relatively large uncertainties assigned to both of these variables, violates our sense of face validity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
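The probabilistic flavour of such an analysis can be illustrated with a toy Monte Carlo propagation through three parallel penetration routes, followed by a crude correlation-based sensitivity ranking. All distributions and numerical values below are invented for the sketch and are not taken from the report:

```python
import numpy as np

def monte_carlo_flux(n=10000, seed=1):
    """Toy uncertainty propagation: sample lognormal permeability
    coefficients for three parallel routes and sum their fluxes.
    All parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    kp_sc    = rng.lognormal(mean=-4.0, sigma=0.5, size=n)  # stratum corneum (cm/h)
    kp_sweat = rng.lognormal(mean=-6.0, sigma=1.0, size=n)  # sweat ducts
    kp_hair  = rng.lognormal(mean=-5.5, sigma=0.8, size=n)  # hair follicles
    conc = 10.0                                             # mg/cm^3, held fixed
    flux = (kp_sc + kp_sweat + kp_hair) * conc              # parallel routes add
    return flux, {"sc": kp_sc, "sweat": kp_sweat, "hair": kp_hair}

flux, params = monte_carlo_flux()
# Crude importance ranking: |correlation| of each input with the output
ranks = {name: abs(np.corrcoef(v, flux)[0, 1]) for name, v in params.items()}
```

The report's stepwise linear regression plays the same role as the correlation ranking here: identifying which uncertain inputs drive the spread of the simulated flux.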
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
Pressley, Joanna; Troyer, Todd W
2011-05-01
The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
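A minimal LIF simulation with a sinusoidally modulated mean input current, of the kind examined above, can be written in a few lines (Euler integration; all parameter values are illustrative):

```python
import numpy as np

def lif_spike_times(i_mean, i_amp=0.0, f_mod=10.0, t_max=5.0, dt=1e-4,
                    tau=0.02, v_th=1.0, v_reset=0.0, noise=0.1, seed=0):
    """Leaky integrate-and-fire neuron driven by a sinusoidally
    modulated mean current plus white noise; returns spike times."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for k in range(int(t_max / dt)):
        t = k * dt
        drive = i_mean + i_amp * np.sin(2 * np.pi * f_mod * t)
        v += dt / tau * (-v + drive) + noise * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:               # threshold crossing: emit spike, reset
            spikes.append(t)
            v = v_reset
    return np.array(spikes)

# Modulated example: mean 1.2, 5 Hz modulation of amplitude 0.3
modulated = lif_spike_times(1.2, i_amp=0.3, f_mod=5.0)
```

Sweeping `f_mod` and histogramming spike phases against the modulation would expose the low-pass mean-modulation response discussed in the abstract; modulating the `noise` term instead probes the variance-modulation response.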
Generalized compliant motion primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor)
1994-01-01
This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which together produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include use of a single general motion primitive at a remote site to permit the shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to the single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters, as defined by the Intergovernmental Panel on Climate Change (IPCC). This indicates that reducing the uncertainty of the input parameters, rather than adding multiple phases or input parameters, will make the model more accurate.
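The translation exercise can be sketched as mass-weighted averaging of (L0, k) feeding a LandGEM-style single-phase FOD model; the waste-stream fractions and parameter values below are illustrative, not the paper's data:

```python
import numpy as np

def weighted_params(fractions, L0s, ks):
    """Mass-weighted single-phase (L0, k) translated from a
    multiphase waste composition (values are illustrative)."""
    f = np.asarray(fractions, dtype=float)
    f = f / f.sum()
    return float(f @ np.asarray(L0s)), float(f @ np.asarray(ks))

def fod_methane(tonnage_by_year, L0, k, years=50):
    """Single-phase first-order-decay CH4 generation: each year's
    accepted waste (Mg) decays exponentially; L0 in m^3 CH4/Mg,
    k in 1/yr. Returns annual generation in m^3/yr."""
    t = np.arange(years)
    q = np.zeros(years)
    for t_dep, mass in enumerate(tonnage_by_year):
        age = t - t_dep
        active = age >= 0
        q[active] += mass * L0 * k * np.exp(-k * age[active])
    return q

# Two hypothetical streams: 60% fast-degrading, 40% slow-degrading
L0_avg, k_avg = weighted_params([0.6, 0.4], [100.0, 50.0], [0.06, 0.02])
```

Summed over a long horizon, the annual generation approaches mass × L0, which is the sanity check behind comparing cumulative totals between single-phase and multiphase runs.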
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; using the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations.
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
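The per-event verification described above (checking whether the observation falls within the ensemble spread, then aggregating mean-prediction errors into MAE and RMSE) can be sketched as follows; the arrival times are made-up examples, not events from the study:

```python
import numpy as np

def ensemble_arrival_stats(predicted_hours, observed_hours):
    """Per-event verification: signed error of the ensemble-mean
    prediction, and whether the observation lies inside the
    ensemble spread (times in hours from a common epoch)."""
    p = np.asarray(predicted_hours, dtype=float)
    err = float(np.mean(p) - observed_hours)
    within = bool(p.min() <= observed_hours <= p.max())
    return err, within

# Two hypothetical events: ensemble members vs. observed arrival
errs = []
for pred, obs in [([40, 44, 52], 47.0), ([30, 33, 36], 40.0)]:
    e, hit = ensemble_arrival_stats(pred, obs)
    errs.append(e)
mae = float(np.mean(np.abs(errs)))
rmse = float(np.sqrt(np.mean(np.square(errs))))
```

Tracking both the coverage fraction and the MAE/RMSE separates two failure modes: an ensemble whose spread is too narrow (poor coverage) versus one whose central tendency is biased (large mean error).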
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanical compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena.
We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analyses focus on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of the number, type, and location of data.
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor is presented for use in a control system that is adapted to respond to a change in a physical parameter whenever the parameter crosses a predetermined threshold value in a selected direction of increasing or decreasing values and the change is sustained for a selected discrete time interval. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
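In software terms, the suppressor behaves like a debounce filter: the output asserts only when the thresholded input has been sustained longer than the selected suppression time. A minimal sampled-time sketch (the sampling formulation and names are our own, not the patent's circuit):

```python
def suppress_transients(samples, dt, hold_time):
    """Debounce-style transient suppression: `samples` is a sequence of
    booleans (True = parameter past threshold), sampled every `dt`
    seconds. Output goes True only after the input has stayed True
    for longer than `hold_time`."""
    out, elapsed = [], 0.0
    for level in samples:
        elapsed = elapsed + dt if level else 0.0   # reset timer on dropout
        out.append(elapsed > hold_time)
    return out
```

A brief glitch shorter than `hold_time` never reaches the output, which is the point of the time-delay circuit in the patent: the control relay only energizes on non-transient signals.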
While there is a high potential for exposure of humans and ecosystems to chemicals released from hazardous waste sites, the degree to which this potential is realized is often uncertain. Conceptually divided among parameter, model, and modeler uncertainties imparted during simula...
Adaptive proximate time-optimal servomechanisms - Continuous time case
NASA Technical Reports Server (NTRS)
Workman, M. L.; Kosut, R. L.; Franklin, G. F.
1987-01-01
A Proximate Time-Optimal Servo (PTOS) is developed, along with conditions for its stability. An algorithm is proposed for adapting the PTOS (APTOS) to improve performance in the face of uncertain plant parameters. Under ideal conditions APTOS is shown to be uniformly asymptotically stable. Simulation results demonstrate the predicted performance.
NASA Astrophysics Data System (ADS)
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. 
The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic parameters of the system.
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capote, R.; Herman, M.
An in-premise model for Legionella exposure during showering events.
Schoen, Mary E; Ashbolt, Nicholas J
2011-11-15
An exposure model was constructed to predict the critical Legionella densities in an engineered water system that result in infection from inhalation of aerosols containing the pathogen while showering. The model predicted the Legionella densities in the shower air, water and in-premise plumbing biofilm that might result in a deposited dose of Legionella in the alveolar region of the lungs associated with infection for a routine showering event. Processes modeled included the detachment of biofilm-associated Legionella from the in-premise plumbing biofilm during a showering event, the partitioning of the pathogen from the shower water to the air, and the inhalation and deposition of particles in the lungs. The range of predicted critical Legionella densities in the air and water was compared to the available literature. The predictions were generally within the limited set of observations for air and water, with the exception of Legionella density within in-premise plumbing biofilms, for which there remains a lack of observations for comparison. Sensitivity analysis of the predicted results to possible changes in the uncertain input parameters identified the target deposited dose associated with infections, the pathogen air-water partitioning coefficient, and the quantity of detached biofilm from in-premise plumbing surfaces as important parameters for additional data collection. In addition, the critical density of free-living protozoan hosts in the biofilm required to propagate the infectious Legionella was estimated. Together, this evidence can help to identify critical conditions that might lead to infection derived from pathogens within the biofilms of any plumbing system from which humans may be exposed to aerosols. Published by Elsevier Ltd.
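The dose pathway described above (water density → air partitioning → inhalation → alveolar deposition) can be sketched as a back-calculation for the critical water density. All parameter values below are hypothetical placeholders for illustration, not the study's calibrated inputs.

```python
# Back-calculate the critical Legionella density in shower water that yields
# a target deposited alveolar dose. Every numeric value here is an assumed
# placeholder, not a value from the paper.

def critical_water_density(target_dose_cfu, partition_coeff,
                           breathing_rate_m3_per_min, shower_min,
                           alveolar_deposition_frac):
    """Solve dose = C_water * P_aw * V_inhaled * f_dep for C_water (CFU/L)."""
    inhaled_air_m3 = breathing_rate_m3_per_min * shower_min
    # C_air (CFU/m^3) = C_water (CFU/L) * 1000 (L/m^3) * partition coefficient
    dose_per_cfu_per_l = (1000.0 * partition_coeff * inhaled_air_m3
                          * alveolar_deposition_frac)
    return target_dose_cfu / dose_per_cfu_per_l

c_crit = critical_water_density(target_dose_cfu=1.0, partition_coeff=1e-5,
                                breathing_rate_m3_per_min=0.013,
                                shower_min=8.0, alveolar_deposition_frac=0.23)
print(f"critical water density ~ {c_crit:.0f} CFU/L")
```

Running the chain forward from the returned density reproduces the target dose, which is a useful sanity check on the unit conversions.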
Esralew, Rachel A; Flint, Lorraine; Thorne, James H; Boynton, Ryan; Flint, Alan
2016-07-01
Climate-change adaptation planning for managed wetlands is challenging under uncertain futures when the impact of historic climate variability on wetland response is unquantified. We assessed vulnerability of Modoc National Wildlife Refuge (MNWR) through use of the Basin Characterization Model (BCM) landscape hydrology model and six global climate models, representing projected wetter and drier conditions. We further developed a conceptual model that provides greater value for water managers by incorporating the BCM outputs into a conceptual framework that links modeled parameters to refuge management outcomes. This framework was used to identify landscape hydrology parameters that reflect refuge sensitivity to changes in (1) climatic water deficit (CWD) and recharge, and (2) the magnitude, timing, and frequency of water inputs. BCM outputs were developed for 1981-2100 to assess changes and forecast the probability of experiencing wet and dry water year types that have historically resulted in challenging conditions for refuge habitat management. We used Yule's Q skill score to estimate the probability of modeled discharge that best represents historic water year types. CWD increased in all models across 72.3-100 % of the water supply basin by 2100. Earlier timing in discharge, greater cool season discharge, and less irrigation-season water supply were predicted by most models. Under the worst-case scenario, moderately dry years increased from 10-20 % to 40-60 % by 2100. MNWR could adapt by storing additional water during the cool season for later use and prioritizing irrigation of habitats during dry years.
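Yule's Q, used above to score how well modeled discharge reproduces historic water year types, is a simple odds-ratio skill score on a 2x2 contingency table. The counts below are invented for illustration.

```python
def yules_q(hits, false_alarms, misses, correct_negatives):
    """Yule's Q skill score for a 2x2 contingency table:
    Q = (ad - bc) / (ad + bc), ranging from -1 (perverse skill)
    through 0 (no skill) to +1 (perfect association)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return (a * d - b * c) / (a * d + b * c)

# Toy contingency table for "moderately dry year" classification
# (made-up counts, not the MNWR analysis).
q = yules_q(hits=12, false_alarms=3, misses=4, correct_negatives=21)
print(f"Yule's Q = {q:.3f}")
```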
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottom-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
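A mass-based CER of the kind described is often fit as a power law, cost = a * mass^b, by log-log least squares. The "historical" points below are invented for illustration; they are not Northrop Grumman's data or actual CER coefficients.

```python
import numpy as np

# Fit a power-law cost estimating relationship cost = a * mass^b via
# ordinary least squares in log-log space. Data are synthetic placeholders.
mass_kg = np.array([150.0, 300.0, 620.0, 900.0, 1500.0])
cost_musd = np.array([80.0, 140.0, 260.0, 330.0, 520.0])

b, log_a = np.polyfit(np.log(mass_kg), np.log(cost_musd), 1)
a = np.exp(log_a)

def cer_estimate(mass):
    """Most-probable cost (M$) for a given spacecraft mass (kg)."""
    return a * mass ** b

print(f"cost ~ {a:.1f} * mass^{b:.2f}")
```

An exponent b below 1 reflects the economy of scale that such CERs typically exhibit; a real CER would also carry an uncertainty range around the point estimate.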
Damage identification using inverse methods.
Friswell, Michael I
2007-02-15
This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.
Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX
2015-07-01
exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear...Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs
NASA Technical Reports Server (NTRS)
Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.
2004-01-01
A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam free form fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
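Sixteen runs over five two-level factors is consistent with a 2^(5-1) half-fraction factorial; the generator choice below (E = ABCD) is an assumption about the unstated design, and the factor names simply follow the abstract.

```python
from itertools import product

# Sketch of a 16-run 2^(5-1) fractional factorial for the five inputs,
# aliasing the fifth factor to the four-way interaction (E = ABCD).
# The half-fraction and generator are assumptions, not the study's design.
factors = ["voltage", "current", "speed", "wire_feed", "focus"]
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d                  # defining relation I = ABCDE
    runs.append(dict(zip(factors, (a, b, c, d, e))))

print(len(runs), "runs; first run:", runs[0])
```

Coded levels (-1/+1) would then be mapped to the low/high physical settings of each process parameter.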
NASA Astrophysics Data System (ADS)
Nemirsky, Kristofer Kevin
In this thesis, the history and evolution of rotor aircraft with simulated annealing-based PID application were reviewed, and quadcopter dynamics were presented. The dynamics of a quadcopter were then modeled, analyzed, and linearized. A cascaded loop architecture with PID controllers was used to stabilize the plant dynamics, which was improved upon through the application of simulated annealing (SA). A Simulink model was developed to test the controllers and verify the functionality of the proposed control system design. In addition, the data that the Simulink model provided were compared with flight data to demonstrate the validity of the derived dynamics as a proper mathematical model of the true dynamics of the quadcopter system. Then, the SA-based global optimization procedure was applied to obtain optimized PID parameters. It was observed that the gains tuned through the SA algorithm produced a better performing PID controller than the original manually tuned one. Next, we investigated the uncertain dynamics of the quadcopter setup. After adding uncertainty to the gyroscopic effects associated with pitch-and-roll rate dynamics, the controllers were shown to be robust against the added uncertainty. A discussion follows to summarize SA-based PID controller design and performance outcomes. Lastly, future work on SA application to multi-input-multi-output (MIMO) systems is briefly discussed.
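The SA-based gain tuning can be sketched on a toy plant. The first-order plant x' = -x + u, the ISE cost, and the cooling schedule below are stand-ins for the thesis's quadcopter loops and tuning objective, not its actual model.

```python
import math
import random

# Simulated-annealing PID tuning sketch: minimize the integrated squared
# error (ISE) of a unit-step response of the toy plant x' = -x + u.
def step_cost(gains, dt=0.01, t_end=5.0):
    kp, ki, kd = gains
    x = integ = prev_err = cost = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        x += (-x + u) * dt              # Euler step of the plant
        cost += err * err * dt          # accumulate ISE
        prev_err = err
    return cost

random.seed(0)
cur = [1.0, 0.1, 0.01]                  # initial "manually tuned" gains
cur_cost = step_cost(cur)
best, best_cost = cur[:], cur_cost
temp = 1.0
for _ in range(400):
    cand = [max(0.0, g + random.gauss(0, 0.2)) for g in cur]
    c = step_cost(cand)
    # Accept better moves always, worse moves with Boltzmann probability.
    if c < cur_cost or random.random() < math.exp((cur_cost - c) / temp):
        cur, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand[:], c
    temp *= 0.99                        # geometric cooling schedule
print(f"tuned gains {[round(g, 3) for g in best]}, ISE {best_cost:.4f}")
```

The occasional acceptance of worse candidates is what lets SA escape local minima that greedy manual tuning can get stuck in.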
Response time correlations for platinum resistance thermometers in flowing fluids
NASA Technical Reports Server (NTRS)
Pandey, D. K.; Ash, R. L.
1985-01-01
The thermal response of two types of Platinum Resistance Thermometers (PRTs), which are being considered for use in the National Transonic Wind Tunnel Facility, was studied. Response time correlations for each PRT, in flowing water, oil and air, were established separately. A universal correlation, τ_WOA = 2.0 + 1264.9/h, for a Hy-Cal sensor (with a reference resistance of 100 ohm) was established within an error of 20%, while the universal correlation for the Rosemount sensor (with a reference resistance of 1000 ohm), τ_OA = 0.122 + 1105.6/h, was found with a maximum error of 30%. The correlation for the Rosemount sensor was based on air and oil data only, which is not sufficient to make the correlation applicable to every condition; additional data in different fluids are therefore needed. Also, the calculation of the parameter h was based on the available heat transfer correlations, whose accuracies are reported in the literature to be uncertain within 20-30%. Therefore, the universal response constant correlations established here for the Hy-Cal and Rosemount sensors are consistent with the uncertainty in the input data and are recommended for future use in flowing liquids and gases.
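The two reported correlations are easy to evaluate directly; τ is in seconds and h is the convective heat-transfer coefficient (the paper's units for h are assumed here, and the coefficient 1264.9 reflects the reconstructed decimal in the garbled original).

```python
# Evaluate the two universal PRT response-time correlations from the study.
def tau_hycal(h):
    """Hy-Cal 100-ohm PRT: tau_WOA = 2.0 + 1264.9/h (water, oil, air data)."""
    return 2.0 + 1264.9 / h

def tau_rosemount(h):
    """Rosemount 1000-ohm PRT: tau_OA = 0.122 + 1105.6/h (oil and air only)."""
    return 0.122 + 1105.6 / h

for h in (50.0, 500.0, 5000.0):
    print(f"h = {h:6.0f}: Hy-Cal {tau_hycal(h):8.2f} s, "
          f"Rosemount {tau_rosemount(h):8.2f} s")
```

As expected physically, both response times fall toward their asymptotic constants as the heat-transfer coefficient grows.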
Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.
2013-12-01
Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source for human exposures to potential carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
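The Monte Carlo step described above can be sketched by propagating uncertain inputs through a simple attenuation-factor relation, C_indoor = α · C_source. The lognormal medians and spreads below are placeholders, not the site-specific distributions from the study.

```python
import math
import random

# Monte Carlo sketch of the indoor-air concentration distribution.
# Both input distributions are assumed placeholders for illustration.
random.seed(42)

def sample_indoor_conc():
    c_source = random.lognormvariate(math.log(100.0), 0.5)  # ug/m^3 at source
    alpha = random.lognormvariate(math.log(1e-4), 1.0)      # attenuation factor
    return alpha * c_source

samples = sorted(sample_indoor_conc() for _ in range(10_000))
p50 = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median {p50:.4f} ug/m^3, 95th percentile {p95:.4f} ug/m^3")
```

A full analysis would replace the one-line attenuation model with the 3-D transport model and report the whole output distribution, but the sampling machinery is the same.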
Developing and applying metamodels of high resolution ...
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), "(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels." The goals of metamodeling include, but are not limited to, (1) developing functional or statistical relationships between a model's input and output variables for model analysis, interpretation, or information consumption by users' clients; (2) quantifying a model's sensitivity to alternative or uncertain forcing functions, initial conditions, or parameters; and (3) characterizing the model's response or state space. Using five existing models developed by the US Environmental Protection Agency, we generate a metamodeling database of the expected environmental and biological concentrations of 644 organic chemicals released into nine US rivers from wastewater treatment works (WTWs) assuming multiple loading rates and sizes of populations serviced. The chemicals of interest have log n-octanol/water partition coefficients (log KOW) ranging from 3 to 14, and the rivers of concern have mean annual discharges ranging from 1.09 to 3240 m3/s. Log linear regression models are derived to predict mean annual dissolved and total water concentrations and total sediment concentrations of chemicals of concern based on their log KOW, Henry's Law Constant, and WTW loading rate and on the mean annual discharges of the receiving rivers. Metamodels are also derived to predict mean annual chemical
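The log-linear metamodel form described above can be sketched with ordinary least squares. The synthetic "model runs" and the true coefficients below are invented stand-ins for the EPA models' outputs, chosen only to show that the regression recovers a known log-linear relationship.

```python
import numpy as np

# Fit a log-linear metamodel: log C = b0 + b1*logKow + b2*logH
#                                    + b3*log(load) + b4*log(Q) + noise.
rng = np.random.default_rng(1)
n = 200
log_kow = rng.uniform(3, 14, n)          # log n-octanol/water coefficient
log_h = rng.uniform(-8, -2, n)           # log Henry's Law constant (assumed)
log_load = rng.uniform(-2, 2, n)         # log WTW loading rate (assumed)
log_q = rng.uniform(0, 3.5, n)           # log10 discharge, ~1 to ~3240 m3/s

# Synthetic response with known coefficients plus noise (not EPA output).
log_c = (0.4 * log_kow - 0.1 * log_h + 1.0 * log_load
         - 1.0 * log_q + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), log_kow, log_h, log_load, log_q])
beta, *_ = np.linalg.lstsq(X, log_c, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```

Dilution appearing with a coefficient near -1 on log discharge is the kind of physically interpretable structure that makes log-linear metamodels attractive.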
Energy Return On Investment of Engineered Geothermal Systems Data
Mansure, Chip
2012-01-01
The project provides an updated Energy Return on Investment (EROI) for Enhanced Geothermal Systems (EGS). Results incorporate Argonne National Laboratory's Life Cycle Assessment and base case assumptions consistent with other projects in the Analysis subprogram. EROI is the ratio of the energy delivered to the consumer to the energy consumed to build, operate, and decommission the facility. EROI is important in assessing the viability of energy alternatives. Currently, EROI analyses of geothermal energy are either out-of-date, of uncertain methodology, or presented online with little supporting documentation. This data set is a collection of files documenting data used to calculate the EROI of EGS and errata to publications prior to the final report. The final report is available from the OSTI web site (http://www.osti.gov/geothermal/). Data in this collection include the well designs used, input parameters for GETEM, a discussion of the energy needed to haul materials to the drill site, the baseline mud program, and a summary of the energy needed to drill each of the well designs. Whereas EROI is the ratio of the energy delivered to the customer to the energy consumed to construct, operate, and decommission the facility, efficiency is the ratio of the energy delivered to the customer to the energy extracted from the reservoir.
Coherent Evaluation of Aerosol Data Products from Multiple Satellite Sensors
NASA Technical Reports Server (NTRS)
Ichoku, Charles
2011-01-01
Aerosol retrieval from satellites has practically become routine, especially during the last decade. However, there is often disagreement between similar aerosol parameters retrieved from different sensors, thereby leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. As long as there is no consensus, and the inconsistencies are not well characterized and understood, there will be no way of developing reliable model inputs and climate data records from satellite aerosol measurements. Fortunately, the Aerosol Robotic Network (AERONET) is providing well-calibrated, globally representative ground-based aerosol measurements corresponding to the satellite-retrieved products. Through a recently developed web-based Multi-sensor Aerosol Products Sampling System (MAPSS), we are utilizing the advantages offered by collocated AERONET and satellite products to characterize and evaluate aerosol retrieval from multiple sensors. Indeed, MAPSS and its companion statistical tool AeroStat are facilitating detailed comparative uncertainty analysis of satellite aerosol measurements from Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP. In this presentation, we will describe the strategy of the MAPSS system, its potential advantages for the aerosol community, and the preliminary results of an integrated comparative uncertainty analysis of aerosol products from multiple satellite sensors.
STEM Educators' Integration of Formative Assessment in Teaching and Lesson Design
NASA Astrophysics Data System (ADS)
Moreno, Kimberly A.
Air-breathing hypersonic vehicles, when fully developed, will offer travel in the atmosphere at unprecedented speeds. Capturing their physical behavior with analytical/numerical models is still a major challenge, limiting the development of controls technology for such vehicles. To study, in an exploratory manner, active control of air-breathing hypersonic vehicles, an analytical, simplified model of a generic hypersonic air-breathing vehicle in flight was developed by researchers at the Air Force Research Labs in Dayton, Ohio, along with control laws. Elevator deflection and fuel-to-air ratio were used as inputs. However, that model is very approximate, and the field of hypersonics still faces many unknowns. This thesis contributes to the study of control of air-breathing hypersonic vehicles in a number of ways: First, regarding control law synthesis, optimal gains are chosen for the previously developed control law alongside an alternate control law modified from existing literature by minimizing the Lyapunov function derivative using Monte Carlo simulation. This is followed by analysis of the robustness of the control laws in the face of system parametric uncertainties using Monte Carlo simulations. The resulting statistical distributions of the commanded response are analyzed, and linear regression is used to determine, via sensitivity analysis, which uncertain parameters have the largest impact on the desired outcome.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
Shetty, N; Shemko, M; Abbas, A
2004-03-01
The objectives were to study knowledge, attitudes, and practices (KAP) regarding tuberculosis (TB) among Somalian subjects in inner London. We administered structured, fixed-response KAP questionnaires to 23 patients (culture-proven TB) and two groups of controls: 25 contacts (family members) and 27 lay controls (general Somali immigrant population). Responses were summed on a five-point scale. Most were aware of the infectious nature of TB but uncertain of other risk factors. Many were uncertain about coping with the disease and its effect on lifestyle. Belief in biomedicine for TB was unequivocal, with men having a significantly higher belief score than women (p = 0.02); the need to comply with TB medication was unambiguously understood. Somalians interviewed were educated, multilingual, and aware of important health issues. Uncertainties in core TB knowledge need to be addressed with direct educational input, especially for women and recent entrants into the country. Volunteers from the established Somalian community could play a valuable part as links in the community to fight TB.
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
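A variance-based sensitivity analysis of the kind described attributes output variance to individual inputs (first-order Sobol indices). The sketch below uses a Saltelli-style pick-and-freeze estimator on a three-parameter toy function standing in for the 39-parameter sea ice model.

```python
import numpy as np

# Variance-based (Sobol) first-order sensitivity sketch. The toy model and
# uniform(-1, 1) parameter ranges are assumptions for illustration only.
rng = np.random.default_rng(0)
n = 20_000

def model(x):
    # Toy stand-in for a sea ice diagnostic: strong x0 effect, weaker
    # nonlinear x1 effect, small x0-x2 interaction.
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

A = rng.uniform(-1, 1, (n, 3))
B = rng.uniform(-1, 1, (n, 3))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

s1 = []
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # replace column i (pick-and-freeze)
    # Saltelli (2010) first-order estimator: E[f(B) * (f(AB_i) - f(A))] / Var
    s1.append(np.mean(fB * (model(ABi) - fA)) / var)
print("first-order Sobol indices:", np.round(s1, 2))
```

Indices near zero (x2 here) flag parameters whose uncertainty barely matters on its own, which is exactly how such a screening prioritizes which of many inputs deserve better-constrained values.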
Huang, Tingwen; Li, Chuandong; Duan, Shukai; Starzyk, Janusz A
2012-06-01
This paper focuses on the hybrid effects of parameter uncertainty, stochastic perturbation, and impulses on global stability of delayed neural networks. By using the Ito formula, Lyapunov function, and Halanay inequality, we established several mean-square stability criteria from which we can estimate the feasible bounds of impulses, provided that parameter uncertainty and stochastic perturbations are well-constrained. Moreover, the present method can also be applied to general differential systems with stochastic perturbation and impulses.
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters—the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across input parameter choices among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
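One common formulation of DistEn makes the roles of m and M concrete: embed the series with dimension m, take all pairwise Chebyshev distances between embedding vectors, bin them into an M-bin histogram, and compute the normalized Shannon entropy of that histogram. This is a sketch of that published definition, not the authors' own implementation.

```python
import numpy as np

def dist_en(x, m=2, M=512):
    """Distribution entropy of a 1-D series, normalized to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.array([x[i:i + m] for i in range(n)])      # embedding vectors
    # Chebyshev (max-norm) distance between every pair of vectors
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    d = d[np.triu_indices(n, k=1)]                      # drop self-pairs
    p, _ = np.histogram(d, bins=M)                      # empirical density
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)         # normalized entropy

rng = np.random.default_rng(3)
de_noise = dist_en(rng.normal(size=300))
de_sine = dist_en(np.sin(np.linspace(0, 20 * np.pi, 300)))
print(f"white noise: {de_noise:.3f}, sine wave: {de_sine:.3f}")
```

Because the histogram uses all inter-vector distances rather than a tolerance threshold, short records still yield a well-populated distribution, which is the motivation for DistEn on short HRV segments.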
Application of artificial neural networks to assess pesticide contamination in shallow groundwater
Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.
2006-01-01
In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model prediction results produced good agreement with observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters from the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This suggests that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class.
When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study, such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
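The input-elimination importance analysis described above can be sketched with a minimal one-hidden-layer back-propagation network: train on all inputs, retrain with each input removed, and read importance from the error increase. The three-feature synthetic dataset below is a placeholder, not the Illinois monitoring-well data.

```python
import numpy as np

# Minimal feed-forward back-propagation network (tanh hidden layer, linear
# output) with input-elimination importance. Dataset is synthetic: the
# target depends strongly on input 0, weakly on input 2, not at all on 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 3))
y = (0.8 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 400))[:, None]

def train(X, y, hidden=8, epochs=2000, lr=0.05):
    """Full-batch gradient-descent training; returns final training MSE."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass
        err = h @ W2 + b2 - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)

full_mse = train(X, y)
# Importance of input i = error increase when that input is eliminated.
mses = [train(np.delete(X, i, axis=1), y) for i in range(3)]
for i, m in enumerate(mses):
    print(f"without input {i}: MSE {m:.4f} (full model {full_mse:.4f})")
```

Removing the dominant input degrades the fit sharply while removing the irrelevant one barely changes it, which is the signal the study used to rank well depth and sampling month as the most sensitive parameters.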
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
Adaptive and neuroadaptive control for nonnegative and compartmental dynamical systems
NASA Astrophysics Data System (ADS)
Volyanskyy, Kostyantyn Y.
Neural networks have been extensively used for adaptive system identification as well as adaptive and neuroadaptive control of highly uncertain systems. The goal of adaptive and neuroadaptive control is to achieve system performance without excessive reliance on system models. To improve robustness and the speed of adaptation of adaptive and neuroadaptive controllers several controller architectures have been proposed in the literature. In this dissertation, we develop a new neuroadaptive control architecture for nonlinear uncertain dynamical systems. The proposed framework involves a novel controller architecture with additional terms in the update laws that are constructed using a moving window of the integrated system uncertainty. These terms can be used to identify the ideal system weights of the neural network as well as effectively suppress system uncertainty. Linear and nonlinear parameterizations of the system uncertainty are considered and state and output feedback neuroadaptive controllers are developed. Furthermore, we extend the developed framework to discrete-time dynamical systems. To illustrate the efficacy of the proposed approach we apply our results to an aircraft model with wing rock dynamics, a spacecraft model with unknown moment of inertia, and an unmanned combat aerial vehicle undergoing actuator failures, and compare our results with standard neuroadaptive control methods. Nonnegative systems are essential in capturing the behavior of a wide range of dynamical systems involving dynamic states whose values are nonnegative. A sub-class of nonnegative dynamical systems are compartmental systems. These systems are derived from mass and energy balance considerations and are comprised of homogeneous interconnected microscopic subsystems or compartments which exchange variable quantities of material via intercompartmental flow laws. 
In this dissertation, we develop a direct adaptive and neuroadaptive control framework for stabilization, disturbance rejection and noise suppression for nonnegative and compartmental dynamical systems with noise and exogenous system disturbances. We then use the developed framework to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of continuing hemorrhage and hemodilution. Critical care patients, whether undergoing surgery or recovering in intensive care units, require drug administration to regulate physiological variables such as blood pressure, cardiac output, heart rate, and degree of consciousness. The rate of infusion of each administered drug is critical, requiring constant monitoring and frequent adjustments. In this dissertation, we develop a neuroadaptive output feedback control framework for nonlinear uncertain nonnegative and compartmental systems with nonnegative control inputs and noisy measurements. The proposed framework is Lyapunov-based and guarantees ultimate boundedness of the error signals. In addition, the neuroadaptive controller guarantees that the physical system states remain in the nonnegative orthant of the state space. Finally, the developed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of noisy electroencephalographic (EEG) measurements. Clinical trials demonstrate excellent regulation of unconsciousness, allowing for a safe and effective administration of the anesthetic agent propofol. Furthermore, a neuroadaptive output feedback control architecture for nonlinear nonnegative dynamical systems with input amplitude and integral constraints is developed.
Specifically, the neuroadaptive controller guarantees that the imposed amplitude and integral input constraints are satisfied and the physical system states remain in the nonnegative orthant of the state space. The proposed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for noncardiac surgery in the face of infusion rate constraints and a drug dosing constraint over a specified period. In addition, the aforementioned control architecture is used to control lung volume and minute ventilation with input pressure constraints that also accounts for spontaneous breathing by the patient. Specifically, we develop a pressure- and work-limited neuroadaptive controller for mechanical ventilation based on a nonlinear multi-compartmental lung model. The control framework does not rely on any averaged data and is designed to automatically adjust the input pressure to the patient's physiological characteristics capturing lung resistance and compliance modeling uncertainty. Moreover, the controller accounts for input pressure constraints as well as work of breathing constraints. The effect of spontaneous breathing is incorporated within the lung model and the control framework. Finally, a neural network hybrid adaptive control framework for nonlinear uncertain hybrid dynamical systems is developed. The proposed hybrid adaptive control framework is Lyapunov-based and guarantees partial asymptotic stability of the closed-loop hybrid system; that is, asymptotic stability with respect to part of the closed-loop system states associated with the hybrid plant states. A numerical example is provided to demonstrate the efficacy of the proposed hybrid adaptive stabilization approach.
Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates
NASA Technical Reports Server (NTRS)
Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.
1997-01-01
Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perreault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g. applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully with substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study if either type of parameter may provide better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
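As a hedged illustration of the epsilon parameter mentioned above, the following sketch evaluates its commonly stated SI form, ε = (4π/μ0) v B² sin⁴(θ/2) l0², using only the transverse IMF components for the clock angle (a simplification) and the conventional scale length l0 ≈ 7 Earth radii; the function name and inputs are illustrative, not from this abstract.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]
L0 = 7 * 6.371e6              # empirical scale length, about 7 Earth radii [m]

def epsilon_coupling(v, b_y, b_z):
    """Perreault-Akasofu epsilon parameter [W]: v in m/s, IMF b_y, b_z in tesla.
    Uses only the transverse IMF components (a simplifying assumption)."""
    b_t = math.hypot(b_y, b_z)        # transverse IMF magnitude
    theta = math.atan2(b_y, b_z)      # IMF clock angle
    return (4 * math.pi / MU0) * v * b_t**2 * math.sin(theta / 2) ** 4 * L0**2
```

For a 400 km/s wind with a 5 nT purely southward IMF this gives roughly 2×10¹¹ W, while a purely northward IMF gives zero, reflecting the sin⁴(θ/2) gating of the coupling.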
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
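DREAM-ZS is an adaptive multi-chain MCMC scheme; as a much simpler stand-in, a single-chain random-walk Metropolis sampler illustrates the basic posterior-sampling step that such calibration relies on. The target below is a toy standard normal, not a groundwater posterior, and all names are illustrative.

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=2):
    """Random-walk Metropolis sampler (a sketch; DREAM-ZS itself is a far
    more elaborate adaptive multi-chain scheme)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)        # propose a random-walk move
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept/reject
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy target: a standard normal log-posterior.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 20000)
```

In practice `log_post` would wrap a forward simulation of the groundwater model, which is why reducing the required number of forward runs matters so much.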
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function. In general, it is predefined using a known pdf.
Here, it is constructed in a new and different manner, using the considered system itself.
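The Bayes updating step described above can be sketched on a grid, assuming a uniform prior and (purely for illustration) a hypothetical linear map from a normalized tension parameter to fundamental frequency with a Gaussian likelihood; none of these specific numbers come from the paper.

```python
import math

def bayes_update(prior_grid, likelihood):
    """Grid-based Bayes update of a parameter pdf: multiply the prior by the
    likelihood pointwise, then renormalize (a generic sketch)."""
    weights = [p * likelihood(q) for q, p in prior_grid]
    z = sum(weights)
    return [(q, w / z) for (q, _), w in zip(prior_grid, weights)]

# Uniform prior on a normalized tension parameter q in [0, 1]; suppose the
# model maps q to a fundamental frequency of (100 + 100*q) Hz (hypothetical).
grid = [(i / 100.0, 1.0) for i in range(101)]
f_obs, sigma = 160.0, 5.0
post = bayes_update(
    grid, lambda q: math.exp(-0.5 * ((100.0 + 100.0 * q - f_obs) / sigma) ** 2))
q_map = max(post, key=lambda t: t[1])[0]  # posterior mode
```

With an observed fundamental frequency of 160 Hz, the posterior mode lands at q = 0.6, the value the hypothetical map sends to 160 Hz.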
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
A short circuit in thermohaline circulation: A cause for northern hemisphere glaciation?
Driscoll; Haug
1998-10-16
The cause of Northern Hemisphere glaciation about 3 million years ago remains uncertain. Closing the Panamanian Isthmus increased thermohaline circulation and enhanced moisture supply to high latitudes, but the accompanying heat would have inhibited ice growth. One possible solution is that enhanced moisture transported to Eurasia also enhanced freshwater delivery to the Arctic via Siberian rivers. Freshwater input to the Arctic would facilitate sea ice formation, increase the albedo, and isolate the high heat capacity of the ocean from the atmosphere. It would also act as a negative feedback on the efficiency of the "conveyor belt" heat pump.
Time-delayed chameleon: Analysis, synchronization and FPGA implementation
NASA Astrophysics Data System (ADS)
Rajagopal, Karthikeyan; Jafari, Sajad; Laarem, Guessas
2017-12-01
In this paper we report a time-delayed chameleon-like chaotic system which can belong to different families of chaotic attractors depending on the choices of parameters. Such a characteristic of self-excited and hidden chaotic flows in a simple 3D system with time delay has not been reported earlier. The dynamics of the proposed time-delayed system are analysed in both time-delay space and parameter space. A novel adaptive modified functional projective lag synchronization algorithm is derived for synchronizing identical time-delayed chameleon systems with uncertain parameters. The proposed time-delayed systems and the synchronization algorithm with controllers and parameter estimates are then implemented in FPGA using hardware-software co-simulation, and the results are presented.
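Simulating a time-delayed system of this kind requires carrying a history of the state over the delay interval. The generic forward-Euler sketch below shows that mechanism on a simple delayed-feedback example; it is not the chameleon model itself, whose equations are given in the paper.

```python
def simulate_delayed(f, history, tau, dt, steps):
    """Forward-Euler integration of dx/dt = f(x(t), x(t - tau)).
    A generic sketch of delay-system simulation; names are illustrative."""
    delay_steps = int(round(tau / dt))
    x = list(history)  # the history must span at least the delay interval
    for _ in range(steps):
        # x[-1] is the current state, x[-1 - delay_steps] the delayed one.
        x.append(x[-1] + dt * f(x[-1], x[-1 - delay_steps]))
    return x

# Example: delayed negative feedback dx/dt = -x(t - tau), which decays for
# this small delay (it would oscillate or diverge for larger ones).
traj = simulate_delayed(lambda x, xd: -xd, [1.0] * 11, tau=0.1, dt=0.01, steps=500)
```

The same ring-buffer-of-history idea carries over to the FPGA realization, where the delay line is typically a shift register of sampled states.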
Knowledge system and method for simulating chemical controlled release device performance
Cowan, Christina E.; Van Voris, Peter; Streile, Gary P.; Cataldo, Dominic A.; Burton, Frederick G.
1991-01-01
A knowledge system for simulating the performance of a controlled release device is provided. The system includes an input device through which the user selectively inputs one or more data parameters. The data parameters comprise first parameters including device parameters, media parameters, active chemical parameters and device release rate; and second parameters including the minimum effective inhibition zone of the device and the effective lifetime of the device. The system also includes a judgemental knowledge base which includes logic for 1) determining at least one of the second parameters from the release rate and the first parameters and 2) determining at least one of the first parameters from the other of the first parameters and the second parameters. The system further includes a device for displaying the results of the determinations to the user.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a preset tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation, the inputs from all of the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
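The two-pass validation procedure described above can be sketched as follows; the function name, the single shared tolerance, and the exact fallback rule are illustrative simplifications of the patented method.

```python
def validate_measurement(inputs, last_valid, tolerance):
    """Two-pass validation of redundant sensor inputs (a sketch of the
    procedure above). Returns (value, validated_flag)."""
    # First pass: deviation-check every input, within a preset tolerance,
    # against the initial average of all stored inputs.
    initial_avg = sum(inputs) / len(inputs)
    good = [x for x in inputs if abs(x - initial_avg) <= tolerance]
    # A validation fault occurs if fewer than two inputs survive the check:
    # fall back to the input deviating least from the last valid measurement.
    if len(good) < 2:
        return min(inputs, key=lambda x: abs(x - last_valid)), False
    # Second pass: re-average the good inputs and deviation-check again.
    second_avg = sum(good) / len(good)
    if all(abs(x - second_avg) <= tolerance for x in good):
        return second_avg, True  # validated measurement
    return min(inputs, key=lambda x: abs(x - last_valid)), False
```

For example, with inputs [10.0, 10.2, 9.8, 25.0] the 25.0 reading is flagged suspect on the first pass and the remaining three validate to 10.0 on the second pass.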
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
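The mutual information between inputs and outputs, whose maximum over the input variable defines the capacity discussed above, can be computed directly for a discrete channel; the sketch below is generic and not specific to the regulatory-motif ensemble of the paper.

```python
import math

def mutual_information(p_in, channel):
    """I(X;Y) in bits for a discrete channel, where channel[i][j] = P(Y=j|X=i).
    The capacity is this quantity maximized over the input distribution p_in."""
    n_out = len(channel[0])
    # Marginal output distribution P(Y=j).
    p_out = [sum(p_in[i] * channel[i][j] for i in range(len(p_in)))
             for j in range(n_out)]
    mi = 0.0
    for i, px in enumerate(p_in):
        for j, pyx in enumerate(channel[i]):
            if px > 0.0 and pyx > 0.0:
                mi += px * pyx * math.log2(pyx / p_out[j])
    return mi
```

A noiseless binary channel at uniform input carries exactly 1 bit, while a fully noisy one carries 0 bits; the tuning step in the paper corresponds to searching over `p_in` for the maximum.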
NASA Technical Reports Server (NTRS)
Batterson, James G. (Technical Monitor); Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
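A square-wave input of the kind applied through OBES is just a piecewise-constant time history built from amplitude/duration segments. The sketch below shows that construction; the segment values are illustrative, since the actual maneuver tables are specified in the report itself.

```python
def square_wave_input(amplitudes, durations, dt):
    """Build a piecewise-constant (square-wave) input time history from
    paired (amplitude, duration) segments. Names are illustrative."""
    t, u, now = [], [], 0.0
    for amp, dur in zip(amplitudes, durations):
        for _ in range(int(round(dur / dt))):
            t.append(now)
            u.append(amp)
            now += dt
    return t, u

# A doublet followed by a return to neutral, sampled at 10 Hz.
t, u = square_wave_input([1.0, -1.0, 0.0], [0.5, 0.5, 1.0], dt=0.1)
```

Alternating-sign segments like this doublet are a common choice for identification inputs because they excite the dynamics over a band of frequencies while returning the vehicle near its trim condition.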
Jamieson, Terra S; Schiff, Sherry L; Taylor, William D
2013-02-01
Gas exchange can be a key component of the dissolved oxygen (DO) mass balance in aquatic ecosystems. Quantification of gas transfer rates is essential for the estimation of DO production and consumption rates, and determination of assimilation capacities of systems receiving organic inputs. Currently, the accurate determination of gas transfer rate is a topic of debate in DO modeling, and there are a wide variety of approaches that have been proposed in the literature. The current study investigates the use of repeated measures of stable isotopes of O₂ and DO and a dynamic dual mass-balance model to quantify gas transfer coefficients (k) in the Grand River, Ontario, Canada. Measurements were conducted over a longitudinal gradient that reflected watershed changes from agricultural to urban. Values of k in the Grand River ranged from 3.6 to 8.6 day⁻¹, over discharges ranging from 5.6 to 22.4 m³ s⁻¹, with one high-flow event of 73.1 m³ s⁻¹. The k values were relatively constant over the range of discharge conditions studied. The range in discharge observed in this study is generally representative of non-storm and summer low-flow events; a greater range in k might be observed under a wider range of hydrologic conditions. Overall, k values obtained with the dual model for the Grand River were found to be lower than predicted by the traditional approaches evaluated, highlighting the importance of determining site-specific values of k. The dual mass balance approach provides a more constrained estimate of k than using DO only, and is applicable to large rivers where other approaches would be difficult to use. The addition of an isotopic mass balance provides for a corroboration of the input parameter estimates between the two balances. Constraining the range of potential input values allows for a direct estimate of k in large, productive systems where other k-estimation approaches may be uncertain or logistically infeasible.
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) Spreadsheets with various input parameter calculations; (2) Final Simulation Inputs; (3) Native-State Thermal-Hydrological Model Input File Folders; (4) Native-State Thermal-Hydrological-Mechanical Model Input Files; (5) THM Model Stimulation Cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
In this paper, neurodynamic programming-based output feedback boundary control of distributed parameter systems governed by uncertain coupled semilinear parabolic partial differential equations (PDEs) under Neumann or Dirichlet boundary control conditions is introduced. First, Hamilton-Jacobi-Bellman (HJB) equation is formulated in the original PDE domain and the optimal control policy is derived using the value functional as the solution of the HJB equation. Subsequently, a novel observer is developed to estimate the system states given the uncertain nonlinearity in PDE dynamics and measured outputs. Consequently, the suboptimal boundary control policy is obtained by forward-in-time estimation of the value functional using a neural network (NN)-based online approximator and estimated state vector obtained from the NN observer. Novel adaptive tuning laws in continuous time are proposed for learning the value functional online to satisfy the HJB equation along system trajectories while ensuring the closed-loop stability. Local uniformly ultimate boundedness of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is verified via simulation on an unstable coupled diffusion reaction process.
Probabilistic accounting of uncertainty in forecasts of species distributions under climate change
Seth J. Wenger; Nicholas A. Som; Daniel C. Dauwalter; Daniel J. Isaak; Helen M. Neville; Charles H. Luce; Jason B. Dunham; Michael K. Young; Kurt D. Fausch; Bruce E. Rieman
2013-01-01
Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing...
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
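The attenuation effect described above, where errors in the input (rainfall) bias the fitted relation to runoff, can be demonstrated with a toy linear regression; all numbers below are illustrative, not from the Turtle Creek study.

```python
import random

def slope_estimate(n=10000, true_beta=2.0, input_noise_sd=0.0, seed=0):
    """Least-squares slope of 'runoff' on *measured* 'rainfall': a toy
    sketch of the input-error effect (all parameters are illustrative)."""
    rng = random.Random(seed)
    x_true = [rng.gauss(0.0, 1.0) for _ in range(n)]               # true input
    y = [true_beta * x + rng.gauss(0.0, 0.5) for x in x_true]      # response
    x_obs = [x + rng.gauss(0.0, input_noise_sd) for x in x_true]   # measured
    mx, my = sum(x_obs) / n, sum(y) / n
    cov = sum((xo - mx) * (yi - my) for xo, yi in zip(x_obs, y))
    var = sum((xo - mx) ** 2 for xo in x_obs)
    return cov / var  # ordinary least-squares slope
```

With unit-variance input noise the expected slope is attenuated from 2.0 toward 2.0 × 1/(1+1) = 1.0, the classical errors-in-variables bias that carries over to parameter estimates fitted by least squares in the abstract's setting.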
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs sampling for environmental parameters and heat-bath Gibbs sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
Incentive Control Strategies for Decision Problems with Parametric Uncertainties
NASA Astrophysics Data System (ADS)
Cansever, Derya H.
The central theme of this thesis is the design of incentive control policies in large scale systems with hierarchical decision structures, under the stipulation that the objective functionals of the agents at the lower level of the hierarchy are uncertain to the top-level controller (the leader). These uncertainties are modeled as a finite-dimensional parameter vector whose exact value constitutes private information to the relevant agent at the lower level. The approach we have adopted is to design incentive policies for the leader such that the dependence of the decision of the agents on the uncertain parameter is minimized. We have identified several classes of problems for which this approach is feasible. In particular, we have constructed policies whose performance is arbitrarily close to the solution of a version of the same problem that does not involve uncertainties. We have also shown that for a certain class of problems wherein the leader observes a linear combination of the agents' decisions, the leader can achieve the performance he would obtain if he had observed each decision separately.
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on the tool life for the closed-die forging process is presented in this paper. Scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled the reliable production of a dimensionally accurate product at optimal tool life. An operating window was created in which exists the maximal scatter of principal input parameters for the closed-die upsetting process that still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of the adjustment of the process input parameters is shown on the example of an inner race of a homokinetic joint from mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations that are performed simultaneously on the same press. By redesigning the time sequence of the forming operations in the multistage forming of a starter barrel during the working stroke, the course of the resultant force is optimized.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first order second moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 
5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
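The Monte Carlo propagation step in the framework above (step c of the parameter uncertainty assessment) can be sketched generically: draw each uncertain parameter from its pdf, run the model, and summarize the outputs. The toy drawdown model and its distributions below are assumptions for illustration, not Hanford site values.

```python
import random
import statistics

def propagate(model, param_draws, n=5000, seed=1):
    """Monte Carlo propagation of parameter uncertainty: sample each
    uncertain parameter from its pdf, run the model, summarize outputs."""
    rng = random.Random(seed)
    outputs = [model({name: draw(rng) for name, draw in param_draws.items()})
               for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical toy model: drawdown proportional to pumping rate divided by
# transmissivity (distributions and correlations below are assumed).
draws = {
    "pumping": lambda r: r.gauss(100.0, 10.0),
    "transmissivity": lambda r: r.lognormvariate(0.0, 0.3),
}
mean_dd, sd_dd = propagate(lambda p: p["pumping"] / p["transmissivity"], draws)
```

Running one such ensemble per plausible ACM and scenario, and then enumerating the combinations, yields the family of CCDFs that the framework uses to represent combined uncertainty.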
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis
2014-01-01
When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the ordinal position of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Echolalic responses by a child with autism to four experimental conditions of sociolinguistic input.
Violette, J; Swisher, L
1992-02-01
Studies of the immediate verbal imitations (IVIs) of subjects with echolalia report that features of linguistic or social input alone affect the number of IVIs elicited. This experimental study of a child with echolalia and autism controlled each of these variables while introducing a systematic change in the other. The subject produced more (p < .05) IVIs in response to unknown lexical words presented with a high degree of directiveness (Condition D) than in response to three other conditions of stimulus presentation (e.g., unknown lexical words presented with a minimally directive style). Thus, an interaction between the effects of linguistic and social input was demonstrated. IVIs were produced across all conditions, primarily during first presentations of lexical stimuli. Only the IVIs elicited by first presentations of the lexical stimuli during Condition D differed significantly (p < .05) from the number of IVIs elicited by first presentations of lexical stimuli in other conditions. Taken together, these findings suggest that the occurrence of IVIs was related, at least for this child, to an uncertain or informative event, and that this response was significantly greater when the lexical stimuli were unknown and presented in a highly directive style.
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
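The Hodrick-Prescott filter used for episode detection splits a series into a smooth trend and a cycle by penalizing the second differences of the trend. A minimal sketch follows; the TAC-like trace and the smoothing parameter are invented for illustration, and the paper's actual scheme operates on real sensor data.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split a series y into trend + cycle.

    Minimizes sum (y_t - tau_t)^2 + lam * sum (second diff of tau)^2,
    which reduces to solving (I + lam * D'D) tau = y, where D is the
    second-difference operator. A dense solve is fine for short series.
    """
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2, n) second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend

# Toy TAC-like trace: a slow drinking-episode bump plus sensor noise.
t = np.linspace(0, 10, 200)
y = np.exp(-(t - 5.0) ** 2) + 0.05 * np.sin(40 * t)
trend, cycle = hp_filter(y, lam=50.0)
```

Episode boundaries can then be located where the smoothed trend rises above and falls back below a baseline threshold.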
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
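The tail-raising effect of parametric uncertainty can be illustrated by marginalizing a Gaussian detector's conditional ROC over random draws of the mean signal level and noise power. The lognormal perturbations and all numerical values below are illustrative assumptions, not the paper's scattering model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Nominal detector: statistic ~ N(0, 1) under H0 and N(3, 1) under H1.
# Parameter uncertainty (assumed lognormal here) perturbs the noise std
# and the mean signal level from one environmental state to the next.
n_env = 200
sigma = rng.lognormal(0.0, 0.3, n_env)          # uncertain noise standard deviation
mu = 3.0 * rng.lognormal(0.0, 0.3, n_env)       # uncertain mean signal level

thresholds = np.linspace(-2, 10, 300)
# Conditional Pfa/Pd for each sampled environmental state, then marginalize.
pfa = norm.sf(thresholds[None, :] / sigma[:, None]).mean(axis=0)
pd = norm.sf((thresholds[None, :] - mu[:, None]) / sigma[:, None]).mean(axis=0)

# The "nominal" ROC that ignores parameter uncertainty, for comparison.
pfa0 = norm.sf(thresholds)
pd0 = norm.sf(thresholds - 3.0)
```

At high thresholds (the low-false-alarm operating region) the marginal false-alarm probability sits far above the nominal one, which is exactly the degraded-tail behaviour described in the abstract.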
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
NASA Astrophysics Data System (ADS)
Davis, A. D.
2015-12-01
The marine ice sheet instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI): here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection toward constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive.
We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters on the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers on hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, …), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
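Treating the rainfall multiplier as just another parameter, first-order Sobol' indices can be estimated with the standard Saltelli "pick-freeze" estimator. The toy rainfall-runoff map below is a placeholder for SWAT or NAM; only the estimator itself is the standard one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy rainfall-runoff map: output depends on two model parameters
# (k1, k2) and a rainfall multiplier m applied to the storm input.
def model(x):
    k1, k2, m = x[:, 0], x[:, 1], x[:, 2]
    rain = 50.0 * m                       # storm depth perturbed by the multiplier
    return k1 * rain + 5.0 * k2 + 0.1 * k1 * rain * k2

n, d = 4096, 3
A = rng.uniform(0.5, 1.5, (n, d))         # two independent sample matrices
B = rng.uniform(0.5, 1.5, (n, d))

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli pick-freeze estimator of the first-order Sobol' indices.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # vary only parameter i, freeze the rest
    S.append(np.mean(fB * (model(ABi) - fA)) / var)
S = np.array(S)                           # S[2] is the rainfall-multiplier index
```

Comparing S[2] with the indices of the regular parameters answers, for this toy map, the question posed above: how important the rainfall uncertainty is relative to the model parameters.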
Update on ɛK with lattice QCD inputs
NASA Astrophysics Data System (ADS)
Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon
2018-03-01
We report updated results for ɛK, the indirect CP violation parameter in neutral kaons, which is evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂K, |Vcb|, ξ0, ξ2, |Vus|, and mc(mc). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |Vcb|. Our results show that the evaluation of ɛK with exclusive |Vcb| (lattice QCD inputs) has a 4.0σ tension with the experimental value, while that with inclusive |Vcb| (heavy quark expansion based on OPE and QCD sum rules) shows no tension.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Israr, E-mail: iak-2000plus@yahoo.com; Saaban, Azizan Bin, E-mail: azizan.s@uum.edu.my; Ibrahim, Adyda Binti, E-mail: adyda@uum.edu.my
This paper addresses a comparative computational study on the synchronization quality, cost and convergence speed for two pairs of identical chaotic and hyperchaotic systems with unknown time-varying parameters. It is assumed that the unknown time-varying parameters are bounded. Based on the Lyapunov stability theory and using the adaptive control method, a single proportional controller is proposed to achieve the goal of complete synchronization. Accordingly, appropriate adaptive laws are designed to identify the unknown time-varying parameters. The designed control strategy is easy to implement in practice. Numerical simulation results are provided to verify the effectiveness of the proposed synchronization scheme.
NASA Astrophysics Data System (ADS)
Catinari, Federico; Pierdicca, Alessio; Clementi, Francesco; Lenci, Stefano
2017-11-01
The results of an ambient-vibration based investigation conducted on the "Palazzo del Podestà" in Montelupone (Italy) are presented. The case study was damaged during the 2016 Italian earthquakes that struck central Italy. The assessment procedure includes full-scale ambient vibration testing, modal identification from ambient vibration responses, finite element modeling and dynamic-based identification of the uncertain structural parameters of the model. A very good match between theoretical and experimental modal parameters was reached, and the model updating was performed by identifying some structural parameters.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Understanding earth system models: how Global Sensitivity Analysis can help
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit a complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts for uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
NASA Astrophysics Data System (ADS)
Baker, A. R.; Lesworth, T.; Adams, C.; Jickells, T. D.; Ganzeveld, L.
2010-09-01
Atmospheric nitrogen inputs to the ocean are estimated to have increased by up to a factor of three as a result of increased anthropogenic emissions over the last 150 years, with further increases expected in the short- to mid-term at least. Such estimates are largely based on emissions and atmospheric transport modeling, because, apart from a few island sites, there is very little observational data available for atmospheric nitrogen concentrations over the remote ocean. Here we use samples of rainwater and aerosol obtained during 12 long-transect cruises across the Atlantic Ocean between 50°N and 50°S as the basis for a climatological estimate of nitrogen inputs to the basin. The climatology is for the 5 years 2001-2005, during which almost all of the cruises took place, and includes dry and wet deposition of nitrate and ammonium explicitly, together with a more uncertain estimate of soluble organic nitrogen deposition. Our results indicate that nitrogen inputs into the region were ˜850-1420 Gmol (12-20 Tg) N yr-1, with ˜78-85% of this in the form of wet deposition. Inputs were greater in the Northern Hemisphere and in wet regions, and wet regions had a greater proportion of input via wet deposition. The largest uncertainty in our estimate of dry inputs is associated with variability in deposition velocities, while the largest uncertainty in our wet nitrogen input estimate is due to the limited amount and uneven geographic distribution of observational data. We also estimate a lower limit of dry deposition of phosphate to be ˜0.19 Gmol P yr-1, using data from the same cruises. We compare our results to several recent estimates of N and P deposition to the Atlantic and discuss the likely sources of uncertainty in our climatology, such as the potential seasonal bias introduced by our sampling.
GESAMP Working Group 38, The Atmospheric Input of Chemicals to the Ocean
NASA Astrophysics Data System (ADS)
Duce, Robert; Liss, Peter
2014-05-01
There is growing recognition of the impact of the atmospheric input of both natural and anthropogenic substances on ocean chemistry, biology, and biogeochemistry as well as climate. These inputs are closely related to a number of important global change issues. For example, the increasing input of anthropogenic nitrogen species from the atmosphere to much of the ocean may cause a low-level fertilization that could result in an increase in marine 'new' productivity of up to ~3% and thus impact carbon drawdown from the atmosphere. Similarly, much of the oceanic iron, which is a limiting nutrient in significant areas of the ocean, originates from the atmospheric input of minerals as a result of the long-range transport of mineral dust from continental regions. The increased supply of soluble phosphorus from atmospheric anthropogenic sources (through large-scale use of fertilizers) may also have a significant impact on surface-ocean biogeochemistry, but estimates of any effects are highly uncertain. There have been few assessments of the atmospheric inputs of sulfur and nitrogen oxides to the ocean and their impact on the rates of ocean acidification. These inputs may be particularly critical in heavily trafficked shipping lanes and in ocean regions proximate to highly industrialized land areas. Other atmospheric substances may also have an impact on the ocean, in particular lead, cadmium, and POPs. To address these and related issues the United Nations Group of Experts on the Scientific Aspects of Marine Environmental Protection (GESAMP) initiated Working Group 38, The Atmospheric Input of Chemicals to the Ocean, in 2008. This Working Group has had four meetings. To date four peer-reviewed papers have been produced from this effort, with at least eight others in the process of being written or published. This paper will discuss some of the results of the Working Group's deliberations and its plans for possible future work.
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix-based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains, and more sensitive to mean annual streamflow in flat terrain.
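Generic Monte Carlo propagation of head and flow uncertainty through the hydropower equation P = ρ g η Q H illustrates the kind of MCS linkage described above. This is not the GMM-HRA model itself, and all site values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical nominal site values and relative input uncertainties.
H_nom, Q_nom = 12.0, 8.0            # hydraulic head [m], mean annual flow [m^3/s]
rho, g, eff = 1000.0, 9.81, 0.85    # water density, gravity, plant efficiency

n = 100_000
H = H_nom * (1 + rng.normal(0, 0.20, n))    # ~20 % head uncertainty
Q = Q_nom * (1 + rng.normal(0, 0.16, n))    # ~16 % flow uncertainty

P = rho * g * eff * Q * H / 1e6             # potential power [MW], per sample

p_mean = P.mean()                           # expected potential power
p_cv = P.std() / p_mean                     # output coefficient of variation
```

Note that this naive propagation amplifies the input uncertainty; the abstract's finding that GMM-HRA output uncertainty is only 10-20% of the input uncertainty is a property of that model, not of Monte Carlo propagation in general.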
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Econometric analysis of fire suppression production functions for large wildland fires
Thomas P. Holmes; David E. Calkin
2013-01-01
In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...
A mathematical model for predicting fire spread in wildland fuels
Richard C. Rothermel
1972-01-01
A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environment is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require a prior knowledge of the burning characteristics of the fuel.
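The surface-area weighting of input parameters across fuel size classes can be sketched as follows; the size classes and all numbers are illustrative, not Rothermel's published values.

```python
# Surface-area weighting of per-size-class fuel properties, in the spirit
# of the fire model's treatment of fuel mixtures. Each class's weight is
# its share of total fuel surface area (SAV ratio times fuel load).

def surface_area_weighted(values, sav, loads):
    """Weight a per-size-class property by each class's surface-area share."""
    areas = [s * w for s, w in zip(sav, loads)]
    total = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total

# Three hypothetical size classes (1-h, 10-h, 100-h fuels):
sav = [3500.0, 109.0, 30.0]       # surface-area-to-volume ratio [1/ft]
loads = [0.5, 1.0, 2.0]           # fuel load per class [lb/ft^2]
moisture = [0.06, 0.10, 0.15]     # moisture fraction per class

m_eff = surface_area_weighted(moisture, sav, loads)
```

Because fine fuels carry most of the surface area, the weighted moisture lands near the 1-h value even though coarser classes hold more mass.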
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation through the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan
2015-02-01
The objective here is to explore the use of an adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of unknown parameters. Our findings point to the potential value of an adaptive control approach that could be used to regulate the DBS waveform in more effective treatment of PD.
Theoretic aspects of the identification of the parameters in the optimal control model
NASA Technical Reports Server (NTRS)
Vanwijk, R. A.; Kok, J. J.
1977-01-01
The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
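One example of the geometric relations between the distance measures: for a vertical fault, the rupture distance follows from the Joyner-Boore distance and the depth to the top of rupture. This particular formula is a simplified special case, stated here as an assumption rather than as the paper's general result.

```python
import math

def r_rup_vertical(r_jb, z_tor):
    """Rupture distance R_rup from Joyner-Boore distance R_jb for a
    vertical fault with top-of-rupture depth Z_tor (km). Assumes the
    closest rupture point lies directly below the surface projection."""
    return math.sqrt(r_jb ** 2 + z_tor ** 2)

# A site 10 km from the surface projection of a rupture whose top
# is 3 km deep (hypothetical numbers):
d = r_rup_vertical(10.0, 3.0)
```

For sites on the surface projection (R_jb = 0) the rupture distance reduces to Z_tor, and for a surface-rupturing fault (Z_tor = 0) the two measures coincide.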
Predicting Vegetation Condition from ASCAT Soil Water Index over Southwest India
NASA Astrophysics Data System (ADS)
Pfeil, Isabella Maria; Hochstöger, Simon; Amarnath, Giriraj; Pani, Peejush; Enenkel, Markus; Wagner, Wolfgang
2017-04-01
In India, extreme water scarcity events are expected to occur on average every five years. Record-breaking droughts affecting millions of human beings and livestock are common. If the south-west monsoon (summer monsoon) is delayed or brings less rainfall than expected, a season's harvest can be destroyed despite optimal farm management, leading to, in the worst case, life-threatening circumstances for a large number of farmers. Therefore, the monitoring of key drought indicators, such as the healthiness of the vegetation, and subsequent early warning is crucial. The aim of this work is to predict vegetation state from earth observation data instead of relying on models which need a lot of input data, increasing the complexity of error propagation, or seasonal forecasts, that are often too uncertain to be used as a regression component for a vegetation parameter. While precipitation is the main water supply for large parts of India's agricultural areas, vegetation datasets such as the Normalized Difference Vegetation Index (NDVI) provide reliable estimates of vegetation greenness that can be related to vegetation health. Satellite-derived soil moisture represents the missing link between a deficit in rainfall and the response of vegetation. In particular the water available in the root zone plays an important role for near-future vegetation health. Exploiting the added-value of root zone soil moisture is therefore crucial, and its use in vegetation studies presents an added value for drought analyses and decision-support. The soil water index (SWI) dataset derived from the Advanced Scatterometer (ASCAT) on board the Metop satellites represents the water content that is available in the root zone. This dataset shows a strong correlation with NDVI data obtained from measurements of the Moderate Resolution Imaging Spectroradiometer (MODIS), which is exploited in this study. 
A linear regression function is fit to the multi-year SWI and NDVI dataset with a temporal resolution of eight days, returning a set of parameters for every eight-day period of the year. Those parameters are then used to predict vegetation health based on the SWI up to 32 days after the latest available SWI and NDVI observations. In this work, the prediction was carried out for multiple eight-day periods in the year 2015 for three representative districts in India, and then compared to the actually observed NDVI during these periods, showing very similar spatial patterns in most analyzed regions and periods. This approach enables the prediction of vegetation health based on root zone soil moisture instead of relying on agro-meteorological models which often lack crucial input data in remote regions.
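The per-period regression scheme described above can be sketched as follows; the array layout, variable names, and number of eight-day periods are assumptions for illustration, not the authors' code:

```python
import numpy as np

def fit_period_models(swi, ndvi):
    """Fit one linear model NDVI = a * SWI + b for every eight-day
    period of the year, using the multi-year record.

    swi, ndvi: arrays of shape (n_years, n_periods) -- assumed layout.
    Returns slope and intercept arrays of shape (n_periods,).
    """
    n_periods = swi.shape[1]
    slopes = np.empty(n_periods)
    intercepts = np.empty(n_periods)
    for p in range(n_periods):
        slopes[p], intercepts[p] = np.polyfit(swi[:, p], ndvi[:, p], deg=1)
    return slopes, intercepts

def predict_ndvi(swi_latest, slopes, intercepts, period):
    """Predict vegetation greenness for a target period from the most
    recent SWI observation (up to 32 days ahead in the paper)."""
    return slopes[period] * swi_latest + intercepts[period]
```

A prediction for a district is then simply `predict_ndvi` evaluated on the latest SWI map, one period at a time.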
Application of the Tor Vergata Scattering Model to L Band Backscatter During the Corn Growth Cycle
NASA Astrophysics Data System (ADS)
Joseph, A. T.; van der Velde, R.; Choudhury, B. J.; Ferrazzoli, P.; O'Neill, P. E.; Kim, E. J.; Lang, R. H.; Gish, T.
2010-12-01
At the USDA's Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) experimental site in Beltsville (Maryland, USA), a field campaign took place throughout the 2002 corn growth cycle from May 10th (emergence of corn crops) to October 2nd (harvest). One of the microwave instruments deployed was the multi-frequency (X-, C- and L-band) quad-polarized (HH, HV, VV, VH) NASA GSFC / George Washington University (GWU) truck-mounted radar. During the field campaign, this radar system provided once-a-week fully polarized C- and L-band (4.75 and 1.6 GHz) backscatter measurements at incidence angles of 15, 35, and 55 degrees. In support of these microwave observations, an extensive ground characterization took place, which included measurements of surface roughness, soil moisture, vegetation biomass and morphology. The field conditions during the campaign were characterized by several dry downs with a period of drought in the month of August. Peak biomass of the corn canopies was reached on July 24th with a total biomass of approximately 6.5 kg m-2. This dynamic range in both soil moisture and vegetation conditions within the data set is ideal for the validation of discrete medium vegetation scattering models. In this study, we compare the L band backscatter measurements with simulations by the Tor Vergata model (Ferrazzoli and Guerriero 1996). The measured soil moisture, vegetation biomass and most reliably measured vegetation morphological parameters (e.g. number of leaves, number of stems and stem height) were used as input for the Tor Vergata model. The more uncertain model parameters (e.g. surface roughness, leaf thickness) and the stem diameter were optimized using a parameter estimation routine based on the Levenberg-Marquardt algorithm. As the cost function for this optimization, the HH and VV polarized backscatter measured and simulated by the Tor Vergata model at incidence angles of 15, 35 and 55 degrees were used (6 measurements in total).
The calibrated Tor Vergata model simulations are in excellent agreement with the measurements, with Root Mean Squared Differences (RMSDs) of 0.8, 0.9 and 1.4 dB at incidence angles of 15, 35 and 55 degrees, respectively. The results from this study show that a physically based scattering model with an appropriate parameterization can accurately simulate backscatter measurements and, as such, has the potential to be used for the retrieval of biophysical variables (e.g. soil moisture and vegetation biomass).
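The calibration loop can be illustrated with SciPy's Levenberg-Marquardt driver. The forward model below is a made-up stand-in for the Tor Vergata model (the real radiative-transfer computation is far more involved); only the fitting structure, three uncertain parameters against six HH/VV measurements, mirrors the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, angles_deg):
    """Toy stand-in for the Tor Vergata model: maps the uncertain
    parameters (surface rms height s, leaf thickness t, stem diameter d)
    to HH and VV backscatter (dB) at the three incidence angles."""
    s, t, d = params
    theta = np.radians(angles_deg)
    sigma_hh = -20.0 + 8.0 * s * np.cos(theta) + 2.0 * t + 0.5 * d
    sigma_vv = -18.0 + 6.0 * s * np.cos(theta) + 3.0 * t + 0.4 * d
    return np.concatenate([sigma_hh, sigma_vv])   # 6 values in total

def calibrate(measured, angles_deg, x0):
    """Levenberg-Marquardt fit of the uncertain parameters to the six
    HH/VV backscatter measurements (the paper's cost function)."""
    res = least_squares(lambda p: forward_model(p, angles_deg) - measured,
                        x0, method="lm")
    return res.x
```

With an exact synthetic "measurement", the routine recovers the generating parameters; with real data the residual norm gives the RMSD-style misfit reported above.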
Application of the Tor Vergata Scattering Model to L Band Backscatter During the Corn Growth Cycle
NASA Technical Reports Server (NTRS)
Joseph, A. T.; van der Velde, R.; O'Neill, P. E.; Lang, R.; Gish, T.
2010-01-01
At the USDA's Optimizing Production Inputs for Economic and Environmental Enhancement (OPE3) experimental site in Beltsville (Maryland, USA), a field campaign took place throughout the 2002 corn growth cycle from May 10th (emergence of corn crops) to October 2nd (harvest). One of the microwave instruments deployed was the multi-frequency (X-, C- and L-band) quad-polarized (HH, HV, VV, VH) NASA GSFC/George Washington University (GWU) truck-mounted radar. During the field campaign, this radar system provided once-a-week fully polarized C- and L-band (4.75 and 1.6 GHz) backscatter measurements at incidence angles of 15, 35, and 55 degrees. In support of these microwave observations, an extensive ground characterization took place, which included measurements of surface roughness, soil moisture, vegetation biomass and morphology. The field conditions during the campaign were characterized by several dry downs with a period of drought in the month of August. Peak biomass of the corn canopies was reached on July 24th with a total biomass of approximately 6.5 kg/sq m. This dynamic range in both soil moisture and vegetation conditions within the data set is ideal for the validation of discrete medium vegetation scattering models. In this study, we compare the L band backscatter measurements with simulations by the Tor Vergata model (Ferrazzoli and Guerriero 1996). The measured soil moisture, vegetation biomass and most reliably measured vegetation morphological parameters (e.g. number of leaves, number of stems and stem height) were used as input for the Tor Vergata model. The more uncertain model parameters (e.g. surface roughness, leaf thickness) and the stem diameter were optimized using a parameter estimation routine based on the Levenberg-Marquardt algorithm. As the cost function for this optimization, the HH and VV polarized backscatter measured and simulated by the Tor Vergata model at incidence angles of 15, 35, and 55 degrees were used (6 measurements in total).
The calibrated Tor Vergata model simulations are in excellent agreement with the measurements, with Root Mean Squared Differences (RMSDs) of 0.8, 0.9 and 1.4 dB at incidence angles of 15, 35 and 55 degrees, respectively. The results from this study show that a physically based scattering model with an appropriate parameterization can accurately simulate backscatter measurements and, as such, has the potential to be used for the retrieval of biophysical variables (e.g. soil moisture and vegetation biomass).
Dual side control for inductive power transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron
An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system and the input voltage and the input current are measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.
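As a toy illustration of the "iterate to a maximum efficiency" step (the patent does not disclose this particular algorithm), a unimodal efficiency curve over one adjustable parameter can be maximized by golden-section search; the curve and its peak location below are invented:

```python
import numpy as np

def golden_max(f, lo, hi, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi]."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):          # maximum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                    # maximum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0

# Hypothetical efficiency vs. one control parameter (e.g. a duty cycle),
# peaking at 0.42 -- a made-up number for illustration only.
efficiency = lambda duty: 0.95 - (duty - 0.42) ** 2
```

In the apparatus, each iteration would instead re-measure input/output voltage and current and nudge the controllable parameters toward the computed optimum.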
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective: a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
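The accuracy-oriented criterion can be illustrated (this is not the report's actual algorithm) by ranking equal-energy candidate inputs through the Fisher information they yield for an uncertain parameter; the Cramér-Rao bound 1/I then lower-bounds the variance of any unbiased estimate. The first-order plant and all numbers below are hypothetical:

```python
import numpy as np

def simulate(a, u, dt=0.05):
    """Euler simulation of a first-order plant x' = -a*x + u, a toy
    stand-in for one aircraft mode with uncertain parameter a."""
    x, y = 0.0, []
    for uk in u:
        x = x + dt * (-a * x + uk)
        y.append(x)
    return np.array(y)

def fisher_info(a, u, sigma=0.1, da=1e-5):
    """Fisher information for a from the output sensitivity dy/da
    (finite differences, Gaussian noise of std sigma assumed).
    The Cramer-Rao bound on var(a_hat) is 1/I, so the input with the
    larger I permits the more accurate estimate."""
    dy = (simulate(a + da, u) - simulate(a - da, u)) / (2.0 * da)
    return float(np.sum(dy ** 2) / sigma ** 2)

# Two candidate inputs with the same total energy, to be compared:
n = 200
u_step = np.ones(n)
u_sine = np.sqrt(2.0) * np.sin(2.0 * np.pi * 2.0 * np.arange(n) * 0.05)
```

The total-energy constraint of the report corresponds here to holding the sum of squared input samples fixed while maximizing I.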
Application of control theory to dynamic systems simulation
NASA Technical Reports Server (NTRS)
Auslander, D. M.; Spear, R. C.; Young, G. E.
1982-01-01
Control theory is applied to dynamic systems simulation. Theory and methodology applicable to controlled ecological life support systems are considered. Spatial effects on system stability, design of control systems with uncertain parameters, and an interactive computing language (PARASOL-II) designed for dynamic system simulation, report quality graphics, data acquisition, and simple real time control are discussed.
Turbulence Characteristics of Swirling Flowfields. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Jackson, T. W.
1983-01-01
Combustor design phenomena; recirculating flows research; single-wire, six-orientation, eddy dissipation rate, and turbulence modeling measurements; directional sensitivity (DS); calibration equipment, confined jet facility, and hot-wire instrumentation; effects of swirl, strong contraction nozzle, and expansion ratio; turbulence parameters and uncertainties; and DS in laminar jets, turbulent nonswirling jets, and turbulent swirling jets are discussed.
Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...
2013-02-23
We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.
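Why such correlations matter can be sketched by propagating jointly sampled Arrhenius parameters to a rate constant, ln k(T) = ln A - E_a/(RT); the parameter values below are invented for illustration and are not the paper's chemistry:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def lnk_samples(T, mean, cov, n=20000, seed=0):
    """Sample correlated (ln A, Ea) from a Gaussian and propagate each
    draw to the log rate constant ln k(T) = ln A - Ea / (R T)."""
    rng = np.random.default_rng(seed)
    lnA, Ea = rng.multivariate_normal(mean, cov, size=n).T
    return lnA - Ea / (R * T)

# Hypothetical marginals: ln A ~ N(25, 0.5^2), Ea ~ N(150 kJ/mol, (5 kJ/mol)^2),
# compared with and without a strong positive correlation (compensation effect).
mean = [25.0, 1.5e5]
sd_lnA, sd_Ea, rho = 0.5, 5.0e3, 0.9
cov_corr = [[sd_lnA**2, rho * sd_lnA * sd_Ea],
            [rho * sd_lnA * sd_Ea, sd_Ea**2]]
cov_ind = [[sd_lnA**2, 0.0], [0.0, sd_Ea**2]]
```

For these numbers, the positively correlated joint density yields a much tighter spread in ln k at combustion temperatures than the same marginals treated as independent, which is the kind of effect the correlation analysis quantifies.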
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
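The flavor of such one-shot, matrix-domain bounds can be seen in the classical Lyapunov perturbation bound for the left-half-plane region (the paper's D-region results are more general); a minimal sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def perturbation_bound(A):
    """For a Hurwitz A, solve A^T P + P A = -2I once; then A + E remains
    Hurwitz for every perturbation with ||E||_2 < 1/||P||_2 -- a bound
    obtained by a single matrix computation, with no frequency sweeping
    or parameter gridding."""
    n = A.shape[0]
    P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(n))
    return 1.0 / np.linalg.norm(P, 2)
```

The guarantee follows from V(x) = x'Px: along trajectories of A + E, dV/dt <= (-2 + 2||P|| ||E||) ||x||^2 < 0 whenever ||E|| < 1/||P||.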
NASA Astrophysics Data System (ADS)
Zhang, Y.; Li, S.
2014-12-01
Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using the Design of Experiment, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on predicting the CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio, although they become important over the monitoring time, but only for those families where such factors are accounted for. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, the long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer.
In the highest-SR scenarios, supercritical CO2 footprints are relatively insignificant by the end of monitoring.
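The Design-of-Experiment sensitivity screening can be sketched with a two-level full factorial design and main effects; the response function and factor names below are invented stand-ins, not the study's reservoir simulator:

```python
import numpy as np
from itertools import product

def main_effects(factors, response):
    """Two-level full-factorial main effects: for each factor, the mean
    response at its high (+1) level minus the mean at its low (-1) level."""
    runs = list(product([-1, 1], repeat=len(factors)))
    X = np.array(runs, dtype=float)
    y = np.array([response(dict(zip(factors, run))) for run in runs])
    return {f: y[X[:, i] > 0].mean() - y[X[:, i] < 0].mean()
            for i, f in enumerate(factors)}

# Hypothetical stand-in for the storage-ratio prediction: permeability
# dominates, porosity matters less, and anisotropy hardly at all.
def storage_ratio(run):
    return 50.0 + 20.0 * run["perm"] + 5.0 * run["poro"] + 0.5 * run["aniso"]
```

Ranking the absolute main effects identifies the first-order parameters, which are then carried into the response surface analysis.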
NASA Astrophysics Data System (ADS)
Kissinger, A.; Walter, L.; Darcis, M.; Flemisch, B.; Class, H.
2012-04-01
Global climate change, shortage of resources and the resulting turn towards renewable sources of energy lead to a growing demand for the utilization of subsurface systems. Among these competing uses are Carbon Capture and Storage (CCS), geothermal energy, nuclear waste disposal, "renewable" methane or hydrogen storage as well as the ongoing production of fossil resources like oil, gas, and coal. Besides competing among themselves, these technologies may also create conflicts with essential public interests like water supply. For example, the injection of CO2 into the underground causes an increase in pressure reaching far beyond the actual radius of influence of the CO2 plume, potentially leading to large amounts of displaced salt water. Finding suitable sites is a demanding task for several reasons. Natural systems, as opposed to technical systems, are always characterized by heterogeneity. Therefore, parameter uncertainty impedes reliable predictions of the capacity and safety of a site. State-of-the-art numerical simulations combined with stochastic approaches need to be used to obtain a more reliable assessment of the involved risks and the radii of influence of the different processes. These simulations may include the modeling of single- and multiphase non-isothermal flow, geo-chemical and geo-mechanical processes in order to describe all relevant physical processes adequately. Stochastic approaches aim to estimate a bandwidth of the key output parameters based on uncertain input parameters. Risks for these different underground uses can then be made comparable with each other. Together with the importance and urgency of the competing uses, this may provide a sounder basis for decisions. Communicating risks to stakeholders and a concerned public is crucial for the success of finding a suitable site for CCS (or other subsurface utilization).
We present and discuss first steps towards an approach for addressing the issue of competitive utilization of the subsurface and the required process of communication between scientists, engineers, policy makers, and societies.
Nodes on ropes: a comprehensive data and control flow for steering ensemble simulations.
Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Hirsch, Christian; Schindler, Benjamin; Blöschl, Günther; Gröller, M Eduard
2011-12-01
Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system. 
© 2011 IEEE
Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks.
Arunkumar, A; Sakthivel, R; Mathiyalagan, K; Park, Ju H
2014-07-01
This paper focuses on the issue of robust stochastic stability for a class of uncertain fuzzy Markovian jumping discrete-time neural networks (FMJDNNs) with various activation functions and mixed time delay. By employing the Lyapunov technique and the linear matrix inequality (LMI) approach, a new set of delay-dependent sufficient conditions is established for the robust stochastic stability of uncertain FMJDNNs. More precisely, the parameter uncertainties are assumed to be time varying, unknown and norm bounded. The obtained stability conditions are established in terms of LMIs, which can be easily checked by using the efficient MATLAB LMI toolbox. Finally, numerical examples with simulation results are provided to illustrate the effectiveness and reduced conservatism of the obtained results. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
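The discrete-time Lyapunov condition underlying such LMI tests can be sketched as follows, stripped of the delays, fuzzy rules, and Markov jumps of the paper, and using a Lyapunov-equation solve in place of a full LMI solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def discrete_stability_certificate(A, Q=None):
    """Stability certificate for x[k+1] = A x[k]: the system is stable
    iff there exists P > 0 with A^T P A - P < 0 (the basic LMI). Here we
    solve the equality version A^T P A - P = -Q and test P for positive
    definiteness -- a sketch of the idea, not the paper's machinery."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    P = solve_discrete_lyapunov(A.T, Q)   # solves A^T P A - P = -Q
    eigs = np.linalg.eigvalsh((P + P.T) / 2.0)
    return bool(eigs.min() > 0), P
```

The delay-dependent conditions of the paper augment this basic inequality with delay terms and mode-dependent matrices, which is why an LMI solver is needed there.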
Wang, Leimin; Shen, Yi; Sheng, Yin
2016-04-01
This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Control of linear uncertain systems utilizing mismatched state observers
NASA Technical Reports Server (NTRS)
Goldstein, B.
1972-01-01
The control of linear continuous dynamical systems is investigated as a problem of limited state feedback control. The equations which describe the structure of an observer are developed, constrained to time-invariant systems. The optimal control problem is formulated, accounting for the uncertainty in the design parameters. Expressions for bounds on closed loop stability are also developed. The results indicate that very little uncertainty may be tolerated before divergence occurs in the recursive computation algorithms, and the derived stability bound yields extremely conservative estimates of regions of allowable parameter variations.
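The mismatched-observer setup can be sketched as follows; the plant matrices, observer poles, and mismatch term are hypothetical numbers chosen only to illustrate the structure (an observer designed on the nominal model A0 running against a perturbed plant A0 + dA):

```python
import numpy as np
from scipy.signal import place_poles

# Nominal model (hypothetical) and measurement matrix.
A0 = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
# Observer gain placing the nominal error poles at -5 and -6.
L = place_poles(A0.T, C.T, [-5.0, -6.0]).gain_matrix.T

def estimation_error(dA, T=10.0, dt=1e-3):
    """Integrate the true plant (A0 + dA) and the mismatched observer
    (built on A0 with gain L) side by side; return the final norm of
    the state-estimation error."""
    A = A0 + dA
    x = np.array([1.0, 0.0])
    xh = np.zeros(2)
    for _ in range(int(T / dt)):
        y = C @ x                                  # measured output
        x = x + dt * (A @ x)                       # true plant
        xh = xh + dt * (A0 @ xh + (L @ (y - C @ xh)).ravel())
    return float(np.linalg.norm(x - xh))
```

With dA = 0 the error converges; increasing the mismatch degrades and eventually destroys convergence, echoing the finding that only little uncertainty is tolerated.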