Xu, Shidong; Sun, Guanghui; Sun, Weichao
2017-01-01
In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Different from most existing strategies, a T-S fuzzy approximation approach is used to model the nonlinear dynamics of the flexible spacecraft. Simultaneously, the physical constraints of the system, such as input delay, input saturation, and parameter uncertainties, are also accounted for in the fuzzy model. By employing the Lyapunov-Krasovskii method and convex optimization techniques, a novel robust controller is proposed to implement rest-to-rest attitude maneuvers for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example with simulation results is provided to confirm the applicability and merits of the developed control strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while input saturations and uncertain parameters with known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure consisting of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of a smooth saturation function and a smooth projection operator in the control design. Third, to ensure the stability of the overall QAV system, we develop a stability analysis technique for the cascaded system in the presence of both input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures asymptotic convergence of the tracking errors to the origin. Simulation results are provided to illustrate the effectiveness of the proposed method.
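The smooth-saturation idea can be illustrated with a tanh-based bound. This is only an assumed sketch: the exact saturation function and the projection operator used by the authors are not given in the abstract.

```python
import numpy as np

def smooth_sat(u, u_max):
    """Smooth, everywhere-differentiable stand-in for hard saturation:
    |smooth_sat(u)| < u_max for all finite u, and smooth_sat(u) ~ u
    for |u| << u_max, so it can be differentiated through in a
    backstepping-style control design."""
    return u_max * np.tanh(u / u_max)

# Near the origin the bound is inactive; large commands are clipped smoothly,
# which avoids the non-differentiable corner of a hard saturation.
small = smooth_sat(0.1, 2.0)   # essentially the raw command
large = smooth_sat(5.0, 2.0)   # strictly inside the +/-2 limit
```

A hard saturation would make the backstepping derivatives undefined at the limits; the smooth version trades a small tracking distortion for differentiability everywhere.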
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
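As a toy illustration of the decomposition, suppose an ensemble of model variants (sampling structure, inputs and parameters) predicts a single situation; the squared-bias and model-variance terms of MSEP_uncertain(X) can then be estimated directly. The numbers below are synthetic, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

y_obs = 5.0                          # observed outcome for one prediction situation
# Predictions from an ensemble over model structure, inputs and parameters
preds = rng.normal(4.0, 1.0, 500)

bias_sq = (preds.mean() - y_obs) ** 2   # squared-bias term (estimable from hindcasts)
model_var = preds.var(ddof=1)           # model-variance term (from a simulation experiment)
msep_uncertain = bias_sq + model_var
```

With real models the ensemble spread would come from crossing structures, input sets and parameter draws, and a random-effects ANOVA would split `model_var` into those separate contributions.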
Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem
2015-06-01
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies, and some may not report clear and accurate statistics to the government. Thus, a sound approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when input/output data are uncertain.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
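The multiplier idea can be sketched as follows: rather than sampling every grid cell of a recharge or pumping field, DREAM samples one multiplier per zone, which keeps the dimensionality low. The zone layout and values below are invented for illustration.

```python
import numpy as np

# Prior spatially distributed recharge estimate (mm/day) on a tiny grid
recharge_prior = np.array([[1.0, 1.2],
                           [0.8, 0.9]])
zones = np.array([[0, 0],
                  [1, 1]])        # zone index of each cell

# One uncertain multiplier per zone is estimated instead of every cell value;
# this array stands in for a single draw from the DREAM-sampled posterior.
multipliers = np.array([1.3, 0.7])
recharge = recharge_prior * multipliers[zones]
```

Two multipliers thus replace four (in practice thousands of) cell-wise unknowns, while the spatial pattern of the prior field is preserved within each zone.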
Wang, Yingyang; Hu, Jianbo
2018-05-19
An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Different from traditional non-affine models, which require the non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model whose non-affine functions are only locally semi-bounded and possibly non-differentiable. A new error transformation combined with novel prescribed performance functions is proposed to bypass the complex deductions caused by conventional error constraint approaches and to circumvent high-frequency chattering in control inputs. On the basis of the backstepping technique, an improved prescribed performance controller with low structural and computational complexity is designed. The methodology keeps the altitude and velocity tracking errors within transient and steady-state performance envelopes and presents excellent robustness against uncertain dynamics and dead-zone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Biehler, J; Wall, W A
2018-02-01
If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector
NASA Astrophysics Data System (ADS)
Lenel, U. R.; Davies, D. G. S.; Moore, M. A.
An analytical technique (Analysis with Uncertain Qualities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
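The variance-based method of Sobol' used here can be sketched with the Saltelli pick-freeze estimator on a cheap test function; in the study, a Gaussian-process emulator stands in for the expensive dispersion model, which the sketch omits.

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """First-order Sobol' indices via the Saltelli (2010) pick-freeze
    estimator for a function of d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # re-sample only input i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var  # fraction of variance from input i
    return S

# Cheap stand-in model: output dominated by the first input
# (analytically S1 = 9/10 and S2 = 1/10 for this linear function)
S = sobol_first_order(lambda x: 3.0 * x[:, 0] + x[:, 1], d=2)
```

The estimator needs n(d + 2) model runs, which is exactly why an emulator is required before applying it to a model as costly as Polyphemus/Polair3D.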
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Li, Dewei; Xi, Yugeng
2013-07-01
This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
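The core idea of treating input errors as uncertain parameters can be sketched with a toy forward model: the mean of a Gaussian input error is estimated jointly with a model parameter. Names, noise levels and the grid search (standing in for MCMC) are illustrative, not from SRH-1D.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic truth: the output is a linear function of an input flowrate series
q_true = np.linspace(10.0, 20.0, 50)
theta_true = 0.6
y_obs = theta_true * q_true + rng.normal(0.0, 0.05, 50)

# The measured input carries a bias whose mean is treated as uncertain
q_meas = q_true + rng.normal(0.8, 0.1, 50)

def log_like(theta, mu_err):
    """Log-likelihood with the input corrected by the uncertain bias mu_err
    before the (here trivial) forward model is evaluated."""
    resid = y_obs - theta * (q_meas - mu_err)
    return -0.5 * np.sum(resid**2) / 0.05**2

# Crude grid search for the joint maximum (an MCMC sampler in practice)
thetas = np.linspace(0.4, 0.8, 81)
mus = np.linspace(0.0, 1.6, 81)
ll = np.array([[log_like(t, m) for m in mus] for t in thetas])
i, j = np.unravel_index(ll.argmax(), ll.shape)
theta_map, mu_map = thetas[i], mus[j]
```

Ignoring the input bias would force `theta` to absorb it; estimating both jointly recovers values near the truth and, with a sampler, would also yield their joint uncertainty.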
Effects of uncertain topographic input data on two-dimensional flow modeling in a gravel-bed river
Legleiter, C.J.; Kyriakidis, P.C.; McDonald, R.R.; Nelson, J.M.
2011-01-01
Many applications in river research and management rely upon two-dimensional (2D) numerical models to characterize flow fields, assess habitat conditions, and evaluate channel stability. Predictions from such models are potentially highly uncertain due to the uncertainty associated with the topographic data provided as input. This study used a spatial stochastic simulation strategy to examine the effects of topographic uncertainty on flow modeling. Many, equally likely bed elevation realizations for a simple meander bend were generated and propagated through a typical 2D model to produce distributions of water-surface elevation, depth, velocity, and boundary shear stress at each node of the model's computational grid. Ensemble summary statistics were used to characterize the uncertainty associated with these predictions and to examine the spatial structure of this uncertainty in relation to channel morphology. Simulations conditioned to different data configurations indicated that model predictions became increasingly uncertain as the spacing between surveyed cross sections increased. Model sensitivity to topographic uncertainty was greater for base flow conditions than for a higher, subbankfull flow (75% of bankfull discharge). The degree of sensitivity also varied spatially throughout the bend, with the greatest uncertainty occurring over the point bar where the flow field was influenced by topographic steering effects. Uncertain topography can therefore introduce significant uncertainty to analyses of habitat suitability and bed mobility based on flow model output. In the presence of such uncertainty, the results of these studies are most appropriately represented in probabilistic terms using distributions of model predictions derived from a series of topographic realizations. Copyright 2011 by the American Geophysical Union.
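The stochastic-simulation logic can be sketched in a few lines: many equally likely bed realizations are pushed through a (here trivial) depth calculation and summarized node by node. The smoothing kernel is a crude stand-in for the geostatistical simulation of spatially correlated topographic error used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_real = 200, 500
bed_mean = np.linspace(98.0, 97.0, n_nodes)   # gently sloping mean bed profile (m)

# Equally likely bed realizations: spatially smoothed noise mimics the
# correlated elevation error a geostatistical simulation would produce
kernel = np.ones(9) / 9
noise = rng.normal(0.0, 0.15, (n_real, n_nodes))
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
realizations = bed_mean + smooth

wse = 98.5                                    # fixed water-surface elevation (m)
depth = np.clip(wse - realizations, 0.0, None)  # depth ensemble, node by node

depth_sd = depth.std(axis=0)                  # per-node predictive uncertainty
```

In the actual study the "depth calculation" is a full 2D flow model, so the ensemble spread also reflects topographic steering of the flow field, not just the local elevation error.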
Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.
2008-12-01
The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes have been selected, and the major hydrological processes and the geometries of their interconnections have been identified. The structural identification problem is then to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs, in the form of pre-assumed model equations, with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England at hourly time scales, conditioned on the assumption of a three-store conceptual model structure (soil moisture storage, fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, as well as provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input and due to mathematical structure.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability, based on Lur'e systems, is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
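For reference, the frequency-domain condition behind the Popov approach has the standard form: for a Lur'e system with loop transfer function G(s) and a memoryless nonlinearity sector-bounded in [0, k], absolute stability holds if there exists q >= 0 such that

```latex
\operatorname{Re}\!\left[(1 + j\omega q)\,G(j\omega)\right] + \frac{1}{k} > 0
\qquad \text{for all } \omega \ge 0 .
```

In the parametric robust version used here, this inequality must hold for every admissible parameter vector of the interval plant; the precise robustified form in the paper may differ in detail.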
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
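The sparse-surrogate idea can be sketched with an l1-regularized solver (iterative soft thresholding) on a multivariate Legendre basis; this is a deliberately simplified stand-in for the Weighted Iterative Bayesian Compressive Sensing of the study, and the toy "model" and dimensions are invented.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

d, deg, n = 6, 2, 80                  # 6 inputs, total degree 2, 80 model runs
multis = [m for m in product(range(deg + 1), repeat=d) if sum(m) <= deg]

def psi(m, X):
    """Multivariate Legendre basis function for multi-index m on [-1, 1]^d."""
    return np.prod([np.polynomial.legendre.Legendre.basis(k)(X[:, j])
                    for j, k in enumerate(m)], axis=0)

X = rng.uniform(-1.0, 1.0, (n, d))
Phi = np.column_stack([psi(m, X) for m in multis])

# Toy "model": depends on only two of the six inputs -> sparse PC coefficients
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 4] ** 2

def ista(Phi, y, lam=1e-3, iters=3000):
    """Iterative soft thresholding for min 0.5||Phi c - y||^2 + lam*||c||_1,
    an l1 stand-in for the Bayesian compressive sensing used in the study."""
    c = np.zeros(Phi.shape[1])
    L = np.linalg.eigvalsh(Phi.T @ Phi).max()     # Lipschitz constant of the gradient
    for _ in range(iters):
        c = c - Phi.T @ (Phi @ c - y) / L         # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft threshold
    return c

coeffs = ista(Phi, y)
```

Only a handful of coefficients survive the thresholding, mirroring how compressive sensing keeps PC surrogates tractable when the full degree-2 basis over 65 inputs would be far larger than the number of affordable model runs.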
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas
NASA Astrophysics Data System (ADS)
Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.
In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainties are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum resolution of a DEM required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainties. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
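The Monte Carlo step can be sketched as follows: perturb the DEM with an assumed vertical error, re-apply the (here trivial) planar flooding rule, and count per-cell flooding frequencies. The grid and error magnitude are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

dem = np.array([[3.0, 2.5, 2.0],
                [2.8, 2.2, 1.8],
                [2.6, 2.0, 1.5]])   # toy DEM, elevations in m
water_level = 2.1                   # planar water level, FLOODMAP-style
sigma = 0.3                         # assumed vertical DEM error (m)

n = 2000
flood_count = np.zeros_like(dem)
for _ in range(n):
    dem_r = dem + rng.normal(0.0, sigma, dem.shape)  # one DEM realization
    flood_count += dem_r < water_level               # cell floods if below water level

flood_prob = flood_count / n        # per-cell flooding probability
```

Cells well below the water level come out with probability near one, cells well above near zero, and the interesting fringe cells land in between; that fringe is exactly where DEM quality dominates the risk map.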
Feedforward/feedback control synthesis for performance and robustness
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang
1990-01-01
Both feedforward and feedback control approaches for uncertain dynamical systems are investigated. The control design objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant modeling uncertainty. Preshaping of an ideal, time-optimal control input using a 'tapped-delay' filter is shown to provide a rapid maneuver with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. The proposed feedforward/feedback control approach is robust for a certain class of uncertain dynamical systems, since the control input command computed for a given desired output does not depend on the plant parameters.
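A tapped-delay preshaping filter of the kind described can be sketched as a two-impulse zero-vibration (ZV) input shaper; the mode frequency and damping below are illustrative, and the paper's actual filter design may differ.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration (ZV) input shaper as a tapped-delay FIR
    filter: convolving a command with it cancels the residual vibration of
    a mode with natural frequency wn and damping zeta if the model is exact."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    t2 = np.pi / (wn * np.sqrt(1.0 - zeta**2))      # half the damped period
    taps = np.zeros(int(round(t2 / dt)) + 1)        # zeros between the two impulses
    taps[0] = 1.0 / (1.0 + K)
    taps[-1] = K / (1.0 + K)
    return taps

dt = 0.001
taps = zv_shaper(wn=2.0 * np.pi, zeta=0.02, dt=dt)  # 1 Hz mode, 2% damping
u = np.ones(3000)                                   # stand-in for a bang-bang profile
u_shaped = np.convolve(u, taps)[:3000]              # shaped (staircase) command
```

The taps sum to one, so the shaped command reaches the same final value as the original; the cost is a maneuver lengthened by half a vibration period, which is the performance/robustness trade the abstract refers to.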
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. 
While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
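The limits-of-acceptability check on FDC evaluation points can be sketched as follows. The ±20% discharge-data uncertainty and the EP placement are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(5)

q_obs = np.sort(rng.lognormal(0.0, 1.0, 1000))[::-1]   # observed FDC (flows, descending)
p_exc = (np.arange(1, 1001) - 0.5) / 1000              # exceedance probabilities

# Evaluation points at (roughly) equal increments of cumulative volume,
# i.e. the "volume method" of selecting EPs
cum_vol = np.cumsum(q_obs) / q_obs.sum()
ep_idx = np.searchsorted(cum_vol, np.linspace(0.1, 0.9, 9))

# Limits of acceptability from an assumed +/-20% discharge-data uncertainty
lower, upper = 0.8 * q_obs[ep_idx], 1.2 * q_obs[ep_idx]

def behavioural(q_sim):
    """GLUE-style rejection: keep a model run only if its FDC lies within
    the limits at every evaluation point."""
    fdc = np.sort(q_sim)[::-1]
    p = (np.arange(1, len(q_sim) + 1) - 0.5) / len(q_sim)
    sim_at_ep = np.interp(p_exc[ep_idx], p, fdc)       # simulated FDC at the EPs
    return bool(np.all((sim_at_ep >= lower) & (sim_at_ep <= upper)))
```

Because the check compares frequency distributions rather than time series, the simulated and observed periods need not overlap, which is the property the abstract highlights.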
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. 
While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
Robust fuel- and time-optimal control of uncertain flexible space structures
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken
1993-01-01
The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
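The idea of replacing a crisp kinetic parameter with a fuzzy number can be illustrated with alpha-cuts. The sketch below is a deliberately simplified stand-in for FSPN analysis: instead of a stochastic Petri net, a deterministic toy "mean response" model is propagated through the alpha-cuts of a triangular fuzzy rate. All function names and the toy model are assumptions, not the paper's method.

```python
import numpy as np

def tri_alpha_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number with support [a, b], mode m."""
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_output(model, a, m, b, alphas):
    """Propagate a triangular fuzzy parameter through a monotone model:
    at each alpha level, evaluate the model at the cut endpoints."""
    band = []
    for al in alphas:
        lo, hi = tri_alpha_cut(a, m, b, al)
        y = sorted((model(lo), model(hi)))
        band.append((al, y[0], y[1]))
    return band

# toy 'model': mean molecule count remaining at t = 1 under decay rate k
mean_decay = lambda k: 100.0 * np.exp(-k)
```

In a full FSPN analysis, `model` would be a stochastic simulation of the net at a fixed parameter value, so the output band combines fuzziness (across alpha-cuts) with randomness (within each simulation).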
2013-04-22
…Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots, Journal of Guidance, Control, and Dynamics (2010); Naira Hovakimyan, L1 Adaptive Controller for MIMO System with Unmatched Uncertainties using Modified Piecewise Constant Adaptation Law, IEEE 51st… This L1 adaptive control architecture uses data from the reference model.
Dealing with uncertainty in modeling intermittent water supply
NASA Astrophysics Data System (ADS)
Lieb, A. M.; Rycroft, C.; Wilkening, J.
2015-12-01
Intermittency in urban water supply affects hundreds of millions of people in cities around the world, impacting water quality and infrastructure. Building on previous work to dynamically model the transient flows in water distribution networks undergoing frequent filling and emptying, we now consider the hydraulic implications of uncertain input data. Water distribution networks undergoing intermittent supply are often poorly mapped, and household metering frequently ranges from patchy to nonexistent. In the face of uncertain pipe material, pipe slope, network connectivity, and outflow, we investigate how uncertainty affects dynamical modeling results. We furthermore identify which parameters exert the greatest influence on uncertainty, helping to prioritize data collection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Jason; Winkler, Jon
2018-01-31
Moisture buffering of building materials has a significant impact on the building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand the sensitivity of this model to uncertain inputs. In this paper, we use the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, and a set of simple inputs, can give reasonable prediction of the indoor humidity.
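A global sensitivity analysis of the kind mentioned above is often done with Morris-style elementary-effects screening. The sketch below uses a made-up stand-in for the EMPD humidity model (the weights, inputs, and ranges are invented for illustration) and a simplified screening routine; neither is the actual model or method of the paper.

```python
import numpy as np

def empd_toy(inputs):
    """Hypothetical stand-in for the EMPD humidity model (weights are made up)."""
    depth, sorption, area = inputs
    return 50.0 - 8.0 * np.log1p(depth) - 3.0 * sorption - 0.5 * area

def elementary_effects(model, lo, hi, n_paths=50, delta=0.1, seed=0):
    """Morris-style screening: mean absolute elementary effect per input."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    k = len(lo)
    effects = np.zeros(k)
    for _ in range(n_paths):
        x = rng.uniform(lo, hi * (1 - delta))   # leave room for the step
        y0 = model(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta * (hi[i] - lo[i])    # perturb one input at a time
            effects[i] += abs(model(xp) - y0)
    return effects / n_paths
```

Inputs with large mean elementary effects are the ones whose uncertainty most degrades the model's prediction capability and therefore deserve the most careful characterisation.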
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of the extrema of bounds of the dynamic response are determined by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and the interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated in depth, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
NASA Astrophysics Data System (ADS)
Chen, Liang-Ming; Lv, Yue-Yong; Li, Chuan-Jiang; Ma, Guang-Fu
2016-12-01
In this paper, we investigate cooperatively surrounding control (CSC) of multi-agent systems modeled by Euler-Lagrange (EL) equations under a directed graph. With consideration of the uncertain dynamics in an EL system, a backstepping CSC algorithm combined with neural networks is proposed first such that the agents can move cooperatively to surround the stationary target. Then, a command-filtered backstepping CSC algorithm is further proposed to deal with the constraints on control input and the absence of neighbors’ velocity information. Numerical examples of eight satellites surrounding one space target illustrate the effectiveness of the theoretical results. Project supported by the National Basic Research Program of China (Grant No. 2012CB720000) and the National Natural Science Foundation of China (Grant Nos. 61304005 and 61403103).
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model-independent and not restricted to any specific adaptive control method. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
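The published QFD definition is not reproduced in the abstract, so the following is only an illustrative quantile-based deviation in the same spirit: compare the flow quantiles of each ensemble member against reference flow quantiles and average the absolute differences. It is an assumption for illustration, not the actual QFD formula.

```python
import numpy as np

def quantile_flow_deviation(ensemble, reference, probs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Illustrative quantile-based deviation (NOT the published QFD formula):
    mean absolute difference between each ensemble member's flow quantiles and
    the reference flow quantiles, averaged over members and quantile levels."""
    probs = np.asarray(probs)
    ref_q = np.quantile(reference, probs)
    devs = [np.mean(np.abs(np.quantile(member, probs) - ref_q))
            for member in ensemble]
    return float(np.mean(devs))
```

Because the metric is computed per quantile level, a deviation attributable to input or model-structure variability can be examined separately at low, median, and high flows.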
Yuan Fang; Ge Sun; Peter Caldwell; Steven G. McNulty; Asko Noormets; Jean-Christophe Domec; John King; Zhiqiang Zhang; Xudong Zhang; Guanghui Lin; Guangsheng Zhou; Jingfeng Xiao; Jiquan Chen
2015-01-01
Evapotranspiration (ET) is arguably the most uncertain ecohydrologic variable for quantifying watershed water budgets. Although numerous ET and hydrological models exist, accurately predicting the effects of global change on water use and availability remains challenging because of model deficiency and/or a lack of input parameters. The objective of this study was to...
Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.
2013-08-01
Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
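The surrogate-construction step can be sketched as a least-squares polynomial fit mapping uncertain inputs to the objective. The study evaluated a suite of fitting methods; this single per-input polynomial fit (no cross terms) and all names here are simplifying assumptions.

```python
import numpy as np

def fit_surrogate(X, y, degree=2):
    """Least-squares polynomial-in-each-input surrogate (no cross terms).
    X: (n_samples, n_inputs) training inputs; y: objective values."""
    X = np.asarray(X, float)
    feats = np.hstack([X**d for d in range(degree + 1)])  # bias + powers
    coef, *_ = np.linalg.lstsq(feats, np.asarray(y, float), rcond=None)
    # return a fast predictor usable inside an optimization loop
    return lambda Xn: np.hstack([np.asarray(Xn, float)**d
                                 for d in range(degree + 1)]) @ coef
```

Once trained on high-fidelity fracturing simulations, such a predictor replaces the expensive forward model in the optimizer's objective and constraint evaluations, which is what makes optimization under many uncertain scenarios affordable.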
Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Rodney O.; Passalacqua, Alberto
2016-02-01
Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in the industry. Various models were proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low-order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as example application. The mean particle size is assumed to be the uncertain input parameter.
The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effects of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed are examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experiment results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
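The core quadrature idea (evaluating the model at quadrature nodes of the uncertain input and recombining with the weights to get response moments, with no changes to the solver) can be sketched for a single Gaussian input. The helper below is an assumption for illustration, not MFIX/QBUQ code, and it stops at raw moments rather than the EQMOM reconstruction.

```python
import numpy as np

def qbuq_moments(model, mean, std, n_nodes=5, n_moments=4):
    """Propagate one Gaussian uncertain input through a black-box model using
    Gauss-Hermite quadrature; return raw moments E[y^k] of the response."""
    # probabilists' Hermite nodes/weights match the standard normal measure
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    x = mean + std * nodes
    w = weights / np.sum(weights)      # normalise to a probability measure
    y = np.array([model(xi) for xi in x])
    return [float(np.sum(w * y**k)) for k in range(1, n_moments + 1)]
```

In the full QBUQ workflow these moments would then feed EQMOM/ECQMOM to reconstruct the PDF of the response and derive confidence intervals.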
NASA Astrophysics Data System (ADS)
Taha, Ahmad Fayez
Transportation networks, wearable devices, energy systems, and the book you are reading now are all ubiquitous cyber-physical systems (CPS). These inherently uncertain systems combine physical phenomena with communication, data processing, control and optimization. Many CPSs are controlled and monitored by real-time control systems that use communication networks to transmit and receive data from systems modeled by physical processes. Existing studies have addressed a breadth of challenges related to the design of CPSs. However, there is a lack of studies on uncertain CPSs subject to dynamic unknown inputs and cyber-attacks---an artifact of the insertion of communication networks and the growing complexity of CPSs. The objective of this dissertation is to create secure, computational foundations for uncertain CPSs by establishing a framework to control, estimate and optimize the operation of these systems. With major emphasis on power networks, the dissertation deals with the design of secure computational methods for uncertain CPSs, focusing on three crucial issues---(1) cyber-security and risk-mitigation, (2) network-induced time-delays and perturbations and (3) the encompassed extreme time-scales. The dissertation consists of four parts. In the first part, we investigate dynamic state estimation (DSE) methods and rigorously examine the strengths and weaknesses of the proposed routines under dynamic attack-vectors and unknown inputs. In the second part, and utilizing high-frequency measurements in smart grids and the developed DSE methods in the first part, we present a risk mitigation strategy that minimizes the encountered threat levels, while ensuring the continual observability of the system through available, safe measurements. The developed methods in the first two parts rely on the assumption that the uncertain CPS is not experiencing time-delays, an assumption that might fail under certain conditions. 
To overcome this challenge, networked unknown input observers---observers/estimators for uncertain CPSs---are designed such that the effect of time-delays and cyber-induced perturbations are minimized, enabling secure DSE and risk mitigation in the first two parts. The final part deals with the extreme time-scales encompassed in CPSs, generally, and smart grids, specifically. Operational decisions for long time-scales can adversely affect the security of CPSs for faster time-scales. We present a model that jointly describes steady-state operation and transient stability by combining convex optimal power flow with semidefinite programming formulations of an optimal control problem. This approach can be jointly utilized with the aforementioned parts of the dissertation work, considering time-delays and DSE. The research contributions of this dissertation furnish CPS stakeholders with insights on the design and operation of uncertain CPSs, whilst guaranteeing the system's real-time safety. Finally, although many of the results of this dissertation are tailored to power systems, the results are general enough to be applied for a variety of uncertain CPSs.
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including to case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally.
Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
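The stratified (Latin hypercube) Monte Carlo propagation that 'spup' implements can be sketched generically. The function names below are assumptions for illustration and are not the 'spup' API; each input distribution is supplied as an inverse-CDF so any marginal can be plugged in.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n stratified samples in [0, 1]^k: one draw per stratum, shuffled per column."""
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
    for j in range(k):
        rng.shuffle(u[:, j])
    return u

def propagate(model, dists, n=1000, seed=42):
    """MC uncertainty propagation: 'dists' maps each input to an inverse-CDF.
    Returns the ensemble of model outputs for summary or visualization."""
    rng = np.random.default_rng(seed)
    u = latin_hypercube(n, len(dists), rng)
    samples = np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(dists)])
    return np.array([model(x) for x in samples])
```

The returned realizations play the role of the MC ensemble mentioned above: they can be summarized into quantiles, exceedance probabilities, or maps of output uncertainty.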
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability or the ability to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally.
Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
NASA Astrophysics Data System (ADS)
Koo, Min-Sung; Choi, Ho-Lim
2018-01-01
In this paper, we consider a control problem for a class of uncertain nonlinear systems in which there exist an unknown time-varying delay in the input and lower triangular nonlinearities. In the existing results, input delays have usually been coupled with feedforward (or upper triangular) nonlinearities; in other words, the combination of lower triangular nonlinearities and input delay has been rare. Motivated by the existing controller for an input-delayed chain of integrators with nonlinearity, we show that the control of input-delayed nonlinear systems with two particular types of lower triangular nonlinearities can be achieved. As a control solution, we propose a newly designed feedback controller whose main features are its dynamic gain and non-predictor approach. Three examples are given for illustration.
QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices
NASA Technical Reports Server (NTRS)
Hess, R. A.; Henderson, D. K.
1996-01-01
A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e., more control effectors than output variables are used.
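One standard way to exploit redundant effectors (more inputs than controlled outputs) is minimum-energy control allocation with a Moore-Penrose pseudoinverse. This is a generic sketch of that idea, not the QFT compensator design of the paper, and the effectiveness matrix below is invented for illustration.

```python
import numpy as np

def allocate(B, v):
    """Distribute a virtual command v over redundant effectors u so that
    B u = v with minimum control energy: u = B^+ v (pseudoinverse)."""
    return np.linalg.pinv(B) @ v

# hypothetical effectiveness matrix: 2 controlled axes, 3 effectors
B = np.array([[1.0, 0.2, 0.5],
              [0.0, 1.0, 0.3]])
```

Because `B` has full row rank here, the allocated effector commands reproduce the virtual command exactly while spreading effort across all three surfaces.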
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1990-01-01
A game-theoretic controller is developed for a linear time-invariant system with parameter uncertainties in the system and input matrices. The input-output decomposition modeling of the plant uncertainty is adopted. The uncertain dynamic system is represented as an internal feedback loop in which the system is assumed to be forced by a fictitious disturbance caused by the parameter uncertainty. By considering the input and the fictitious disturbance as two noncooperative players, a differential game problem is constructed. It is shown that the resulting time-invariant controller stabilizes the uncertain system for a prescribed uncertainty bound. This game-theoretic controller is applied to the momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Inclusion of the external disturbance torque in the design procedure results in a dynamical feedback controller which consists of conventional PID control and a cyclic disturbance rejection filter. It is shown that the game-theoretic design, compared with the LQR design or pole placement design, improves the stability robustness with respect to inertia variations.
Effective techniques for the identification and accommodation of disturbances
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1989-01-01
The successful control of dynamic systems such as space stations or launch vehicles requires a controller design methodology that acknowledges and addresses the disruptive effects caused by external and internal disturbances that inevitably act on such systems. These disturbances, technically defined as uncontrollable inputs, typically vary with time in an uncertain manner and usually cannot be directly measured in real time. A relatively new non-statistical technique for modeling, and (on-line) identification of, those complex uncertain disturbances that are not as erratic and capricious as random noise is described. This technique applies to multi-input cases and to many of the practical disturbances associated with the control of space stations or launch vehicles. Then, a collection of smart controller design techniques is described that allows controlled dynamic systems, with possibly multi-input controls, to accommodate (cope with) such disturbances with extraordinary effectiveness. These new smart controllers are designed by non-statistical techniques and typically turn out to be unconventional forms of dynamic linear controllers (compensators) with constant coefficients. The simplicity and reliability of linear, constant-coefficient controllers is well known in the aerospace field.
Adaptive Control for Uncertain Nonlinear Multi-Input Multi-Output Systems
NASA Technical Reports Server (NTRS)
Cao, Chengyu (Inventor); Hovakimyan, Naira (Inventor); Xargay, Enric (Inventor)
2014-01-01
Systems and methods of adaptive control for uncertain nonlinear multi-input multi-output systems in the presence of significant unmatched uncertainty with assured performance are provided. The need for gain-scheduling is eliminated through the use of bandwidth-limited (low-pass) filtering in the control channel, which appropriately attenuates the high frequencies typically appearing in fast adaptation situations and preserves the robustness margins in the presence of fast adaptation.
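The bandwidth-limited filtering in the control channel can be illustrated with a discrete first-order low-pass filter, which attenuates the high-frequency content of a fast-adapting signal while passing its low-frequency component. This is a generic one-pole filter sketch, not the patented L1 architecture itself; the cutoff and names are assumptions.

```python
import numpy as np

def lowpass_filter_channel(u_raw, cutoff_hz, dt):
    """Discrete one-pole low-pass filter applied to a control signal.
    High-frequency adaptation transients are attenuated; slow commands pass."""
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))
    out = np.empty(len(u_raw), dtype=float)
    acc = 0.0
    for i, u in enumerate(u_raw):
        acc += alpha * (u - acc)   # exponential smoothing update
        out[i] = acc
    return out
```

Running a constant command through the filter converges to that command, while a sample-rate alternating signal (the kind fast adaptation can produce) is strongly attenuated, which is the robustness-preserving effect described above.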
Terminal sliding mode tracking control for a class of SISO uncertain nonlinear systems.
Chen, Mou; Wu, Qing-Xian; Cui, Rong-Xin
2013-03-01
In this paper, terminal sliding mode tracking control is proposed for uncertain single-input and single-output (SISO) nonlinear systems with unknown external disturbance. For the unmeasured disturbance of nonlinear systems, a terminal sliding mode disturbance observer is presented. The developed disturbance observer can guarantee that the disturbance approximation error converges to zero in finite time. Based on the output of the designed disturbance observer, terminal sliding mode tracking control is presented for uncertain SISO nonlinear systems. Subsequently, terminal sliding mode tracking control is developed using the disturbance observer technique for the uncertain SISO nonlinear system with control singularity and unknown non-symmetric input saturation. The effects of the control singularity and unknown input saturation are combined with the external disturbance, which is approximated using the disturbance observer. Under the proposed terminal sliding mode tracking control techniques, the finite-time convergence of all closed-loop signals is guaranteed via Lyapunov analysis. Numerical simulation results are given to illustrate the effectiveness of the proposed terminal sliding mode tracking control. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
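A terminal-sliding-mode-style law can be illustrated on a first-order plant with a bounded disturbance: a fractional-power term gives fast (finite-time-style) convergence near the origin, and a switching term dominates the disturbance bound. The plant, gains, and exponent below are invented for illustration and are not the controller of the paper.

```python
import numpy as np

def simulate_tsmc(x0=2.0, k=5.0, beta=1.0, p=5, q=3, dt=1e-3, T=5.0):
    """First-order plant x' = u + d(t) driven toward zero by a terminal-SM-style
    law u = -k*sign(x)*|x|**(q/p) - beta*sign(x), with q < p (fractional power)
    and beta exceeding the disturbance bound |d| <= 0.5."""
    x, t = x0, 0.0
    while t < T:
        d = 0.5 * np.sin(2.0 * t)                       # bounded disturbance
        u = -k * np.sign(x) * abs(x) ** (q / p) - beta * np.sign(x)
        x += (u + d) * dt                               # explicit Euler step
        t += dt
    return x
```

Despite the persistent disturbance, the state is driven into a small neighbourhood of zero (the residual is the usual discrete-time chattering band), which mirrors the disturbance-rejection behaviour the abstract describes.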
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years of life lost was also predicted, to compare the relative importance of the monetary-valuation uncertainties with the health-effect uncertainties. Results The magnitude of the health-effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental, and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. 
Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
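The rank-order correlation sensitivity analysis mentioned above can be sketched as follows. The toy cost model and input distributions are illustrative assumptions, not the authors' life-table model; by construction the exposure-response coefficient dominates and lag barely matters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical uncertain inputs for a toy health-cost model
beta = rng.normal(1.0, 0.3, n)       # exposure-response coefficient
rate = rng.uniform(0.01, 0.05, n)    # discount rate
lag = rng.uniform(0.0, 5.0, n)       # lag (weak influence by construction)

# Toy output: discounted years-of-life-lost cost
cost = beta * 1000.0 / (1.0 + rate) ** 10 + 2.0 * lag

def spearman(x, y):
    """Rank-order (Spearman) correlation: Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rho = {name: spearman(v, cost) for name, v in
       [("beta", beta), ("rate", rate), ("lag", lag)]}
```

Working on ranks makes the measure robust to monotone nonlinearities such as the discounting term.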
Khazaee, Mostafa; Markazi, Amir H D; Omidi, Ehsan
2015-11-01
In this paper, a new Adaptive Fuzzy Predictive Sliding Mode Control (AFP-SMC) is presented for nonlinear systems with uncertain dynamics and unknown input delay. The control unit consists of a fuzzy inference system to approximate the ideal linearization control, together with a switching strategy to compensate for the estimation errors. Also, an adaptive fuzzy predictor is used to estimate the future values of the system states to compensate for the time delay. Adaptation laws are used to tune the controller and predictor parameters, which guarantees stability based on a Lyapunov-Krasovskii functional. To evaluate the effectiveness of the method, simulation and experimental results on an overhead crane system are presented. According to the obtained results, AFP-SMC can effectively control uncertain nonlinear systems subject to input delays of known bound. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Modeling transport phenomena and uncertainty quantification in solidification processes
NASA Astrophysics Data System (ADS)
Fezi, Kyle S.
Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold, followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied, and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been quantified in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. 
The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
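To see why the form of the likelihood matters, here is a minimal sketch of the lag-1 autocorrelation ingredient alone, using a Gaussian error model in place of the heteroscedastic Skew Exponential Power distribution; the AR(1) coefficient and noise level are made-up values.

```python
import numpy as np

def ar1_loglik(res, phi, sigma):
    """Conditional Gaussian log-likelihood of lag-1 autocorrelated residuals."""
    innov = res[1:] - phi * res[:-1]          # whitened innovations
    n = innov.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum(innov ** 2) / sigma ** 2)

# Synthetic AR(1) residual series with known coefficient
rng = np.random.default_rng(1)
phi_true, sigma_true = 0.6, 0.5
res = np.zeros(1000)
for t in range(1, res.size):
    res[t] = phi_true * res[t - 1] + rng.normal(0.0, sigma_true)

ll_true = ar1_loglik(res, phi_true, sigma_true)   # correct error model
ll_none = ar1_loglik(res, 0.0, sigma_true)        # ignoring autocorrelation
```

Ignoring the autocorrelation (phi = 0) leaves the residuals unwhitened and markedly lowers the log-likelihood, which is the kind of misspecification a formal Bayesian analysis is meant to avoid.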
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to accurately identify the uncertainties, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of the data. Therefore, the HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. 
In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimation using the HBMA method shows better agreement with the observation data in the test step because the estimate is not based on a single model with a non-dominant weight.
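A single, non-hierarchical BMA step of the kind HBMA builds on can be sketched with polynomial regressions standing in for ANN models; BIC-based weights are a common approximation to the Bayesian evidence, and the data-generating function here is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-2.0, 2.0, 60)
y = 1.0 + 0.5 * x - 1.5 * x ** 2 + rng.normal(0.0, 0.3, x.size)  # quadratic truth

# Candidate "models": polynomial fits of degree 1..3
degrees = [1, 2, 3]
bic, preds = [], []
for deg in degrees:
    coef = np.polyfit(x, y, deg)
    yhat = np.polyval(coef, x)
    rss = np.sum((y - yhat) ** 2)
    k = deg + 1                                  # number of fitted parameters
    bic.append(x.size * np.log(rss / x.size) + k * np.log(x.size))
    preds.append(yhat)

bic = np.array(bic)
w = np.exp(-0.5 * (bic - bic.min()))             # evidence approximation
w /= w.sum()                                     # posterior model weights
y_bma = np.einsum("m,mn->n", w, np.array(preds)) # model-averaged prediction
```

The weighted average avoids overconfidence in the single best model; the spread of the individual predictions around y_bma is the model-choice contribution to the estimation variance.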
Orbit control of a stratospheric satellite with parameter uncertainties
NASA Astrophysics Data System (ADS)
Xu, Ming; Huo, Wei
2016-12-01
When a stratospheric satellite travels by prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to keep a constant-latitude orbital flight. To design the orbit control system, a 6 degree-of-freedom (DOF) model of the satellite is established based on the second Lagrangian formulation. It is proven that input/output feedback linearization theory cannot be directly implemented for orbit control with this model; thus, three subsystem models are deduced from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies high-precision requirements for the orbital flight of the satellite.
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A local learning soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable set. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine these local GPR models into the final prediction result. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
2006-12-01
Mosca, E. (2006), Control of Uncertain Systems under Constraints: Switching Horizon Predictive Control of Persistently Disturbed… The abstract fragment concerns switching on, at any time, one of a family of candidate feedback gains so as to control a discrete-time input-saturated LTI system, possibly subject to persistent disturbances, using feedback controls u = f(x̂) that ensure, under suitable conditions, stability in the noiseless case as well as a finite l∞-induced gain.
Sarhadi, Pouria; Noei, Abolfazl Ranjbar; Khosravi, Alireza
2016-11-01
Input saturation and uncertain dynamics are among the practical challenges in the control of autonomous vehicles. Adaptive control is known as a proper method to deal with the uncertain dynamics of these systems. Therefore, incorporating the ability to cope with input saturation into adaptive controllers can be valuable. In this paper, an adaptive autopilot is presented for the pitch and yaw channels of an autonomous underwater vehicle (AUV) in the presence of input saturation. This is performed by combining model reference adaptive control (MRAC) with integral state feedback and a modern anti-windup (AW) compensator. MRAC with integral state feedback is commonly used in autonomous vehicles; however, some proper modifications need to be taken into account in order to cope with the saturation problem. To this end, a Riccati-based anti-windup (AW) compensator is employed. The presented technique is applied to the nonlinear six degrees-of-freedom (DOF) model of an AUV, and the obtained results are compared with those of the baseline method. Several simulation scenarios are executed in the pitch and yaw channels to evaluate the controller performance. Moreover, the effectiveness of the proposed adaptive controller is comprehensively investigated by means of Monte Carlo simulations. The obtained results verify the performance of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
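Once a reduced-form pollutant-precursor response is in hand, the likelihood estimate itself is a short Monte Carlo computation. The linear response, lognormal coefficient, and target below are hypothetical stand-ins for the paper's photochemical reduced-form model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Reduced-form response: ozone improvement = k * emission cut (hypothetical)
k = rng.lognormal(mean=np.log(0.04), sigma=0.3, size=n)  # ppb per (tons/day)
cut = 50.0                                               # tons/day emission cut
target = 2.0                                             # required ppb reduction

improvement = k * cut
likelihood = float(np.mean(improvement >= target))       # attainment probability
```

Here the target happens to equal the median response, so the estimated attainment likelihood is about 0.5; tightening the target or widening the coefficient uncertainty moves it accordingly.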
Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions
NASA Astrophysics Data System (ADS)
Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.
2017-12-01
Ice sheet and glacier model outputs depend on inputs from uncertain initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above flotation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore the sensitivities of the Greenland Ice Sheet's entire and regional volumes to initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as the one described here are now accessible to the modeling community, based on the latest version of SICOPOLIS, which has been adapted for OpenAD to generate correct and efficient adjoint code.
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single-loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying the uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
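The contrast between the direct perturbation method and Monte Carlo can be shown on a scalar Darcy-like response p(k) = qL/k, a stand-in for the full reservoir equations; the mean and spread of the permeability are illustrative numbers.

```python
import numpy as np

qL = 1.0
def f(k):
    """Toy pressure response p(k) = qL / k for permeability k."""
    return qL / k

mu_k, sd_k = 1.0, 0.1          # mean and standard deviation of permeability

# Direct method: Taylor expansion about the mean (one "solution" only)
fp = -qL / mu_k ** 2           # f'(mu_k)
fpp = 2.0 * qL / mu_k ** 3     # f''(mu_k)
mean_pert = f(mu_k) + 0.5 * fpp * sd_k ** 2   # second-order mean correction
var_pert = fp ** 2 * sd_k ** 2                # first-order variance

# Monte Carlo reference: many solutions instead of one
rng = np.random.default_rng(4)
ks = rng.normal(mu_k, sd_k, 200_000)
mean_mc, var_mc = float(np.mean(f(ks))), float(np.var(f(ks)))
```

For a full simulator each Monte Carlo sample would be a complete reservoir run, which is exactly the cost the direct expansion avoids.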
Assessing risk based on uncertain avalanche activity patterns
NASA Astrophysics Data System (ADS)
Zeidler, Antonia; Fromm, Reinhard
2015-04-01
Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructures, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that there will be a change in the risk pattern in the future. The decision makers need to understand what the future might bring to best formulate their mitigation strategies. Therefore, we explore a commercial risk software package to calculate the risk for the coming years, which might help in decision processes. The software, @RISK, is known to many larger companies, and therefore we explore its capability to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in the future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected costs for repairing the object, and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes, we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables, it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or unavailable the software allows selection from over 30 different distribution types. 
The Monte Carlo simulation draws from the probability distributions of the uncertain variables, using all valid combinations of input values to simulate the possible outcomes. In our case the output is the expected risk (Euro/year) for each object (e.g. water intake) considered and for the entire hydropower generation system. The output is again a distribution that is interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation, we will show how we used the uncertain information on future avalanche activity in a commercial risk software package, thereby bringing the knowledge of natural hazard experts to decision makers.
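A stripped-down version of such a Monte Carlo risk model for a single object, with made-up distributions in the spirit of the setup described above, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Hypothetical distributions for one object (e.g., a water intake)
p_hit = rng.beta(2, 38, n)                        # annual hit probability
vuln = rng.triangular(0.2, 0.5, 0.9, n)           # damage fraction if hit
repair = rng.triangular(50e3, 100e3, 200e3, n)    # repair cost, EUR
downtime = rng.triangular(10e3, 30e3, 90e3, n)    # interruption cost, EUR

annual_risk = p_hit * vuln * (repair + downtime)  # EUR/year per sample
expected_risk = float(annual_risk.mean())
```

Because the sampled variables are independent here, the expected annual risk approaches the product of the means (roughly 4,300 EUR/year with these numbers); the full distribution of annual_risk, not just its mean, is what the decision maker would inspect.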
Robust H ∞ Control for Spacecraft Rendezvous with a Noncooperative Target
Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang
2013-01-01
The robust H ∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of the chaser and the noncooperative target is first modeled as an uncertain system, which contains uncertain orbit parameters and mass. Then the H ∞ performance and finite-time performance are proposed, and a robust H ∞ controller is developed to drive the chaser to rendezvous with the noncooperative target in the presence of control input saturation, measurement error, and thrust error. Linear matrix inequality technology is used to derive the sufficient condition of the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller. PMID:24027446
Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang
2017-07-01
This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.
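Backstepping needs a differentiable input nonlinearity, so a smooth surrogate typically replaces the hard saturation in the analysis; a common choice (not necessarily the one used in the paper) is a tanh approximation:

```python
import numpy as np

def smooth_sat(u, u_max):
    """Smooth, everywhere-differentiable surrogate for hard saturation."""
    return u_max * np.tanh(u / u_max)

u_max = 2.0
u = np.linspace(-10.0, 10.0, 401)
out = smooth_sat(u, u_max)

# Hard saturation for comparison
hard = np.clip(u, -u_max, u_max)
small = np.abs(u) < 0.2
max_gap_small = float(np.max(np.abs(out[small] - u[small])))
```

The surrogate never exceeds the limit, matches the identity near zero, and is strictly increasing; the bounded mismatch between smooth_sat and the hard clip is analogous to the saturation error term that appears in the tracking-error bound.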
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
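The core FOSM step described above can be sketched in a few lines: a first-order Taylor expansion combines a sensitivity (Jacobian) matrix with an input variance-covariance matrix to give an output covariance. The sizes and numbers below are hypothetical stand-ins, not values from the study; in practice the Jacobian would come from MODFLOW 2000's sensitivity process.

```python
import numpy as np

# Hypothetical example: 3 head locations, 2 uncertain inputs
# (transmissivity, recharge). J holds d(head_i)/d(input_j).
J = np.array([[0.8, 0.1],
              [0.5, 0.3],
              [0.2, 0.6]])

# Input variance-covariance matrix from the geostatistical analysis
# (diagonal here, i.e. inputs assumed uncorrelated for simplicity).
C_in = np.diag([0.04, 0.01])

# First-order second moment propagation: Cov(head) ~= J @ C_in @ J.T
C_head = J @ C_in @ J.T
head_std = np.sqrt(np.diag(C_head))   # standard deviation of head at each location
```

The diagonal of `C_head` yields the head variances used for confidence intervals; the off-diagonal terms carry the spatial covariance between head locations.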
Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.
Singh, Raj Mohan
2008-04-01
Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides, and herbicides, and transports them into nearby streams. This is a matter of serious concern for water managers and environmental researchers. Both the application of chemicals to agricultural fields and the transport of these chemicals into streams are uncertain, which complicates reliable stream-quality prediction. The characteristics of the applied chemical, the percentage of area under chemical application, and similar factors are the main inputs that determine the pollution concentration in streams. Each of these inputs and outputs may contain measurement errors. A fuzzy rule based model, built on fuzzy sets, is well suited to addressing uncertainties in the inputs by incorporating overlapping membership functions for each input, even in limited-data situations. In this study, the ability of fuzzy sets to address uncertainty in the input-output relationship is used to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi river system, are used to develop the fuzzy rule based models. The performance of the developed methodology is found to be encouraging.
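As a minimal sketch of the rule-based idea, the snippet below builds overlapping triangular membership functions over two inputs (chemical application rate and fraction of area treated) and combines a small zero-order Sugeno-style rule base into a crisp concentration estimate. All breakpoints, rule consequents, and the choice of Sugeno aggregation are hypothetical illustrations; the paper's calibrated rules for the White River basin would differ.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def atrazine_estimate(rate, frac):
    """Crisp concentration estimate (ug/L) from two uncertain inputs via a
    zero-order Sugeno rule base. Breakpoints and consequents are hypothetical."""
    mu_rate = {"low": tri(rate, -1.0, 0.0, 2.0), "high": tri(rate, 0.0, 2.0, 3.0)}
    mu_frac = {"low": tri(frac, -0.5, 0.0, 0.6), "high": tri(frac, 0.0, 0.6, 1.1)}
    # Rule base: (application-rate level, treated-area level) -> concentration
    rules = {("low", "low"): 0.1, ("low", "high"): 0.5,
             ("high", "low"): 0.8, ("high", "high"): 2.0}
    num = den = 0.0
    for (r, f), conc in rules.items():
        w = min(mu_rate[r], mu_frac[f])      # rule firing strength
        num += w * conc
        den += w
    return num / den if den else 0.0
```

The overlapping memberships let every input value partially activate several rules, which is how the approach absorbs measurement uncertainty in the inputs.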
NASA Astrophysics Data System (ADS)
Azizi, S.; Torres, L. A. B.; Palhares, R. M.
2018-01-01
The regional robust stabilisation by means of linear time-invariant state feedback control for a class of uncertain MIMO nonlinear systems with parametric uncertainties and control input saturation is investigated. The nonlinear systems are described in a differential algebraic representation and the regional stability is handled considering the largest ellipsoidal domain-of-attraction (DOA) inside a given polytopic region in the state space. A novel set of sufficient Linear Matrix Inequality (LMI) conditions with new auxiliary decision variables are developed aiming to design less conservative linear state feedback controllers with corresponding larger DOAs, by considering the polytopic description of the saturated inputs. A few examples are presented showing favourable comparisons with recently published similar control design methodologies.
Song, Qi; Song, Yong-Duan
2011-12-01
This paper investigates the position and velocity tracking control problem of high-speed trains with multiple vehicles connected through couplers. A dynamic model reflecting nonlinear and elastic impacts between adjacent vehicles, as well as traction/braking nonlinearities and actuation faults, is derived. Neuroadaptive fault-tolerant control algorithms are developed to account simultaneously for factors such as input nonlinearities, actuator failures, and the uncertain impacts of in-train forces. The resultant control scheme is essentially independent of the system model and is primarily data-driven: with appropriate input-output data, the proposed algorithms automatically generate the intermediate control parameters, neuro-weights, and compensation signals, literally producing the traction/braking force from input and response data alone; the whole process requires neither precise information on the system model or parameters nor human intervention. The effectiveness of the proposed approach is confirmed through numerical simulations.
A probabilistic asteroid impact risk model: assessment of sub-300 m impacts
NASA Astrophysics Data System (ADS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2017-06-01
A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
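The Monte Carlo skeleton of such a risk model can be sketched as follows: uncertain impactor properties are sampled, each scenario is pushed through a consequence calculation, and the results are aggregated into an outcome distribution. The distributions and the kinetic-energy-only consequence measure below are illustrative placeholders, not the PAIR model's published entry, breakup, or ground-damage physics.

```python
import math
import random

rng = random.Random(1)

def sample_impact_energy():
    """Sample one impact scenario from illustrative input distributions
    and return its kinetic energy in megatons of TNT."""
    diameter = rng.lognormvariate(math.log(50.0), 0.8)   # m
    density = rng.uniform(1500.0, 3500.0)                # kg/m^3
    velocity = rng.uniform(11e3, 30e3)                   # m/s
    mass = density * math.pi / 6.0 * diameter ** 3       # spherical impactor
    return 0.5 * mass * velocity ** 2 / 4.184e15         # 4.184e15 J per Mt TNT

# Aggregate sampled scenarios into a distribution of potential outcomes
energies = sorted(sample_impact_energy() for _ in range(10_000))
median = energies[len(energies) // 2]
p95 = energies[int(0.95 * len(energies))]
```

A real assessment would replace the energy calculation with atmospheric-entry, energy-deposition, and population-exposure models, but the sampling-and-aggregation frame stays the same.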
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This transforms the tracking problem into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and linear matrix inequality (LMI) technique. The method proposed in this paper not only overcomes the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example also illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, comprising an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller improves performance along the iteration direction, and the feedback controllers improve performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in terms of a linear matrix inequality (LMI), which can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results demonstrate the effectiveness of the proposed schemes.
Understanding earth system models: how Global Sensitivity Analysis can help
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts for uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is carried out on the results.
Application of TREECS Modeling System to Strontium-90 for Borschi Watershed near Chernobyl, Ukraine.
Johnson, Billy E; Dortch, Mark S
2014-05-01
The Training Range Environmental Evaluation and Characterization System (TREECS™) (http://el.erdc.usace.army.mil/treecs/) is being developed by the U.S. Army Engineer Research and Development Center (ERDC) for the U.S. Army to forecast the fate of munitions constituents (MC) (such as high explosives (HE) and metals) found on firing/training ranges, as well as those subsequently transported to surface water and groundwater. The overall purpose of TREECS™ is to provide environmental specialists with tools to assess the potential for MC migration into surface water and groundwater systems and to assess range management strategies to ensure protection of human health and the environment. The multimedia fate/transport models within TREECS™ are mathematical models of reduced form (e.g., reduced dimensionality) that allow rapid application with fewer input data requirements than more complicated models. Although TREECS™ was developed for the fate of MC from military ranges, it has general applicability to many other situations requiring prediction of contaminant (including radionuclide) fate in multi-media environmental systems. TREECS™ was applied to the Borschi watershed near the Chernobyl Nuclear Power Plant, Ukraine. At this site, TREECS™ demonstrated its use as a modeling tool to predict the fate of strontium 90 ((90)Sr). The most sensitive and uncertain input for this application was the soil-water partitioning distribution coefficient (Kd) for (90)Sr. The TREECS™ soil model provided reasonable estimates of the surface water export flux of (90)Sr from the Borschi watershed when using a Kd for (90)Sr of 200 L/kg. The computed export for the year 2000 was 0.18% of the watershed inventory of (90)Sr compared to the estimated export flux of 0.14% based on field data collected during 1999-2001. The model indicated that assumptions regarding the form of the inventory, whether dissolved or in solid phase form, did not appreciably affect export rates.
Also, the percentage of non-exchangeable adsorbed (90)Sr, which is uncertain and affects the amount of (90)Sr available for export, was fixed at 20% based on field data measurements. A Monte Carlo uncertainty analysis was conducted treating Kd as an uncertain input variable with a range of 100-300 L/kg. This analysis resulted in a range of 0.13-0.27% of inventory exported to surface water compared to 0.14% based on measured field data. Based on this model application, it was concluded that the export of (90)Sr from the Borschi watershed to surface water is predominantly a result of soil pore water containing dissolved (90)Sr being diverted to surface waters that eventually flow out of the watershed. The percentage of non-exchangeable adsorbed (90)Sr and the soil-water Kd are the two most sensitive and uncertain factors affecting the amount of export. The 200-year projections of the model showed an exponential decline in (90)Sr export fluxes from the watershed that should drop by a factor of 10 by the year 2100. This presentation will focus on TREECS capabilities and the case study done for the Borschi Watershed. Published by Elsevier Ltd.
Spatial planning using probabilistic flood maps
NASA Astrophysics Data System (ADS)
Alfonso, Leonardo; Mukolwe, Micah; Di Baldassarre, Giuliano
2015-04-01
Probabilistic flood maps account for uncertainty in flood inundation modelling and convey the degree of certainty in the outputs. Major sources of uncertainty include input data, topographic data, model structure, observation data, and parametric uncertainty. Decision makers prefer less ambiguous information from modellers, which implies that uncertainty is suppressed to yield binary flood maps; however, suppressing information may lead to surprises or misleading decisions. Including uncertain information in the decision-making process is therefore desirable and more transparent. To this end, we use Prospect theory together with information from a probabilistic flood map to evaluate potential decisions, and the consequences of each decision are evaluated using flood risk analysis. Prospect theory explains how choices are made among options whose probabilities of occurrence are known, and accounts for decision-maker characteristics such as loss aversion and risk seeking. Our results show that decision making is most pronounced when gains and losses are high, implying higher payoffs and penalties and therefore a higher-stakes gamble. The methodology may thus be appropriately considered when making decisions based on uncertain information.
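A sketch of the prospect-theory scoring used to compare decision options might look like the following. The value-function parameters are the classic Tversky-Kahneman estimates, while the outcomes and probabilities are invented placeholders; a full treatment would also apply a probability-weighting function, which is omitted here.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and
    loss-averse (factor lam > 1) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def prospect_score(outcomes, probs):
    """Probability-weighted prospect value of one decision option
    (probability weighting w(p) omitted for brevity)."""
    return sum(p * prospect_value(x) for x, p in zip(outcomes, probs))

# Hypothetical land-use decision under an uncertain flood map:
# develop a plot (gain if dry, large loss if flooded) vs. leave it undeveloped.
develop = prospect_score([100.0, -200.0], [0.7, 0.3])
do_nothing = prospect_score([0.0], [1.0])
```

Because losses are weighted more heavily than gains, the flood-exposed option scores below the safe one here even though its expected monetary value is positive, which is exactly the loss-aversion effect the paper exploits.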
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs; rather, we formulate the problem as a distributionally robust optimization problem in which the outer minimization determines the control or design, while the inner maximization determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
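A minimal version of the Morris elementary-effects computation is sketched below for inputs scaled to the unit hypercube: with r trajectories of k one-at-a-time steps each, the cost is r(k+1) model runs, linear in k as the abstract notes. It returns the screening statistic mu* (mean absolute elementary effect) per input; the test function is an arbitrary stand-in for an expensive model like CAM.

```python
import random

def morris_mu_star(f, k, r=20, levels=4, seed=0):
    """Morris one-at-a-time screening on [0, 1]^k.
    Returns mu* (mean absolute elementary effect) for each of the k inputs."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))                   # standard MOAT step
    grid = [i / (levels - 1) for i in range(levels // 2)]   # bases with room for +delta
    mu_star = [0.0] * k
    for _ in range(r):
        x = [rng.choice(grid) for _ in range(k)]            # random base point
        fx = f(x)
        order = list(range(k))
        rng.shuffle(order)                                  # random step order
        for i in order:                                     # walk one trajectory
            x = x[:i] + [x[i] + delta] + x[i + 1:]
            fx_new = f(x)
            mu_star[i] += abs(fx_new - fx) / delta          # elementary effect of input i
            fx = fx_new
    return [m / r for m in mu_star]

# Stand-in "model": input 0 dominates, input 2 is nearly inert
mu = morris_mu_star(lambda x: 10 * x[0] + x[1] + 0.1 * x[2] ** 2, k=3)
```

Each trajectory perturbs every input exactly once from the previously perturbed point, which is what lets MOAT pick up the nonlinear interactions that plain EOAT variation about a single base point misses.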
NASA Technical Reports Server (NTRS)
Brophy, J. R., Jr.; Wilbur, P. J.
1980-01-01
A simple theoretical model which can be used as an aid in the design of the baffle aperture region of a hollow cathode equipped ion thruster was developed. An analysis of the ion and electron currents in both the main and cathode discharge chambers is presented. From this analysis a model of current flow through the aperture, which is required as an input to the design model, was developed. This model was verified experimentally. The dominant force driving electrons through the aperture was the force due to the electrical potential gradient. The diffusion process was modeled according to the Bohm diffusion theory. A number of simplifications were made to limit the amount of detailed plasma information required as input to the model, to facilitate the use of the model in thruster design. This simplified model gave remarkably consistent results compared with experimental results obtained with a given thruster geometry over substantial changes in operating conditions. The model was uncertain to about a factor of two across different thruster cathode region geometries. The design usefulness was limited by this factor-of-two uncertainty and by the accuracy to which the plasma parameters required as inputs to the model were specified.
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
The transport phenomena based heat transfer and fluid flow calculations in weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in weld pool are some of these parameters, values of which are rarely known and difficult to assign a priori based on the scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real number based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Anticipatory Emotions in Decision Tasks: Covert Markers of Value or Attentional Processes?
Davis, Tyler; Love, Bradley C.; Maddox, Todd
2009-01-01
Anticipatory emotions precede behavioral outcomes and provide a means to infer interactions between emotional and cognitive processes. A number of theories hold that anticipatory emotions serve as inputs to the decision process and code the value or risk associated with a stimulus. We argue that current data do not unequivocally support this theory. We present an alternative theory whereby anticipatory emotions reflect the outcome of a decision process and serve to ready the subject for new information when making an uncertain response. We test these two accounts, which we refer to as emotions-as-input and emotions-as-outcome, in a task that allows risky stimuli to be dissociated from uncertain responses. We find that emotions are associated with responses as opposed to stimuli. This finding is contrary to the emotions-as-input perspective as it shows that emotions arise from decision processes. PMID:19428002
Choi, Yun Ho; Yoo, Sung Jin
2018-06-01
This paper investigates the event-triggered decentralized adaptive tracking problem of a class of uncertain interconnected nonlinear systems with unexpected actuator failures. It is assumed that local control signals are transmitted to local actuators with time-varying faults whenever predefined conditions for triggering events are satisfied. Compared with the existing control-input-based event-triggering strategy for adaptive control of uncertain nonlinear systems, the aim of this paper is to propose a tracking-error-based event-triggering strategy in the decentralized adaptive fault-tolerant tracking framework. The proposed approach can mitigate the drastic changes in control inputs caused by actuator faults under the existing triggering strategy. The stability of the proposed event-triggering control system is analyzed in the Lyapunov sense. Finally, simulation comparisons of the proposed and existing approaches are provided to show the effectiveness of the proposed theoretical result in the presence of actuator faults. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Feedback system design with an uncertain plant
NASA Technical Reports Server (NTRS)
Milich, D.; Valavani, L.; Athans, M.
1986-01-01
A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
NASA Astrophysics Data System (ADS)
Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter
2016-10-01
Forecasting is one of the logistics activities, and a sales forecast is the starting point for elaborating business plans. Forecast accuracy affects business outcomes and may ultimately have a significant effect on the economic stability of the company. The accuracy of a prediction depends on the suitability of the forecasting methods used, experience, the quality of the input data, the time period, and other factors. The input data are usually not deterministic but often of a random nature, affected by uncertainties of the market environment and many other factors. By taking the input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecast value is simulated with a case-study model. A statistical analysis and risk analysis of the forecast results, including sensitivity analysis and variable-impact analysis, is carried out.
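The workflow described (sample uncertain inputs, propagate them through the forecast, then analyse output statistics and variable impact) can be sketched as follows. The triangular input distributions and the compound-growth forecast are invented stand-ins for the case-study model, and Pearson correlation is used as a simple variable-impact measure.

```python
import random
import statistics

rng = random.Random(3)

def sample_inputs():
    """Uncertain forecast inputs as triangular distributions (illustrative)."""
    return {
        "base_demand": rng.triangular(900.0, 1100.0, 1000.0),   # units/month
        "growth": rng.triangular(-0.02, 0.06, 0.02),            # monthly growth rate
    }

def forecast(inputs, months=6):
    """Toy compound-growth sales forecast, a stand-in for the case-study model."""
    return inputs["base_demand"] * (1.0 + inputs["growth"]) ** months

runs = [sample_inputs() for _ in range(5000)]
outs = [forecast(r) for r in runs]
mean, sd = statistics.mean(outs), statistics.stdev(outs)

def pearson(xs, ys):
    """Sample Pearson correlation, used here as a variable-impact measure."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

impact = {k: pearson([r[k] for r in runs], outs) for k in ("base_demand", "growth")}
```

Ranking `impact` tells the planner which uncertain input drives most of the forecast spread and therefore where better data would pay off most.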
Li, Dong-Juan; Li, Da-Peng
2017-09-14
In this paper, an adaptive output feedback control is framed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multi-output nonaffine nonlinear systems in the nested lower-triangular form. Furthermore, unknown dead-zone inputs are nonlinearly embedded in the systems. These properties make constructing a stable controller very difficult and challenging. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is then obtained. Based on the transformed model, the implicit function theorem is used to establish the existence of the ideal controllers, and approximators are employed to approximate them. By using the mean value theorem, the nonaffine functions of the systems can be given an affine structure, although nonaffine terms remain. Adaptation auxiliary terms are skillfully designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all signals in the closed-loop system is ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is checked in a simulation study.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). The uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until convergent results are achieved. The case study used in this analysis is the Duck94 experiment, conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA, in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that using Bayesian techniques to estimate optimized model parameters as inputs for the uncertainty analysis yields more consistent results than using prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
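The sampling loop described above can be sketched as follows; this is a minimal illustration in which a toy wave model and assumed input distributions stand in for the Delft3D simulations and the Duck94 priors.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_wave_model(offshore_height, depth):
    # Stand-in "deterministic model": nearshore wave height limited by a
    # crude depth-limited breaking rule (breaker index ~0.78).
    return np.minimum(offshore_height, 0.78 * depth)

def monte_carlo_uq(n_samples=5000):
    # Draw uncertain inputs from assumed prior distributions.
    h_off = rng.normal(2.0, 0.3, n_samples)    # offshore wave height [m]
    depth = rng.uniform(1.5, 3.0, n_samples)   # local water depth [m]
    outputs = toy_wave_model(h_off, depth)
    return outputs.mean(), outputs.std()

mean_h, std_h = monte_carlo_uq()
```

In practice one would monitor the output statistics as the sample size grows and stop once they stabilize, mirroring the convergence check mentioned in the abstract.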
Effects of modeling errors on trajectory predictions in air traffic control automation
NASA Technical Reports Server (NTRS)
Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda
1996-01-01
Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performance and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time-varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both the inputs and the structure of the HSI models to the model outputs (uncertainty analysis: UA) and the relative importance of uncertain model inputs and their interactions for the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing a means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty.
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
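A crude version of the GSA step can be illustrated as below; the three-factor HSI response and the squared-correlation sensitivity measure are assumptions for the sketch, not the study's SAV models or its actual variance-based GSA.

```python
import numpy as np

rng = np.random.default_rng(1)

def hsi_model(depth, salinity, light):
    # Hypothetical HSI: product of three suitability curves, each in [0, 1].
    s_depth = np.exp(-((depth - 1.0) ** 2) / 0.5)
    s_sal = 1.0 / (1.0 + np.exp((salinity - 25.0) / 5.0))
    s_light = np.clip(light / 100.0, 0.0, 1.0)
    return s_depth * s_sal * s_light

# Sample the uncertain inputs from assumed ranges.
n = 4000
depth = rng.uniform(0.2, 2.0, n)       # water depth [m]
sal = rng.uniform(10.0, 40.0, n)       # salinity [psu]
light = rng.uniform(20.0, 100.0, n)    # light availability [%]
hsi = hsi_model(depth, sal, light)

# Crude sensitivity measure: squared correlation of each input with output.
inputs = {"depth": depth, "salinity": sal, "light": light}
sens = {k: np.corrcoef(v, hsi)[0, 1] ** 2 for k, v in inputs.items()}
ranking = sorted(sens, key=sens.get, reverse=True)   # most influential first
```

The spread of `hsi` gives the UA part (output uncertainty), while `ranking` gives the GSA part (which input dominates that uncertainty).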
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis
2014-01-01
When selecting relevant inputs in modeling problems with low-quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is then applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
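The core observation that a feature ranking is itself uncertain can be reproduced with a simple bootstrap; the correlation-based scorer and synthetic data below are illustrative stand-ins for the paper's crisp selectors and code metrics.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: y depends strongly on feature 0, weakly on 1, not on 2.
n = 300
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Under bootstrap resampling, each feature's rank becomes a distribution.
rank_counts = np.zeros((3, 3), dtype=int)   # rank_counts[feature, rank]
for _ in range(200):
    idx = rng.integers(0, n, n)             # bootstrap resample
    scores = [abs(np.corrcoef(X[idx, f], y[idx])[0, 1]) for f in range(3)]
    for rank, feat in enumerate(np.argsort(scores)[::-1]):   # best first
        rank_counts[feat, rank] += 1
```

Each row of `rank_counts` approximates the distribution of one feature's rank; a possibility distribution over ranks, as in the paper, plays an analogous role for vague data.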
NASA Astrophysics Data System (ADS)
Riegels, Niels; Jessen, Oluf; Madsen, Henrik
2016-04-01
A multi-objective robust decision making approach is demonstrated that supports seasonal water management in the Chao Phraya River basin in Thailand. The approach uses multi-objective optimization to identify a Pareto-optimal set of management alternatives. Ensemble simulation is used to evaluate how each member of the Pareto set performs under a range of uncertain future conditions, and a robustness criterion is used to select a preferred alternative. Data mining tools are then used to identify ranges of uncertain factor values that lead to unacceptable performance for the preferred alternative. The approach is compared to a multi-criteria scenario analysis approach to estimate whether the introduction of additional complexity has the potential to improve decision making. Dry season irrigation in Thailand is managed through non-binding recommendations about the maximum extent of rice cultivation along with incentives for less water-intensive crops. Management authorities lack authority to prevent river withdrawals for irrigation when rice cultivation exceeds recommendations. In practice, this means that water must be provided to irrigate the actual planted area because of downstream municipal water supply requirements and water quality constraints. This results in dry season reservoir withdrawals that exceed planned withdrawals, reducing carryover storage to hedge against insufficient wet season runoff. The dry season planning problem in Thailand can therefore be framed in terms of decisions, objectives, constraints, and uncertainties. Decisions include recommendations about the maximum extent of rice cultivation and incentives for growing less water-intensive crops. Objectives are to maximize benefits to farmers, minimize the risk of inadequate carryover storage, and minimize incentives. Constraints include downstream municipal demands and water quality requirements. 
Uncertainties include the actual extent of rice cultivation, dry season precipitation, and precipitation in the following wet season. The multi-objective robust decision making approach is implemented as follows. First, three baseline simulation models are developed, including a crop water demand model, a river basin simulation model, and a model of the impact of incentives on cropping patterns. The crop water demand model estimates irrigation water demands; the river basin simulation model estimates reservoir drawdown required to meet demands given forecasts of precipitation, evaporation, and runoff; the model of incentive impacts estimates the cost of incentives as a function of marginal changes in rice yields. Optimization is used to find a set of non-dominated alternatives as a function of rice area and incentive decisions. An ensemble of uncertain model inputs is generated to represent uncertain hydrological and crop area forecasts. An ensemble of indicator values is then generated for each of the decision objectives: farmer benefits, end-of-wet-season reservoir storage, and the cost of incentives. A single alternative is selected from the Pareto set using a robustness criterion. Threshold values are defined for each of the objectives to identify ensemble members for which objective values are unacceptable, and the PRIM data mining algorithm is then used to identify input values associated with unacceptable model outcomes.
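The robustness-based selection step can be sketched as follows; the Pareto alternatives, toy indicator model, and acceptability thresholds are invented for illustration and are not the study's models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Pareto set: each alternative = (rice_area, incentive_level).
pareto_set = [(0.6, 0.1), (0.8, 0.05), (1.0, 0.0)]

# Ensemble of uncertain wet-season inflow (normalized, assumed distribution).
inflows = rng.uniform(0.5, 1.5, 1000)

def simulate_objectives(rice_area, incentive, inflow):
    # Toy indicator model (assumed): benefits grow with planted area,
    # carryover storage shrinks with it, incentive cost is direct.
    benefits = rice_area * inflow
    storage = inflow - 0.7 * rice_area
    cost = incentive
    return benefits, storage, cost

def robustness(alt, min_benefit=0.4, min_storage=0.1, max_cost=0.2):
    # Fraction of ensemble members with acceptable performance on all
    # three objectives -- a simple domain-criterion robustness measure.
    ok = 0
    for q in inflows:
        b, s, c = simulate_objectives(alt[0], alt[1], q)
        ok += (b >= min_benefit) and (s >= min_storage) and (c <= max_cost)
    return ok / len(inflows)

best = max(pareto_set, key=robustness)
```

The ensemble members for which `best` fails the thresholds are exactly the cases that a scenario-discovery step (PRIM in the study) would characterize.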
Robust optimization based energy dispatch in smart grids considering demand uncertainty
NASA Astrophysics Data System (ADS)
Nassourou, M.; Puig, V.; Blesa, J.
2017-01-01
In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust optimization based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of the standard Economic MPC over the MPC tracking, a comparison (e.g., average daily cost) between the standard MPC tracking, the standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties, in order to minimise economic costs and guarantee service reliability of the system.
NASA Astrophysics Data System (ADS)
Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
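The variance-magnification effect for a single uncertain parameter can be checked with a small stochastic collocation computation; the cubic response below is an assumed surrogate, not the paper's nonlinear radiation-diffusion model.

```python
import numpy as np

# Uncertain input: Gaussian contrast-agent concentration (assumed values).
mu, sigma = 1.0, 0.2

def g(c):
    # Assumed nonlinear response amplifying input variation.
    return c ** 3

# Stochastic collocation with Gauss-Hermite quadrature:
# E[g(X)] = (1/sqrt(pi)) * sum_i w_i * g(mu + sigma*sqrt(2)*x_i)
nodes, weights = np.polynomial.hermite.hermgauss(8)
vals = g(mu + sigma * np.sqrt(2.0) * nodes)
mean_sc = (weights * vals).sum() / np.sqrt(np.pi)
var_sc = (weights * vals ** 2).sum() / np.sqrt(np.pi) - mean_sc ** 2
```

Here the input coefficient of variation is 0.2, while the output's is roughly 0.58, illustrating how a nonlinear map magnifies parametric uncertainty; with 8 nodes the quadrature is exact for this polynomial response.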
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
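A minimal sketch of Bayesian calibration against measured energy use, assuming a one-parameter linear surrogate in place of EnergyPlus and a random-walk Metropolis sampler; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def energy_model(infiltration):
    # Hypothetical one-parameter building surrogate: kWh/m2 per year.
    return 80.0 + 40.0 * infiltration

# Synthetic "measured" data generated at a known true parameter value.
true_theta = 0.6
measured = energy_model(true_theta) + rng.normal(0.0, 2.0, 12)

def log_post(theta):
    if not (0.0 <= theta <= 2.0):           # uniform prior bounds
        return -np.inf
    resid = measured - energy_model(theta)
    return -0.5 * np.sum(resid ** 2) / 2.0 ** 2   # Gaussian likelihood

# Random-walk Metropolis sampler.
theta, samples = 1.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
posterior = np.array(samples[1000:])        # discard burn-in
```

`posterior.mean()` recovers the uncertain parameter and `posterior.std()` quantifies the remaining calibration uncertainty, which is the pairing the abstract describes.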
Adaptive and neuroadaptive control for nonnegative and compartmental dynamical systems
NASA Astrophysics Data System (ADS)
Volyanskyy, Kostyantyn Y.
Neural networks have been extensively used for adaptive system identification as well as adaptive and neuroadaptive control of highly uncertain systems. The goal of adaptive and neuroadaptive control is to achieve system performance without excessive reliance on system models. To improve robustness and the speed of adaptation of adaptive and neuroadaptive controllers several controller architectures have been proposed in the literature. In this dissertation, we develop a new neuroadaptive control architecture for nonlinear uncertain dynamical systems. The proposed framework involves a novel controller architecture with additional terms in the update laws that are constructed using a moving window of the integrated system uncertainty. These terms can be used to identify the ideal system weights of the neural network as well as effectively suppress system uncertainty. Linear and nonlinear parameterizations of the system uncertainty are considered and state and output feedback neuroadaptive controllers are developed. Furthermore, we extend the developed framework to discrete-time dynamical systems. To illustrate the efficacy of the proposed approach we apply our results to an aircraft model with wing rock dynamics, a spacecraft model with unknown moment of inertia, and an unmanned combat aerial vehicle undergoing actuator failures, and compare our results with standard neuroadaptive control methods. Nonnegative systems are essential in capturing the behavior of a wide range of dynamical systems involving dynamic states whose values are nonnegative. A sub-class of nonnegative dynamical systems are compartmental systems. These systems are derived from mass and energy balance considerations and are comprised of homogeneous interconnected microscopic subsystems or compartments which exchange variable quantities of material via intercompartmental flow laws. 
In this dissertation, we develop a direct adaptive and neuroadaptive control framework for stabilization, disturbance rejection, and noise suppression for nonnegative and compartmental dynamical systems with noise and exogenous system disturbances. We then use the developed framework to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of continuing hemorrhage and hemodilution. Critical care patients, whether undergoing surgery or recovering in intensive care units, require drug administration to regulate physiological variables such as blood pressure, cardiac output, heart rate, and degree of consciousness. The rate of infusion of each administered drug is critical, requiring constant monitoring and frequent adjustments. In this dissertation, we develop a neuroadaptive output feedback control framework for nonlinear uncertain nonnegative and compartmental systems with nonnegative control inputs and noisy measurements. The proposed framework is Lyapunov-based and guarantees ultimate boundedness of the error signals. In addition, the neuroadaptive controller guarantees that the physical system states remain in the nonnegative orthant of the state space. Finally, the developed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of noisy electroencephalographic (EEG) measurements. Clinical trials demonstrate excellent regulation of unconsciousness allowing for a safe and effective administration of the anesthetic agent propofol. Furthermore, a neuroadaptive output feedback control architecture for nonlinear nonnegative dynamical systems with input amplitude and integral constraints is developed.
Specifically, the neuroadaptive controller guarantees that the imposed amplitude and integral input constraints are satisfied and the physical system states remain in the nonnegative orthant of the state space. The proposed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for noncardiac surgery in the face of infusion rate constraints and a drug dosing constraint over a specified period. In addition, the aforementioned control architecture is used to control lung volume and minute ventilation with input pressure constraints that also accounts for spontaneous breathing by the patient. Specifically, we develop a pressure- and work-limited neuroadaptive controller for mechanical ventilation based on a nonlinear multi-compartmental lung model. The control framework does not rely on any averaged data and is designed to automatically adjust the input pressure to the patient's physiological characteristics capturing lung resistance and compliance modeling uncertainty. Moreover, the controller accounts for input pressure constraints as well as work of breathing constraints. The effect of spontaneous breathing is incorporated within the lung model and the control framework. Finally, a neural network hybrid adaptive control framework for nonlinear uncertain hybrid dynamical systems is developed. The proposed hybrid adaptive control framework is Lyapunov-based and guarantees partial asymptotic stability of the closed-loop hybrid system; that is, asymptotic stability with respect to part of the closed-loop system states associated with the hybrid plant states. A numerical example is provided to demonstrate the efficacy of the proposed hybrid adaptive stabilization approach.
Robust adaptive sliding mode control for uncertain systems with unknown time-varying delay input.
Benamor, Anouar; Messaoud, Hassani
2018-05-02
This article focuses on a robust adaptive sliding mode control law for uncertain discrete systems with unknown time-varying delay input, where the uncertainty is assumed unknown. The main results of this paper are divided into three phases. In the first phase, a new sliding surface is derived within the Linear Matrix Inequality (LMI) framework. In the second phase, using the new sliding surface, a novel Robust Sliding Mode Control (RSMC) is proposed, where the upper bound of the uncertainty is supposed known. Finally, a novel Robust Adaptive Sliding Mode Control (RASMC) approach is defined for this type of system, where the upper bound of the uncertainty is assumed unknown. In this new approach, the upper bound of the uncertainties is estimated and the control law is determined based on a sliding surface that converges to zero. These novel control laws have been validated in simulation on an uncertain numerical system, with good results and a comparative study. Their efficiency is emphasized through the application of the new controllers to two physical systems: the PT326 process trainer and a two-tank hydraulic system. Published by Elsevier Ltd.
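The adaptive-gain idea behind such controllers can be illustrated on a scalar discrete-time system; this sketch only shows the switching gain adapting toward an unknown disturbance bound and is not the paper's LMI-based surface design.

```python
import numpy as np

# Scalar uncertain system x[k+1] = x[k] + dt*(u[k] + d[k]), with the
# disturbance bound on d unknown to the controller (assumed setup).
dt = 0.01
x, k_hat = 1.0, 0.0           # state and adaptive switching-gain estimate
gamma = 5.0                   # adaptation rate (assumed)
history = []
for step in range(2000):
    d = 0.5 * np.sin(0.01 * step)        # unknown bounded disturbance
    s = x                                 # sliding surface s = x (regulation)
    u = -5.0 * s - k_hat * np.sign(s)     # nominal term + adaptive switching
    k_hat += gamma * abs(s) * dt          # grow gain while off the surface
    x += dt * (u + d)
    history.append(x)
```

The gain `k_hat` increases until the switching term dominates the disturbance, after which the state stays in a small neighborhood of the sliding surface.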
Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, J E; Fratoni, M; Kramer, K J
A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of a Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than Direct Monte-Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis.
All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.
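The adjoint trick referenced above (one adjoint solve yields sensitivities to any number of inputs) can be demonstrated on a small linear model A(p)x = b with figure of merit J = cᵀx; the matrices here are illustrative, not a transport model.

```python
import numpy as np

# Two uncertain parameters entering the system matrix (assumed form).
p = np.array([2.0, 0.5])

def A_of(p):
    return np.array([[p[0], 1.0], [1.0, p[1] + 2.0]])

b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])

x = np.linalg.solve(A_of(p), b)           # one forward solve
lam = np.linalg.solve(A_of(p).T, c)       # one adjoint solve

# dJ/dp_i = -lam^T (dA/dp_i) x ; here each dA/dp_i has a single unit entry.
dA0 = np.array([[1.0, 0.0], [0.0, 0.0]])
dA1 = np.array([[0.0, 0.0], [0.0, 1.0]])
grad = np.array([-lam @ dA0 @ x, -lam @ dA1 @ x])

# Cross-check against finite differences (one extra solve per parameter).
eps = 1e-6
fd = []
for i in range(2):
    dp = p.copy()
    dp[i] += eps
    fd.append((c @ np.linalg.solve(A_of(dp), b) - c @ x) / eps)
```

With many parameters the adjoint route still needs only the two solves above, which is the efficiency advantage the abstract cites over direct Monte Carlo perturbation.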
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles
NASA Astrophysics Data System (ADS)
Wilcox, Zachary Donald
The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and the structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model, despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time-varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems and associated control tools have been applied, such as gain-scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFTs), and μ-synthesis. However, as the type of flight environments and trajectories become more demanding, the traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature-dependent, parameter varying state-space representation with added disturbances. The model includes an uncertain parameter varying state matrix, an uncertain parameter varying non-square (column deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference-model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles, and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature-dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad for examining the interplay between structural/thermal-protection design and control development, with application to future HSV design and control implementation.
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weights of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The results from the OLS method show the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
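The weighting scheme can be sketched as an iteratively reweighted Gauss-Newton loop. This is a minimal illustration of the idea, not the paper's exact formulation; the function names and the way input-induced variance enters the weights are assumptions:

```python
import numpy as np

def iuwls(sim_fn, jac_fn, obs, theta0, sigma_obs, input_var_fn, n_iter=20):
    """Sketch of input-uncertainty weighted least squares: a Gauss-Newton
    loop in which each observation's weight combines measurement error
    with the variance induced by uncertain source/sink (pumping) data,
    re-evaluated as the parameter estimate changes."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = obs - sim_fn(theta)                          # residual vector
        w = 1.0 / (sigma_obs**2 + input_var_fn(theta))   # per-obs weights
        J = jac_fn(theta)                                # sensitivity matrix
        JW = J.T * w                                     # weight each row
        step = np.linalg.solve(JW @ J, JW @ r)           # weighted GN step
        theta = theta + step
        if np.max(np.abs(step)) < 1e-12:
            break
    return theta

# Linear demo model y = theta * x: one weighted Gauss-Newton step is exact
X = np.array([[1.0], [2.0], [3.0]])
obs = np.array([2.0, 4.0, 6.0])
theta_hat = iuwls(lambda t: X @ t, lambda t: X, obs, [0.0],
                  sigma_obs=np.ones(3), input_var_fn=lambda t: np.zeros(3))
```

For a real groundwater model, `sim_fn` would wrap the forward simulation and `jac_fn` a sensitivity run, with `input_var_fn` supplying the pumping-induced variance per observation.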
Multilayer perceptron, fuzzy sets, and classification
NASA Technical Reports Server (NTRS)
Pal, Sankar K.; Mitra, Sushmita
1992-01-01
A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties, while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns, with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum-error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and other related models.
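A toy version of the scheme, with hypothetical triangular membership functions standing in for the model's linguistic π-functions, and with an assumed small floor on the membership weighting so low-membership patterns still contribute, might look like:

```python
import numpy as np

def memberships(x, centers, width):
    """Triangular membership of scalar x in fuzzy sets at given centers."""
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)

rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, size=(200, 1))

# Inputs: membership values to linguistic properties low/medium/high;
# targets: fuzzy class-membership values (here a single "high" class).
X = np.array([memberships(v[0], np.array([0.0, 0.5, 1.0]), 0.5) for v in raw])
Y = np.array([memberships(v[0], np.array([1.0]), 1.0) for v in raw])

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0.0, 0.5, (3, 8))
W2 = rng.normal(0.0, 0.5, (8, 1))
mse = lambda: float(np.mean((Y - sig(sig(X @ W1) @ W2)) ** 2))

loss_before = mse()
for _ in range(300):
    H = sig(X @ W1)
    O = sig(H @ W2)
    # backpropagated error weighted by output membership (the 0.1 floor
    # is an assumption, not part of the original model)
    d2 = (Y - O) * (0.1 + Y) * O * (1 - O)
    d1 = (d2 @ W2.T) * H * (1 - H)
    W2 += 0.5 * H.T @ d2 / len(X)
    W1 += 0.5 * X.T @ d1 / len(X)
loss_after = mse()
```

The membership weighting makes patterns that clearly belong to a class drive learning harder than ambiguous ones, which is the core of the fuzzy-MLP idea.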
Hancock, G R; Verdon-Kidd, D; Lowry, J B C
2017-12-01
Landscape Evolution Modelling (LEM) technologies provide a means to simulate the long-term geomorphic stability of a conceptual rehabilitated landform. However, simulations rarely consider the potential effects of anthropogenic climate change and consequently risk not accounting for the range of rainfall variability that might be expected in both the near and far future. One issue is that high-resolution (both spatial and temporal) rainfall projections incorporating the potential effects of greenhouse forcing are required as input. However, projections of rainfall change are still highly uncertain for many regions, particularly at sub-annual/seasonal scales. This is the case for northern Australia, where a decrease or an increase in rainfall post 2030 is considered equally likely based on climate model simulations. The aim of this study is therefore to investigate a spatial analogue approach to develop point-scale hourly rainfall scenarios to be used as input to the CAESAR-Lisflood LEM to test the sensitivity of the geomorphic stability of a conceptual rehabilitated landform to potential changes in climate. Importantly, the scenarios incorporate the range of projected potential increase/decrease in rainfall for northern Australia and capture the expected envelope of erosion rates and erosion patterns (i.e. where erosion and deposition occur) over a 100-year modelled period. We show that all rainfall scenarios produce sediment output and gullying greater than that of the surrounding natural system; however, a 'wetter' future climate produces the highest output. Importantly, incorporating analogue rainfall scenarios into LEMs has the capacity to both improve landform design and enhance the modelling software. Further, the method can be easily transferred to other sites (both nationally and internationally) where rainfall variability is significant and climate change impacts are uncertain. Crown Copyright © 2017. Published by Elsevier B.V.
All rights reserved.
1984-12-01
input/output relationship. These are obtained from the design specifications (10:681-684). Note that the first digit of the subscript of b_kj refers to the output and the second digit to the input. Thus, b_kj is a function of the response requirements on the output, y_k, due to the input, r_j. [Remainder of excerpt is unrecoverable OCR residue from a FORTRAN plotting-subroutine listing.]
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated-function-expansion-based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or infinite-dimensional as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
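The scaling argument can be illustrated with a first-order cut-HDMR surrogate; this is a generic sketch (grid sizes, reference point, and the interpolation rule are illustrative), not the paper's fuzzy finite-element implementation:

```python
import numpy as np

def cut_hdmr_first_order(f, x_ref, grids):
    """First-order cut-HDMR surrogate around reference point x_ref:
    f(x) ~ f0 + sum_i f_i(x_i), with f0 = f(x_ref) and
    f_i(x_i) = f(x_ref with i-th coordinate replaced) - f0.
    Cost grows linearly, not exponentially, with input dimension."""
    x_ref = np.asarray(x_ref, dtype=float)
    f0 = f(x_ref)
    tables = []
    for i, g in enumerate(grids):
        vals = []
        for v in g:
            x = x_ref.copy()
            x[i] = v
            vals.append(f(x) - f0)        # one-variable component function
        tables.append((np.asarray(g, float), np.asarray(vals, float)))
    def surrogate(x):
        return f0 + sum(np.interp(x[i], g, v)
                        for i, (g, v) in enumerate(tables))
    return surrogate

# Hypothetical additive response: first-order HDMR is exact here
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grid = np.linspace(-1.0, 1.0, 5)
surrogate = cut_hdmr_first_order(f, [0.0, 0.0], [grid, grid])
```

For models with strong interactions, second-order terms f_ij would be needed, at quadratically (still not exponentially) growing cost.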
The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, Martin M.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
NASA Astrophysics Data System (ADS)
Yi, Bowen; Lin, Shuyi; Yang, Bo; Zhang, Weidong
2018-02-01
This paper presents an output feedback indirect dynamic inversion (IDI) approach for a class of uncertain nonaffine systems with input unmodelled dynamics. Compared with previous approaches to achieving performance recovery, the proposed method deals with a broader class of nonaffine-in-control systems with triangular structure. An IDI state feedback law is designed first, in which less knowledge of the plant model is needed compared to earlier approximate dynamic inversion methods, thus yielding more robust performance. After that, an extended high-gain observer is designed to accomplish the task with output feedback. Finally, we prove that the designed IDI controller is equivalent to an adaptive proportional-integral (PI) controller, with respect to both time-response equivalence and robustness equivalence. This conclusion implies that for the studied strict-feedback nonaffine systems with unmodelled dynamics, there always exists a PI controller that stabilises the system. The effectiveness and benefits of the designed approach are verified by three examples.
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several (N) uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
The Importance of Studying Past Extreme Floods to Prepare for Uncertain Future Extremes
NASA Astrophysics Data System (ADS)
Burges, S. J.
2016-12-01
Hoyt and Langbein, in their 1955 book 'Floods', wrote: "...meteorologic and hydrologic conditions will combine to produce superfloods of unprecedented magnitude. We have every reason to believe that in most rivers past floods may not be an accurate measure of ultimate flood potentialities. It is this superflood with which we are always most concerned." I provide several examples to offer some historical perspective on assessing extreme floods. In one example, flooding in the Miami Valley, OH in 1913 claimed 350 lives. The engineering and socio-economic challenges facing the Morgan Engineering Co., which had to mitigate future flood damage and loss of life with limited information, provide guidance about ways to face an uncertain hydroclimate future, particularly one of a changed climate. A second example forces us to examine mixed flood populations and illustrates the huge uncertainty in assigning flood magnitude and exceedance probability to extreme floods in such cases. There is large uncertainty in flood frequency estimates; knowledge of the total flood hydrograph, not the peak flood flow rate alone, is what is needed for hazard mitigation assessment or design. Some challenges in estimating the complete flood hydrograph in an uncertain future climate, including demands on hydrologic models and their inputs, are addressed.
The 32nd CDC: System identification using interval dynamic models
NASA Technical Reports Server (NTRS)
Keel, L. H.; Lew, J. S.; Bhattacharyya, S. P.
1992-01-01
Motivated by the recent explosive development of results in the area of parametric robust control, a new technique to identify a family of uncertain systems is presented. The new technique takes the frequency-domain input and output data obtained from experimental test signals and produces an 'interval transfer function' that contains the complete frequency-domain behavior with respect to the test signals. This interval transfer function is one of the key concepts in the parametric robust control approach, and identification with such an interval model allows one to predict the worst-case performance and stability margins using recent results on interval systems. The algorithm is illustrated by applying it to an 18-bay Mini-Mast truss structure.
Wang, Jianhui; Liu, Zhi; Chen, C L Philip; Zhang, Yun
2017-10-12
Hysteresis exists ubiquitously in physical actuators. Besides, actuator failures/faults may also occur in practice. Both effects deteriorate the transient tracking performance and can even trigger instability. In this paper, we consider the problem of compensating for actuator failures and input hysteresis by proposing a fuzzy control scheme for stochastic nonlinear systems. Compared with the existing research on stochastic nonlinear uncertain systems, the question of how to guarantee a prescribed transient tracking performance when actuator failures and hysteresis are taken into account simultaneously remains open. Our proposed control scheme is designed on the basis of fuzzy logic systems and backstepping techniques for this purpose. It is proven that all the signals remain bounded and the tracking error is ensured to be within a pre-established bound despite the failures of the hysteretic actuator. Finally, simulations are provided to illustrate the effectiveness of the obtained theoretical results.
Observer-based state tracking control of uncertain stochastic systems via repetitive controller
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Susana Ramya, L.; Selvaraj, P.
2017-08-01
This paper develops a repetitive control scheme for state tracking control of uncertain stochastic time-varying delay systems via the equivalent-input-disturbance approach. The main purpose of this work is to design a repetitive controller that guarantees the tracking performance under the effects of unknown disturbances with bounded frequency and parameter variations. Specifically, a new set of linear matrix inequality (LMI)-based conditions is derived, based on a suitable Lyapunov-Krasovskii functional, for designing a repetitive controller that guarantees stability and the desired tracking performance. More precisely, an equivalent-input-disturbance estimator is incorporated into the control design to reduce the effect of external disturbances. Simulation results are provided to demonstrate the stability and tracking performance of the resulting control system. A practical stream water quality preserving system is also provided to show the effectiveness and advantage of the proposed approach.
NASA Astrophysics Data System (ADS)
Léchappé, V.; Moulay, E.; Plestan, F.
2018-06-01
The stability of a prediction-based controller for linear time-invariant (LTI) systems is studied in the presence of time-varying input and output delays. The uncertain delay case is treated as well as the partial state knowledge case. The reduction method is used in order to prove the convergence of the closed-loop system including the state observer, the predictor and the plant. Explicit conditions that guarantee the closed-loop stability are given, thanks to a Lyapunov-Razumikhin analysis. Simulations illustrate the theoretical results.
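A discrete-time scalar sketch shows the predictor mechanism; the plant numbers (a, b), delay d, and gain k are illustrative, and the paper's continuous-time reduction-method analysis with observers and uncertain delays is not reproduced here:

```python
# Scalar unstable plant x[k+1] = a*x[k] + b*u[k-d]: the controller feeds
# back a d-step-ahead prediction of the state, so the delay-free design
# gain k (placing a - b*k = 0.5) still applies despite the input delay.
a, b, d, k = 1.2, 1.0, 5, 0.7

x = 1.0
u_buf = [0.0] * d                  # stored inputs u[k-d], ..., u[k-1]
traj = []
for step in range(60):
    # predictor: roll the model forward across the delay window
    x_pred = (a ** d) * x + sum((a ** (d - 1 - j)) * b * u_buf[j]
                                for j in range(d))
    u = -k * x_pred                # feedback on the predicted state
    x = a * x + b * u_buf[0]       # plant receives the d-step-old input
    u_buf = u_buf[1:] + [u]
    traj.append(abs(x))
```

With an exact model the prediction equals the future state and the delayed loop behaves like the delay-free one; the paper's contribution is guaranteeing stability when the delays are time-varying and uncertain and only part of the state is measured.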
Simplex-stochastic collocation method with improved scalability
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Dwight, R. P.; Cinnella, P.
2016-04-01
The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimension higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based upon the set-covering problem, and we integrate the SSC method into the high-dimensional model-reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly distributed simplex sampling.
Rising temperatures reduce global wheat production
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Martre, P.; Rötter, R. P.; Lobell, D. B.; Cammarano, D.; Kimball, B. A.; Ottman, M. J.; Wall, G. W.; White, J. W.; Reynolds, M. P.; Alderman, P. D.; Prasad, P. V. V.; Aggarwal, P. K.; Anothai, J.; Basso, B.; Biernath, C.; Challinor, A. J.; de Sanctis, G.; Doltra, J.; Fereres, E.; Garcia-Vila, M.; Gayler, S.; Hoogenboom, G.; Hunt, L. A.; Izaurralde, R. C.; Jabloun, M.; Jones, C. D.; Kersebaum, K. C.; Koehler, A.-K.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Palosuo, T.; Priesack, E.; Eyshi Rezaei, E.; Ruane, A. C.; Semenov, M. A.; Shcherbak, I.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Thorburn, P. J.; Waha, K.; Wang, E.; Wallach, D.; Wolf, J.; Zhao, Z.; Zhu, Y.
2015-02-01
Crop models are essential tools for assessing the threat of climate change to local and global food production. Present models used to predict wheat grain yield are highly uncertain when simulating how crops respond to temperature. Here we systematically tested 30 different wheat crop models of the Agricultural Model Intercomparison and Improvement Project against field experiments in which growing season mean temperatures ranged from 15 °C to 32 °C, including experiments with artificial heating. Many models simulated yields well, but were less accurate at higher temperatures. The model ensemble median was consistently more accurate in simulating the crop temperature response than any single model, regardless of the input information used. Extrapolating the model ensemble temperature response indicates that warming is already slowing yield gains at a majority of wheat-growing locations. Global wheat production is estimated to fall by 6% for each °C of further temperature increase and become more variable over space and time.
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for complex multiple-input multiple-output nonlinear systems, such as airplane and spacecraft models, to achieve decoupling control. In this work, a flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of an airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and the nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
Rising Temperatures Reduce Global Wheat Production
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Martre, P.; Rötter, R. P.; Lobell, D. B.; Cammarano, D.; Kimball, B. A.; Ottman, M. J.; Wall, G. W.; White, J. W.;
2015-01-01
Crop models are essential tools for assessing the threat of climate change to local and global food production. Present models used to predict wheat grain yield are highly uncertain when simulating how crops respond to temperature. Here we systematically tested 30 different wheat crop models of the Agricultural Model Intercomparison and Improvement Project against field experiments in which growing season mean temperatures ranged from 15 degrees C to 32 degrees C, including experiments with artificial heating. Many models simulated yields well, but were less accurate at higher temperatures. The model ensemble median was consistently more accurate in simulating the crop temperature response than any single model, regardless of the input information used. Extrapolating the model ensemble temperature response indicates that warming is already slowing yield gains at a majority of wheat-growing locations. Global wheat production is estimated to fall by 6% for each degree C of further temperature increase and become more variable over space and time.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
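The grouped variance-decomposition idea can be sketched with a pick-freeze Monte Carlo estimator; this is a generic Sobol'-style sketch under independent uniform inputs, not the paper's hierarchical framework or its geostatistical cost reduction:

```python
import numpy as np

def group_first_order_indices(model, groups, dim, n=100_000, seed=0):
    """First-order ('closed') variance-based sensitivity indices for
    groups of inputs, assumed independent U(0,1). For each group, a
    second sample matrix keeps that group's columns from A and redraws
    the rest; S_g = (E[y_A * y_g] - E[y_A]^2) / Var[y_A]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    yA = model(A)
    f0, var = yA.mean(), yA.var()
    indices = {}
    for name, cols in groups.items():
        C = B.copy()
        C[:, cols] = A[:, cols]          # freeze this group's inputs
        indices[name] = float((np.mean(yA * model(C)) - f0**2) / var)
    return indices

# Additive demo: exact indices are 5/14 for {x0, x1} and 9/14 for {x2}
model = lambda X: X[:, 0] + 2.0 * X[:, 1] + 3.0 * X[:, 2]
S = group_first_order_indices(model, {"bc": [0, 1], "perm": [2]}, dim=3)
```

Grouping (here the hypothetical "bc" and "perm" blocks) is what keeps the number of model evaluations proportional to the number of groups rather than the number of individual inputs.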
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
NASA Astrophysics Data System (ADS)
Truong, Bui Ngoc Minh; Nam, Doan Ngoc Chi; Ahn, Kyoung Kwan
2013-09-01
Dielectric electro-active polymer (DEAP) materials are attractive since they are low-cost, lightweight, and have a large deformation capability. They have no operating noise, very low electric power consumption, and higher performance and efficiency than competing technologies. However, DEAP materials generally exhibit strong hysteresis as well as uncertain and nonlinear characteristics. These disadvantages can limit the efficiency of DEAP materials in use. To address these limitations, this research presents a combination of the Preisach model and a dynamic nonlinear autoregressive exogenous (NARX) fuzzy model, identified with an adaptive particle swarm optimization (APSO) algorithm, for modeling and identification of the nonlinear behavior of one typical type of DEAP actuator. First, open-loop input signals are applied to obtain nonlinear features and to investigate the responses of the DEAP actuator system. Then, the Preisach model is combined with a dynamic NARX fuzzy structure to estimate the tip displacement of a DEAP actuator. To optimize all unknown parameters of the designed combination, an identification scheme based on a least-squares method and an APSO algorithm is carried out. Finally, experimental validation is carefully completed, and the effectiveness of the proposed model is evaluated by employing various input signals.
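The identification step can be illustrated with a bare-bones particle swarm fitted to a least-squares cost; the swarm coefficients and the first-order response model below are assumptions, and the adaptive coefficient tuning that distinguishes APSO is omitted:

```python
import numpy as np

def pso_identify(cost, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer for least-squares model
    identification (generic sketch; APSO would adapt the inertia and
    acceleration coefficients during the run)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, *x.shape))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())

# Hypothetical use: fit gain and time constant of a first-order response
t = np.linspace(0.0, 5.0, 100)
y_meas = 2.0 * (1.0 - np.exp(-t / 0.8))       # "measured" displacement
cost = lambda p: float(np.sum((p[0] * (1.0 - np.exp(-t / p[1])) - y_meas) ** 2))
params, err = pso_identify(cost, [(0.1, 5.0), (0.1, 5.0)])
```

Swarm search needs no gradients, which is why it pairs well with the nondifferentiable Preisach hysteresis operator in the actual identification problem.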
Airborne measurements of organic bromine compounds in the Pacific tropical tropopause layer
Navarro, Maria A.; Atlas, Elliot L.; Saiz-Lopez, Alfonso; Rodriguez-Lloveras, Xavier; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Filus, Michal; Harris, Neil R. P.; Meneguz, Elena; Ashfold, Matthew J.; Manning, Alistair J.; Cuevas, Carlos A.; Schauffler, Sue M.; Donets, Valeria
2015-01-01
Very short-lived brominated substances (VSLBr) are an important source of stratospheric bromine, an effective ozone destruction catalyst. However, the accurate estimation of the organic and inorganic partitioning of bromine and the input to the stratosphere remains uncertain. Here, we report near-tropopause measurements of organic brominated substances found over the tropical Pacific during the NASA Airborne Tropical Tropopause Experiment campaigns. We combine aircraft observations and a chemistry-climate model to quantify the total bromine loading injected to the stratosphere. Surprisingly, despite differences in vertical transport between the Eastern and Western Pacific, VSLBr (organic + inorganic) contribute approximately similar amounts of bromine [∼6 (4-9) parts per trillion] to the stratospheric input at the tropical tropopause. These levels of bromine cause substantial ozone depletion in the lower stratosphere, and any increases in future abundances (e.g., as a result of aquaculture) will lead to larger depletions. PMID:26504212
Command Filtering-Based Fuzzy Control for Nonlinear Systems With Saturation Input.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Lin, Chong
2017-09-01
In this paper, command filtering-based fuzzy control is designed for uncertain multi-input multi-output (MIMO) nonlinear systems with saturation input nonlinearity. First, the command filtering method is employed to deal with the explosion of complexity caused by the derivatives of virtual controllers. Then, fuzzy logic systems are utilized to approximate the nonlinear functions of the MIMO systems. Furthermore, an error compensation mechanism is introduced to overcome the drawback of the dynamic surface approach. The developed method guarantees that all signals of the system are bounded. The effectiveness and advantages of the theoretical result are demonstrated by a simulation example.
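The command filter itself is a small piece of machinery: a low-pass filter that returns a virtual control signal and its derivative without analytic differentiation, with an optional magnitude limit standing in for the saturation. The filter parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def command_filter(alpha_cmd, dt, wn=50.0, zeta=0.9, mag_limit=None):
    """Second-order command filter (generic backstepping tool): returns
    the filtered virtual control and its time derivative, so the
    derivative of a virtual controller never has to be computed
    analytically. wn and zeta set bandwidth and damping."""
    x1 = float(alpha_cmd[0])     # filtered signal state
    x2 = 0.0                     # its derivative state
    out, dout = [], []
    for a in alpha_cmd:
        if mag_limit is not None:
            a = np.clip(a, -mag_limit, mag_limit)   # input saturation
        # x1'' = wn^2 (a - x1) - 2 zeta wn x1' (semi-implicit Euler)
        x1 += dt * x2
        x2 += dt * (wn**2 * (a - x1) - 2.0 * zeta * wn * x2)
        out.append(x1)
        dout.append(x2)
    return np.array(out), np.array(dout)

t = np.arange(0.0, 2.0, 0.001)
alpha_f, dalpha_f = command_filter(np.sin(t), dt=0.001)
```

For a slow command like sin(t), the filter output tracks the command and its derivative tracks cos(t) up to a small phase lag, which the error compensation mechanism in the paper is designed to absorb.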
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
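The Monte Carlo skeleton of such a model fits in a few lines; the input distributions, the cube-root damage-radius scaling, and the uniform population density below are placeholders, not the PAIR model's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative impactor-property distributions (assumptions, not PAIR's)
diameter = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # m
density = rng.uniform(1500.0, 3500.0, size=n)                    # kg/m^3
velocity = rng.uniform(11e3, 30e3, size=n)                       # m/s

mass = density * (np.pi / 6.0) * diameter**3                     # kg
energy_mt = 0.5 * mass * velocity**2 / 4.184e15                  # Mt TNT

# Toy consequence model: blast radius scales with E^(1/3); a uniform
# 50 people/km^2 stands in for sampled impact locations
radius_km = 2.0 * np.cbrt(energy_mt)
affected = np.pi * radius_km**2 * 50.0

# Aggregate scenario outcomes into a distribution, as the PAIR model does
p50, p95 = np.percentile(affected, [50, 95])
```

The real model replaces each placeholder with a physics-based stage (entry, ablation, fragmentation, ground damage, gridded population), but the aggregation of uncertain inputs into an outcome distribution has exactly this shape.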
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high-performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large, complex basin models can still be on the order of days to weeks, limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin hypercube sampling (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest (≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
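The mechanics of non-intrusive polynomial chaos can be shown in a minimal stdlib-only sketch, assuming a single standard-normal input and a probabilists' Hermite basis; the test model y = x^2 is chosen because its exact mean (1) and variance (2) are known, so the moments read off the coefficients can be checked:

```python
import random

def hermite(k, x):
    """Probabilists' Hermite polynomials He_0, He_1, He_2."""
    return [1.0, x, x * x - 1.0][k]

def pce_fit(model, degree=2, n_samples=20_000, seed=1):
    """Non-intrusive PCE: project sampled model output onto the Hermite basis.
    Coefficients c_k = E[f(x) He_k(x)] / E[He_k^2] by Monte Carlo estimation."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    norms = [1.0, 1.0, 2.0]                   # E[He_k^2] for k = 0, 1, 2
    coeffs = []
    for k in range(degree + 1):
        proj = sum(model(x) * hermite(k, x) for x in xs) / n_samples
        coeffs.append(proj / norms[k])
    # Statistics follow directly from orthogonality of the basis.
    mean = coeffs[0]
    variance = sum(c * c * nk for c, nk in zip(coeffs[1:], norms[1:]))
    return coeffs, mean, variance

# Model y = x^2 with x ~ N(0, 1): exact mean 1, exact variance 2.
coeffs, mean, var = pce_fit(lambda x: x * x)
```

Real implementations use tensorized multivariate bases and sparse regression or quadrature instead of plain sampling, but the "statistics are cheap once the expansion exists" property is exactly this.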
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and health impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimates of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most outputs and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of model performance computed on a set of gamma dose rate observations. This original approach is of particular interest because observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration.
An indicator based on time matching of emission peaks was developed to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may suffice to represent an essential part of the overall uncertainty.
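The Morris method used above can be illustrated with a simplified radial (one-at-a-time) variant: perturb one input at a time and average the absolute elementary effects. The toy three-input model and all constants below are invented; they are not the dispersion model of the study:

```python
import random

def morris_mu_star(model, n_inputs, n_traj=30, delta=0.1, seed=2):
    """Mean absolute elementary effect per input (mu*), the Morris screening
    statistic, estimated with radial one-at-a-time perturbations on [0, 1]."""
    rng = random.Random(seed)
    ee = [[] for _ in range(n_inputs)]
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_inputs)]
        f0 = model(x)
        for i in range(n_inputs):          # perturb one input at a time
            xp = list(x)
            xp[i] += delta
            ee[i].append(abs(model(xp) - f0) / delta)
    return [sum(e) / len(e) for e in ee]

# Toy model: output dominated by input 0, mildly affected by 1, input 2 inert.
mu = morris_mu_star(lambda x: 10.0 * x[0] + 0.5 * x[1] ** 2, 3)
```

Inputs with near-zero mu* (here input 2) are exactly the kind of variable the study could "discard from further studies".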
Adaptive Control with Reference Model Modification
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to improve the transient performance of the input and output signals of uncertain systems. A simple modification of the reference model is proposed by feeding back the tracking error signal. It is shown that the proposed approach guarantees tracking of the given reference command and the reference control signal (the one that would be designed if the system were known) not only asymptotically but also in the transient phase. Moreover, it prevents the generation of high-frequency oscillations, which are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track reference commands of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated with a simulation example.
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory derived Monod kinetic parameters can adequately describe field scale degradation processes, if all controlling factors are incorporated in the field scale modelling that are not necessarily observed at the lab scale. 
Thus, no explicit scale relationships linking the laboratory and the field scale need to be sought. Rather, accurately incorporating the additional processes, phenomena, and characteristics present at the larger scale, such as (a) advective and dispersive transport of one or more contaminants, (b) advective and dispersive transport and availability of electron acceptors, (c) mass-transfer limitations, and (d) spatial heterogeneities, and applying well-defined lab-scale parameters should accurately describe field-scale processes.
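A minimal sketch of the Monod kinetics underlying such simulations, with invented parameter values (these are not the BIO3D or Borden values); substrate limitation and microbial growth are precisely what zero- and first-order lab rates omit:

```python
def monod_biodegradation(s0, x0, mu_max, ks, yield_coeff, decay, dt, n_steps):
    """Euler integration of Monod substrate consumption with biomass growth:
    ds/dt = -(mu_max * s / (ks + s)) * x / Y,  dx/dt = growth - decay * x."""
    s, x, history = s0, x0, []
    for _ in range(n_steps):
        growth = mu_max * s / (ks + s) * x            # biomass growth rate
        s = max(s - growth / yield_coeff * dt, 0.0)   # substrate consumed
        x = max(x + (growth - decay * x) * dt, 0.0)   # net biomass change
        history.append((s, x))
    return history

# Hypothetical scenario: 10 mg/L substrate, small initial biomass.
traj = monod_biodegradation(s0=10.0, x0=0.1, mu_max=0.5, ks=1.0,
                            yield_coeff=0.5, decay=0.01, dt=0.1, n_steps=2000)
```

Early on, small biomass keeps degradation slow (unlike a first-order fit extrapolated from the lab); once biomass grows, the substrate is consumed rapidly.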
Nimick, David A.; McCarthy, Peter M.; Fields, Vanessa
2011-01-01
Benton Lake National Wildlife Refuge is an important area for waterfowl production and migratory stopover in west-central Montana. Eight wetland units covering about 5,600 acres are the essential features of the refuge. Water availability for the wetland units can be uncertain owing to the large natural variations in precipitation and runoff and the high cost of pumping supplemental water. The U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, has developed a digital model for planning water management. The model can simulate strategies for water transfers among the eight wetland units and account for variability in runoff and pumped water. This report describes this digital model, which uses a water-accounting spreadsheet to track inputs and outputs to each of the wetland units of Benton Lake National Wildlife Refuge. Inputs to the model include (1) monthly values for precipitation, pumped water, runoff, and evaporation; (2) water-level/capacity data for each wetland unit; and (3) the pan-evaporation coefficient. Outputs include monthly water volume and flooded surface area for each unit for as many as 5 consecutive years. The digital model was calibrated by comparing simulated and historical measured water volumes for specific test years.
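A water-accounting model of this kind reduces to a monthly balance per wetland unit. The sketch below uses an invented linear stage-area relation and made-up monthly inputs; the real model reads these from refuge data and water-level/capacity tables:

```python
def simulate_unit(v0, months, area_of, pan_coeff=0.7):
    """Monthly water balance for one wetland unit (volumes in acre-ft).
    Inflows: precipitation on the flooded area, runoff, pumped water.
    Outflow: pan evaporation scaled by the pan coefficient."""
    v, record = v0, []
    for m in months:
        area = area_of(v)                              # flooded acres at volume v
        evap = pan_coeff * m["pan_evap_ft"] * area
        inflow = m["precip_ft"] * area + m["runoff_af"] + m["pumped_af"]
        v = max(v + inflow - evap, 0.0)
        record.append({"volume_af": v, "area_acres": area_of(v)})
    return record

# Hypothetical constant monthly forcing and a linear stage-area relation.
months = [{"precip_ft": 0.1, "runoff_af": 300.0,
           "pumped_af": 0.0, "pan_evap_ft": 0.5}] * 12
rec = simulate_unit(1000.0, months, area_of=lambda v: 0.5 * v)
```

With these invented numbers the unit fills toward an equilibrium where evaporation balances inflow; transfers between units would add coupled inflow/outflow terms.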
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes accurate procurement decisions difficult. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective is therefore to establish a novel medical information management and decision model for uncertain demand optimization. Such a model is presented here, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm, and it can effectively reduce the medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence reduces the computational complexity of the management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
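The proposed proxy is easy to state in code: mineral-N input is approximated by fertilizer N plus heterotrophic CO2 flux divided by soil C/N, and N2O flux is regressed on it. The records below are invented numbers purely to exercise the regression; they are not data from the compiled studies:

```python
def n2o_proxy_model(records):
    """Least-squares fit of N2O flux against the mineral-N input proxy
    proxy = fert_n + co2_flux / (C/N). Returns slope, intercept, R^2."""
    xs = [r["fert_n"] + r["co2_flux"] / r["cn_ratio"] for r in records]
    ys = [r["n2o_flux"] for r in records]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Illustrative records (invented values, chosen to be roughly linear).
data = [{"fert_n": 0.0, "co2_flux": 400.0, "cn_ratio": 20.0, "n2o_flux": 0.9},
        {"fert_n": 50.0, "co2_flux": 600.0, "cn_ratio": 15.0, "n2o_flux": 4.1},
        {"fert_n": 100.0, "co2_flux": 800.0, "cn_ratio": 12.0, "n2o_flux": 7.8},
        {"fert_n": 20.0, "co2_flux": 500.0, "cn_ratio": 25.0, "n2o_flux": 1.8}]
slope, intercept, r2 = n2o_proxy_model(data)
```

The study's refinement of adding water table level as a second predictor would turn this into a two-variable regression with the same machinery.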
Tian, Jiayi; Zhang, Shifeng; Zhang, Yinhui; Li, Tong
2018-03-01
Since the motion control plant y^(n) = f(·) + d was repeatedly used to exemplify how active disturbance rejection control (ADRC) works when it was first proposed, the integral-chain system subject to matched disturbances is often regarded as a canonical form and even misconstrued as the only form to which ADRC is applicable. In this paper, a systematic approach is first presented to apply ADRC to a generic nonlinear uncertain system with mismatched disturbances, and a robust output feedback autopilot for an airbreathing hypersonic vehicle (AHV) is devised on that basis. The key idea is to employ feedback linearization (FL) and the equivalent input disturbance (EID) technique to decouple the nonlinear uncertain system into several subsystems in canonical form, so that classical or improved linear/nonlinear ADRC controllers can be designed directly for each subsystem. Notably, all disturbances are taken into account when implementing FL, rather than omitted as in previous research, which greatly enhances the controllers' robustness against external disturbances. For the autopilot design, the ADRC strategy enables precise tracking of velocity and altitude reference commands in the presence of severe parametric perturbations and atmospheric disturbances using only measurable output information. Bounded-input bounded-output (BIBO) stability is analyzed for the closed-loop system. To illustrate the feasibility and superiority of this novel design, a series of comparative simulations with prominent and representative methods is carried out on a benchmark longitudinal AHV model. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
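The ADRC machinery can be illustrated on the canonical double integrator with a linear extended state observer (ESO) that estimates the total disturbance as an extra state and cancels it in the control law. Gains, poles, and the constant disturbance below are invented for illustration, not taken from the AHV design:

```python
def eso_step(z, y, u, b0, beta, dt):
    """One Euler step of a third-order linear ESO for y'' = b0*u + f,
    where z = [y_hat, y'_hat, f_hat] and f is the total disturbance."""
    z1, z2, z3 = z
    e = y - z1
    return [z1 + dt * (z2 + beta[0] * e),
            z2 + dt * (z3 + b0 * u + beta[1] * e),
            z3 + dt * (beta[2] * e)]

# Plant y'' = u + d with an unknown constant disturbance d = 2.0.
dt, b0, d = 0.001, 1.0, 2.0
x = v = 0.0
z = [0.0, 0.0, 0.0]
beta = [300.0, 30_000.0, 1_000_000.0]      # observer poles at -100 (triple)
for _ in range(20_000):                    # 20 s of simulated time
    u = -6.0 * z[0] - 5.0 * z[1] - z[2]    # PD on estimates + cancel f_hat
    z = eso_step(z, x, u, b0, beta, dt)
    x += dt * v
    v += dt * (b0 * u + d)
```

After the feedback-linearization/EID decoupling described above, each subsystem looks like this integral chain, which is why a per-subsystem ADRC design becomes straightforward.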
Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resulting closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The favorable characteristics of the scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
Partnership for Edge Physics (EPSI), University of Texas Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, Robert; Carey, Varis; Michoski, Craig
Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for design and operation of devices. However, high-fidelity simulations of tokamak plasmas, particularly those aimed at characterization of the edge plasma physics, are computationally expensive, so lower-cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the cost of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. As a result, the total number of particles in a simulation, and therefore its cost, can be reduced while maintaining the same accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, R.; Hong, Seungkyu K.; Kwon, Hyoung-Ahn
We used a 3-D regional atmospheric chemistry transport model (WRF-Chem) to examine processes that determine O3 in East Asia; in particular, we focused on O3 dry deposition, which remains uncertain owing to insufficient observational and numerical studies in East Asia. Here, we compare two widely used dry deposition parameterization schemes, Wesely and M3DRY, which are used in the WRF-Chem and CMAQ models, respectively. The O3 dry deposition velocities simulated using the two schemes under identical meteorological conditions show considerable differences (a factor of 2) due to discrepancies in the surface resistance parameterization. The monthly mean O3 concentration differed by up to 10 ppbv. Comparison of simulated and observed dry deposition velocities showed that the Wesely scheme is consistent with the observations and successfully reproduces the observed diurnal variation. We conducted several sensitivity simulations, changing the land use data, the surface resistance of water, and the model's spatial resolution, to examine the factors that affect O3 concentrations in East Asia. The model was considerably sensitive to these input parameters, which indicates a high uncertainty for such O3 dry deposition simulations. Observations are necessary to constrain the dry deposition parameterization and input data to improve East Asia air quality models.
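Both the Wesely and M3DRY schemes compute the deposition velocity from resistances in series, so the factor-of-2 sensitivity to surface resistance noted above falls out directly from the formula. The resistance values below are invented, chosen only to make the halving explicit:

```python
def deposition_velocity(r_a, r_b, r_c):
    """Big-leaf resistance analogy: aerodynamic (Ra), quasi-laminar (Rb),
    and surface (Rc) resistances in series, all in s/m. Returns v_d in m/s."""
    return 1.0 / (r_a + r_b + r_c)

# Same meteorology (Ra, Rb), two surface-resistance parameterizations:
v_low_rc = deposition_velocity(50.0, 30.0, 100.0)    # total resistance 180 s/m
v_high_rc = deposition_velocity(50.0, 30.0, 280.0)   # total resistance 360 s/m
```

With these numbers the second parameterization gives exactly half the deposition velocity, mirroring the factor-of-2 disagreement the study attributes to Rc.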
How uncertain is model-based prediction of copper loads in stormwater runoff?
Lindblom, E; Ahlman, S; Mikkelsen, P S
2007-01-01
In this paper, we conduct a systematic analysis of the uncertainty associated with estimating the total load of a pollutant (copper) from a separate stormwater drainage system, conditioned on a specific combination of input data, a dynamic conceptual pollutant accumulation-washout model, and measurements (runoff volumes and pollutant masses). We use the generalized likelihood uncertainty estimation (GLUE) methodology and generate posterior parameter distributions that result in model outputs encompassing a significant number of the highly variable measurements. Given the applied accumulation-washout model and a total of 57 measurements during one month, the total copper mass can be predicted only within a range of +/-50% of the median value. The message is that this relatively large uncertainty should be acknowledged when making statements about micropollutant loads estimated from dynamic models, even when calibrated with on-site concentration data.
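A compact sketch of the GLUE procedure described above, using an invented one-parameter washout model and synthetic observations (57 of them, echoing the study's sample size); the likelihood measure, threshold, and priors are all illustrative choices:

```python
import random

def glue_total_load(observed, volumes, simulate, priors,
                    n=5000, threshold=0.5, seed=3):
    """GLUE: sample parameters from priors, keep 'behavioural' sets with
    Nash-Sutcliffe efficiency above the threshold, and return the
    likelihood-weighted 5th/50th/95th percentiles of the total load."""
    rng = random.Random(seed)
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    kept = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        sim = [simulate(theta, v) for v in volumes]
        nse = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / ss_tot
        if nse > threshold:
            kept.append((nse, sum(sim)))          # (weight, total load)
    kept.sort(key=lambda t: t[1])
    total_w = sum(w for w, _ in kept)
    bounds, acc, targets = [], 0.0, [0.05, 0.5, 0.95]
    for w, load in kept:
        acc += w / total_w
        while targets and acc >= targets[0]:
            bounds.append(load)
            targets.pop(0)
    return bounds

# Synthetic data: true washout load = 2.0 * volume, noisy observations.
rng = random.Random(0)
vols = [rng.uniform(5.0, 50.0) for _ in range(57)]
obs = [2.0 * v + rng.gauss(0.0, 5.0) for v in vols]
lo, med, hi = glue_total_load(obs, vols, lambda t, v: t[0] * v,
                              priors=[(0.5, 4.0)])
```

The spread between `lo` and `hi` is the GLUE analogue of the +/-50% band reported in the study.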
NASA Astrophysics Data System (ADS)
Bates, Matthew E.; Keisler, Jeffrey M.; Zussblatt, Niels P.; Plourde, Kenton J.; Wender, Ben A.; Linkov, Igor
2016-02-01
Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis—methods commonly applied in financial and operations management—to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios—combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.
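Probabilistic hazard banding of this kind can be sketched as Monte Carlo propagation of uncertain inputs through a scoring function into band probabilities. The thresholds, weights, and input ranges below are invented; they are not the CB Nanotool values:

```python
import random

def classify(score):
    """Map a hazard score to a control band (hypothetical thresholds)."""
    return 0 if score < 40.0 else 1 if score < 70.0 else 2

def band_probabilities(sample_inputs, score_fn, n=10_000, seed=6):
    """Propagate uncertain inputs through a scoring function and report
    the probability of landing in each hazard band."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(n):
        counts[classify(score_fn(sample_inputs(rng)))] += 1
    return [c / n for c in counts]

# Toy score: weighted sum of two uncertain severity factors (invented weights).
probs = band_probabilities(
    lambda rng: (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)),
    lambda x: 4.0 * x[0] + 3.0 * x[1])
```

A value-of-information step would then ask which experiment (narrowing which input range) most sharpens these band probabilities per unit cost.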
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where input variables are only expected to belong in certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
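In one dimension the set-theoretic idea reduces to interval arithmetic: find all releases that keep storage within bounds for every admissible inflow, with no probability distribution assumed. The reservoir numbers below are invented; the paper's general solution handles convex polyhedra via dynamic programming:

```python
def admissible_releases(s_lo, s_hi, q_lo, q_hi, s_min, s_max):
    """Releases u guaranteeing s - u + q stays in [s_min, s_max] for EVERY
    storage s in [s_lo, s_hi] and EVERY inflow q in [q_lo, q_hi].
    Returns (u_min, u_max), or None if no guaranteed release exists."""
    u_min = (s_hi + q_hi) - s_max   # worst case for flooding: high s, high q
    u_max = (s_lo + q_lo) - s_min   # worst case for emptying: low s, low q
    u_min = max(u_min, 0.0)         # releases cannot be negative
    return (u_min, u_max) if u_min <= u_max else None

# Invented example: storage 40-60, inflow 10-30, keep storage in [20, 100].
window = admissible_releases(40.0, 60.0, 10.0, 30.0, 20.0, 100.0)
```

An empty window (the `None` case) is the signal that the desirable bounds cannot be guaranteed against the assumed input set, which is exactly the question the control approach is built to answer.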
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to asses the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically, and good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
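A common model of time-correlated (colored-noise) power input is the Ornstein-Uhlenbeck process, whose stationary mean and variance are known in closed form, so a sampling sketch can be checked against them. The parameters below are invented; this is the Monte Carlo side of the comparison, not the PDF-equation method itself:

```python
import math
import random

def simulate_ou(theta, mu, sigma, dt, n, seed=5):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
    dP = -theta * (P - mu) dt + sigma dW (time-correlated power input)."""
    rng = random.Random(seed)
    p, path = mu, []
    for _ in range(n):
        p += -theta * (p - mu) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(p)
    return path

# Invented parameters; closed-form stationary stats: mean mu, var sigma^2/(2*theta).
path = simulate_ou(theta=0.5, mu=50.0, sigma=2.0, dt=0.01, n=200_000)
mean = sum(path) / len(path)
var = sum((p - mean) ** 2 for p in path) / len(path)
```

The PDF method replaces many such sample paths with one deterministic PDE for the joint density, which is where its efficiency comes from.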
NASA Astrophysics Data System (ADS)
Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.
2015-12-01
One of the most uncertain modeling tasks in hydrology is the prediction of sediment load and concentration statistics for ungauged streams. This study presents integrated artificial neural network (ANN) models for prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve the accuracy achieved; the inputs include soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on a randomly selected two-thirds of a dataset of 94 gauged streams in Ontario, Canada, and validated on the remaining one-third. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to the rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, and inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, and the standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curves (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality in ungauged basins.
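Once α and β are predicted for an ungauged site, sediment load follows directly from the rating curve C = αQ^β. A sketch with invented values, assuming C in mg/L (equivalently g/m^3) and Q in m^3/s:

```python
def rating_curve_load(flows_m3s, alpha, beta, dt_s=86_400.0):
    """Total sediment load (kg) over a series of daily mean flows, using
    the rating curve C = alpha * Q**beta with C in mg/L and Q in m^3/s."""
    total_g = 0.0
    for q in flows_m3s:
        total_g += (alpha * q ** beta) * q * dt_s   # g/m^3 * m^3/s * s = g
    return total_g / 1000.0

# Invented daily mean discharges and rating parameters.
flows = [1.0, 2.5, 10.0, 4.0, 1.5]
load_kg = rating_curve_load(flows, alpha=12.0, beta=1.4)
```

Because load scales linearly with α and exponentially with β, prediction errors in β matter most at high flows, which is consistent with β being the harder parameter to predict (correlation 0.86 vs 0.92).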
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on parameters that are measurable in real time. The case of inexact parameter measurements is considered, which is closer to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances, and the uncertainty induced by inexactly measured parameters. The error dynamics and the original system constitute an uncertain system due to inconsistencies between the real and measured values of the parameters. Robust estimation of the system states and faults is then achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control scheme for single-input single-output uncertain nonlinear discrete-time systems with event-sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned, in an aperiodic manner at the event-sampled instants. After reviewing the NN approximation property with event-sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event-sampled context. The SE is viewed as a model, and its approximated dynamics and the state vector between any two events are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived using both the estimated NN weights and a dead-zone operator to determine the event-sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during the initial online learning phase, events are observed more frequently. Over time, with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through simulation results.
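The flavor of an event-trigger condition of this kind (a state-dependent threshold plus a dead zone) can be sketched on a scalar decaying state. The constants are invented, and a real design derives the threshold from Lyapunov analysis rather than picking it by hand:

```python
def run_event_sampled(x0=1.0, lam=-1.0, dt=0.01, t_end=8.0, mu=0.3, dz=1e-3):
    """Simulate x' = lam * x; transmit (sample) x only when the drift from
    the last transmitted value exceeds a relative threshold or dead zone."""
    x, x_sent, events, t = x0, x0, [], 0.0
    while t < t_end:
        x += dt * lam * x
        if abs(x - x_sent) > max(mu * abs(x), dz):
            x_sent = x                     # event: transmit the new sample
            events.append(t)
        t += dt
    return events

events = run_event_sampled()
```

As the state settles into the dead zone, events stop firing entirely, which mirrors the paper's observation that inter-event times grow as learning converges.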
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
Rotorcraft control system design for uncertain vehicle dynamics using quantitative feedback theory
NASA Technical Reports Server (NTRS)
Hess, R. A.
1994-01-01
Quantitative Feedback Theory describes a frequency-domain technique for the design of multi-input, multi-output control systems which must meet time or frequency domain performance criteria when specified uncertainty exists in the linear description of the vehicle dynamics. This theory is applied to the design of the longitudinal flight control system for a linear model of the BO-105C rotorcraft. Uncertainty in the vehicle model is due to the variation in the vehicle dynamics over a range of airspeeds from 0-100 kts. For purposes of exposition, the vehicle description contains no rotor or actuator dynamics. The design example indicates the manner in which significant uncertainty exists in the vehicle model. The advantage of using a sequential loop closure technique to reduce the cost of feedback is demonstrated by example.
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. The identified open-loop model is then used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
MRAC Revisited: Guaranteed Performance with Reference Model Modification
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmaje
2010-01-01
This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that the approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in transient, by a proper selection of the error feedback gain. The method prevents the generation of high-frequency oscillations that are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
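The reference-model modification lends itself to a compact scalar simulation. The sketch below is an illustrative reconstruction, not the paper's design: the plant, gains, and adaptation laws are invented for demonstration, with only the key idea taken from the abstract, namely feeding the tracking error back into the reference model.

```python
# Scalar plant dx/dt = a*x + b*u with a, b unknown to the controller.
# Conventional MRAC reference model: dxm/dt = am*xm + bm*r.
# Modified reference model (the idea described above):
#     dxm/dt = am*xm + bm*r + k_e*(x - xm)
# All numerical values are illustrative, not taken from the paper.

a, b = 1.0, 3.0          # true (unknown) plant parameters; plant is unstable
am, bm = -2.0, 2.0       # stable reference model
k_e = 10.0               # error feedback gain into the reference model
gamma = 50.0             # adaptation rate
dt, T = 1e-3, 5.0

x = xm = 0.0
kx, kr = 0.0, 0.0        # adaptive feedback / feedforward gains
for _ in range(int(T / dt)):
    r = 1.0              # step reference command
    e = x - xm           # tracking error, also fed to the reference model
    u = kx * x + kr * r
    # Lyapunov-based adaptation laws (sign of b assumed known) -- illustrative
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt
    xm += (am * xm + bm * r + k_e * e) * dt

final_error = abs(x - xm)
```

With the error-feedback term, the error dynamics gain extra damping (roughly am - k_e), which is the mechanism behind the guaranteed transient bound; for this stable scalar case both x and xm settle near the commanded value -bm/am * r = 1.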
Single-axis gyroscopic motion with uncertain angular velocity about spin axis
NASA Technical Reports Server (NTRS)
Singh, S. N.
1977-01-01
A differential game approach is presented for studying the response of a gyro by treating the controlled angular velocity about the input axis as the evader and the bounded but uncertain angular velocity about the spin axis as the pursuer. When the uncertain angular velocity about the spin axis attempts to force the gyro to saturation, a differential game problem with two terminal surfaces results, whereas when the evader seeks to attain the equilibrium state, the usual game with a single terminal manifold arises. A barrier delineating the capture zone (CZ), in which the gyro can attain saturation, from the escape zone (EZ), in which the evader avoids saturation, is obtained. The CZ is further divided into two subregions such that the states in each subregion can be forced onto a definite target manifold. The application of the game-theoretic approach to control moment gyros is briefly discussed.
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that are fitted to the observed input variable instead of the true quantity, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or where the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX; [Cook, 1994]), operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts by generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs.
Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that this trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain, distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
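The add-noise-and-extrapolate loop described above can be sketched on a toy regression problem. This is a minimal illustration of the SIMEX idea, not the hydrological application: the model, noise levels, and the choice of a linear (rather than the more common quadratic) extrapolant are all simplifying assumptions.

```python
import random

random.seed(1)

# Toy linear model y = beta * x_true + noise, where only x = x_true + u
# is observed, with known error variance sigma_u**2. Measurement error
# attenuates the naive slope; SIMEX adds extra noise at levels lam,
# fits the slope at each level, and extrapolates back to lam = -1
# (the notional error-free point).

beta_true, sigma_u = 2.0, 0.5
n = 20000
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
x_obs = [x + random.gauss(0.0, sigma_u) for x in x_true]
y = [beta_true * x + random.gauss(0.0, 0.1) for x in x_true]

def slope(xs, ys):
    # ordinary least-squares slope
    mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

lams = [0.0, 0.5, 1.0, 1.5, 2.0]
slopes = []
for lam in lams:
    # add extra white noise with variance lam * sigma_u**2
    xs = [x + random.gauss(0.0, (lam ** 0.5) * sigma_u) for x in x_obs]
    slopes.append(slope(xs, y))

# linear extrapolation of slope(lam) back to lam = -1
b1 = slope(lams, slopes)
b0 = sum(slopes) / len(slopes) - b1 * sum(lams) / len(lams)
beta_simex = b0 + b1 * (-1.0)
beta_naive = slopes[0]
```

For these settings the naive slope is attenuated to about beta/(1 + sigma_u**2) = 1.6, and the SIMEX extrapolation recovers most of the bias; a quadratic extrapolant would move the estimate closer still to the true value of 2.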
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
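The gradient-based least-squares step at the heart of such inverse modeling can be sketched generically. This is not ITOUGH2's code or input format: a toy exponential "forward model" stands in for a simulator run, and Gauss-Newton iterations minimise the sum of squared residuals against synthetic observations.

```python
import math

# Toy forward model y(t) = p0 * exp(-p1 * t), standing in for a
# simulator run; parameters p = [p0, p1] are calibrated to observations.

def forward(p, ts):
    return [p[0] * math.exp(-p[1] * t) for t in ts]

def gauss_newton(p, ts, obs, iters=25):
    for _ in range(iters):
        r = [o - m for o, m in zip(obs, forward(p, ts))]
        # Jacobian of the model w.r.t. the two parameters
        J = [[math.exp(-p[1] * t), -p[0] * t * math.exp(-p[1] * t)]
             for t in ts]
        # Solve the 2x2 normal equations (J^T J) dp = J^T r by hand
        a = sum(j[0] * j[0] for j in J)
        b = sum(j[0] * j[1] for j in J)
        c = sum(j[1] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a * c - b * b
        p = [p[0] + (c * g0 - b * g1) / det,
             p[1] + (a * g1 - b * g0) / det]
    return p

ts = [0.1 * i for i in range(1, 21)]
obs = forward([3.0, 1.5], ts)             # synthetic error-free observations
p_est = gauss_newton([2.5, 1.2], ts, obs)  # start near truth for illustration
```

Real inverse-modeling codes add safeguards this sketch omits (step damping such as Levenberg-Marquardt, observation weighting by measurement error, and the residual/identifiability analysis mentioned above).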
Optimizing water purchases for an Environmental Water Account
NASA Astrophysics Data System (ADS)
Lund, J. R.; Hollinshead, S. P.
2005-12-01
State and federal agencies in California have established an Environmental Water Account (EWA) to buy water to protect endangered fish in the San Francisco Bay/Sacramento-San Joaquin Delta Estuary. This paper presents a three-stage probabilistic optimization model that identifies least-cost strategies for purchasing water for the EWA given hydrologic, operational, and biological uncertainties. The approach minimizes the expected cost of long-term, spot, and option water purchases needed to meet uncertain flow dedications for fish. The model prescribes the location, timing, and type of optimal water purchases and can illustrate how least-cost strategies change with hydrologic, operational, biological, and cost inputs. Details of the optimization model's application to California's EWA are provided, with a discussion of its utility for strategic planning and policy purposes. Limitations in, and sensitivity analysis of, the model's representation of EWA operations are discussed, as are operational and research recommendations.
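The structure of such a purchase problem can be illustrated with a toy two-stage analogue of the three-stage model. Everything below is invented for illustration (prices, scenarios, probabilities, and the reduction to two stages); it only mirrors the idea of trading off long-term, option, and spot purchases against uncertain flow dedications.

```python
import itertools

# Stage 1: commit to long-term water L and option contracts O before
# hydrology is known. Stage 2: in each scenario, exercise options and/or
# buy spot water to cover the required flow dedication for fish.
price_long, price_option, price_exercise = 80.0, 15.0, 60.0
scenarios = [  # (probability, required flow dedication, spot price)
    (0.3, 100.0, 50.0),   # wet year: cheap spot water
    (0.5, 150.0, 120.0),  # normal year
    (0.2, 220.0, 300.0),  # dry year: spot water scarce and expensive
]

def expected_cost(L, O):
    cost = price_long * L + price_option * O
    for prob, need, spot in scenarios:
        shortfall = max(0.0, need - L)
        # exercise options first when cheaper than spot, then buy spot
        exercised = min(O, shortfall) if price_exercise < spot else 0.0
        spot_buy = shortfall - exercised
        cost += prob * (price_exercise * exercised + spot * spot_buy)
    return cost

grid = [10.0 * k for k in range(31)]   # search L, O over 0..300
L_best, O_best = min(itertools.product(grid, grid),
                     key=lambda pair: expected_cost(*pair))
```

With these illustrative numbers the option contract dominates long-term purchases at every demand level, so the least-cost strategy buys options sized to the dry-year dedication and falls back on cheap spot water in wet years.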
Neilson, Peter D; Neilson, Megan D
2005-09-01
Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important in real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive character of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. The proposed RMPC is then designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed, and input-to-state practical stability of the closed-loop system is proven. Numerical examples illustrate the enlarged feasible region and the improved performance of the proposed design.
Diversified models for portfolio selection based on uncertain semivariance
NASA Astrophysics Data System (ADS)
Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini
2017-02-01
Since financial markets are complex, future security returns are sometimes represented mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
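The 99-method step of the hybrid algorithm can be sketched for a single uncertain variable. This is an illustrative fragment only, assuming a linear uncertain variable L(a, b); the paper's full algorithm wraps such evaluations inside a genetic-algorithm search over portfolio weights, which is omitted here.

```python
# 99-method: approximate moments of an uncertain variable from 99
# evenly spaced points of its inverse uncertainty distribution.

def inv_linear(a, b, alpha):
    # inverse uncertainty distribution of the linear variable L(a, b)
    return a + alpha * (b - a)

def expected_and_semivariance(a, b):
    pts = [inv_linear(a, b, k / 100.0) for k in range(1, 100)]
    e = sum(pts) / len(pts)
    # semivariance penalises only outcomes below the expected value
    sv = sum(min(p - e, 0.0) ** 2 for p in pts) / len(pts)
    return e, sv

e, sv = expected_and_semivariance(0.0, 1.0)
```

For the symmetric L(0, 1) the discretised expected value is 0.5 and the semivariance is close to half the variance 1/12, as expected for a symmetric distribution.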
NASA Astrophysics Data System (ADS)
Liu, Zhengmin; Liu, Peide
2017-04-01
The Bonferroni mean (BM) was originally introduced by Bonferroni and generalised by many other researchers due to its capacity to capture the interrelationship between input arguments. Nevertheless, in many situations, interrelationships do not exist between all of the attributes: attributes can be partitioned into several different categories, where attributes within a partition are interrelated while no interrelationship exists between attributes of different partitions. In this paper, as complements to the existing generalisations of the BM, we investigate the partitioned Bonferroni mean (PBM) under intuitionistic uncertain linguistic environments and develop two linguistic aggregation operators: the intuitionistic uncertain linguistic partitioned Bonferroni mean (IULPBM) and its weighted form (WIULPBM). Then, motivated by the idea of the geometric mean and the PBM, we further present the partitioned geometric Bonferroni mean (PGBM) and develop two linguistic geometric aggregation operators: the intuitionistic uncertain linguistic partitioned geometric Bonferroni mean (IULPGBM) and its weighted form (WIULPGBM). Some properties and special cases of these proposed operators are also investigated and discussed in detail. Based on these operators, an approach for multiple attribute decision-making problems with intuitionistic uncertain linguistic information is developed. Finally, a practical example is presented to illustrate the developed approach, and comparison analyses are conducted with other representative methods to verify its effectiveness and feasibility.
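The partition structure is easiest to see on crisp numbers. The sketch below implements the standard crisp partitioned Bonferroni mean; the paper's operators extend this to intuitionistic uncertain linguistic values, which is not attempted here, and the partition, values, and p, q are illustrative.

```python
# Crisp partitioned Bonferroni mean: interrelationships are modelled
# only within each partition class, then per-class results are averaged.

def bm(values, p, q):
    # classical Bonferroni mean of non-negative reals (needs len >= 2;
    # singleton classes require separate treatment, omitted here)
    n = len(values)
    s = sum(a ** p * b ** q
            for i, a in enumerate(values)
            for j, b in enumerate(values) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

def pbm(partitions, p, q):
    # average of the per-partition Bonferroni means
    return sum(bm(part, p, q) for part in partitions) / len(partitions)

# two partition classes of mutually interrelated attribute values
classes = [[0.4, 0.6, 0.8], [0.5, 0.7]]
result = pbm(classes, p=1, q=1)
```

As with the ordinary BM, aggregating identical inputs returns that input (idempotency), and the result always lies between the smallest and largest argument.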
Regional and national significance of biological nitrogen fixation by crops in the United States
Background/Questions/Methods Biological nitrogen fixation by crops (C-BNF) represents one of the largest anthropogenic inputs of reactive nitrogen (N) to land surfaces around the world. In the United States (US), existing estimates of C-BNF are uncertain because of incomplete o...
NASA Astrophysics Data System (ADS)
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and on any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource available to spend on modelling flood inundations that are 'fit for purpose' for the modelling objectives. A balance therefore needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine-scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs.
These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
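The first-order Sobol' indices underlying such an analysis can be estimated with a short Saltelli-style sketch. A cheap additive toy function stands in for the hydraulic model here; each input plays the role of one uncertain factor (boundary conditions, parameters, DEM choice), and the estimator form is one common variant, not the study's exact implementation.

```python
import random

random.seed(42)

def model(x):
    # toy stand-in for an expensive simulation; additive, so the
    # analytic first-order indices are 16/21, 4/21, 1/21
    return 4.0 * x[0] + 2.0 * x[1] + 1.0 * x[2]

def sobol_first_order(f, dim, n):
    # two independent sample matrices of uniform(0, 1) inputs
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum(v * v for v in fA) / n - mean * mean
    indices = []
    for i in range(dim):
        # A with column i replaced by B's column i
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        vi = sum(fb * (fab - fa)
                 for fb, fab, fa in zip(fB, fABi, fA)) / n
        indices.append(vi / var)
    return indices

S = sobol_first_order(model, dim=3, n=50000)
```

For an additive model the first-order indices sum to one; interactions between factors would show up as a shortfall, which total-order indices (not computed here) would capture.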
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted. PMID:24798347
Uncertain programming models for portfolio selection with uncertain returns
NASA Astrophysics Data System (ADS)
Zhang, Bo; Peng, Jin; Li, Shengguo
2015-10-01
In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input variable's uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of radionuclide releases from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of risk assessment. The paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted due to the lack of information on the parameters of the corresponding models. Soft computing in this domain basically addresses the use of fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
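The fuzzy propagation the abstract describes can be sketched with alpha-cuts and interval arithmetic. The "dispersion model" below is a deliberately trivial stand-in expression, not an actual plume model, and all triangular parameters are invented.

```python
# Each uncertain parameter is a triangular fuzzy number (a, m, b):
# left support, mode, right support. An alpha-cut yields an interval,
# and propagating intervals through the model gives the membership
# function of the output level by level.

def alpha_cut(tri, alpha):
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

# illustrative parameters: source term Q (Bq/s) and wind speed u (m/s)
Q = (8.0, 10.0, 13.0)
u = (1.0, 2.0, 4.0)

def concentration_cut(alpha):
    qlo, qhi = alpha_cut(Q, alpha)
    ulo, uhi = alpha_cut(u, alpha)
    # the stand-in model C = Q / u is monotone in both arguments,
    # so interval endpoints give the exact output interval
    return (qlo / uhi, qhi / ulo)

cuts = {a / 10.0: concentration_cut(a / 10.0) for a in range(11)}
```

The alpha = 1 cut collapses to the modal value, and cuts are nested as alpha decreases, which is exactly the triangular-membership picture of epistemic uncertainty.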
NASA Astrophysics Data System (ADS)
Gassmann, Matthias; Olsson, Oliver; Höper, Heinrich; Hamscher, Gerd; Kümmerer, Klaus
2016-04-01
The simulation of reactive transport in the aquatic environment is hampered by the ambiguity of environmental fate process conceptualizations for a specific substance in the literature. Concepts are usually identified by experimental studies and inverse modelling under controlled lab conditions in order to reduce environmental uncertainties such as uncertain boundary conditions and input data. However, since environmental conditions affect substance behaviour, a re-evaluation might be necessary under environmental conditions which might, in turn, be affected by uncertainties. Using a combination of experimental data and simulations of the leaching behaviour of the veterinary antibiotic Sulfamethazine (SMZ; synonym: sulfadimidine) and the hydrological tracer Bromide (Br) in a field lysimeter, we re-evaluated the sorption concepts of both substances under uncertain field conditions. Sampling data of a field lysimeter experiment in which both substances were applied twice a year with manure and sampled at the bottom of two lysimeters during three subsequent years was used for model set-up and evaluation. The total amount of leached SMZ and Br were 22 μg and 129 mg, respectively. A reactive transport model was parameterized to the conditions of the two lysimeters filled with monoliths (depth 2 m, area 1 m²) of a sandy soil showing a low pH value under which Bromide is sorptive. We used different sorption concepts such as constant and organic-carbon dependent sorption coefficients and instantaneous and kinetic sorption equilibrium. Combining the sorption concepts resulted in four scenarios per substance with different equations for sorption equilibrium and sorption kinetics. The GLUE (Generalized Likelihood Uncertainty Estimation) method was applied to each scenario using parameter ranges found in experimental and modelling studies. The parameter spaces for each scenario were sampled using a Latin Hypercube method which was refined around local model efficiency maxima. 
Results of the cumulative SMZ leaching simulations suggest a best conceptualization combination of instantaneous sorption to organic carbon, which is consistent with the literature. The best Nash-Sutcliffe efficiency (Neff) was 0.96, and the 5th and 95th percentiles of the uncertainty estimation were 18 and 27 μg. In contrast, both scenarios of kinetic Br sorption had similar results (Neff = 0.99, uncertainty bounds 110-176 mg and 112-176 mg) but were clearly better than the instantaneous sorption scenarios. Therefore, only the concept of sorption kinetics could be identified for Br modelling, whereas both tested sorption equilibrium coefficient concepts performed equally well. The reasons for this specific case of equifinality may be uncertainties of model input data under field conditions or an insensitivity of the sorption equilibrium method due to the relatively low adsorption of Br. Our results show that it may be possible to identify, or at least falsify, specific sorption concepts under uncertain field conditions using a long-term leaching experiment and modelling methods. Cases of environmental fate concept equifinality raise the possibility of future model structure uncertainty analysis using an ensemble of models with different environmental fate concepts.
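The GLUE machinery used above follows a standard recipe: sample parameters, score each run with a likelihood measure such as the Nash-Sutcliffe efficiency, keep "behavioural" runs above a threshold, and form likelihood-weighted prediction bounds. The sketch below applies that recipe to a one-parameter exponential recession standing in for the reactive transport model; ranges, threshold, and data are all illustrative.

```python
import math
import random

random.seed(7)

ts = list(range(20))

def simulate(k):
    # toy model: exponential recession with a single rate parameter
    return [10.0 * math.exp(-k * t) for t in ts]

def nash_sutcliffe(sim, obs):
    mo = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mo) ** 2 for o in obs)
    return 1.0 - num / den

obs = simulate(0.3)                    # synthetic "observations"

behavioural = []                       # (likelihood, prediction at t = 5)
for _ in range(2000):
    k = random.uniform(0.05, 0.8)      # uniform prior parameter sample
    eff = nash_sutcliffe(simulate(k), obs)
    if eff > 0.7:                      # behavioural threshold
        behavioural.append((eff, simulate(k)[5]))

# likelihood-weighted 5th/95th percentile prediction bounds at t = 5
behavioural.sort(key=lambda pair: pair[1])
total = sum(eff for eff, _ in behavioural)
cum, lo, hi = 0.0, None, None
for eff, value in behavioural:
    cum += eff / total
    if lo is None and cum >= 0.05:
        lo = value
    if hi is None and cum >= 0.95:
        hi = value
```

The interval [lo, hi] is the analogue of the 18-27 μg and 110-176 mg bounds quoted above; a Latin Hypercube sample refined around efficiency maxima, as in the study, would replace the plain uniform sampling.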
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Niemeyer, Frank; Simon, Ulrich; Wehner, Tim
2013-09-06
Numerical models of secondary fracture healing are based on mechanoregulatory algorithms that use distortional strain alone or in combination with either dilatational strain or fluid velocity as determining stimuli for tissue differentiation and development. Comparison of these algorithms has previously suggested that healing processes under torsional rotational loading can only be properly simulated by considering fluid velocity and deviatoric strain as the regulatory stimuli. We hypothesize that sufficient calibration on uncertain input parameters will enhance our existing model, which uses distortional and dilatational strains as determining stimuli, to properly simulate fracture healing under various loading conditions including also torsional rotation. Therefore, we minimized the difference between numerically simulated and experimentally measured courses of interfragmentary movements of two axial compressive cases and two shear load cases (torsional and translational) by varying several input parameter values within their predefined bounds. The calibrated model was then qualitatively evaluated on the ability to predict physiological changes of spatial and temporal tissue distributions, based on respective in vivo data. Finally, we corroborated the model on five additional axial compressive and one asymmetrical bending load case. We conclude that our model, using distortional and dilatational strains as determining stimuli, is able to simulate fracture-healing processes not only under axial compression and torsional rotation but also under translational shear and asymmetrical bending loading conditions.
Chien, Yi-Hsing; Wang, Wei-Yen; Leu, Yih-Guang; Lee, Tsu-Tian
2011-04-01
This paper proposes a novel method of online modeling and control via the Takagi-Sugeno (T-S) fuzzy-neural model for a class of uncertain nonlinear systems with various types of outputs. Although adaptive T-S fuzzy-neural controllers have been studied for some nonaffine nonlinear systems, little is known about more complicated uncertain nonlinear systems. Because the nonlinear functions of the systems are uncertain, traditional T-S fuzzy control methods can model and control them only with great difficulty, if at all. Instead of modeling these uncertain functions directly, we propose that a T-S fuzzy-neural model approximate a so-called virtual linearized system (VLS) of the plant, which includes modeling errors and external disturbances. We also propose an online identification algorithm for the VLS and place significant emphasis on robust tracking controller design using an adaptive scheme for the uncertain systems. Moreover, the stability of the closed-loop systems is proven by using strictly positive real Lyapunov theory. The proposed overall scheme guarantees that the outputs of the closed-loop systems asymptotically track the desired output trajectories. To illustrate the effectiveness and applicability of the proposed method, simulation results are given in this paper.
NASA Astrophysics Data System (ADS)
Hagemann, M.; Jeznach, L. C.; Park, M. H.; Tobiason, J. E.
2016-12-01
Extreme precipitation events such as tropical storms and hurricanes are by their nature rare, yet have disproportionate and adverse effects on surface water quality. In the context of drinking water reservoirs, common concerns of such events include increased erosion and sediment transport and influx of natural organic matter and nutrients. As part of an effort to model the effects of an extreme precipitation event on water quality at the reservoir intake of a major municipal water system, this study sought to estimate extreme-event watershed responses including streamflow and exports of nutrients and organic matter for use as inputs to a 2-D hydrodynamic and water quality reservoir model. Since extreme-event watershed exports are highly uncertain, we characterized and propagated predictive uncertainty using a quasi-Monte Carlo approach to generate reservoir model inputs. Three storm precipitation depths (corresponding to recurrence intervals of 5, 50, and 100 years) were converted to streamflow in each of 9 tributaries by volumetrically scaling 2 storm hydrographs from the historical record. Rating-curve models for concentration, calibrated using 10 years of data for each of 5 constituents, were then used to estimate the parameters of a multivariate lognormal probability model of constituent concentrations, conditional on each scenario's storm date and streamflow. A quasi-random Halton sequence (n = 100) was drawn from the conditional distribution for each event scenario, and used to generate input files to a calibrated CE-QUAL-W2 reservoir model. The resulting simulated concentrations at the reservoir's drinking water intake constitute a low-discrepancy sample from the estimated uncertainty space of extreme-event source water quality.
Limiting factors to the suitability of this approach include poorly constrained relationships between hydrology and constituent concentrations, a high-dimensional space from which to generate inputs, and relatively long run-time for the reservoir model. This approach proved useful in probing a water supply's resilience to extreme events, and to inform management responses, particularly in a region such as the American Northeast where climate change is expected to bring such events with higher frequency and intensity than have occurred in the past.
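The sampling scheme described above (a quasi-random Halton sequence pushed through a conditional multivariate lognormal distribution) can be sketched as follows. The means, covariance, and constituent count are hypothetical placeholders, not the study's calibrated rating-curve parameters:

```python
import numpy as np
from scipy.stats import qmc, norm

# Illustrative parameters: mean and covariance of log-concentrations for
# 5 constituents, conditional on a given storm date and streamflow.
mu_log = np.array([1.2, 0.3, -0.5, 2.0, 0.8])
cov_log = 0.2 * np.eye(5) + 0.05  # mild positive cross-correlation

sampler = qmc.Halton(d=5, scramble=False)
u = sampler.random(n=100)                    # low-discrepancy points in [0,1)^5
z = norm.ppf(np.clip(u, 1e-10, 1 - 1e-10))   # map to standard normals
L = np.linalg.cholesky(cov_log)              # impose the correlation structure
concentrations = np.exp(mu_log + z @ L.T)    # multivariate lognormal draws
```

Each of the 100 rows would then seed one input file for the reservoir model run.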
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
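The sequential refinement of importance distributions for failure-probability estimation can be illustrated with a toy model. The linear response function, the threshold, and the mean-shifting heuristic below are assumptions made for this sketch, not the presentation's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy computational model and failure threshold (assumptions for illustration).
def model(x):
    return x[..., 0] + 2.0 * x[..., 1]

threshold = 6.0  # "failure" when the model output exceeds this

# Nominal input distribution: standard bivariate normal. Iteratively shift
# the proposal mean toward the observed failure region.
mu, n_iter, n_samp = np.zeros(2), 5, 2000
for _ in range(n_iter):
    x = rng.normal(mu, 1.0, size=(n_samp, 2))
    fail = model(x) > threshold
    if fail.any():
        mu = x[fail].mean(axis=0)

# Final importance-sampling estimate with likelihood-ratio weights
# (nominal N(0, I) density over proposal N(mu, I) density).
x = rng.normal(mu, 1.0, size=(n_samp, 2))
logw = -0.5 * np.sum(x**2 - (x - mu)**2, axis=1)
p_fail = np.mean(np.exp(logw) * (model(x) > threshold))
```

The exact failure probability here is about 0.0037 (the response is N(0, 5)); a plain Monte Carlo estimate of this size would be far noisier than the shifted-proposal estimate.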
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
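One of the update techniques mentioned, the ensemble Kalman filter, can be sketched for a scalar state that is observed directly (observation operator H = I). The ensemble size, prior, and observation values below are illustrative, not from the presentation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Forecast ensemble of a scalar state; prior is roughly N(2, 1).
n_ens = 500
ensemble = rng.normal(2.0, 1.0, size=n_ens)
obs, obs_var = 4.0, 1.0  # observation and its error variance

# Stochastic EnKF analysis step: perturb the observation for each member,
# then move each member by the Kalman gain times its innovation.
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=n_ens)
prior_var = ensemble.var(ddof=1)
gain = prior_var / (prior_var + obs_var)  # Kalman gain for H = I
analysis = ensemble + gain * (perturbed_obs - ensemble)
```

With equal prior and observation variances, the analysis mean lands halfway between the prior mean (2.0) and the observation (4.0), and the analysis variance is roughly halved, matching the exact Kalman posterior N(3, 0.5).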
The Impact of Model Uncertainty on Spatial Compensation in Structural Acoustic Control
NASA Technical Reports Server (NTRS)
Clark, Robert L.
2005-01-01
Turbulent boundary layer (TBL) noise is considered a primary contribution to the interior noise present in commercial airliners. There are numerous investigations of interior noise control devoted to aircraft panels; however, practical realization is a potential challenge since physical boundary conditions are uncertain at best. In most prior studies, pinned or clamped boundary conditions were assumed; however, realistic panels likely display a range of boundary conditions between these two limits. Uncertainty in boundary conditions is a challenge for control system designers, both in terms of the compensator implemented and the location of transducers required to achieve the desired control. The impact of model uncertainties, specifically uncertain boundaries, on the selection of transducer locations for structural acoustic control is considered herein. The final goal of this work is the design of an aircraft panel structure that can reduce TBL noise transmission through the use of a completely adaptive, single-input, single-output control system. The feasibility of this goal is demonstrated through the creation of a detailed analytical solution, followed by the implementation of a test model in a transmission loss apparatus. Successfully realizing a control system robust to variations in boundary conditions can lead to the design and implementation of practical adaptive structures that could be used to control the transmission of sound to the interior of aircraft. Results from this research effort indicate it is possible to optimize the design of actuator and sensor location and aperture, minimizing the impact of boundary conditions on the desired structural acoustic control.
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition yields stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a rocket fairing structural-acoustic model. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Possible world based consistency learning model for clustering and classifying uncertain data.
Liu, Han; Zhang, Xianchao; Zhang, Xiaotong
2018-06-01
The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so their effectiveness relies heavily on the post-processing method and their efficiency also suffers. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and for classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which makes full use of the information across different possible worlds and thereby improves clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real-world uncertain datasets show that the proposed model outperforms state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Effects of the Previous Outcome on Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2014-01-01
This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
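The exponentially decaying, trial-updated value function that the simulations supported can be written as a simple delta-rule update. The learning rate and pellet values below are illustrative, not the study's fitted parameters:

```python
# Minimal sketch of a trial-based exponentially decaying value update:
# after each uncertain choice, the stored value moves toward the obtained
# reward; alpha controls how quickly older outcomes are discounted.
def update_value(value, reward, alpha=0.3):
    return value + alpha * (reward - value)

# Example: an unrewarded uncertain choice (0 pellets) followed by a win
# (9 pellets), starting from a value of 3.0.
v = 3.0
v = update_value(v, 0.0)   # after a loss, value decays toward 0 -> 2.1
v = update_value(v, 9.0)   # after a win, value recovers toward 9 -> 4.17
```

Updating per trial rather than per unit of elapsed time is the distinction the abstract draws; a win-stay/lose-shift tendency falls out when choice probability tracks this value.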
NASA Astrophysics Data System (ADS)
Brey, S. J.; Fischer, E. V.; Pierce, J. R.; Ford, B.; Lassman, W.; Pfister, G.; Volckens, J.; Gan, R.; Magzamen, S.; Barnes, E. A.
2015-12-01
Exposure to wildfire smoke plumes represents an episodic, uncertain, and potentially growing threat to public health in the western United States. The area burned by wildfires in this region has increased over recent decades, and the future of fires within this region is largely unknown. Future fire emissions are intimately linked to future meteorological conditions, which are uncertain due to the variability of climate model outputs and differences between representative concentration pathways (RCP) scenarios. We know that exposure to wildfire smoke is harmful, particularly for vulnerable populations. However, the literature on the health effects of wildfire smoke exposure is thin, particularly when compared to the depth of information we have on the effects of exposure to smoke of anthropogenic origin. We are exploring the relationships between climate, fires, air quality and public health through multiple interdisciplinary collaborations. We will present several examples from these projects including 1) an analysis of the influence of fire on ozone abundances over the United States, and 2) efforts to use a high-resolution weather forecasting model to nail down exposure within specific smoke plumes. We will also highlight how our team works together. This discussion will include examples of the university structure that facilitates our current collaborations, and the lessons we have learned by seeking stakeholder input to make our science more useful.
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
Bagherpoor, H M; Salmasi, Farzad R
2015-07-01
In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed, for both the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which results in a tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all signals of the closed-loop system remain bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Uncertainty in simulating wheat yields under climate change
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P. J.; Rötter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P. K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.; Izaurralde, R. C.; Kersebaum, K. C.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Osborne, T. M.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M. A.; Shcherbak, I.; Steduto, P.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J. W.; Williams, J. R.; Wolf, J.
2013-09-01
Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
Echolalic Responses by a Child with Autism to Four Experimental Conditions of Sociolinguistic Input.
ERIC Educational Resources Information Center
Violette, Joseph; Swisher, Linda
1992-01-01
The immediate verbal imitations (IVIs) of a boy (age five) with autism and echolalia were studied, with variables of linguistic familiarity and instructor's style of directiveness being manipulated. The occurrence of IVIs was related to uncertain or informative events, and was significantly greater when lexical stimuli were unknown and presented…
Development System for Flexible Assembly System.
1986-02-01
…that is, the estimate shows extreme sensitivity to errors in the input angles in the vicinity of a pole. The approach being investigated is to prerotate the world frame so that none of the uncertain transformations have nominal angles in the vicinity of a pole.
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has gathered more and more attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
NASA Astrophysics Data System (ADS)
Kumar, Awkash; Patil, Rashmi S.; Dikshit, Anil Kumar; Kumar, Rakesh; Brandt, Jørgen; Hertel, Ole
2016-10-01
The accuracy of the results from an air quality model is governed by the quality of the emission and meteorological data inputs in most cases. In the present study, two air quality models were applied for inverse modelling to determine the particulate matter emission strengths of urban and regional sources in and around Mumbai in India. The study starts from an existing emission inventory for Total Suspended Particulate Matter (TSPM). Since the available TSPM inventory is known to be uncertain and incomplete, this study aims to qualify this inventory through an inverse modelling exercise. For use as input to the air quality models, onsite meteorological data were generated using the Weather Research and Forecasting (WRF) model. The regional background concentration arises from regional sources transported in the atmosphere from outside the study domain. The regional background concentrations of particulate matter were obtained from model calculations with the Danish Eulerian Hemispheric Model (DEHM), and these were then used as boundary concentrations in AERMOD calculations of the contribution from local urban sources. The results from the AERMOD calculations were subsequently compared with observed concentrations, and emission correction factors were obtained by best fit of the model results to the observed concentrations. The study showed that emissions had to be up-scaled by between 14 and 55% in order to fit the observed concentrations; this of course assumes that the DEHM model describes a background concentration level of the right magnitude.
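Because Gaussian-plume dispersion models such as AERMOD are linear in the emission rates, a single emission correction factor can be fitted in closed form by least squares. The concentration values below are hypothetical, purely to illustrate the fit:

```python
import numpy as np

# Hypothetical modeled (inventory-based) and observed TSPM concentrations
# at a set of monitoring sites, after subtracting the regional background.
modeled = np.array([42.0, 55.0, 38.0, 61.0])
observed = np.array([55.0, 70.0, 50.0, 78.0])

# Least-squares scaling factor: minimizes ||factor * modeled - observed||^2.
factor = np.dot(modeled, observed) / np.dot(modeled, modeled)
upscaling_percent = 100.0 * (factor - 1.0)
```

A factor above 1 means the inventory's emissions must be up-scaled to reproduce the observations, which is the direction the study found (14 to 55%).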
Mixed Signal Learning by Spike Correlation Propagation in Feedback Inhibitory Circuits
Hiratani, Naoki; Fukai, Tomoki
2015-01-01
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory. PMID:25910189
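The pair-based STDP rule at the heart of such models can be sketched with the standard asymmetric exponential window: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. The amplitudes and time constant below are common illustrative defaults, not the paper's fitted values:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair; dt = t_post - t_pre in ms.

    dt > 0 (pre before post) gives potentiation that decays with |dt|;
    dt < 0 (post before pre) gives depression.
    """
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

dw_causal = float(stdp_dw(10.0))    # pre 10 ms before post -> potentiation
dw_acausal = float(stdp_dw(-10.0))  # post 10 ms before pre -> depression
```

Correlated input spikes repeatedly land in the causal lobe of this window, which is why the propagated correlation discussed in the abstract can either help or hinder learning depending on its timescale.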
Liu, Wei; Huang, Jie
2018-03-01
This paper studies the cooperative global robust output regulation problem for a class of heterogeneous second-order nonlinear uncertain multiagent systems with jointly connected switching networks. The main contributions consist of the following three aspects. First, we generalize the result of the adaptive distributed observer from undirected jointly connected switching networks to directed jointly connected switching networks. Second, by performing a new coordinate and input transformation, we convert our problem into the cooperative global robust stabilization problem of a more complex augmented system via the distributed internal model principle. Third, we solve the stabilization problem by a distributed state feedback control law. Our result is illustrated by the leader-following consensus problem for a group of Van der Pol oscillators.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. A reliability concept is incorporated as the percentage of the total number of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that as the reliability level is reduced, constraint violations increase. Thus ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
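The reliability measure described above (the fraction of ensemble surrogates whose predictions satisfy all imposed constraints for a candidate pumping strategy) reduces to a simple count. The salinity predictions and limit below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical salinity predictions (mg/L) from an ensemble of 10 surrogate
# models for one candidate pumping strategy, at 3 strategic locations.
predictions = np.array([
    [420, 480, 390], [450, 500, 410], [430, 470, 400], [460, 520, 430],
    [440, 490, 405], [435, 485, 395], [455, 515, 425], [445, 495, 415],
    [425, 475, 398], [448, 505, 420],
])
limit = 500.0  # pre-specified maximum allowable salinity

# A surrogate "satisfies" the strategy only if every location is within limits.
satisfied = np.all(predictions <= limit, axis=1)
reliability = satisfied.mean()  # fraction of surrogates with no violation
```

In the multi-objective genetic algorithm, a candidate would be accepted at reliability 0.99 only if nearly every surrogate in the ensemble agrees the constraints hold.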
Laboratory Simulations of Micrometeoroid Ablation
NASA Astrophysics Data System (ADS)
Thomas, Evan Williamson
Each day, several tons of meteoric material enters Earth's atmosphere, the majority of which consist of small dust particles (micrometeoroids) that completely ablate at high altitudes. The dust input has been suggested to play a role in a variety of phenomena including: layers of metal atoms and ions, nucleation of noctilucent clouds, effects on stratospheric aerosols and ozone chemistry, and the fertilization of the ocean with bio-available iron. Furthermore, a correct understanding of the dust input to the Earth provides constraints on inner solar system dust models. Various methods are used to measure the dust input to the Earth including satellite detectors, radar, lidar, rocket-borne detectors, ice core and deep-sea sediment analysis. However, the best way to interpret each of these measurements is uncertain, which leads to large uncertainties in the total dust input. To better understand the ablation process, and thereby reduce uncertainties in micrometeoroid ablation measurements, a facility has been developed to simulate the ablation of micrometeoroids in laboratory conditions. An electrostatic dust accelerator is used to accelerate iron particles to relevant meteoric velocities (10-70 km/s). The particles are then introduced into a chamber pressurized with a target gas, and they partially or completely ablate over a short distance. An array of diagnostics then measure, with timing and spatial resolution, the charge and light that is generated in the ablation process. In this thesis, we present results from the newly developed ablation facility. The ionization coefficient, an important parameter for interpreting meteor radar measurements, is measured for various target gases. Furthermore, experimental ablation measurements are compared to predictions from commonly used ablation models. In light of these measurements, implications to the broader context of meteor ablation are discussed.
Carbon Sequestration by Perennial Energy Crops: Is the Jury Still Out?
Agostini, Francesco; Gregory, Andrew S; Richter, Goetz M
Soil organic carbon (SOC) changes associated with land conversion to energy crops are central to the debate on bioenergy and their potential carbon neutrality. Here, the experimental evidence on SOC under perennial energy crops (PECs) is synthesised to parameterise a whole systems model and to identify uncertainties and knowledge gaps determining PECs being a sink or source of greenhouse gas (GHG). For Miscanthus and willow (Salix spp.) and their analogues (switchgrass, poplar), we examine carbon (C) allocation to above- and belowground residue inputs, turnover rates and retention in the soil. A meta-analysis showed that studies on dry matter partitioning and C inputs to soils are plentiful, whilst data on turnover are rare and rely on few isotopic C tracer studies. Comprehensive studies on SOC dynamics and GHG emissions under PECs are limited and subsoil processes and C losses through leaching remain unknown. Data showed dynamic changes of gross C inputs and SOC stocks depending on stand age. C inputs and turnover can now be specifically parameterised in whole PEC system models, whilst dependencies on soil texture, moisture and temperature remain empirical. In conclusion, the annual net SOC storage change exceeds the minimum mitigation requirement (0.25 Mg C ha^-1 year^-1) under herbaceous and woody perennials by far (1.14 to 1.88 and 0.63 to 0.72 Mg C ha^-1 year^-1, respectively). However, long-term time series of field data are needed to verify sustainable SOC enrichment, as the physical and chemical stabilities of SOC pools remain uncertain, although they are essential in defining the sustainability of C sequestration (half-life >25 years).
NASA Astrophysics Data System (ADS)
Hassan Asemani, Mohammad; Johari Majd, Vahid
2015-12-01
This paper addresses a robust H∞ fuzzy observer-based tracking design problem for uncertain Takagi-Sugeno fuzzy systems with external disturbances. To have a practical observer-based controller, the premise variables of the system are assumed to be not measurable in general, which leads to a more complex design process. The tracker is synthesised based on a fuzzy Lyapunov function approach and non-parallel distributed compensation (non-PDC) scheme. Using the descriptor redundancy approach, the robust stability conditions are derived in the form of strict linear matrix inequalities (LMIs) even in the presence of uncertainties in the system, input, and output matrices simultaneously. Numerical simulations are provided to show the effectiveness of the proposed method.
Transient Stability Assessment of Power Systems With Uncertain Renewable Generation: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villegas Pico, Hugo Nestor; Aliprantis, Dionysios C.; Lin, Xiaojun
2017-08-09
The transient stability of a power system depends heavily on its operational state at the moment of a fault. In systems where the penetration of renewable generation is significant, the dispatch of the conventional fleet of synchronous generators is uncertain at the time of dynamic security analysis. Hence, the assessment of transient stability requires the solution of a system of nonlinear ordinary differential equations with unknown initial conditions and inputs. To this end, we set forth a computational framework that relies on Taylor polynomials, where variables are associated with the level of renewable generation. This paper describes the details of the method and illustrates its application on a nine-bus test system.
Tahoun, A H
2017-01-01
In this paper, the stabilization problem of actuator saturation in uncertain chaotic systems is investigated via an adaptive PID control method. The PID control parameters are auto-tuned adaptively via adaptive control laws. A multi-level augmented error is designed to account for the extra terms appearing due to the use of PID control and saturation. The proposed control technique uses both the state-feedback and the output-feedback methodologies. Based on Lyapunov's stability theory, new anti-windup adaptive controllers are proposed. Demonstrative examples are studied with MATLAB simulations. The simulation results show the efficiency of the proposed adaptive PID controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
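The back-calculation flavor of anti-windup can be sketched with a minimal fixed-gain example, assuming a hypothetical first-order plant and gains; the paper's controllers auto-tune the PID parameters adaptively, which is not reproduced here:

```python
def saturate(u, u_min, u_max):
    return min(max(u, u_min), u_max)

class AntiWindupPID:
    """PID with back-calculation anti-windup (illustrative; the gains and
    anti-windup coefficient kaw below are hypothetical, not the paper's
    adaptively tuned values)."""
    def __init__(self, kp, ki, kd, u_min, u_max, kaw, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max, self.kaw, self.dt = u_min, u_max, kaw, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u_raw = self.kp * err + self.ki * self.integral + self.kd * deriv
        u_sat = saturate(u_raw, self.u_min, self.u_max)
        # back-calculation: bleed off the integrator while the actuator saturates
        self.integral += self.kaw * (u_sat - u_raw) * self.dt
        return u_sat

# regulate a first-order plant x' = -x + u toward the setpoint r = 1
pid = AntiWindupPID(kp=2.0, ki=1.0, kd=0.05, u_min=-1.5, u_max=1.5, kaw=1.0, dt=0.01)
x, r = 0.0, 1.0
for _ in range(2000):
    u = pid.step(r - x)
    x += (-x + u) * 0.01
print(round(x, 3))
```

Without the back-calculation term, a long saturation episode would leave a large integral that causes overshoot once the actuator unsaturates; bleeding the integrator keeps the response well behaved.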
NASA Astrophysics Data System (ADS)
Baker, A. R.; Lesworth, T.; Adams, C.; Jickells, T. D.; Ganzeveld, L.
2010-09-01
Atmospheric nitrogen inputs to the ocean are estimated to have increased by up to a factor of three as a result of increased anthropogenic emissions over the last 150 years, with further increases expected in the short- to mid-term at least. Such estimates are largely based on emissions and atmospheric transport modeling, because, apart from a few island sites, there is very little observational data available for atmospheric nitrogen concentrations over the remote ocean. Here we use samples of rainwater and aerosol we obtained during 12 long-transect cruises across the Atlantic Ocean between 50°N and 50°S as the basis for a climatological estimate of nitrogen inputs to the basin. The climatology is for the 5 years 2001-2005, during which almost all of the cruises took place, and includes dry and wet deposition of nitrate and ammonium explicitly, together with a more uncertain estimate of soluble organic nitrogen deposition. Our results indicate that nitrogen inputs into the region were ~850-1420 Gmol (12-20 Tg) N yr⁻¹, with ~78-85% of this in the form of wet deposition. Inputs were greater in the Northern Hemisphere and in wet regions, and wet regions had a greater proportion of input via wet deposition. The largest uncertainty in our estimate of dry inputs is associated with variability in deposition velocities, while the largest uncertainty in our wet nitrogen input estimate is due to the limited amount and uneven geographic distribution of observational data. We also estimate a lower limit of dry deposition of phosphate to be ~0.19 Gmol P yr⁻¹, using data from the same cruises. We compare our results to several recent estimates of N and P deposition to the Atlantic and discuss the likely sources of uncertainty, such as the potential seasonal bias introduced by our sampling, on our climatology.
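The way dry and wet fluxes combine into a total input can be sketched numerically; every value below (deposition velocity, concentrations, rainfall) is a hypothetical order-of-magnitude stand-in, not the cruise data:

```python
# Dry flux = deposition velocity x airborne concentration; wet flux =
# precipitation rate x rainwater concentration. All numbers are hypothetical
# illustrations, not the observed Atlantic values.
v_d_aerosol = 0.1e-2          # dry deposition velocity, m/s (rough guess)
c_nitrate   = 10e-9           # aerosol nitrate concentration, mol/m^3
seconds_per_year = 3.15e7
dry_flux = v_d_aerosol * c_nitrate * seconds_per_year   # mol m^-2 yr^-1

precip = 1.0                  # precipitation, m/yr
c_rain = 5e-3                 # rainwater nitrate, mol/m^3
wet_flux = precip * c_rain    # mol m^-2 yr^-1

total = dry_flux + wet_flux
wet_fraction = wet_flux / total
print(round(wet_fraction, 2))
```

Even with these toy numbers, wet deposition dominates the total, which is consistent in spirit with the abstract's finding that most nitrogen input arrives via wet deposition.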
Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.
Chang, Yeong-Chan
2009-02-01
This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller will be developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, the adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables but does not depend on the tuning approximation parameters. Furthermore, we extend the design to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudoinverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudoinverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
Zhang, Yajun; Chai, Tianyou; Wang, Hong
2011-11-01
This paper presents a novel nonlinear control strategy for a class of uncertain single-input and single-output discrete-time nonlinear systems with unstable zero-dynamics. The proposed method combines an adaptive-network-based fuzzy inference system (ANFIS) with multiple models, where a linear robust controller, an ANFIS-based nonlinear controller and a switching mechanism are integrated using the multiple models technique. It has been shown that the linear controller can ensure the boundedness of the input and output signals and the nonlinear controller can improve the dynamic performance of the closed-loop system. Moreover, it has also been shown that the use of the switching mechanism can simultaneously guarantee the closed-loop stability and improve its performance. As a result, the controller has the following three outstanding features compared with existing control strategies. First, this method relaxes the assumption of commonly-used uniform boundedness on the unmodeled dynamics and thus enhances its applicability. Second, since ANFIS is used to estimate and compensate the effect caused by the unmodeled dynamics, the convergence rate of neural network learning has been increased. Third, a "one-to-one mapping" technique is adopted to guarantee the universal approximation property of ANFIS. The proposed controller is applied to a numerical example and a pulverizing process of an alumina sintering system, respectively, where its effectiveness has been justified.
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
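For the simplest ingredient of such models, determining attribute weights under interval bounds, the single-constraint LP can even be solved greedily. The sketch below uses hypothetical attribute scores and interval weight bounds; it is not the paper's full model, which integrates three uncertain preference structures:

```python
def interval_weight_lp(c, lo, hi):
    """Maximize sum(c[i]*w[i]) subject to lo[i] <= w[i] <= hi[i] and
    sum(w) = 1 (assumed feasible). Greedy solution of this one-constraint
    LP: start every weight at its lower bound, then push the remaining
    mass onto the attributes with the largest coefficients first."""
    w = list(lo)
    slack = 1.0 - sum(lo)
    for i in sorted(range(len(c)), key=lambda i: -c[i]):
        add = min(hi[i] - lo[i], slack)
        w[i] += add
        slack -= add
    return w

# three attributes with hypothetical scores and interval weight information
scores = [0.8, 0.5, 0.3]
lo = [0.2, 0.1, 0.1]
hi = [0.6, 0.5, 0.4]
w = interval_weight_lp(scores, lo, hi)
print([round(x, 3) for x in w])
```

The attribute with the highest score is driven to its upper bound, the lowest-scoring one stays at its lower bound, and the weights still sum to one.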
Guymon, Gary L.; Yen, Chung-Cheng
1990-01-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
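The two-point probability idea can be sketched as follows: evaluate the deterministic model at the mean-plus/minus-one-standard-deviation corners of the uncertain variables and form output statistics from those evaluations. The water-table function and all numbers below are hypothetical stand-ins for the paper's two-layer groundwater model:

```python
from itertools import product

def two_point_estimate(model, means, stds):
    """Rosenblueth-style two-point estimate: run the deterministic model at
    mu +/- sigma for every uncertain input and average the 2^n corner
    evaluations to approximate the output mean and variance."""
    outs = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        outs.append(model(x))
    mean = sum(outs) / len(outs)
    var = sum((o - mean) ** 2 for o in outs) / len(outs)
    return mean, var

# toy stand-in for a water-table model: head as a (hypothetical) function of
# hydraulic conductivity K, specific yield Sy, and a net source-sink term R
def head(x):
    K, Sy, R = x
    return 100.0 + R / (K * Sy)

m, v = two_point_estimate(head, means=[2.0, 0.2, 0.4], stds=[0.5, 0.05, 0.1])
cv = v ** 0.5 / m   # coefficient of variation of the simulated water table
print(round(m, 2), round(cv, 3))
```

With three lumped uncertain variables the method needs only 2³ = 8 model runs, which is why lumping cuts the computational effort so sharply.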
Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.
Mazandarani, Mehran; Pariz, Naser
2018-05-01
This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative; and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as the fuzzy state variables feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain some eigenvalues sets that might be different from the inherent eigenvalues set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in longitudinal direction with uncertain initial conditions and parameters is gained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty
NASA Astrophysics Data System (ADS)
Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time persisting statistical characteristics of a given variable as inferred by historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated to any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design, thus alleviating the risk of mis-adaptation, namely the design of a solution fully adapted to a scenario that is different from the one that will actually occur.
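The sFQI idea of learning over a scenario-augmented state can be caricatured on a toy reservoir. Everything below (the dynamics, the reward, the per-cell-average "regressor") is a hypothetical stand-in for the paper's system and function approximator:

```python
import random

# Toy reservoir: storage s in {0..4}, release a in {0,1,2}; the uncertain
# inflow scenario theta in {0,1} (dry/wet) is appended to the state, as in
# sFQI. The reward penalizes deviation of the release from a demand of 1.
def step(s, a, theta):
    inflow = theta + random.choice([0, 1])
    s2 = max(0, min(4, s - a + inflow))
    reward = -abs(a - 1)
    return s2, reward

def fitted_q_iteration(n_iters=30, gamma=0.9, n_samples=2000):
    random.seed(0)
    # one batch of transitions sampled over states x actions x scenarios
    batch = []
    for _ in range(n_samples):
        s, a, th = random.randrange(5), random.randrange(3), random.randrange(2)
        s2, r = step(s, a, th)
        batch.append((s, a, th, r, s2))
    Q = {}
    for _ in range(n_iters):
        targets, counts = {}, {}
        for s, a, th, r, s2 in batch:
            best_next = max(Q.get((s2, b, th), 0.0) for b in range(3))
            key = (s, a, th)
            targets[key] = targets.get(key, 0.0) + r + gamma * best_next
            counts[key] = counts.get(key, 0) + 1
        # "regression" step reduced to a per-cell average for this tiny example
        Q = {key: targets[key] / counts[key] for key in targets}
    return Q

Q = fitted_q_iteration()
policy = {(s, th): max(range(3), key=lambda a: Q.get((s, a, th), -1e9))
          for s in range(5) for th in range(2)}
print(policy[(2, 0)], policy[(2, 1)])
```

A single learning pass yields one action-value function from which a policy can be read off for any scenario value theta, which is the structural point of the augmented state.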
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
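The final step, a constrained discrete optimization over test counts, can be sketched by exhaustive enumeration; the cost figures and the variance model below are hypothetical, not the paper's MEMS values:

```python
from itertools import product

# Toy stand-in for the framework: assume prediction uncertainty shrinks with
# more calibration tests (n_c) and more validation tests (n_v). Both the
# functional form and the costs are hypothetical illustrations.
def prediction_uncertainty(n_c, n_v):
    return 1.0 / (1.0 + 0.8 * n_c) + 0.5 / (1.0 + 0.4 * n_v)

cost = {"cal": 3.0, "val": 5.0}
budget = 30.0

best = None
for n_c, n_v in product(range(11), range(7)):
    if n_c * cost["cal"] + n_v * cost["val"] > budget:
        continue   # allocation exceeds the experimental budget
    u = prediction_uncertainty(n_c, n_v)
    if best is None or u < best[0]:
        best = (u, n_c, n_v)

u, n_c, n_v = best
print(n_c, n_v, round(u, 3))
```

Spending the whole budget on one test type is suboptimal here; the minimum-uncertainty allocation mixes calibration and validation tests, which mirrors the cost-versus-importance trade-off the paper formalizes.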
NASA Astrophysics Data System (ADS)
Kim, Nakwan
Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.
Chou, Ting-Shuo; Bucci, Liam D.; Krichmar, Jeffrey L.
2015-01-01
Neurorobots enable researchers to study how behaviors are produced by neural mechanisms in an uncertain, noisy, real-world environment. To investigate how the somatosensory system processes noisy, real-world touch inputs, we introduce a neurorobot called CARL-SJR, which has a full-body tactile sensory area. The design of CARL-SJR is such that it encourages people to communicate with it through gentle touch. CARL-SJR provides feedback to users by displaying bright colors on its surface. In the present study, we show that CARL-SJR is capable of learning associations between conditioned stimuli (CS; a color pattern on its surface) and unconditioned stimuli (US; a preferred touch pattern) by applying a spiking neural network (SNN) with neurobiologically inspired plasticity. Specifically, we modeled the primary somatosensory cortex, prefrontal cortex, striatum, and the insular cortex, which is important for hedonic touch, to process noisy data generated directly from CARL-SJR's tactile sensory area. To facilitate learning, we applied dopamine-modulated Spike Timing Dependent Plasticity (STDP) to our simulated prefrontal cortex, striatum, and insular cortex. To cope with noisy, varying inputs, the SNN was tuned to produce traveling waves of activity that carried spatiotemporal information. Despite the noisy tactile sensors, spike trains, and variations in subject hand swipes, the learning was quite robust. Further, insular cortex activities in the incremental pathway of the dopaminergic reward system allowed us to control CARL-SJR's preference for touch direction without heavily pre-processed inputs. The behaviors that emerged in this model match animals' behaviors, wherein they prefer touch in particular areas and directions. Thus, the results in this paper could serve as an explanation of the underlying neural mechanisms for developing tactile preferences and hedonic touch. PMID:26257639
NASA Astrophysics Data System (ADS)
Petersen, Ø. W.; Øiseth, O.; Nord, T. S.; Lourens, E.
2018-07-01
Numerical predictions of the dynamic response of complex structures are often uncertain due to uncertainties inherited from the assumed load effects. Inverse methods can estimate the true dynamic response of a structure through system inversion, combining measured acceleration data with a system model. This article presents a case study of full-field dynamic response estimation of a long-span floating bridge: the Bergøysund Bridge in Norway. This bridge is instrumented with a network of 14 triaxial accelerometers. The system model consists of 27 vibration modes with natural frequencies below 2 Hz, obtained from a tuned finite element model that takes the fluid-structure interaction with the surrounding water into account. Two methods, a joint input-state estimation algorithm and a dual Kalman filter, are applied to estimate the full-field response of the bridge. The results demonstrate that the displacements and the accelerations can be estimated at unmeasured locations with reasonable accuracy when the wave loads are the dominant source of excitation.
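The input-estimation idea can be sketched with a single vibration mode and an augmented-state Kalman filter, a simplified cousin of the joint input-state and dual Kalman schemes used in the article; the mode, noise levels, and load history below are all hypothetical:

```python
import numpy as np

# One mode driven by an unknown load p(t); the load is modeled as a random
# walk and appended to the state vector, then estimated by a standard Kalman
# filter. All numbers are illustrative, not the Bergøysund Bridge model.
dt, wn, zeta = 0.01, 2 * np.pi * 1.0, 0.05      # 1 Hz mode, 5% damping
A = np.array([[1.0, dt, 0.0],
              [-wn**2 * dt, 1.0 - 2 * zeta * wn * dt, dt],
              [0.0, 0.0, 1.0]])                 # states: [disp, vel, load]
H = np.array([[1.0, 0.0, 0.0]])                 # measure displacement only
Q = np.diag([1e-10, 1e-10, 1e-4])               # let the load state wander
R = np.array([[1e-6]])

rng = np.random.default_rng(1)
x_true = np.zeros(2)
x_hat = np.zeros(3)
P = np.eye(3)
errs = []
for k in range(3000):
    p = 1.0 if k * dt > 1.0 else 0.0            # true load: a step at t = 1 s
    # simulate the true mode (explicit Euler)
    x_true = np.array([x_true[0] + dt * x_true[1],
                       x_true[1] + dt * (-wn**2 * x_true[0]
                                         - 2 * zeta * wn * x_true[1] + p)])
    y = x_true[0] + rng.normal(0.0, 1e-3)
    # Kalman predict/update on the augmented state
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (y - H @ x_hat)).ravel()
    P = (np.eye(3) - K @ H) @ P
    if k * dt > 10.0:
        errs.append(abs(x_hat[2] - p))
print(round(float(np.mean(errs)), 3))
```

The filter recovers both the unmeasured load and the full state from a single displacement channel, which is the mechanism that lets response be estimated at unmeasured locations once a modal model is available.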
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for the interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on a subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e., transposable to most models with climatic data inputs), and can be combined with most "off-the-shelf" optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performances even for limited computational budgets, outperforming significantly standard strategies. PMID:28542198
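The subset-selection step can be sketched with a farthest-point heuristic over synthetic series; the distance function and data below are hypothetical stand-ins for the paper's ad-hoc similarity and clustering:

```python
import random

# Pick k representative climate series from a larger basis so that every
# series is close to some chosen representative. The synthetic series fall
# into four well-separated groups (i % 4), standing in for climate regimes.
random.seed(3)
n_series, length, k = 40, 12, 4
series = [[random.gauss(10 + 8 * (i % 4), 1.0) for _ in range(length)]
          for i in range(n_series)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

chosen = [0]
while len(chosen) < k:
    # farthest-point heuristic: add the series worst covered so far
    far = max(range(n_series),
              key=lambda i: min(dist(series[i], series[j]) for j in chosen))
    chosen.append(far)

# worst-case distance from any series to its nearest representative
coverage = max(min(dist(series[i], series[j]) for j in chosen)
               for i in range(n_series))
print(sorted(i % 4 for i in chosen), round(coverage, 1))
```

The heuristic ends up picking one series per regime, so downstream simulations can be averaged over k series instead of the full basis.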
Adaptive route choice modeling in uncertain traffic networks with real-time information.
DOT National Transportation Integrated Search
2013-03-01
The objective of the research is to study travelers' route choice behavior in uncertain traffic networks : with real-time information. The research is motivated by two observations of the traffic system: 1) : the system is inherently uncertain with r...
Multilateral Telecoordinated Control of Multiple Robots With Uncertain Kinematics.
Zhai, Di-Hua; Xia, Yuanqing
2017-06-06
This paper addresses the telecoordinated control of multiple robots in the simultaneous presence of asymmetric time-varying delays, nonpassive external forces, and uncertain kinematics/dynamics. To achieve the control objective, a neuroadaptive controller with utilizing prescribed performance control and switching control technique is developed, where the basic idea is to employ the concept of motion synchronization in each pair of master-slave robots and among all slave robots. By using the multiple Lyapunov-Krasovskii functionals method, the state-independent input-to-output practical stability of the closed-loop system is established. Compared with the previous approaches, the new design is straightforward and easier to implement and is applicable to a wider area. Simulation results on three pairs of three degrees-of-freedom robots confirm the theoretical findings.
Combining surface reanalysis and remote sensing data for monitoring evapotranspiration
Marshall, M.; Tu, K.; Funk, C.; Michaelsen, J.; Williams, Pat; Williams, C.; Ardö, J.; Marie, B.; Cappelaere, B.; Grandcourt, A.; Nickless, A.; Noubellon, Y.; Scholes, R.; Kutsch, W.
2012-01-01
Climate change is expected to have the greatest impact on the world's poor. In the Sahel, a climatically sensitive region where rain-fed agriculture is the primary livelihood, expected decreases in water supply will increase food insecurity. Studies on climate change and the intensification of the water cycle in sub-Saharan Africa are few. This is due in part to poor calibration of modeled actual evapotranspiration (AET), a key input in continental-scale hydrologic models. In this study, a model driven by dynamic canopy AET was combined with the Global Land Data Assimilation System realization of the NOAH Land Surface Model (GNOAH) wet canopy and soil AET for monitoring purposes in sub-Saharan Africa. The performance of the hybrid model was compared against AET from the GNOAH model and dynamic model using eight eddy flux towers representing major biomes of sub-Saharan Africa. The greatest improvements in model performance are at humid sites with dense vegetation, while performance at semi-arid sites is poor, but better than individual models. The reduction in errors using the hybrid model can be attributed to the integration of a dynamic vegetation component with land surface model estimates, improved model parameterization, and reduction of multiplicative effects of uncertain data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14-16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence are computed.
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
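The two analysis ingredients, Monte Carlo uncertainty propagation and standardized regression coefficients (SRC), can be sketched on a toy stand-in for the crystallization model; the linear response and parameter distributions below are hypothetical, not the KDP kinetics:

```python
import random

# Toy stand-in: final mean crystal size as a (hypothetical) linear function
# of a nucleation order b and a growth order g, plus noise. SRCs then rank
# which uncertain parameter drives the output variance.
random.seed(42)

def model(b, g):
    return 50.0 + 30.0 * g - 12.0 * b + random.gauss(0.0, 0.5)

n = 2000
bs = [random.gauss(1.5, 0.15) for _ in range(n)]   # nucleation order samples
gs = [random.gauss(1.0, 0.10) for _ in range(n)]   # growth order samples
ys = [model(b, g) for b, g in zip(bs, gs)]          # Monte Carlo propagation

def mean(v):
    return sum(v) / len(v)

def std(v):
    m = mean(v)
    return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5

def src(xs, ys):
    """Standardized regression coefficient for one input. The inputs are
    sampled independently here, so per-input simple regression is valid."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (std(xs) * std(ys))

print(round(src(bs, ys), 2), round(src(gs, ys), 2))
```

The SRC magnitudes rank the growth order as the dominant contributor in this toy setup, and the signs show the direction of each effect; with independent inputs the squared SRCs approximately partition the output variance.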
Glutamatergic model psychoses: prediction error, learning, and inference.
Corlett, Philip R; Honey, Garry D; Krystal, John H; Fletcher, Paul C
2011-01-01
Modulating glutamatergic neurotransmission induces alterations in conscious experience that mimic the symptoms of early psychotic illness. We review studies that use intravenous administration of ketamine, focusing on interindividual variability in the profundity of the ketamine experience. We consider this individual variability within a hypothetical model of brain and cognitive function centered upon learning and inference. Within this model, the brain, its neural systems, and even single neurons specify expectations about their inputs and respond to violations of those expectations with new learning that renders future inputs more predictable. We argue that ketamine temporarily deranges this ability by perturbing both the ways in which prior expectations are specified and the ways in which expectancy violations are signaled. We suggest that the former effect is predominantly mediated by NMDA blockade and the latter by augmented and inappropriate feedforward glutamatergic signaling. We suggest that the observed interindividual variability emerges from individual differences in the neural circuits that normally underpin the learning and inference processes described. The exact source of that variability is uncertain, although it is likely to arise not only from genetic variation but also from subjects' previous experiences and prior learning. Furthermore, we argue that chronic, unlike acute, NMDA blockade alters the specification of expectancies more profoundly and permanently. Scrutinizing individual differences in the effects of acute and chronic ketamine administration in the context of the Bayesian brain model may generate new insights about the symptoms of psychosis, their underlying cognitive processes, and their neurocircuitry.
NASA Astrophysics Data System (ADS)
Balasubramanian, S.; Nelson, A. J.; Koloutsou-Vakakis, S.; Lin, J.; Myles, L.; Rood, M. J.
2016-12-01
Biogeochemical models such as DeNitrification DeComposition (DNDC) are used to model greenhouse and other trace gas fluxes (e.g., ammonia (NH3)) from agricultural ecosystems. NH3 is of interest to air quality because it is a precursor to ambient particulate matter. NH3 fluxes from chemical fertilizer application are uncertain due to their dependence on local weather and soil properties and on farm nitrogen management practices. DNDC can be advantageously implemented to model the underlying spatial and temporal trends to support air quality modeling. However, such implementation requires a detailed evaluation of model predictions and model behavior. This is the first study to assess DNDC predictions of NH3 fluxes to/from the atmosphere, from chemical fertilizer application, during an entire crop growing season in the United States. Relaxed eddy accumulation (REA) measurements over corn in Central Illinois in 2014 were used to evaluate magnitude and trends in modeled NH3 fluxes. DNDC was able to replicate both magnitude and trends in measured NH3 fluxes, with greater accuracy during the initial 33 days after application, when NH3 was mostly emitted to the atmosphere. However, poorer performance was observed when depositional fluxes were measured. Sensitivity analysis using Monte Carlo simulations indicated that modeled NH3 fluxes were most sensitive to input air temperature and precipitation; soil organic carbon, field capacity, and pH; and fertilizer loading rate, timing, application depth, and tilling date. By constraining these inputs for conditions in Central Illinois, uncertainty in annual NH3 fluxes was estimated to vary from -87% to 61%. Results from this study provide insight to further improve DNDC predictions and inform efforts for upscaling site predictions to the regional scale for the development of emission inventories for air quality modeling.
Accounting for indirect land-use change in the life cycle assessment of biofuel supply chains.
Sanchez, Susan Tarka; Woods, Jeremy; Akhurst, Mark; Brander, Matthew; O'Hare, Michael; Dawson, Terence P; Edwards, Robert; Liska, Adam J; Malpas, Rick
2012-06-07
The expansion of land used for crop production causes variable direct and indirect greenhouse gas emissions, and other economic, social and environmental effects. We analyse the use of life cycle analysis (LCA) for estimating the carbon intensity of biofuel production from indirect land-use change (ILUC). Two approaches are critiqued: direct, attributional life cycle analysis and consequential life cycle analysis (CLCA). A proposed hybrid 'combined model' of the two approaches for ILUC analysis relies on first defining the system boundary of the resulting full LCA. Choices are then made as to the modelling methodology (economic equilibrium or cause-effect), data inputs, land area analysis, carbon stock accounting and uncertainty analysis to be included. We conclude that CLCA is applicable for estimating the historic emissions from ILUC, although improvements to the hybrid approach proposed, coupled with regular updating, are required, and uncertainty values must be adequately represented; however, the scope and the depth of the expansion of the system boundaries required for CLCA remain controversial. In addition, robust prediction, monitoring and accounting frameworks for the dynamic and highly uncertain nature of future crop yields and the effectiveness of policies to reduce deforestation and encourage afforestation remain elusive. Finally, establishing compatible and comparable accounting frameworks for ILUC between the USA, the European Union, South East Asia, Africa, Brazil and other major biofuel trading blocs is urgently needed if substantial distortions between these markets, which would reduce its application in policy outcomes, are to be avoided.
Ecosystem carbon storage and flux in upland/peatland watersheds in northern Minnesota. Chapter 9.
David F. Grigal; Peter C. Bates; Randall K. Kolka
2011-01-01
Carbon (C) storage and fluxes (inputs and outputs of C per unit time) are central issues in global change. Spatial patterns of C storage on the landscape, both that in soil and in biomass, are important from an inventory perspective and for understanding the biophysical processes that affect C fluxes. Regional and national estimates of C storage are uncertain because...
An Artificial Bee Colony Algorithm for Uncertain Portfolio Selection
Chen, Wei
2014-01-01
Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem which takes transaction costs and diversification degree of portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return is used to measure investment return, the uncertain variance of the return is used to measure investment risk, and the entropy is used to measure diversification degree of portfolio. In order to solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm. PMID:25089292
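A compact sketch of the artificial bee colony idea applied to a mean-variance-entropy objective. All numbers (returns, variances, risk and diversification weights) are hypothetical, assets are assumed uncorrelated, and transaction costs are omitted; this is a generic ABC loop, not the authors' modified variant.

```python
import math
import random

mu  = [0.08, 0.12, 0.15, 0.10]   # hypothetical expert-given mean returns
var = [0.01, 0.05, 0.09, 0.03]   # hypothetical return variances
lam, gam = 2.0, 0.05             # risk aversion, diversification weight

def objective(w):
    ret  = sum(wi * mi for wi, mi in zip(w, mu))
    risk = sum(wi * wi * vi for wi, vi in zip(w, var))  # uncorrelated assets assumed
    ent  = -sum(wi * math.log(wi) for wi in w if wi > 1e-12)
    return ret - lam * risk + gam * ent

def repair(w):
    """Project back onto the simplex: non-negative weights summing to one."""
    w = [max(wi, 0.0) for wi in w]
    s = sum(w) or 1.0
    return [wi / s for wi in w]

def abc_optimize(n_bees=20, limit=10, iters=200, seed=1):
    random.seed(seed)
    n = len(mu)
    foods = [repair([random.random() for _ in range(n)]) for _ in range(n_bees)]
    trials = [0] * n_bees

    def neighbor(i):
        j, d = random.randrange(n_bees), random.randrange(n)
        v = foods[i][:]
        v[d] += random.uniform(-1, 1) * (foods[i][d] - foods[j][d])
        return repair(v)

    best = max(foods, key=objective)
    for _ in range(iters):
        for i in range(n_bees):                     # employed bees: local search
            v = neighbor(i)
            if objective(v) > objective(foods[i]):
                foods[i], trials[i] = v, 0
            else:
                trials[i] += 1
        fits = [objective(f) for f in foods]        # onlookers: fitness-biased search
        lo = min(fits)
        probs = [f - lo + 1e-9 for f in fits]
        tot = sum(probs)
        for _ in range(n_bees):
            r, acc = random.random() * tot, 0.0
            for i, p in enumerate(probs):
                acc += p
                if acc >= r:
                    break
            v = neighbor(i)
            if objective(v) > objective(foods[i]):
                foods[i], trials[i] = v, 0
        for i in range(n_bees):                     # scouts: abandon stale sources
            if trials[i] > limit:
                foods[i], trials[i] = repair([random.random() for _ in range(n)]), 0
        cand = max(foods, key=objective)
        if objective(cand) > objective(best):
            best = cand
    return best

w = abc_optimize()
```

The entropy term rewards diversification, so the optimizer should spread weight across assets rather than concentrate on the highest-return one.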
Uncertainty in Simulating Wheat Yields Under Climate Change
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.;
2013-01-01
Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models [1,3] are difficult [4]. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
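The crop-model-versus-climate-model uncertainty comparison can be illustrated with a toy variance decomposition over a grid of impact projections; all numbers below are hypothetical.

```python
# Hypothetical yield-change projections (%) for 4 crop models x 3 GCMs.
impacts = [
    [-12.0, -10.0, -11.0],   # crop model A
    [ -4.0,  -2.0,  -3.0],   # crop model B
    [ -8.0,  -6.0,  -7.0],   # crop model C
    [ -1.0,   1.0,   0.0],   # crop model D
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Variance among crop models (each averaged over GCMs) ...
crop_means = [sum(row) / len(row) for row in impacts]
var_crop = variance(crop_means)

# ... versus variance among GCMs (each averaged over crop models).
ncols = len(impacts[0])
gcm_means = [sum(row[j] for row in impacts) / len(impacts) for j in range(ncols)]
var_gcm = variance(gcm_means)
```

In this constructed grid the spread between crop models dwarfs the spread between climate models, mirroring the paper's qualitative finding; a full analysis would also account for the interaction term.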
How important are the descriptions of vegetation in distributed hydrologic models?
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Thober, Stephan; Zink, Matthias; Rakovec, Oldrich; Samaniego, Luis
2016-04-01
The land surface transforms incoming, absorbed radiation into other energy forms and radiation with longer wavelengths. The land surface emits long-wave radiation, stores energy in the soil, the biomass and the air in the boundary layer, and exchanges sensible and latent heat with the atmosphere. The latter, latent heat, consists of evaporation from the soil and canopy and of transpiration by plants. In this picture, plants enhance the absorption of incoming radiation and decrease the resistance to evaporation of deeper soil water. Transpiration by plants is therefore either energy-limited by low incoming radiation or water-limited by low soil moisture. In the extreme cases, all available energy will be used for evapotranspiration in cold regions and all available water will be used for evapotranspiration in arid regions. Very simple formulations of latent heat, which include plant processes only very indirectly, work well in hydrologic models for these limiting cases. These simple formulations also seem to work surprisingly well in temperate regions. Hydrologic models have, however, considerable problems in semi-arid regions, where the vegetation influence on latent heat should be largest. The models also face many more problems in these regions. For example, data scarcity in the Mediterranean leads to very large model uncertainty due to the forcing data. Water supply is also often heavily regulated in semi-arid regions, so variability in river discharge can be largely driven by anthropogenic influence rather than by natural meteorological variations. Here we will show, for Europe, the areas and times where the descriptions of plant processes are important for hydrologic models. We will compare differences in model uncertainties that come from 1. different formulations of evapotranspiration, 2. different descriptions of soil-plant interactions, and 3. uncertainty in the model's input data.
It can be seen that model uncertainty stemming from uncertain input data is similar in magnitude to, or larger than, the uncertainty coming from the descriptions of vegetation in the models. Acquisition of better input data should thus go hand in hand with more sophisticated descriptions of the land surface.
Large uncertainty in carbon uptake potential of land-based climate-change mitigation efforts.
Krause, Andreas; Pugh, Thomas A M; Bayer, Anita D; Li, Wei; Leung, Felix; Bondeau, Alberte; Doelman, Jonathan C; Humpenöder, Florian; Anthoni, Peter; Bodirsky, Benjamin L; Ciais, Philippe; Müller, Christoph; Murray-Tortarolo, Guillermo; Olin, Stefan; Popp, Alexander; Sitch, Stephen; Stehfest, Elke; Arneth, Almut
2018-07-01
Most climate mitigation scenarios involve negative emissions, especially those that aim to limit global temperature increase to 2°C or less. However, the carbon uptake potential in land-based climate change mitigation efforts is highly uncertain. Here, we address this uncertainty by using two land-based mitigation scenarios from two land-use models (IMAGE and MAgPIE) as input to four dynamic global vegetation models (DGVMs; LPJ-GUESS, ORCHIDEE, JULES, LPJmL). Each of the four combinations of land-use models and mitigation scenarios aimed for a cumulative carbon uptake of ~130 GtC by the end of the century, achieved either via the cultivation of bioenergy crops combined with carbon capture and storage (BECCS) or avoided deforestation and afforestation (ADAFF). Results suggest large uncertainty in simulated future land demand and carbon uptake rates, depending on the assumptions related to land use and land management in the models. Total cumulative carbon uptake in the DGVMs is highly variable across mitigation scenarios, ranging between 19 and 130 GtC by year 2099. Only one out of the 16 combinations of mitigation scenarios and DGVMs achieves an equivalent or higher carbon uptake than achieved in the land-use models. The large differences in carbon uptake between the DGVMs and their discrepancy against the carbon uptake in IMAGE and MAgPIE are mainly due to different model assumptions regarding bioenergy crop yields and due to the simulation of soil carbon response to land-use change. Differences between land-use models and DGVMs regarding forest biomass and the rate of forest regrowth also have an impact, albeit smaller, on the results. Given the low confidence in simulated carbon uptake for a given land-based mitigation scenario, and that negative emissions simulated by the DGVMs are typically lower than assumed in scenarios consistent with the 2°C target, relying on negative emissions to mitigate climate change is a highly uncertain strategy. 
© 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
NASA Astrophysics Data System (ADS)
Matichuk, R.; Tonnesen, G.; Luecken, D.; Roselle, S. J.; Napelenok, S. L.; Baker, K. R.; Gilliam, R. C.; Misenis, C.; Murphy, B.; Schwede, D. B.
2015-12-01
The western United States is an important source of domestic energy resources. One of the primary environmental impacts associated with oil and natural gas production is related to air emission releases of a number of air pollutants. Some of these pollutants are important precursors to the formation of ground-level ozone. To better understand ozone impacts and other air quality issues, photochemical air quality models are used to simulate the changes in pollutant concentrations in the atmosphere on local, regional, and national spatial scales. These models are important for air quality management because they assist in identifying source contributions to air quality problems and designing effective strategies to reduce harmful air pollutants. The success of predicting oil and natural gas air quality impacts depends on the accuracy of the input information, including emissions inventories, meteorological information, and boundary conditions. The treatment of chemical and physical processes within these models is equally important. However, given the limited amount of data collected for oil and natural gas production emissions in the past and the complex terrain and meteorological conditions in western states, the ability of these models to accurately predict pollution concentrations from these sources is uncertain. Therefore, this presentation will focus on understanding the Community Multiscale Air Quality (CMAQ) model's ability to predict air quality impacts associated with oil and natural gas production and its sensitivity to input uncertainties. The results will focus on winter ozone issues in the Uinta Basin, Utah and identify the factors contributing to model performance issues. The results of this study will help support future air quality model development, policy and regulatory decisions for the oil and gas sector.
A Web-Based Modelling Platform for Interactive Exploration of Regional Responses to Global Change
NASA Astrophysics Data System (ADS)
Holman, I.
2014-12-01
Climate change adaptation is a complex human-environmental problem that is framed by uncertainty in impacts and in the adaptation choices available, but is also bounded by real-world constraints such as future resource availability and environmental and institutional capacities. Educating the next generation of informed decision-makers, who will need to respond knowledgeably to global climate change impacts, requires giving them access to information that is credible, accurate, easy to understand, and appropriate. However, available resources are too often produced by inaccessible models, for scenario simulations chosen by researchers, which hinders exploration and enquiry. This paper describes the interactive exploratory web-based CLIMSAVE Integrated Assessment (IA) Platform (www.climsave.eu/iap), which aims to democratise climate change impacts, adaptation and vulnerability modelling. The regional version of the Platform contains linked simulation models (of the urban, agriculture, forestry, water and biodiversity sectors), probabilistic climate scenarios and socio-economic scenarios that enable users to select their climate and socio-economic inputs, rapidly run the models using their input variable settings, and view their chosen outputs. The interface of the CLIMSAVE IA Platform is designed to facilitate a two-way iterative process of dialogue and exploration of "what ifs", to enable a wide range of users to improve their understanding of the impacts, adaptation responses and vulnerability of natural resources and ecosystem services under uncertain futures. This paper will describe the evolution of the Platform and demonstrate how its holistic framework (multi-sector/ecosystem service; cross-sectoral, climate and socio-economic change) will help to assist learning around the challenging concepts of responding to global change.
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
Otitis Media with Effusion: Its Significance in the Deaf Student.
1982-06-01
Otitis media with effusion currently ranks as the most common cause of hearing loss in children of preschool and school age. Otitis media with ... makes the difference between usable auditory input and useless noise. The etiology of otitis media with effusion is uncertain. Its educational ... paper explores the extent of otitis media with effusion, its effects, what methods are available for detection, and current and future methods of medical ...
NASA Astrophysics Data System (ADS)
Wang, Kaicun; Dickinson, Robert E.
2012-06-01
This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of diurnal, daily, and annual variability of E. But their estimation of longer time variability is largely not established. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change.
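The Penman-Monteith combination of surface energy balance with aerodynamic and surface resistances, mentioned above, can be sketched as follows; the constants and the midday input values are approximate and illustrative.

```python
import math

def penman_monteith(T_c, A, vpd_frac, ra, rs):
    """Latent heat flux lambda*E (W m^-2) from the Penman-Monteith equation.

    T_c: air temperature (deg C); A: available energy Rn - G (W m^-2);
    vpd_frac: 1 - relative humidity; ra, rs: aerodynamic and surface
    resistances (s m^-1).
    """
    rho_cp = 1.2 * 1013.0  # air density x specific heat of air (J m^-3 K^-1)
    gamma = 66.0           # psychrometric constant (Pa K^-1), approximate
    es = 610.8 * math.exp(17.27 * T_c / (T_c + 237.3))  # sat. vapour pressure (Pa)
    delta = 4098.0 * es / (T_c + 237.3) ** 2            # slope of es(T) (Pa K^-1)
    vpd = es * vpd_frac                                 # vapour pressure deficit (Pa)
    return (delta * A + rho_cp * vpd / ra) / (delta + gamma * (1.0 + rs / ra))

# Midday conditions over a well-watered crop (illustrative values):
le = penman_monteith(T_c=25.0, A=400.0, vpd_frac=0.5, ra=50.0, rs=70.0)
```

The surface resistance `rs` is where vegetation enters: lowering it (open stomata, moist canopy) pushes the estimate toward the energy-limited rate, which is one reason different transpiration treatments across land surface models yield such different E partitions.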
Optimal second order sliding mode control for linear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-11-01
In this paper an optimal second order sliding mode controller (OSOSMC) is proposed to track a linear uncertain system. The optimal controller, based on the linear quadratic regulator method, is designed for the nominal system. An integral sliding mode controller is combined with the optimal controller to ensure robustness of the linear system, which is affected by parametric uncertainties and external disturbances. To achieve finite-time convergence of the sliding mode, a nonsingular terminal sliding surface is added to the integral sliding surface, giving rise to a second order sliding mode controller. The main advantage of the proposed OSOSMC is that the control input is substantially reduced and becomes chattering free. Simulation results confirm the superiority of the proposed OSOSMC over some existing controllers. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.
Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem
2015-03-01
This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive (bias, drift and loss of accuracy) and one multiplicative (loss of effectiveness) sensor faults. By employing backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be on-line estimated and compensated via robust adaptive schemes. The stability analysis of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Modelling uncertain paternity to address differential pedigree accuracy
USDA-ARS?s Scientific Manuscript database
The objective was to implement uncertain parentage models to account for differences in daughter pedigree accuracy. Elite sires have nearly all daughters genotyped resulting in correct paternity assignment. Bulls of lesser genetic merit have fewer daughters genotyped creating the possibility for mor...
Big bang nucleosynthesis revisited via Trojan Horse method measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pizzone, R. G.; Spartá, R.; Spitaleri, C.
Nuclear reaction rates are among the most important inputs for understanding primordial nucleosynthesis and, therefore, for a quantitative description of the early universe. An up-to-date compilation of direct cross-sections of the ²H(d, p)³H, ²H(d, n)³He, ⁷Li(p, α)⁴He, and ³He(d, p)⁴He reactions is given. These are among the most uncertain cross-sections used as input for big bang nucleosynthesis calculations. Their measurements through the Trojan Horse method are also reviewed and compared with direct data. The reaction rates and the corresponding recommended errors in this work were used as input for primordial nucleosynthesis calculations to evaluate their impact on the ²H, ³He, ⁴He, and ⁷Li primordial abundances, which are then compared with observations.
Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints.
Chen, Ziting; Li, Zhijun; Chen, C L Philip
2017-06-01
An adaptive neural control strategy for multiple-input multiple-output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, radial basis function neural network, barrier Lyapunov function (BLF), and disturbance observer. By ensuring the boundedness of the BLF of the closed-loop system, it is demonstrated that output tracking is achieved with all states remaining in the constraint sets, and the general assumption on nonsingularity of unknown control coefficient matrices has been eliminated. It is rigorously proved that the constructed adaptive neural control guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, simulation studies on a 2-DOF robotic manipulator system indicate that the designed adaptive control is effective.
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of the symmetric matrix are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set with the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bellos, V.; Mahmoodian, M.; Leopold, U.; Torres-Matallana, J. A.; Schutz, G.; Clemens, F.
2017-12-01
Surrogate models help to decrease the run-time of computationally expensive, detailed models. Recent studies show that Gaussian Process Emulators (GPE) are promising techniques in the field of urban drainage modelling. This study focuses on developing a GPE-based surrogate model, for later application in Real Time Control (RTC), using input and output time series of a complex simulator. The case study is an urban drainage catchment in Luxembourg. A detailed simulator, implemented in InfoWorks ICM, is used to generate 120 input-output ensembles, of which 100 are used for training the emulator and 20 for validation of the results. An ensemble of historical rainfall events with 2-hour duration and 10-minute time steps is considered as the input data. Two example outputs are selected: wastewater volume and total COD concentration in a storage tank in the network. The results of the emulator are tested with unseen random rainfall events from the ensemble dataset. The emulator is approximately 1000 times faster than the original simulator for this small case study. Whereas the overall patterns of the simulator are matched by the emulator, in some cases the emulator deviates from the simulator. To quantify the accuracy of the emulator in comparison with the original simulator, the Nash-Sutcliffe efficiency (NSE) between the emulator and the simulator is calculated for unseen rainfall scenarios. The range of NSE for the tank volume is from 0.88 to 0.99 with a mean value of 0.95, whereas for COD it is from 0.71 to 0.99 with a mean value of 0.92. The emulator is able to predict the tank volume with higher accuracy because the relationship between rainfall intensity and tank volume is linear. For COD, which has non-linear behaviour, the predictions are less accurate and more uncertain, in particular when rainfall intensity increases. These predictions were improved by including a larger amount of training data for the higher rainfall intensities.
It was observed that the accuracy of the emulator predictions depends on the design of the ensemble training dataset and the amount of data provided. Finally, more investigation is required to test the possibility of applying this type of fast emulator for model-based RTC applications, in which a limited number of inputs and outputs are considered over a short prediction horizon.
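The NSE score used above to compare emulator and simulator can be computed as follows; the short time series are illustrative, not the study's data.

```python
def nse(simulated, emulated):
    """Nash-Sutcliffe efficiency of the emulator against the simulator
    (1 = perfect agreement; <= 0 = no better than the simulator mean)."""
    mean_sim = sum(simulated) / len(simulated)
    sse = sum((s - e) ** 2 for s, e in zip(simulated, emulated))
    sst = sum((s - mean_sim) ** 2 for s in simulated)
    return 1.0 - sse / sst

# Illustrative tank-volume time series (m^3) at 10-minute steps:
simulator = [100.0, 140.0, 210.0, 260.0, 240.0, 190.0]
emulator  = [102.0, 135.0, 220.0, 255.0, 245.0, 185.0]
score = nse(simulator, emulator)
```

Because NSE normalises by the simulator's own variance, it penalises a smooth, low-variability signal (like tank volume) less harshly than a peaky one (like COD) for the same absolute error, which is consistent with the NSE ranges reported above.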
NASA Langley's Approach to the Sandia's Structural Dynamics Challenge Problem
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kenny, Sean P.; Crespo, Luis G.; Elliott, Kenny B.
2007-01-01
The objective of this challenge is to develop a data-based probabilistic model of uncertainty to predict the behavior of subsystems (payloads) by themselves and while coupled to a primary (target) system. Although this type of analysis is routinely performed and representative of issues faced in real-world system design and integration, there are still several key technical challenges that must be addressed when analyzing uncertain interconnected systems. For example, one key technical challenge is related to the fact that there is limited data on target configurations. Moreover, it is typical to have multiple data sets from experiments conducted at the subsystem level, but often sample sizes are not sufficient to compute high-confidence statistics. In this challenge problem, additional constraints are placed as ground rules for the participants. One such rule is that mathematical models of the subsystem are limited to linear approximations of the nonlinear physics of the problem at hand. Also, participants are constrained to use these models and the multiple data sets to make predictions about the target system response under completely different input conditions. Our approach initially involved the screening of several different methods; three of those considered are presented herein. The first is based on the transformation of the modal data to an orthogonal space where the mean and covariance of the data are matched by the model. The other two approaches work in physical space, where the uncertain parameter set is made up of masses, stiffnesses, and damping coefficients; one matches confidence intervals of low-order moments of the statistics via optimization, while the second uses a kernel density estimation approach. The paper touches on all the approaches, lessons learned, validation metrics and their comparison, data quantity restrictions, and the assumptions/limitations of each approach.
Keywords: Probabilistic modeling, model validation, uncertainty quantification, kernel density
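One of the physical-space approaches above fits the uncertain mass, stiffness, and damping parameters with a kernel density estimate. As a minimal illustration (not the authors' implementation; the sample values below are hypothetical), a 1-D Gaussian KDE can be sketched as:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimator built from samples."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        # Sum one Gaussian bump per sample, normalized to integrate to 1
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return density

# Hypothetical subsystem modal-frequency measurements (Hz)
freqs = [9.8, 10.1, 10.3, 9.9, 10.0]
density = gaussian_kde(freqs, bandwidth=0.2)
```

The bandwidth controls the smoothness of the estimate; in practice it would be chosen by a rule such as Silverman's, not fixed by hand.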
NASA Astrophysics Data System (ADS)
Yang, G.; Lin, Y.; Bhattacharya, P.
2007-12-01
To achieve effective and safe operation of machine systems in which the human and machine interact, the machine needs to understand the human state, especially the cognitive state, when the human's operation task demands intensive cognitive activity. Because human cognitive states, behaviors, and expressions or cues are well known to be highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK are then fused by the OWA, which gives outputs corresponding to particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle drive simulator, has shown that the proposed method is promising as a general tool for human cognitive state inference and a special tool for driver fatigue detection.
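The OWA fusion step described above is simple to state concretely: the per-cue outputs are sorted and then weighted by rank rather than by source. A minimal sketch (the cue values and weight vector below are hypothetical, not from the paper):

```python
def owa(values, weights):
    """Ordered weighted averaging: sort inputs descending, then apply
    position weights (weights are assumed to sum to 1)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Hypothetical TSK outputs for three cue channels (e.g., eyelid closure,
# steering behavior, reaction time), each scaled to [0, 1]
cues = [0.9, 0.4, 0.7]
weights = [0.5, 0.3, 0.2]   # front-loaded weights emphasize the strongest cues
fatigue_score = owa(cues, weights)
```

Because the weights attach to ranks, OWA can interpolate between a max-like ("any strong cue counts") and a min-like ("all cues must agree") aggregation by reshaping the weight vector.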
The hydraulic capacity of deteriorating sewer systems.
Pollert, J; Ugarelli, R; Saegrov, S; Schilling, W; Di Federico, V
2005-01-01
Sewer and wastewater systems suffer from insufficient capacity, construction flaws and pipe deterioration. Consequences are structural failures, local floods, surface erosion and pollution of receiving water bodies. European cities spend on the order of five billion Euro per year for wastewater network rehabilitation. This amount is estimated to increase due to network ageing. The project CARE-S (Computer Aided REhabilitation of Sewer Networks) deals with sewer and storm water networks. The final project goal is to develop integrated software which provides the most cost-efficient system of maintenance, repair and rehabilitation of sewer networks. Decisions on investments in rehabilitation often have to be made with uncertain information about the structural condition and the hydraulic performance of a sewer system. Because of this, decision-making involves considerable risks. This paper presents the results of research focused on the hydraulic effects caused by failures due to temporal decline of sewer systems. Hydraulic simulations are usually carried out by running commercial models that apply, as input, default values of parameters that strongly influence results. Using CCTV inspection information as a dataset to catalogue the principal types of failures affecting pipes, a 3D model was used to evaluate their hydraulic consequences. The translation of failure effects into parameter values producing the same hydraulic conditions as the failures was carried out by comparing laboratory experiments and 3D simulation results. Those parameters could then be used as input to 1D commercial models instead of the commonly inserted default values.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
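The low-order-term idea behind HDMR can be illustrated with a first-order cut-HDMR surrogate anchored at a reference point: each component function records how the response changes when one variable moves while the others stay at the anchor. This is a generic sketch, not the paper's fuzzy finite element implementation; the test function and grids are illustrative assumptions. For an additive response the first-order expansion is exact:

```python
def hdmr_first_order(f, ref, grids):
    """Build a first-order cut-HDMR surrogate of f anchored at ref.
    grids[i] lists the sample values for variable i."""
    f0 = f(ref)
    comps = []
    for i, grid in enumerate(grids):
        ci = {}
        for x in grid:
            pt = list(ref)
            pt[i] = x
            ci[x] = f(pt) - f0      # first-order component function f_i(x_i)
        comps.append(ci)
    def surrogate(pt):
        # f(x) ≈ f0 + sum_i f_i(x_i): no cross terms retained
        return f0 + sum(comps[i][pt[i]] for i in range(len(pt)))
    return surrogate

# Additive toy response: first-order HDMR reproduces it exactly
f = lambda p: p[0] + 2.0 * p[1]
s = hdmr_first_order(f, ref=[0.0, 0.0], grids=[[0.0, 1.0, 2.0], [0.0, 1.0, 2.0]])
```

The cost is linear in the number of variables times the grid size per variable, which is the polynomial scaling the abstract refers to.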
Bellman Continuum (3rd) International Workshop (13-14 June 1988)
1988-06-01
Selected contents: Modelling Uncertain Problems (p. 53), David Bensoussan; Asymptotic Linearization of Uncertain Multivariable Systems by Sliding Modes, K. Ghosh; Robust Model Tracking for a Class of Singularly Perturbed Nonlinear Systems via Composite Control (p. 93), F. Garofalo and L. Glielmo; Modélisation et Commande en Économie / Models and Control Policies in Economics; Qualitative Differential Games: A Viability Approach (p. 117).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.G.; Eubanks, L.
1998-03-01
This software assists the engineering designer in characterizing the statistical uncertainty in the performance of complex systems as a result of variations in manufacturing processes, material properties, system geometry or operating environment. The software is composed of a graphical user interface that provides the user with easy access to Cassandra uncertainty analysis routines. Together, this interface and the Cassandra routines are referred to as CRAX (CassandRA eXoskeleton). The software is flexible enough that, with minor modification, it can interface with large modeling and analysis codes such as heat transfer or finite element analysis software. The current version permits the user to manually input a performance function, the number of random variables, and their associated statistical characteristics: density function, mean, and coefficient of variation. Additional uncertainty analysis modules are continuously being added to the Cassandra core.
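The workflow described above, a user-supplied performance function plus random variables with given statistics, can be mimicked with plain Monte Carlo propagation. This is a hedged sketch of the general idea only: the performance function and distributions below are hypothetical, and Cassandra's actual routines are more sophisticated than direct sampling:

```python
import random
import statistics

def mc_uncertainty(perf, dists, n=20000, seed=1):
    """Propagate independent Gaussian input uncertainty through a
    performance function by Monte Carlo sampling; return the output
    mean and coefficient of variation."""
    rng = random.Random(seed)
    out = [perf(*[rng.gauss(mu, sd) for mu, sd in dists]) for _ in range(n)]
    mean = statistics.fmean(out)
    return mean, statistics.pstdev(out) / mean

# Hypothetical performance function: stress ~ load / area,
# with load ~ N(100, 5) and area ~ N(10, 0.2)
mean, cov = mc_uncertainty(lambda load, area: load / area,
                           [(100.0, 5.0), (10.0, 0.2)])
```

For this ratio the output coefficient of variation is roughly the root-sum-square of the input coefficients of variation (about 5.4% here), which the sampled estimate should reproduce.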
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, David G.
1999-02-20
This software assists the engineering designer in characterizing the statistical uncertainty in the performance of complex systems as a result of variations in manufacturing processes, material properties, system geometry or operating environment. The software is composed of a graphical user interface that provides the user with easy access to Cassandra uncertainty analysis routines. Together, this interface and the Cassandra routines are referred to as CRAX (CassandRA eXoskeleton). The software is flexible enough that, with minor modification, it can interface with large modeling and analysis codes such as heat transfer or finite element analysis software. The current version permits the user to manually input a performance function, the number of random variables, and their associated statistical characteristics: density function, mean, and coefficient of variation. Additional uncertainty analysis modules are continuously being added to the Cassandra core.
Robust on-off pulse control of flexible space vehicles
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi
1993-01-01
The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as orbital transfer vehicles, orbital maneuvering vehicles, and the space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is used for a three-mass spring model of flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for feedback sliding mode controller design; bounded control inputs and robustness to parameter variations are investigated.
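In the rigid-body limit of such rest-to-rest maneuvers, the time-optimal on-off solution is the classic bang-bang profile: full torque through half the angle, then full reverse torque. The switch-time arithmetic can be sketched as below; this is a deliberate simplification that ignores the flexible modes, fuel weighting, and robustness constraints treated in the report:

```python
import math

def bang_bang_times(theta, inertia, torque):
    """Time-optimal rest-to-rest slew of a rigid body under +/- torque.
    During acceleration: theta/2 = 0.5*(torque/inertia)*t_half**2,
    so t_half = sqrt(theta*inertia/torque); deceleration mirrors it."""
    t_half = math.sqrt(theta * inertia / torque)
    return t_half, 2.0 * t_half   # (switch time, total maneuver time)

# Hypothetical 1 rad slew with unit inertia and unit torque
switch, total = bang_bang_times(1.0, 1.0, 1.0)
```

The robustified profiles in the report replace this single switch with multiple switches (bang-off-bang) chosen to avoid exciting the structural modes.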
Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin
2017-09-01
In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
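The log-Gaussian field construction above can be sketched in one dimension: sample independent standard normals, combine them in a truncated modal expansion, and exponentiate so the permeability stays positive. In this sketch, cosine modes with exponentially decaying weights stand in for the true KL eigenpairs of the covariance kernel; that substitution is an illustrative assumption, not the paper's model:

```python
import math
import random

def log_gaussian_field(x_points, mean=0.0, sigma=0.5, decay=0.3, n_modes=8, rng=None):
    """Sample one realization of a log-Gaussian random field on x_points
    via a truncated KL-like spectral expansion."""
    rng = rng or random.Random()
    xi = [rng.gauss(0.0, 1.0) for _ in range(n_modes)]   # independent N(0,1) coefficients
    def g(x):
        s = mean
        for k in range(n_modes):
            weight = sigma * math.exp(-decay * k)         # decaying stand-in "eigenvalue"
            s += weight * xi[k] * math.cos((k + 1) * math.pi * x)
        return math.exp(s)                                # exponentiate: positivity guaranteed
    return [g(x) for x in x_points]

# One realization of "permeability" on the unit interval
field = log_gaussian_field([i / 10 for i in range(11)], rng=random.Random(0))
```

A proper KL expansion would instead eigendecompose the assumed covariance function and use those eigenpairs as the modes and weights.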
Modeling Vegetation Growth Impact on Groundwater Recharge
NASA Astrophysics Data System (ADS)
Anurag, H.; Ng, G. H. C.; Tipping, R.
2017-12-01
Vegetation growth is affected by variability in climate and land cover / land use over a range of temporal and spatial scales. Vegetation also modifies the water budget through interception and evapotranspiration and thus has a significant impact on groundwater recharge. Most groundwater recharge assessments represent vegetation using specified, static parameters, such as leaf area index, but this neglects the effect of vegetation dynamics on recharge estimates. Our study addresses this gap by including vegetation growth in model simulations of recharge. We use NCAR's Community Land Model v4.5 with its BGC (biogeochemistry) module, which integrates prognostic vegetation growth with land-surface and subsurface hydrological processes and can thus capture the effect of vegetation on groundwater. A challenge, however, is the need to resolve uncertainties in model inputs ranging from vegetation growth parameters all the way down to the water table. We have compiled diverse data spanning meteorological inputs to subsurface geology and use these to implement ensemble model simulations to evaluate the possible effects of dynamic vegetation growth (versus specified, static vegetation parameterizations) on estimating groundwater recharge. We present preliminary results for select data-intensive test locations throughout the state of Minnesota (USA), which has a sharp east-west precipitation gradient that makes it an apt testbed for examining ecohydrologic relationships across different temperate climatic settings and ecosystems. Using the ensemble simulations, we examine the effect of seasonal to interannual variability of vegetation growth on recharge and water table depths, which has implications for predicting the combined impact of climate, vegetation, and geology on groundwater resources.
Future work will include distributed model simulations over the entire state, as well as conditioning uncertain vegetation and subsurface parameters on remote sensing data and statewide water table records using data assimilation.
Wind turbine model and loop shaping controller design
NASA Astrophysics Data System (ADS)
Gilev, Bogdan
2017-12-01
A model of a wind turbine is developed, consisting of a wind speed model, mechanical and electrical models of the generator, and a tower oscillation model. The model of the whole system is linearized around a nominal operating point. From the linear model with uncertainties, an uncertain model is synthesized, and from the uncertain model an H∞ controller is designed that stabilizes the rotor frequency and damps the tower oscillations. Finally, the operation of the nonlinear system with the H∞ controller is simulated.
Thekdi, Shital A; Santos, Joost R
2016-05-01
Disruptive events such as natural disasters, loss or reduction of resources, work stoppages, and emergent conditions have potential to propagate economic losses across trade networks. In particular, disruptions to the operation of container port activity can be detrimental for international trade and commerce. Risk assessment should anticipate the impact of port operation disruptions with consideration of how priorities change due to uncertain scenarios and guide investments that are effective and feasible for implementation. Priorities for protective measures and continuity of operations planning must consider the economic impact of such disruptions across a variety of scenarios. This article introduces new performance metrics to characterize resiliency in interdependency modeling and also integrates scenario-based methods to measure economic sensitivity to sudden-onset disruptions. The methods will be demonstrated on a U.S. port responsible for handling $36.1 billion of cargo annually. The methods will be useful to port management, private industry supply chain planning, and transportation infrastructure management. © 2015 Society for Risk Analysis.
Deduction of reservoir operating rules for application in global hydrological models
NASA Astrophysics Data System (ADS)
Coerver, Hubertus M.; Rutten, Martine M.; van de Giesen, Nick C.
2018-01-01
A big challenge in constructing global hydrological models is the inclusion of anthropogenic impacts on the water cycle, such as those caused by dams. Dam operators make decisions based on experience and often uncertain information. In this study, information generally available to dam operators, like inflow into the reservoir and storage levels, was used to derive fuzzy rules describing the way a reservoir is operated. Using an artificial neural network capable of mimicking fuzzy logic, called ANFIS (adaptive-network-based fuzzy inference system), fuzzy rules linking inflow and storage with reservoir release were determined for 11 reservoirs in central Asia, the US, and Vietnam. By varying the input variables of the neural network, different configurations of fuzzy rules were created and tested. It was found that the release from relatively large reservoirs was significantly dependent on information concerning recent storage levels, while release from smaller reservoirs was more dependent on reservoir inflows. Subsequently, the derived rules were used to simulate reservoir release with an average Nash-Sutcliffe coefficient of 0.81.
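The Nash-Sutcliffe coefficient used above to score the simulated releases has a compact definition: one minus the ratio of residual variance to the variance of the observations, so 1 is a perfect match and 0 means the model is no better than the observed mean. A minimal sketch (the release values are hypothetical):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 for a perfect match, 0 for a model
    no better than the mean of the observations, negative if worse."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical monthly releases (observed vs. simulated, m^3/s)
obs = [120.0, 95.0, 80.0, 110.0]
sim = [115.0, 100.0, 85.0, 105.0]
score = nash_sutcliffe(obs, sim)
```

An average value of 0.81, as reported, indicates the fuzzy rules explain most of the observed release variance.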
NASA Astrophysics Data System (ADS)
Sprofera, Joseph D.; Clark, Robert L.; Cabell, Randolph H.; Gibbs, Gary P.
2005-05-01
Turbulent boundary layer (TBL) noise is considered a primary contributor to the interior noise present in commercial airliners. There are numerous investigations of interior noise control devoted to aircraft panels; however, practical realization is a potential challenge since physical boundary conditions are uncertain at best. In most prior studies, pinned or clamped boundary conditions were assumed; however, realistic panels likely display a range of boundary conditions between these two limits. Uncertainty in boundary conditions is a challenge for control system designers, both in terms of the compensator implemented and the location of transducers required to achieve the desired control. The impact of model uncertainties, specifically uncertain boundaries, on the selection of transducer locations for structural acoustic control is considered herein. The final goal of this work is the design of an aircraft panel structure that can reduce TBL noise transmission through the use of a completely adaptive, single-input, single-output control system. The feasibility of this goal is demonstrated through the creation of a detailed analytical solution, followed by the implementation of a test model in a transmission loss apparatus. Successfully realizing a control system robust to variations in boundary conditions can lead to the design and implementation of practical adaptive structures that could be used to control the transmission of sound to the interior of aircraft. Results from this research effort indicate it is possible to optimize the design of actuator and sensor location and aperture, minimizing the impact of boundary conditions on the desired structural acoustic control.
An Exemplar-Model Account of Feature Inference from Uncertain Categorizations
ERIC Educational Resources Information Center
Nosofsky, Robert M.
2015-01-01
In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…
An Algorithm to Atmospherically Correct Visible and Thermal Airborne Imagery
NASA Technical Reports Server (NTRS)
Rickman, Doug L.; Luvall, Jeffrey C.; Schiller, Stephen; Arnold, James E. (Technical Monitor)
2000-01-01
The program Watts implements a system of physically based models developed by the authors, described elsewhere, for the removal of atmospheric effects in multispectral imagery. The band range we treat covers the visible, near IR, and thermal IR. Input to the program begins with atmospheric models specifying transmittance and path radiance. The system also requires the sensor's spectral response curves and knowledge of the scanner's geometric definition. Radiometric characterization of the sensor during data acquisition is also necessary. While the authors contend that active calibration is critical for serious analytical efforts, we recognize that most remote sensing systems, either airborne or spaceborne, do not as yet attain that minimal level of sophistication. Therefore, Watts will also use semi-active calibration where necessary and available. All of the input is then reduced to common physical units, from which it is practical to convert raw sensor readings into geophysically meaningful units. There are a large number of intricate details necessary to bring an algorithm of this type to fruition and even to use the program. Further, at this stage of development the authors are uncertain as to the optimal presentation or minimal analytical techniques which users of this type of software must have. Therefore, Watts permits users to break out and analyze the input in various ways. Implemented in REXX under OS/2, the program is designed with attention to the probability that it will be ported to other systems and other languages. Further, as it is in REXX, it is relatively simple for anyone literate in any computer language to open the code and modify it to meet their needs. The authors have employed Watts in their research addressing precision agriculture and urban heat islands.
Water and solute mass balance of five small, relatively undisturbed watersheds in the U.S.
Peters, N.E.; Shanley, J.B.; Aulenbach, Brent T.; Webb, R.M.; Campbell, D.H.; Hunt, R.; Larsen, M.C.; Stallard, R.F.; Troester, J.; Walker, J.F.
2006-01-01
Geochemical mass balances were computed for water years 1992-1997 (October 1991 through September 1997) for the five watersheds of the U.S. Geological Survey Water, Energy, and Biogeochemical Budgets (WEBB) Program to determine the primary regional controls on yields of the major dissolved inorganic solutes. The sites, which vary markedly with respect to climate, geology, physiography, and ecology, are: Allequash Creek, Wisconsin (low-relief, humid continental forest); Andrews Creek, Colorado (cold alpine, taiga/tundra, and subalpine boreal forest); Río Icacos, Puerto Rico (lower montane, wet tropical forest); Panola Mountain, Georgia (humid subtropical piedmont forest); and Sleepers River, Vermont (humid northern hardwood forest). Streamwater output fluxes were determined by constructing empirical multivariate concentration models including discharge and seasonal components. Input fluxes were computed from weekly wet-only or bulk precipitation sampling. Despite uncertainties in input fluxes arising from poorly defined elevation gradients, lack of dry-deposition and occult-deposition measurements, and uncertain sea-salt contributions, the following was concluded: (1) for solutes derived primarily from rock weathering (Ca, Mg, Na, K, and H4SiO4), net fluxes (outputs in streamflow minus inputs in deposition) varied by two orders of magnitude, which is attributed to a large gradient in rock weathering rates controlled by climate and geologic parent material; (2) the net flux of atmospherically derived solutes (NH4, NO3, SO4, and Cl) was similar among sites, with SO4 being the most variable and NH4 and NO3 generally retained (except for NO3 at Andrews); and (3) relations among monthly solute fluxes and differences among solute concentration model parameters yielded additional insights into comparative biogeochemical processes at the sites. © 2005 Elsevier B.V. All rights reserved.
Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2016-11-01
Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive sampling methods such as Monte Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty of quantifying the impact of input uncertainty in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them.
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
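The non-intrusive projection at the heart of stochastic collocation can be shown in one dimension: evaluate the model at Gauss-Hermite collocation nodes and project onto probabilists' Hermite polynomials. This is a minimal sketch, not RAVEN's far more general implementation; the test function is an illustrative choice:

```python
import math

# 3-point Gauss-Hermite rule (probabilists' convention, weight e^{-x^2/2}/sqrt(2*pi)):
# exact for polynomials up to degree 5
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def he(n, x):
    """Probabilists' Hermite polynomial He_n via the three-term recurrence."""
    if n == 0:
        return 1.0
    if n == 1:
        return x
    return x * he(n - 1, x) - (n - 1) * he(n - 2, x)

def pce_coeffs(f, order):
    """Non-intrusive spectral projection: c_n = E[f(xi) He_n(xi)] / E[He_n^2],
    with E[He_n^2] = n! for a standard normal xi."""
    coeffs = []
    for n in range(order + 1):
        num = sum(w * f(x) * he(n, x) for w, x in zip(weights, nodes))
        coeffs.append(num / math.factorial(n))
    return coeffs

# f(xi) = xi^2 with xi ~ N(0,1): exact expansion is 1*He_0 + 1*He_2
c = pce_coeffs(lambda x: x * x, 2)
```

Once the coefficients are known, output statistics follow algebraically: the mean is c[0] and the variance is the factorial-weighted sum of the squared higher coefficients.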
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature a priori to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
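The Monod kinetics invoked above, in which the degradation rate saturates in both contaminant and electron-acceptor (oxygen) concentration, can be sketched with a simple explicit-Euler time step. The parameter values below are hypothetical illustrations, not the calibrated BIO3D inputs:

```python
def simulate_monod(conc, oxy, k_max, Ks, Ko, yield_o, dt, steps):
    """Explicit-Euler integration of dual-Monod degradation kinetics:
    rate = k_max * C/(Ks+C) * O/(Ko+O), with oxygen consumed
    stoichiometrically (yield_o mass of O per mass of C degraded)."""
    for _ in range(steps):
        rate = k_max * conc / (Ks + conc) * oxy / (Ko + oxy)
        conc = max(conc - rate * dt, 0.0)
        oxy = max(oxy - yield_o * rate * dt, 0.0)
    return conc, oxy

# Hypothetical values (mg/L, 1/day): degradation stalls once oxygen is exhausted
c_end, o_end = simulate_monod(1.0, 1.0, k_max=0.5, Ks=0.1, Ko=0.1,
                              yield_o=3.0, dt=0.01, steps=200)
```

The oxygen term is exactly what zero- and first-order field rates miss: when the electron acceptor runs out, this model's rate drops to zero instead of extrapolating continued decay.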
Zhang, Qin; Yao, Quanying
2018-05-01
The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnosis of large, complex industrial systems, and to disease diagnosis. This paper extends the DUCG to model more complex cases than could previously be modeled, e.g., cases in which statistical data are in different groups with or without overlap, and some domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes to use -mode, -mode, and -mode of the DUCG to model such complex cases and then transform them into either the standard -mode or the standard -mode. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.
Robust synergetic control design under inputs and states constraints
NASA Astrophysics Data System (ADS)
Rastegar, Saeid; Araújo, Rui; Sadati, Jalil
2018-03-01
In this paper, a novel robust-constrained control methodology for discrete-time linear parameter-varying (DT-LPV) systems is proposed based on a synergetic control theory (SCT) approach. It is shown that in DT-LPV systems without uncertainty, and for any unmeasured bounded additive disturbance, the proposed controller accomplishes the goal of stabilising the system by asymptotically driving the error of the controlled variable to a bounded set containing the origin and then maintaining it there. Moreover, given an uncertain DT-LPV system jointly subject to unmeasured and constrained additive disturbances, and constraints in states, input commands and reference signals (set points), then invariant set theory is used to find an appropriate polyhedral robust invariant region in which the proposed control framework is guaranteed to robustly stabilise the closed-loop system. Furthermore, this is achieved even for the case of varying non-zero control set points in such uncertain DT-LPV systems. The controller is characterised to have a simple structure leading to an easy implementation, and a non-complex design process. The effectiveness of the proposed method and the implications of the controller design on feasibility and closed-loop performance are demonstrated through application examples on the temperature control on a continuous-stirred tank reactor plant, on the control of a real-coupled DC motor plant, and on an open-loop unstable system example.
Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan
2017-11-01
A neural network (NN) adaptive control design problem is addressed for a class of uncertain multi-input-multi-output (MIMO) nonlinear systems in block-triangular form. The considered systems contain uncertain dynamics, their states are subject to bounded constraints, and couplings among the various inputs and outputs appear in each subsystem. To stabilize this class of systems, a novel adaptive control strategy is constructed by using the backstepping design technique and NNs. Novel integral barrier Lyapunov functionals (BLFs) are employed to prevent violation of the full state constraints. The proposed strategy not only guarantees the boundedness of the closed-loop system and drives the outputs to follow the reference signals, but also ensures that all the states remain in the predefined compact sets. Moreover, because traditional BLF-based designs work with transformed constraints on the errors, the bounds of the virtual controllers must be determined explicitly there; the proposed approach relaxes these conservative limitations of traditional BLF-based controls for the full state constraints. To the best of the authors' knowledge, this is the first work to control this class of MIMO systems with full state constraints. The performance of the proposed control strategy is verified through a simulation example.
An imprecise probability approach for squeal instability analysis based on evidence theory
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-01-01
An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. All these uncertain parameters typically involve imprecise data, such as incomplete information and conflicting information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and a surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples, and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to squeal problems without too many investigated parameters. It can be considered as a potential method for squeal instability analysis, which will act as the first step to reduce squeal noise of uncertain brakes with imprecise information.
NASA Astrophysics Data System (ADS)
Flint, A. L.; Flint, L. E.
2010-12-01
The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely heavily on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis for estimates of changes due to future climate projections. Unfortunately, many regions in the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used for the development of a regional water balance model at 180-m spatial scale, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in the countries of Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration include snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data), and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available at very large spatial scales (50 km) through world-scale databases and finer-scale WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world-scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was interpreted from geomorphologic interpretation and maps of Quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. Estimates of bedrock permeability were made on the basis of literature and data from drillers' logs and adjusted during calibration of the model to streamflow measurements where available.
Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown, along with details of processing input data to provide spatial continuity and downscaling. Basic water availability analysis for recharge and runoff is readily obtained from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.
2012-02-01
use of polar gas species. While current simplified models have adequately predicted CRS and CRBS line shapes for a wide variety of cases, multiple published simplified models are presented for argon, molecular nitrogen, and methane at 300 and 500 K and 1 atm. The simplified models require uncertain gas properties.
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
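The preferential-path idea described above can be sketched in a few lines: visit the nodes whose soft probabilities are most informative (lowest entropy) first. This is a minimal illustration, not the SNESIM/ENESIM/Direct Sampling implementation; the entropy ranking heuristic and all names are assumptions.

```python
import numpy as np

def preferential_path(soft_probs, rng=None):
    """Return a visiting order for sequential simulation in which nodes
    with the most informative soft data (lowest entropy) come first."""
    rng = rng or np.random.default_rng(0)
    p = np.clip(np.asarray(soft_probs, float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)   # low entropy = well informed
    jitter = rng.uniform(0.0, 1e-9, size=len(entropy))  # random tie-breaking
    return np.argsort(entropy + jitter)

# Three nodes: nearly certain, mildly informed, completely uninformed.
probs = [[0.99, 0.01],
         [0.70, 0.30],
         [0.50, 0.50]]
path = preferential_path(probs)  # visits node 0 first, node 2 last
```

In a full sequential simulation, each node would then be simulated in this order, conditioning on previously simulated values and the soft data.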
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times.
For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. Multiple, discrete solutions can exist for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
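The Monte Carlo treatment of parametric uncertainty described above (assign a distribution to each uncertain variable, sample, evaluate the ensemble) can be sketched as follows; the constituent means, coefficients of variation, and the averaged "output" are purely illustrative, not values from the study.

```python
import numpy as np

def monte_carlo_inputs(means, cvs, n_sims=1000, seed=1):
    """Sample each uncertain constituent from a normal distribution whose
    standard deviation is mean * coefficient of variation, and return an
    (n_sims, n_inputs) array of realizations."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, float)
    sds = means * np.asarray(cvs, float)
    return rng.normal(means, sds, size=(n_sims, len(means)))

# Hypothetical dissolved-ion means (mg/L) with 5% coefficients of variation.
samples = monte_carlo_inputs([120.0, 35.0, 8.5], [0.05, 0.05, 0.05], n_sims=5000)
# Each row is one model run's inputs; the ensemble of outputs (here a
# stand-in aggregate, not a NETPATH result) is then evaluated statistically.
output_ensemble = samples.mean(axis=1)
```

In the actual workflow, each sampled row would drive one NETPATH simulation, and the mixing fractions and travel times from all runs would form the ensemble to summarize.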
Uncertainty in BMP evaluation and optimization for watershed management
NASA Astrophysics Data System (ADS)
Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.
2012-12-01
Use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing the selection and placement of BMPs under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g., measured precipitation, streamflow, sediment, nutrient, and pesticide losses, and land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We used a watershed model (the Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwestern USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use, and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads.
In addition, climate change scenarios also affected uncertainties in SWAT simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted due to uncertainties in land use, climate change, and model parameter values.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
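The within-model/between-model variance decomposition that underlies the BMA tree can be illustrated with a small sketch; the weights and per-model moments below are hypothetical, not values from the study.

```python
import numpy as np

def bma_moments(weights, means, variances):
    """Combine candidate-model predictions under Bayesian model averaging.
    Returns the averaged mean, the within-model variance (weighted average
    of the models' own variances) and the between-model variance (weighted
    spread of the model means); total variance is their sum."""
    w = np.asarray(weights, float)
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    mean = np.sum(w * m)
    within = np.sum(w * v)
    between = np.sum(w * (m - mean) ** 2)
    return mean, within, between

# Hypothetical posterior weights and per-model predictive moments.
mean, within, between = bma_moments([0.5, 0.3, 0.2], [10.0, 12.0, 9.0], [1.0, 2.0, 1.5])
total = within + between  # law of total variance
```

In the hierarchical (HBMA) setting, this decomposition is applied recursively at each level of the BMA tree, so the between-model variance can be attributed to individual uncertain model components.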
Doyon, Nicolas; Prescott, Steven A.; Castonguay, Annie; Godin, Antoine G.; Kröger, Helmut; De Koninck, Yves
2011-01-01
Chloride homeostasis is a critical determinant of the strength and robustness of inhibition mediated by GABAA receptors (GABAARs). The impact of changes in the steady-state Cl− gradient is relatively straightforward to understand, but how the dynamic interplay between Cl− influx, diffusion, extrusion and interaction with other ion species affects synaptic signaling remains uncertain. Here we used electrodiffusion modeling to investigate the nonlinear interactions between these processes. Results demonstrate that diffusion is crucial for redistributing intracellular Cl− load on a fast time scale, whereas Cl− extrusion controls steady-state levels. Interaction between diffusion and extrusion can result in a somato-dendritic Cl− gradient even when KCC2 is distributed uniformly across the cell. Reducing KCC2 activity led to decreased efficacy of GABAAR-mediated inhibition, but increasing GABAAR input failed to fully compensate for this form of disinhibition because of activity-dependent accumulation of Cl−. Furthermore, if spiking persisted despite the presence of GABAAR input, Cl− accumulation became accelerated because of the large Cl− driving force that occurs during spikes. The resulting positive feedback loop caused catastrophic failure of inhibition. Simulations also revealed other feedback loops, such as competition between Cl− and pH regulation. Several model predictions were tested and confirmed by [Cl−]i imaging experiments. Our study has thus uncovered how Cl− regulation depends on a multiplicity of dynamically interacting mechanisms. Furthermore, the model revealed that enhancing KCC2 activity beyond normal levels did not negatively impact firing frequency or cause overt extracellular K+ accumulation, demonstrating that enhancing KCC2 activity is a valid strategy for therapeutic intervention. PMID:21931544
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time-averaging window used in the MD simulations.
History matching through dynamic decision-making
Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson
2017-01-01
History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management, since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history-matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential of the dynamic decision-making optimization framework for improving the quality of history matching solutions using a substantially smaller number of simulations compared with a previous work on the same benchmark. PMID:28582413
Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.
Pecevski, Dejan; Maass, Wolfgang
2016-01-01
Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such neural network could acquire the necessary information for that from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for that: probabilistic associations between neurons that represent through their firing current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p (*) that generates the examples it receives. This holds even if p (*) contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
Niu, Ben; Li, Lu
2018-06-01
This brief proposes a new neural-network (NN)-based adaptive output tracking control scheme for a class of disturbed multiple-input multiple-output uncertain nonlinear switched systems with input delays. By combining the universal approximation ability of radial basis function NNs and adaptive backstepping recursive design with an improved multiple Lyapunov function (MLF) scheme, a novel adaptive neural output tracking controller design method is presented for the switched system. The feature of the developed design is that different coordinate transformations are adopted to overcome the conservativeness caused by adopting a common coordinate transformation for all subsystems. It is shown that all the variables of the resulting closed-loop system are semiglobally uniformly ultimately bounded under a class of switching signals in the presence of MLF and that the system output can follow the desired reference signal. To demonstrate the practicability of the obtained result, an adaptive neural output tracking controller is designed for a mass-spring-damper system.
Soil warming response: field experiments to Earth system models
NASA Astrophysics Data System (ADS)
Todd-Brown, K. E.; Bradford, M.; Wieder, W. R.; Crowther, T. W.
2017-12-01
The soil carbon response to climate change is extremely uncertain at the global scale, in part because of the uncertainty in the magnitude of the temperature response. To address this uncertainty we collected data from 48 soil warming manipulation studies and examined the temperature response using two different methods. First, we constructed a mixed effects model and extrapolated the effect of soil warming on soil carbon stocks under anticipated shifts in surface temperature during the 21st century. We saw significant vulnerability of soil carbon stocks, especially in high-carbon soils. To place this effect in the context of anticipated changes in carbon inputs and moisture shifts, we applied a one-pool decay model with temperature sensitivities to the field data and imposed a post-hoc correction on the Earth system model simulations to integrate the field data with the simulated temperature response. We found a slight elevation in the overall soil carbon losses, but the field uncertainty of the temperature sensitivity parameter was as large as the variation among the models' soil carbon projections. This implies that model-data integration is unlikely to constrain soil carbon simulations and highlights the importance of representing parameter uncertainty in these Earth system models to inform emissions targets.
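A one-pool decay model with a temperature-sensitive decay rate, of the kind applied to the field data above, might look like the following sketch; the Q10 formulation and all parameter values are illustrative assumptions, not the study's calibrated model.

```python
def soil_carbon_step(C, inputs, k_ref, q10, temp, temp_ref=15.0, dt=1.0):
    """One forward-Euler step of dC/dt = inputs - k(T) * C, where the decay
    rate k carries a Q10 temperature sensitivity. All parameter names and
    values here are illustrative, not the study's fitted model."""
    k = k_ref * q10 ** ((temp - temp_ref) / 10.0)
    return C + dt * (inputs - k * C)

# Warming raises k, so stocks fall below the unwarmed equilibrium.
C_cool = C_warm = 100.0
for _ in range(50):
    C_cool = soil_carbon_step(C_cool, inputs=2.0, k_ref=0.02, q10=2.0, temp=15.0)
    C_warm = soil_carbon_step(C_warm, inputs=2.0, k_ref=0.02, q10=2.0, temp=18.0)
# C_warm ends below C_cool, mirroring warming-driven carbon loss
```

Propagating the field uncertainty would amount to sampling q10 (and k_ref) from their fitted distributions and repeating this integration per sample.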
NASA Technical Reports Server (NTRS)
Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her
1990-01-01
Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first- and second-order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, defined as inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller is also built for general serial robot arms; it is adaptive in nature and can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking properties. The relation between manipulator motion tracking capability and convergence of parameter estimation properties is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Elshall, A. S.; Hanor, J. S.
2012-12-01
Subsurface modeling is challenging because of many possible competing propositions for each uncertain model component. How can we judge that we are selecting the correct proposition for an uncertain model component out of numerous competing propositions? How can we bridge the gap between synthetic mental principles such as mathematical expressions on one hand, and empirical observation such as observation data on the other hand when uncertainty exists on both sides? In this study, we introduce hierarchical Bayesian model averaging (HBMA) as a multi-model (multi-proposition) framework to represent our current state of knowledge and decision for hydrogeological structure modeling. The HBMA framework allows for segregating and prioritizing different sources of uncertainty, and for comparative evaluation of competing propositions for each source of uncertainty. We applied the HBMA to a study of hydrostratigraphy and uncertainty propagation of the Southern Hills aquifer system in the Baton Rouge area, Louisiana. We used geophysical data for hydrogeological structure construction through the indicator hydrostratigraphy method and used lithologic data from drillers' logs for model structure calibration. However, due to uncertainty in model data, structure and parameters, multiple possible hydrostratigraphic models were produced and calibrated. The study considered four sources of uncertainty. To evaluate mathematical structure uncertainty, the study considered three different variogram models and two geological stationarity assumptions. With respect to geological structure uncertainty, the study considered two geological structures with respect to the Denham Springs-Scotlandville fault. With respect to data uncertainty, the study considered two calibration data sets. These four sources of uncertainty with their corresponding competing modeling propositions resulted in 24 calibrated models.
The results showed that by segregating different sources of uncertainty, HBMA analysis provided insights on uncertainty priorities and propagation. In addition, it assisted in evaluating the relative importance of competing modeling propositions for each uncertain model component. By being able to dissect the uncertain model components and provide weighted representation of the competing propositions for each uncertain model component based on the background knowledge, the HBMA functions as an epistemic framework for advancing knowledge about the system under study.
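The 24 calibrated models arise from the cross product of the competing propositions for each uncertain component (3 variograms x 2 stationarity assumptions x 2 geological structures x 2 calibration data sets). A small sketch of enumerating these combinations and assigning normalized posterior-style weights follows; the structure and data-set labels and the fit scores are hypothetical, not the study's calibration results.

```python
import itertools
import numpy as np

# Competing propositions for each uncertain model component; the structure
# and data-set labels are illustrative placeholders.
variograms = ["exponential", "spherical", "gaussian"]
stationarity = ["stationary", "nonstationary"]
structures = ["structure-A", "structure-B"]
datasets = ["calibration-1", "calibration-2"]

models = list(itertools.product(variograms, stationarity, structures, datasets))
assert len(models) == 24  # 3 x 2 x 2 x 2 calibrated models

# Hypothetical fit scores turned into normalized posterior-style weights
# via a softmax; in HBMA these would come from each model's calibration.
rng = np.random.default_rng(3)
scores = rng.normal(size=len(models))
weights = np.exp(scores - scores.max())
weights /= weights.sum()
# weights now rank the 24 combinations of propositions
```

Grouping these weights by proposition (e.g., summing over all models sharing one variogram) gives the comparative evaluation of each uncertain component described above.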
Li, Yongming; Tong, Shaocheng
2017-12-01
In this paper, an adaptive fuzzy output constrained control design approach is addressed for multi-input multi-output uncertain stochastic nonlinear systems in nonstrict-feedback form. The nonlinear systems addressed in this paper possess unstructured uncertainties, unknown gain functions and unknown stochastic disturbances. Fuzzy logic systems are utilized to tackle the problem of unknown nonlinear uncertainties. The barrier Lyapunov function technique is employed to solve the output constrained problem. In the framework of backstepping design, an adaptive fuzzy control design scheme is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
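The Hoeffding bound that uVFDTc inherits from VFDT can be computed directly; this sketch shows only the generic bound, not the uVFDTc-specific extensions for uncertain data.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability 1 - delta,
    the true mean of a range-R quantity lies within epsilon of the mean
    observed over n examples. VFDT-style learners split a leaf once the
    gain gap between the two best attributes exceeds epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

eps_100 = hoeffding_bound(value_range=1.0, delta=1e-7, n=100)
eps_10000 = hoeffding_bound(value_range=1.0, delta=1e-7, n=10000)
# More examples tighten the bound, so eps_10000 < eps_100
```

This is why the tree can commit to splits from a stream: once enough examples have arrived that the observed gain gap exceeds epsilon, the split decision matches the one a batch learner would make with high probability.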
Application of dynamic uncertain causality graph in spacecraft fault diagnosis: Logic cycle
NASA Astrophysics Data System (ADS)
Yao, Quanying; Zhang, Qin; Liu, Peng; Yang, Ping; Zhu, Ma; Wang, Xiaochen
2017-04-01
Intelligent diagnosis systems are applied to fault diagnosis in spacecraft. The Dynamic Uncertain Causality Graph (DUCG) is a new probabilistic graphical model with many advantages. In knowledge representation for spacecraft fault diagnosis, feedback among variables is frequently encountered, which may produce directed cyclic graphs (DCGs). Probabilistic graphical models (PGMs) such as the Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning, but BNs do not allow DCGs. In this paper, DUCG is applied to fault diagnosis in spacecraft, introducing an inference algorithm for the DUCG that deals with feedback. To date, DUCG has been tested on 16 typical faults with 100% diagnostic accuracy.
Statistical Performances of Resistive Active Power Splitter
NASA Astrophysics Data System (ADS)
Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul
2016-03-01
In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) are presented. The splitter is based on an active cell composed of a field-effect transistor in cascade with shunt resistors at the input and the output (a resistive amplifier topology). The sensitivity of the PWS to resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, the device gain can be easily controlled by varying a resistance. This provides a useful tool for analysing the statistical sensitivity of the system in an uncertain environment.
Plant, Nathaniel G.
2016-01-01
Predictions of coastal evolution driven by episodic and persistent processes associated with storms and relative sea-level rise (SLR) are required to test our understanding, evaluate our predictive capability, and to provide guidance for coastal management decisions. Previous work demonstrated that the spatial variability of long-term shoreline change can be predicted using observed SLR rates, tide range, wave height, coastal slope, and a characterization of the geomorphic setting. The shoreline is not sufficient to indicate which processes are important in causing shoreline change, such as overwash that depends on coastal dune elevations. Predicting dune height is intrinsically important to assess future storm vulnerability. Here, we enhance shoreline-change predictions by including dune height as a variable in a statistical modeling approach. Dune height can also be used as an input variable, but it does not improve the shoreline-change prediction skill. Dune-height input does help to reduce prediction uncertainty. That is, by including dune height, the prediction is more precise but not more accurate. Comparing hindcast evaluations, better predictive skill was found when predicting dune height (0.8) compared with shoreline change (0.6). The skill depends on the level of detail of the model and we identify an optimized model that has high skill and minimal overfitting. The predictive model can be implemented with a range of forecast scenarios, and we illustrate the impacts of a higher future sea-level. This scenario shows that the shoreline change becomes increasingly erosional and more uncertain. Predicted dune heights are lower and the dune height uncertainty decreases.
Analysis of Implicit Uncertain Systems. Part 1: Theoretical Framework
1994-12-07
Analysis of Implicit Uncertain Systems, Part I: Theoretical Framework. Fernando Paganini, John Doyle. December 7, 1994. Abstract: This paper ... a model and a number of constraints relevant to the analysis problem under consideration. In Part I of this paper we propose a theoretical framework which ...
Research of Uncertainty Reasoning in Pineapple Disease Identification System
NASA Astrophysics Data System (ADS)
Liu, Liqun; Fan, Haifeng
In order to deal with the uncertainty of evidence that commonly exists in a pineapple disease identification system, a reasoning model based on an evidence credibility factor was established. The uncertainty reasoning method is discussed, including: uncertain representation of knowledge, uncertain representation of rules, uncertain representation of multiple evidences, and updating of reasoning rules. The reasoning can fully reflect the uncertainty in disease identification and reduce the influence of subjective factors on the accuracy of the system.
Risk indicator for agricultural inputs of trace elements to Canadian soils.
Sheppard, S C; Grant, C A; Sheppard, M I; de Jong, R; Long, J
2009-01-01
Trace elements (TEs) are universally present in environmental media, including soil, but agriculture uses some materials that have increased TE concentrations. Some TEs (e.g., Cu, Se, and Zn) are added to animal feeds to ensure animal health. Similarly, TEs are present in micronutrient fertilizers. In the case of phosphate fertilizers, some TEs (e.g., Cd) may be inadvertently elevated because of the source rock used in the manufacturing. The key question for agriculture is "After decades of use, could these TE additions result in the deterioration of soil quality?" An early warning would allow the development of best management practices to slow or reverse this trend. This paper discusses a model that estimates future TE concentrations for the 2780 land area polygons composing essentially all of the agricultural land in Canada. The development of the model is discussed, as are various metrics to express the risk related to TE accumulation. The elements As, Cd, Cu, Pb, Se, and Zn are considered, with inputs from the atmosphere, fertilizers, manures, and municipal biosolids. In many cases, steady-state concentrations could be toxic, but steady state is far in the future. In 100 yr, the soil concentrations (century soil concentrations) are estimated to be up to threefold higher than present background, an impact, even if not a problematic one. The geographic distribution reflects agricultural intensity. Contributions from micronutrient fertilizers are perhaps the most uncertain due to the limited data available on their use.
Estimates of galactic cosmic ray shielding requirements during solar minimum
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.
1990-01-01
Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses, and dose equivalents behind various thicknesses of aluminum, water, and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.
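The thicknesses quoted above follow directly from the areal densities and the material densities; a small sketch of the conversion (densities are nominal handbook values, not taken from the paper):

```python
# Nominal densities in g/cm^3 (liquid hydrogen value is approximate)
DENSITY = {"water": 1.0, "aluminum": 2.70, "liquid_hydrogen": 0.071}

def areal_to_thickness(areal_density_g_cm2, material):
    """Convert shield areal density (g/cm^2) to physical thickness (cm)."""
    return areal_density_g_cm2 / DENSITY[material]
```

With these densities, 3.5 g/cm^2 of water is 3.5 cm, 6.5 g/cm^2 of aluminum is about 2.4 cm, and 1.0 g/cm^2 of liquid hydrogen is about 14 cm, matching the abstract.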
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
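For reference, the finite-horizon linear quadratic regulator leads to the matrix Riccati differential equation in its standard textbook form (generic notation, not the paper's uncertain-model formulation):

```latex
\dot{P}(t) = -A^{\top}P(t) - P(t)A + P(t)BR^{-1}B^{\top}P(t) - Q,
\qquad P(t_f) = Q_f,
```

with the optimal feedback \(u^{*}(t) = -R^{-1}B^{\top}P(t)\,x(t)\). As the abstract notes, \(P(t)\) rarely has a closed form, which motivates the parametric approximation studied in the paper.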
Tsallis’ non-extensive free energy as a subjective value of an uncertain reward
NASA Astrophysics Data System (ADS)
Takahashi, Taiki
2009-03-01
Recent studies in neuroeconomics and econophysics revealed the importance of reward expectation in decision under uncertainty. Behavioral neuroeconomic studies have proposed that the unpredictability and the probability of an uncertain reward are distinctly encoded as entropy and a distorted probability weight, respectively, in the separate neural systems. However, previous behavioral economic and decision-theoretic models could not quantify reward-seeking and uncertainty aversion in a theoretically consistent manner. In this paper, we have: (i) proposed that generalized Helmholtz free energy in Tsallis’ non-extensive thermostatistics can be utilized to quantify a perceived value of an uncertain reward, and (ii) empirically examined the explanatory powers of the models. Future study directions in neuroeconomics and econophysics by utilizing the Tsallis’ free energy model are discussed.
Uncertain relational reasoning in the parietal cortex.
Ragni, Marco; Franzmeier, Imke; Maier, Simon; Knauff, Markus
2016-04-01
The psychology of reasoning is currently transitioning from the study of deductive inferences under certainty to inferences that have degrees of uncertainty in both their premises and conclusions; however, only a few studies have explored the cortical basis of uncertain reasoning. Using transcranial magnetic stimulation (TMS), we show that areas in the right superior parietal lobe (rSPL) are necessary for solving spatial relational reasoning problems under conditions of uncertainty. Twenty-four participants had to decide whether a single presented order of objects agreed with a given set of indeterminate premises that could be interpreted in more than one way. During the presentation of the order, 10-Hz TMS was applied over the rSPL or a sham control site. Right SPL TMS during the inference phase disrupted performance in uncertain relational reasoning. Moreover, we found differences in the error rates between preferred mental models, alternative models, and inconsistent models. Our results suggest that different mechanisms are involved when people reason spatially and evaluate different kinds of uncertain conclusions. Copyright © 2016 Elsevier Inc. All rights reserved.
Developing and applying metamodels of high resolution ...
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to, (1) developing functional or statistical relationships between a model’s input and output variables for model analysis, interpretation, or information consumption by users’ clients; (2) quantifying a model’s sensitivity to alternative or uncertain forcing functions, initial conditions, or parameters; and (3) characterizing the model’s response or state space. Using five existing models developed by the US Environmental Protection Agency, we generate a metamodeling database of the expected environmental and biological concentrations of 644 organic chemicals released into nine US rivers from wastewater treatment works (WTWs), assuming multiple loading rates and sizes of populations serviced. The chemicals of interest have log n-octanol/water partition coefficients (log KOW) ranging from 3 to 14, and the rivers of concern have mean annual discharges ranging from 1.09 to 3240 m3/s. Log-linear regression models are derived to predict mean annual dissolved and total water concentrations and total sediment concentrations of chemicals of concern based on their log KOW, Henry’s Law constant, and WTW loading rate and on the mean annual discharges of the receiving rivers. Metamodels are also derived to predict mean annual chemical
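Log-linear regression metamodels of this kind amount to ordinary least squares on log-transformed predictors; a minimal sketch on synthetic data (the predictor ranges mirror the abstract, but the coefficients and data are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictor matrix: log KOW, log Henry's constant,
# log WTW loading rate, log mean annual river discharge
X = np.column_stack([
    rng.uniform(3, 14, n),      # log KOW
    rng.uniform(-8, 0, n),      # log H
    rng.uniform(0, 3, n),       # log loading rate
    rng.uniform(0, 3.5, n),     # log discharge
])
true_beta = np.array([0.4, 0.1, 1.0, -1.0])   # fabricated coefficients
y = X @ true_beta + 2.0 + rng.normal(0, 0.1, n)  # log concentration

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Once fitted, such a metamodel predicts a chemical's log concentration from its properties and the receiving river's discharge without re-running the full fate-and-transport model.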
Screening level risk assessment model for chemical fate and effects in the environment.
Arnot, Jon A; Mackay, Don; Webster, Eva; Southwood, Jeanette M
2006-04-01
A screening level risk assessment model is developed and described to assess and prioritize chemicals by estimating environmental fate and transport, bioaccumulation, and exposure to humans and wildlife for a unit emission rate. The most sensitive risk endpoint is identified and a critical emission rate is then calculated as a result of that endpoint being reached. Finally, this estimated critical emission rate is compared with the estimated actual emission rate as a risk assessment factor. This "back-tracking" process avoids the use of highly uncertain emission rate data as model input. The application of the model is demonstrated in detail for three diverse chemicals and in less detail for a group of 70 chemicals drawn from the Canadian Domestic Substances List. The simple Level II and the more complex Level III fate calculations are used to "bin" substances into categories of similar probable risk. The essential role of the model is to synthesize information on chemical and environmental properties within a consistent mass balance framework to yield an overall estimate of screening level risk with respect to the defined endpoint. The approach may be useful to identify and prioritize those chemicals of commerce that are of greatest potential concern and require more comprehensive modeling and monitoring evaluations in actual regional environments and food webs.
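The back-tracking comparison reduces to a simple ratio of estimated actual to critical emission rate; a minimal sketch (the binning thresholds are illustrative assumptions, not the paper's):

```python
def risk_assessment_factor(actual_emission, critical_emission):
    """Ratio of estimated actual to critical emission rate; values >= 1
    indicate the most sensitive risk endpoint may be reached."""
    return actual_emission / critical_emission

def bin_chemical(raf):
    """Illustrative priority bins for screening (hypothetical cutoffs)."""
    if raf >= 1.0:
        return "high"
    if raf >= 0.1:
        return "medium"
    return "low"
```

Because the critical emission rate is derived from chemical and environmental properties alone, highly uncertain emission inventories enter only at this final comparison step.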
a New Model for Fuzzy Personalized Route Planning Using Fuzzy Linguistic Preference Relation
NASA Astrophysics Data System (ADS)
Nadi, S.; Houshyaripour, A. H.
2017-09-01
This paper proposes a new model for personalized route planning under uncertain conditions. Personalized routing involves different sources of uncertainty. These uncertainties can arise from users' ambiguity about their preferences, imprecise criteria values, and the modelling process. The proposed model uses the Fuzzy Linguistic Preference Relation Analytical Hierarchy Process (FLPRAHP) to analyse users' preferences under uncertainty. Routing is a multi-criteria task, especially in transportation networks, where users wish to optimize their routes based on different criteria. However, due to the lack of knowledge about the preferences of different users and the uncertainties in the criteria values, we propose a new personalized fuzzy routing method based on fuzzy ranking using the center of gravity. The model employs the FLPRAHP method to aggregate uncertain criteria values with respect to uncertain user preferences while improving consistency with the least possible number of comparisons. An illustrative example demonstrates the effectiveness and capability of the proposed model to calculate the best personalized route under fuzziness and uncertainty.
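Center-of-gravity ranking of fuzzy values is commonly computed as a discretized centroid defuzzification; a generic sketch (triangular membership functions are an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def centroid(xs, mu):
    """Center of gravity of a fuzzy set sampled at points xs with
    membership degrees mu: sum(x * mu) / sum(mu)."""
    xs, mu = np.asarray(xs, float), np.asarray(mu, float)
    return float((xs * mu).sum() / mu.sum())

def triangular(xs, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    xs = np.asarray(xs, float)
    left = np.clip((xs - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - xs) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)
```

Candidate routes can then be ranked by the centroids of their aggregated fuzzy scores: the larger the center of gravity, the better the route.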
Accounting for indirect land-use change in the life cycle assessment of biofuel supply chains
Sanchez, Susan Tarka; Woods, Jeremy; Akhurst, Mark; Brander, Matthew; O'Hare, Michael; Dawson, Terence P.; Edwards, Robert; Liska, Adam J.; Malpas, Rick
2012-01-01
The expansion of land used for crop production causes variable direct and indirect greenhouse gas emissions, and other economic, social and environmental effects. We analyse the use of life cycle analysis (LCA) for estimating the carbon intensity of biofuel production from indirect land-use change (ILUC). Two approaches are critiqued: direct, attributional life cycle analysis and consequential life cycle analysis (CLCA). A proposed hybrid ‘combined model’ of the two approaches for ILUC analysis relies on first defining the system boundary of the resulting full LCA. Choices are then made as to the modelling methodology (economic equilibrium or cause–effect), data inputs, land area analysis, carbon stock accounting and uncertainty analysis to be included. We conclude that CLCA is applicable for estimating the historic emissions from ILUC, although improvements to the hybrid approach proposed, coupled with regular updating, are required, and uncertainty values must be adequately represented; however, the scope and the depth of the expansion of the system boundaries required for CLCA remain controversial. In addition, robust prediction, monitoring and accounting frameworks for the dynamic and highly uncertain nature of future crop yields and the effectiveness of policies to reduce deforestation and encourage afforestation remain elusive. Finally, establishing compatible and comparable accounting frameworks for ILUC between the USA, the European Union, South East Asia, Africa, Brazil and other major biofuel trading blocs is urgently needed if substantial distortions between these markets, which would reduce its application in policy outcomes, are to be avoided. PMID:22467143
Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment
NASA Astrophysics Data System (ADS)
Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.
2015-01-01
In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal here is to provide initial estimates of major nitrogen inputs of NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed in areas of limited data availability. Despite data limitations, we believe that it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.
A robust momentum management and attitude control system for the space station
NASA Technical Reports Server (NTRS)
Speyer, J. L.; Rhee, Ihnseok
1991-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Robust momentum management and attitude control system for the Space Station
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1992-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Statistical Approaches for Spatiotemporal Prediction of Low Flows
NASA Astrophysics Data System (ADS)
Fangmann, A.; Haberlandt, U.
2017-12-01
An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms the successive prediction in time and space of method one.
Method three also shows a high performance in the near future period, but since it relies on a stationary distribution, its application for prediction of far future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments resulting in unrealistic future low flow values. All in all, the results promote an inclusion of simple statistical methods in climate change impact assessment.
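The L-moment estimation underlying methods three and four can be sketched with the standard probability-weighted-moment formulas (generic textbook estimators, not the paper's regionalization scheme):

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0 (the mean), l2 = 2*b1 - b0."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    b0 = x.mean()
    b1 = (((np.arange(1, n + 1) - 1) / (n - 1)) * x).sum() / n
    return b0, 2.0 * b1 - b0
```

For a uniform(0, 1) sample, l1 converges to 1/2 and l2 to 1/6; ratios such as l2/l1 are then used to fit regional frequency distributions.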
NASA Astrophysics Data System (ADS)
Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo
2016-04-01
Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. 
Improvements in model performance when incorporating these processes are quantified using observed flood extent data and measured water levels from a 2007 summer flood event on the river Severn. The area of interest is a 7 km reach in which the river passes through the city of Worcester, a low water slope, subcritical reach in which backwater effects are significant. For this domain, the catchment area between flow gauging stations extends over 540 km2. Four hydrological models from the FUSE framework (Framework for Understanding Structural Errors) were set up to simulate the rainfall-runoff process over this area. At this regional scale, a 2-dimensional hydraulic model that solves the local inertial approximation of the shallow water equations was applied to route the flow, whereas the full form of these equations was solved at the local scale to predict the urban flow field. This nested approach hence allows an examination of water fluxes from the catchment to the building scale, while requiring short setup and computational times. An accurate prediction of the magnitude and timing of the flood peak was obtained with the proposed method, in spite of the unusual structure of the rain episode and the complexity of the River Severn system. The findings highlight the importance of estimating boundary condition uncertainty and local rainfall contribution for accurate prediction of river flows and inundation.
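The non-parametric local weighted regression used for the stage-discharge (rating curve) fit can be sketched as locally weighted linear regression with a tricube kernel (a generic LOESS-style estimator; the parameter names and bandwidth rule are illustrative):

```python
import numpy as np

def local_weighted_fit(x, y, x0, frac=0.5):
    """Locally weighted linear regression (tricube weights) evaluated at x0,
    a simple stand-in for a non-parametric rating-curve fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))
    d = np.abs(x - x0)
    h = max(np.sort(d)[k - 1], 1e-12)             # bandwidth: k-th nearest distance
    w = np.clip(1.0 - (d / h) ** 3, 0.0, None) ** 3   # tricube kernel
    A = np.column_stack([np.ones(n), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0]                                 # fitted value at x0
```

Refitting on bootstrap resamples of the stage-discharge pairs would then yield the kind of discharge uncertainty bounds the study propagates to the hydraulic model.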
Astronomical pacing of the global silica cycle recorded in Mesozoic bedded cherts.
Ikeda, Masayuki; Tada, Ryuji; Ozaki, Kazumi
2017-06-07
The global silica cycle is an important component of the long-term climate system, yet its controlling factors are largely uncertain due to poorly constrained proxy records. Here we present a ∼70 Myr-long record of early Mesozoic biogenic silica (BSi) flux from radiolarian chert in Japan. Average low-mid-latitude BSi burial flux in the superocean Panthalassa is ∼90% of that of the modern global ocean and relative amplitude varied by ∼20-50% over the 100 kyr to 30 Myr orbital cycles during the early Mesozoic. We hypothesize that BSi in chert was a major sink for oceanic dissolved silica (DSi), with fluctuations proportional to DSi input from chemical weathering on timescales longer than the residence time of DSi (<∼100 Kyr). Chemical weathering rates estimated by the GEOCARBSULFvolc model support these hypotheses, excluding the volcanism-driven oceanic anoxic events of the Early-Middle Triassic and Toarcian that exceed model limits. We propose that the Mega monsoon of the supercontinent Pangea nonlinearly amplified the orbitally paced chemical weathering that drove BSi burial during the early Mesozoic greenhouse world.
Astronomical pacing of the global silica cycle recorded in Mesozoic bedded cherts
NASA Astrophysics Data System (ADS)
Ikeda, Masayuki; Tada, Ryuji; Ozaki, Kazumi
2017-06-01
The global silica cycle is an important component of the long-term climate system, yet its controlling factors are largely uncertain due to poorly constrained proxy records. Here we present a ~70 Myr-long record of early Mesozoic biogenic silica (BSi) flux from radiolarian chert in Japan. Average low-mid-latitude BSi burial flux in the superocean Panthalassa is ~90% of that of the modern global ocean and relative amplitude varied by ~20-50% over the 100 kyr to 30 Myr orbital cycles during the early Mesozoic. We hypothesize that BSi in chert was a major sink for oceanic dissolved silica (DSi), with fluctuations proportional to DSi input from chemical weathering on timescales longer than the residence time of DSi (<~100 Kyr). Chemical weathering rates estimated by the GEOCARBSULFvolc model support these hypotheses, excluding the volcanism-driven oceanic anoxic events of the Early-Middle Triassic and Toarcian that exceed model limits. We propose that the Mega monsoon of the supercontinent Pangea nonlinearly amplified the orbitally paced chemical weathering that drove BSi burial during the early Mesozoic greenhouse world.
Hartman, J.S.; Weisberg, P.J.; Pillai, R.; Ericksen, J.A.; Kuiken, T.; Lindberg, S.E.; Zhang, H.; Rytuba, J.J.; Gustin, M.S.
2009-01-01
Ecosystems that have low mercury (Hg) concentrations (i.e., not enriched or impacted by geologic or anthropogenic processes) cover most of the terrestrial surface area of the earth yet their role as a net source or sink for atmospheric Hg is uncertain. Here we use empirical data to develop a rule-based model implemented within a geographic information system framework to estimate the spatial and temporal patterns of Hg flux for semiarid deserts, grasslands, and deciduous forests representing 45% of the continental United States. This exercise provides an indication of whether these ecosystems are a net source or sink for atmospheric Hg as well as a basis for recommendation of data to collect in future field sampling campaigns. Results indicated that soil alone was a small net source of atmospheric Hg and that emitted Hg could be accounted for based on Hg input by wet deposition. When foliar assimilation and wet deposition are added to the area estimate of soil Hg flux these biomes are a sink for atmospheric Hg. © 2009 American Chemical Society.
On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza
2015-12-01
In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
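The first-order Sobol' indices mentioned above can also be estimated with a generic pick-and-freeze Monte Carlo scheme (the paper uses sparse polynomial chaos expansions instead; this sketch only illustrates what the indices measure, on a toy linear model):

```python
import numpy as np

def first_order_sobol(model, d, n=100_000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model with d independent standard-normal inputs."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    yA = model(A)
    mean, var = yA.mean(), yA.var()
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]          # the two samples share only input i
        yC = model(C)
        S[i] = (np.mean(yA * yC) - mean ** 2) / var
    return S
```

For Y = 2*X1 + X2 with independent standard normals, the variance splits as 4 + 1, so the indices converge to 0.8 and 0.2; inputs with near-zero indices are candidates for relaxed quality control.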
Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.
2013-12-01
Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models have been developed to simulate the vapor intrusion process, detailed validation of these models against well-controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and with soil gas flux and indoor air concentration measurements. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
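The two-stage idea — screen all samples with a cheap surrogate, then re-run only those near the failure boundary with the expensive model — can be sketched in a few lines. The "expensive" model, the surrogate, the failure threshold, and the boundary band below are all toy stand-ins, not the paper's transport model:

```python
import random

# Sketch of a two-stage Monte Carlo failure-probability estimate:
# stage 1 screens with a cheap surrogate; stage 2 re-evaluates only the
# samples whose surrogate value is ambiguous (near the failure boundary).

random.seed(0)

def expensive_model(x):          # stand-in for the CPU-demanding simulator
    return x ** 3 - 0.5 * x

def surrogate(x):                # imperfect, cheap approximation
    return x ** 3 - 0.5 * x + 0.05 * x * x

THRESHOLD = 1.0                  # failure event: output > THRESHOLD
BAND = 0.2                       # "close to the boundary" band for stage 2

samples = [random.uniform(-2.0, 2.0) for _ in range(20000)]
failures = 0
reevaluated = 0
for x in samples:
    g = surrogate(x)
    if abs(g - THRESHOLD) <= BAND:   # ambiguous: ask the true model
        g = expensive_model(x)
        reevaluated += 1
    failures += g > THRESHOLD
p_fail = failures / len(samples)
```

Only a small fraction of the samples trigger the expensive model, which is the source of the speedup the abstract reports.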
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
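In its simplest form (uncorrelated, symmetrically distributed inputs with equal weights), the two-point estimate method evaluates the model at the 2^n combinations of mean ± one standard deviation, which also explains the "fewer than eight variables" efficiency limit. The head function and input moments below are illustrative placeholders, not a groundwater model:

```python
from itertools import product

# Sketch of Rosenblueth's two-point estimate for uncorrelated inputs:
# evaluate the model at every combination mu_i +/- sigma_i and take
# equally weighted moments of the outputs.

def two_point_estimate(model, means, sigmas):
    n = len(means)
    outs = []
    for signs in product((-1.0, 1.0), repeat=n):     # 2**n evaluation points
        point = [m + sg * s for m, s, sg in zip(means, sigmas, signs)]
        outs.append(model(point))
    mean = sum(outs) / len(outs)
    var = sum((o - mean) ** 2 for o in outs) / len(outs)
    return mean, var

# Hypothetical linear head response to storage coefficient and conductivity:
head = lambda p: 10.0 + 2.0 * p[0] - 0.5 * p[1]
m, v = two_point_estimate(head, means=[1.0, 4.0], sigmas=[0.1, 0.8])
```

For this linear toy model the method is exact: it returns the analytical mean 10.0 and variance (2·0.1)² + (0.5·0.8)² = 0.2, matching the abstract's observation that the method is valid for linear problems.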
Synchronization between uncertain nonidentical networks with quantum chaotic behavior
NASA Astrophysics Data System (ADS)
Li, Wenlin; Li, Chong; Song, Heshan
2016-11-01
Synchronization between uncertain nonidentical networks with quantum chaotic behavior is investigated. The identification laws for the unknown parameters in the state equations of the network nodes, and the adaptive laws for the configuration matrix elements and outer coupling strengths, are determined based on the Lyapunov theorem. The conditions for realizing synchronization between uncertain nonidentical networks are discussed and obtained. Further, the Jaynes-Cummings model from physics is taken as the node model of the two networks, and simulation results show that the synchronization performance between the networks is very stable.
Robust interval-based regulation for anaerobic digestion processes.
Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C
2005-01-01
A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known, highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. This approach is experimentally applied and validated on a pilot-scale (1 m3) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater, where the objective is the regulation of the chemical oxygen demand (COD) using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performance, driving the output COD towards its set-point and keeping it inside a pre-specified interval.
M, Malahias; H, Gardner; S, Hindocha; A, Juma; Khan, W
2012-01-01
Rheumatoid arthritis is a systemic autoimmune disease of uncertain aetiology, which is characterized primarily by synovial inflammation with secondary skeletal destruction. Rheumatoid arthritis is diagnosed by the presence of four of the seven diagnostic criteria defined by the American College of Rheumatology. Approximately half a million adults in the United Kingdom suffer from rheumatoid arthritis, with onset most prevalent between the second and fourth decades of life; annually, approximately 20,000 new cases are diagnosed. The management of rheumatoid arthritis is complex; in the initial phase of the disease it primarily depends on pharmacological management. With disease progression, surgical input to correct deformity comes to play an increasingly important role. The treatment of this condition is also intimately coupled with input from both occupational therapists and physiotherapists. PMID:22423304
NASA Astrophysics Data System (ADS)
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with the dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online updated parameters for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
NASA Astrophysics Data System (ADS)
Li, Keqiang; Gao, Feng; Li, Shengbo Eben; Zheng, Yang; Gao, Hongbo
2017-12-01
This study presents a distributed H-infinity control method for uncertain platoons with dimensionally and structurally unknown interaction topologies, provided that the associated topological eigenvalues are bounded by a predesigned range. With an inverse model to compensate for nonlinear powertrain dynamics, vehicles in a platoon are modeled by third-order uncertain systems with bounded disturbances. On the basis of the eigenvalue decomposition of topological matrices, we convert the platoon system into a norm-bounded uncertain part and a diagonally structured certain part by applying a linear transformation. We then use a common Lyapunov method to design a distributed H-infinity controller. Numerically, two linear matrix inequalities corresponding to the minimum and maximum eigenvalues should be solved. The resulting controller can tolerate interaction topologies with eigenvalues located in a certain range. The proposed method can also ensure robustness performance and disturbance attenuation ability for the closed-loop platoon system. Hardware-in-the-loop tests are performed to validate the effectiveness of our method.
A probabilistic approach to emissions from transportation sector in the coming decades
NASA Astrophysics Data System (ADS)
Yan, F.; Winijkul, E.; Bond, T. C.; Streets, D. G.
2010-12-01
Future emission estimates are necessary for understanding climate change, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain, and future projections even more so. Most current emission projection models are deterministic; in other words, there is only a single answer for each scenario. As a result, uncertainties have not been included in the estimation of climate forcing or other environmental effects, but it is important to quantify the uncertainty inherent in emission projections. We explore uncertainties of emission projections from the transportation sector in the coming decades by sensitivity analysis and Monte Carlo simulations. These projections are based on a technology-driven model, the Speciated Pollutants Emission Wizard (SPEW)-Trend, which responds to socioeconomic conditions in different economic and mitigation scenarios. The model contains detail about technology stock, including consumption growth rates, retirement rates, timing of emission standards, deterioration rates, and transition rates from normal vehicles to vehicles with extremely high emission factors (termed “superemitters”). However, understanding of these parameters, as well as their relationships with socioeconomic conditions, is uncertain. We project emissions from the transportation sector under four different IPCC scenarios (A1B, A2, B1, and B2). Due to the later implementation of advanced emission standards, Africa has the highest annual growth rate (1.2-3.1%) from 2010 to 2050. Superemitters begin producing more than 50% of global emissions around the year 2020. We estimate uncertainties from the relationships between technological change and socioeconomic conditions and examine their impact on future emissions. Sensitivities to parameters governing retirement rates are highest, causing changes in global emissions from -26% to +55% on average from 2010 to 2050.
We perform Monte Carlo simulations to examine how these uncertainties affect total emissions when each uncertain input parameter is replaced by a probability distribution of values and all parameters are varied simultaneously; the resulting 95% confidence interval of the global annual emission growth rate is -1.9% to +0.2% per year.
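The Monte Carlo step described here — every uncertain input drawn from a distribution, all varied at once, and the spread of the resulting growth rate summarized by a 95% interval — can be sketched as follows. The response function and all distributions are illustrative placeholders, not the SPEW-Trend model:

```python
import random

# Sketch of Monte Carlo propagation of input uncertainty to an annual
# emission growth rate, summarized by a 95% confidence interval.

random.seed(1)

def annual_growth(retirement_rate, standards_delay, superemitter_frac):
    # toy response surface standing in for the emission projection model
    return (-0.01 + 0.5 * (0.05 - retirement_rate)
            + 0.002 * standards_delay + 0.03 * superemitter_frac)

draws = []
for _ in range(50000):
    r = random.gauss(0.05, 0.01)       # uncertain retirement rate
    d = random.uniform(0.0, 5.0)       # years of emission-standard delay
    s = random.betavariate(2.0, 8.0)   # super-emitter fraction of the fleet
    draws.append(annual_growth(r, d, s))
draws.sort()
ci_low = draws[int(0.025 * len(draws))]    # 2.5th percentile
ci_high = draws[int(0.975 * len(draws))]   # 97.5th percentile
```

Sorting the draws and reading off the 2.5th and 97.5th percentiles is the standard empirical way to form the kind of interval the abstract reports.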
Emission Data For Climate-Chemistry Interactions
NASA Astrophysics Data System (ADS)
Smith, S. J.
2012-12-01
Data on anthropogenic and natural emissions of reactive species are a critical input for studies of atmospheric chemistry and climate. The availability and characteristics of anthropogenic emissions data that can be used for such studies are reviewed, and pathways for future work are discussed. Global and regional datasets for historical and future emissions are available, but their characteristics and applicability for specific studies differ. For the first time, a coordinated set of historical emissions (Lamarque et al. 2010) and future projections (van Vuuren et al. 2011) has been developed for use in the CMIP5 and ACCMIP long-term simulation comparison projects. These data have decadal resolution and were designed for long-term, global simulations. These data, however, lack the finer-scale spatial and temporal detail that might be needed for some studies. Robust and timely updates of emissions data are generally lacking, although recent updates will be presented. While historical emission data are often treated as known, emissions are uncertain, even though this uncertainty is rarely quantified. Uncertainty varies by species and location. Inverse modeling is starting to indicate where emission data may be uncertain, which opens the way to improving these data overall. Further interaction between the chemistry modeling and inventory development communities is needed. Future projections are intrinsically uncertain, and while institutions and processes are in place to develop and review long-term century-scale scenarios, a need has remained for a wider range of shorter-term (e.g., several-decade) projections. Emissions and scenario development communities have been working to fill this need. Communication across disciplines of the assumptions embedded in emissions projections remains a challenge. Atmospheric chemistry models are a central tool needed for studying chemistry-climate interactions.
Simpler models, however, are also needed in order to examine interactions between different physical systems and also between the physical and human systems. Statistical models of system responses are particularly needed both to parameterize interactions in models that cannot simulate particular processes directly, and also to represent uncertainty. Coordinated model experiments are necessary to provide the information needed to develop these representations (e.g., Wild et al. 2012). Lamarque, J.-F., et al. (2010) Historical (1850-2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols: methodology and application. Atmospheric Chemistry and Physics 10, pp. 7017-7039. doi:10.5194/acp-10-7017-2010. Van Vuuren, D., J. A. Edmonds, M. Kainuma, K. Riahi, A. M. Thomson, K. A. Hibbard, G. Hurtt, T. Kram, V. Krey, J.-F. Lamarque, T. Masui, M. Meinshausen, N. Nakicenovic, S. J. Smith, and S. K. Rose (2011) "The Representative Concentration Pathways: An Overview." Climatic Change 109 (1-2), 5-31. doi:10.1007/s10584-011-0148-z. Wild, O., et al. (2012) Modelling future changes in surface ozone: A parameterized approach. Atmos. Chem. Phys., 12, 2037-2054. doi:10.5194/acp-12-2037-2012.
NASA Technical Reports Server (NTRS)
Boothroyd, Arnold I.; Sackmann, I.-Juliana
2001-01-01
Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10(exp 4) and the density with an accuracy of a few parts in 10(exp 3). This has become a serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in the standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10(exp 3). The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10(exp 3); the approximately 5% uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10(exp 4); we found that this leads to uncertainties of a few parts in 10(exp 3) in the sound speed in the solar convective envelope, but has negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius, and was significantly affected only by Z/X, the pp rate, and the uncertainties in helium diffusion constants.
Our reference model also yielded an envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only extreme Z/X values yielded an envelope helium abundance outside this range. We found that other current uncertainties, namely, in the solar age and luminosity, in nuclear rates other than the pp reaction, in the low-temperature molecular opacities, and in the low-density equation of state, have no significant effect on the quantities that can be inferred from helioseismic observations. The predicted pre-main-sequence lithium depletion is uncertain by a factor of 2. The predicted neutrino capture rate is uncertain by approximately 30% for the Cl-37 experiment and by approximately 3% for the Ga-71 experiments, while the B-8 neutrino flux is uncertain by approximately 30%.
Interpreting null results from measurements with uncertain correlations: an info-gap approach.
Ben-Haim, Yakov
2011-01-01
Null events—not detecting a pernicious agent—are the basis for declaring the agent is absent. Repeated nulls strengthen confidence in the declaration. However, correlations between observations are difficult to assess in many situations and introduce uncertainty in interpreting repeated nulls. We quantify uncertain correlations using an info-gap model, which is an unbounded family of nested sets of possible probabilities. An info-gap model is nonprobabilistic and entails no assumption about a worst case. We then evaluate the robustness, to uncertain correlations, of estimates of the probability of a null event. This is then the basis for evaluating a nonprobabilistic robustness-based confidence interval for the probability of a null. © 2010 Society for Risk Analysis.
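The info-gap robustness calculation described above can be made concrete with a deliberately simple stand-in for the correlation structure. The Markov-style dependence model below (P of n misses = pm·(c + (1−c)·pm)^(n−1)) and all numbers are assumptions for illustration, not the paper's model:

```python
# Sketch of info-gap robustness for repeated null results: how much can the
# (unknown) correlation c deviate from its nominal value before the
# probability of n consecutive misses exceeds a critical level?

def p_all_null(pm, c, n):
    # assumed Markov-dependence model for n correlated misses
    return pm * (c + (1.0 - c) * pm) ** (n - 1)

def robustness(pm, c_nom, n, p_crit, step=1e-4):
    """Largest horizon h such that every correlation within h of c_nom
    still keeps the probability of n nulls at or below p_crit."""
    h = 0.0
    while c_nom + h + step <= 1.0:
        # worst case inside the info-gap interval is the largest correlation
        if p_all_null(pm, c_nom + h + step, n) > p_crit:
            break
        h += step
    return h

h_hat = robustness(pm=0.1, c_nom=0.0, n=5, p_crit=1e-3)
```

Note the info-gap spirit: no probability distribution is placed on c itself; only nested intervals of horizon h are considered, and the robustness is the largest horizon whose worst case still meets the requirement.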
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
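The sampling scheme described in this record — one draw per equal-probability interval per variable, then random pairing across variables — can be sketched in a few lines (this is a generic LHS sketch on the unit hypercube, not the library's implementation):

```python
import random

# Minimal Latin Hypercube sketch: each variable's [0,1) range is split into
# n equal-probability bins, one uniform value is drawn inside each bin, and
# the per-variable columns are shuffled to pair values randomly.

random.seed(2)

def lhs(n_samples, n_vars):
    """Return n_samples points in [0,1)^n_vars, exactly one per stratum
    per variable."""
    cols = []
    for _ in range(n_vars):
        col = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(col)          # random pairing across variables
        cols.append(col)
    return list(zip(*cols))

pts = lhs(10, 2)
```

To sample a non-uniform input distribution, each unit-interval coordinate would then be pushed through that distribution's inverse CDF, which preserves the equal-probability stratification.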
Agriculture-driven deforestation in the tropics from 1990-2015: emissions, trends and uncertainties
NASA Astrophysics Data System (ADS)
Carter, Sarah; Herold, Martin; Avitabile, Valerio; de Bruin, Sytze; De Sy, Veronique; Kooistra, Lammert; Rufino, Mariana C.
2018-01-01
Limited data exist on emissions from agriculture-driven deforestation, and the available data are typically uncertain. In this paper, we provide comparable estimates of emissions from both all deforestation and agriculture-driven deforestation, with uncertainties, for 91 countries across the tropics between 1990 and 2015. Uncertainties associated with the input datasets (activity data and emission factors) were used to combine the datasets, so that the most certain datasets contribute the most. This method utilizes all the input data while minimizing the uncertainty of the emissions estimate. The uncertainty of the input datasets was influenced by the quality of the data, the sample size (for sample-based datasets), and the extent to which the timeframe of the data matches the period of interest. The area of deforestation and the agriculture-driver factor (the extent to which agriculture drives deforestation) were the most uncertain components of the emissions estimates, so improvements in the uncertainties of these estimates will provide the greatest reductions in the uncertainties of emissions estimates. Over the period of the study, Latin America had the highest proportion of deforestation driven by agriculture (78%), and Africa had the lowest (62%). Latin America had the highest emissions from agriculture-driven deforestation, and these peaked at 974 ± 148 Mt CO2 yr-1 in 2000-2005. Africa saw a continuous increase in emissions between 1990 and 2015 (from 154 ± 21 to 412 ± 75 Mt CO2 yr-1), so mitigation initiatives could be prioritized there. Uncertainties for emissions from agriculture-driven deforestation are ± 62.4% (average over 1990-2015); uncertainties were highest in Asia and lowest in Latin America. Uncertainty information is crucial for transparency when reporting, and gives credibility to related mitigation initiatives.
We demonstrate that uncertainty data can also be useful when combining multiple open datasets, so we recommend new data providers to include this information.
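Combining overlapping datasets so that "the most certain datasets contribute the most" is, in its simplest form, an inverse-variance weighted mean; a minimal sketch follows. The two estimates and their uncertainties are hypothetical numbers, not the paper's data:

```python
# Sketch of inverse-variance weighting: each dataset is weighted by
# 1/sigma^2, so more certain datasets dominate the combined estimate,
# and the combined uncertainty is never larger than the best input's.

def combine(estimates):
    """estimates: list of (value, sigma) pairs, sigma a one-sigma standard
    error. Returns (combined value, combined sigma)."""
    weights = [1.0 / (s * s) for _, s in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, (1.0 / total) ** 0.5

# two hypothetical emissions estimates for the same stratum (Mt CO2 / yr)
val, sig = combine([(950.0, 150.0), (1000.0, 300.0)])
```

Here the tighter estimate (±150) receives four times the weight of the looser one (±300), pulling the combined value toward it while shrinking the combined uncertainty below either input's.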
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This selection is therefore critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
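The third strategy — treating the prediction-error variance itself as an uncertain parameter to be updated — can be illustrated on a deliberately tiny problem. The sketch below uses plain Metropolis sampling over (theta, log sigma²) with synthetic data and weak priors; it is a conceptual stand-in, not the paper's Transitional MCMC or the shear-building example:

```python
import math
import random

# Sketch: Bayesian updating where the prediction-error variance sigma^2 is
# sampled jointly with the model parameter theta, instead of being fixed.

random.seed(3)
data = [2.1, 1.9, 2.2, 2.0, 1.8]          # synthetic measurements of theta

def log_post(theta, log_s2):
    s2 = math.exp(log_s2)
    # Gaussian likelihood with unknown variance + weak Gaussian priors
    ll = sum(-0.5 * math.log(2 * math.pi * s2) - (d - theta) ** 2 / (2 * s2)
             for d in data)
    return ll - theta ** 2 / 200.0 - log_s2 ** 2 / 200.0

theta, ls2 = 0.0, 0.0
lp = log_post(theta, ls2)
chain = []
for _ in range(20000):
    t_new = theta + random.gauss(0.0, 0.2)     # symmetric random-walk proposal
    l_new = ls2 + random.gauss(0.0, 0.4)
    lp_new = log_post(t_new, l_new)
    if math.log(random.random()) < lp_new - lp:
        theta, ls2, lp = t_new, l_new, lp_new
    chain.append((theta, ls2))

post_theta = sum(t for t, _ in chain[5000:]) / len(chain[5000:])
```

Discarding the first quarter of the chain as burn-in, the posterior mean of theta settles near the data mean of 2.0, while the sampled log sigma² carries the uncertainty that strategy 3 refuses to fix in advance.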
Models and mechanisms of anxiety: evidence from startle studies
Grillon, Christian
2009-01-01
Rationale Preclinical data indicates that threat stimuli elicit two classes of defensive behaviors, those that are associated with imminent danger and are characterized by avoidance or fight (fear), and those that are associated with temporally uncertain danger and are characterized by sustained apprehension and hypervigilance (anxiety). Objective To 1) review evidence for a distinction between fear and anxiety in animal and human experimental models using the startle reflex as an operational measure of aversive states, 2) describe experimental models of anxiety, as opposed to fear, in humans, 3) examine the relevance of these models to clinical anxiety. Results The distinction between phasic fear to imminent threat and sustained anxiety to temporally uncertain danger is suggested by psychopharmacological and behavioral evidence from ethological studies and can be traced back to distinct neuroanatomical systems, the amygdala and the bed nucleus of the stria terminalis. Experimental models of anxiety, not fear, are relevant to non-phobic anxiety disorders. Conclusions Progress in our understanding of normal and abnormal anxiety is critically dependent on our ability to model sustained aversive states to temporally uncertain threat. PMID:18058089
Robust autoassociative memory with coupled networks of Kuramoto-type oscillators
NASA Astrophysics Data System (ADS)
Heger, Daniel; Krischer, Katharina
2016-08-01
Uncertain recognition success, unfavorable scaling of connection complexity, or dependence on complex external input impair the usefulness of current oscillatory neural networks for pattern recognition or restrict technical realizations to small networks. We propose a network architecture of coupled oscillators for pattern recognition which shows none of the mentioned flaws. Furthermore we illustrate the recognition process with simulation results and analyze the dynamics analytically: Possible output patterns are isolated attractors of the system. Additionally, simple criteria for recognition success are derived from a lower bound on the basins of attraction.
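The generic construction behind oscillator associative memories of this kind — binary patterns stored in Hebbian couplings, Kuramoto-type phase dynamics pulling a noisy input toward a stored pattern — can be sketched as follows. This is the textbook Kuramoto-Hebbian scheme, not the specific architecture proposed in the paper:

```python
import math
import random

# Sketch of an oscillator associative memory: phases 0 / pi encode bits
# +1 / -1, couplings J_ij store patterns Hebbian-style, and the dynamics
# d(theta_i)/dt = sum_j J_ij * sin(theta_j - theta_i) relax a noisy input
# toward a stored pattern (an isolated attractor of the system).

random.seed(4)
patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
n = len(patterns[0])
J = [[sum(p[i] * p[j] for p in patterns) / n if i != j else 0.0
      for j in range(n)] for i in range(n)]

def recall(bits, steps=2000, dt=0.05):
    theta = [0.0 if b > 0 else math.pi for b in bits]
    theta = [t + random.gauss(0.0, 0.3) for t in theta]   # noisy input
    for _ in range(steps):
        dtheta = [sum(J[i][j] * math.sin(theta[j] - theta[i])
                      for j in range(n)) for i in range(n)]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    # read bits out relative to oscillator 0 (global phase is irrelevant)
    return [1 if math.cos(t - theta[0]) > 0 else -1 for t in theta]

out = recall(patterns[0])
```

Reading the output relative to one reference oscillator removes the global phase freedom, so recognition success means the recovered bit string matches the stored pattern (up to that gauge choice).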
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO 2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO 2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO 2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO 2/brine saturation are connected to the fault-leakage model as a boundary condition. CO 2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO 2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO 2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
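The propagation pattern described above — sampled uncertain inputs pushed through chained reservoir, fault, and aquifer sub-models, with the risk profile read off as a failure fraction — can be sketched with toy functions. All three functional forms, the distributions, and the pH criterion below are illustrative stand-ins, not the paper's physics models:

```python
import random

# Sketch of sampling-based uncertainty propagation through chained
# sub-models: each draw runs reservoir -> fault -> aquifer, and the risk is
# the fraction of draws whose aquifer pH falls below an MCL-style limit.

random.seed(5)

def reservoir_pressure(k_res, inj_rate):
    # toy pressure build-up at the caprock (MPa)
    return 10.0 + 0.6 * inj_rate / k_res

def fault_leak_flux(pressure, k_fault):
    # toy Darcy-like leakage through the fault (kg/s); zero below threshold
    return max(0.0, pressure - 10.5) * k_fault * 50.0

def aquifer_ph(flux):
    # toy geochemical response of the shallow aquifer
    return 7.0 - 2.0 * flux / (1.0 + flux)

PH_LIMIT = 6.5          # stand-in for an MCL-style drinking-water criterion
inj_rate = 1.0          # decision variable
n = 20000
hits = 0
for _ in range(n):
    k_res = random.lognormvariate(0.0, 0.3)     # uncertain reservoir permeability
    k_fault = random.lognormvariate(-2.0, 1.0)  # uncertain fault permeability
    p = reservoir_pressure(k_res, inj_rate)
    flux = fault_leak_flux(p, k_fault)
    hits += aquifer_ph(flux) < PH_LIMIT
risk = hits / n
```

Repeating the loop over a grid of injection rates would trace out the risk profile as a function of the decision variable, which is the shape of result the abstract's reduced-order models summarize.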
NASA Astrophysics Data System (ADS)
Akram, Muhammad Farooq Bin
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that when knowledge about the problem is lacking, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research, it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantifying epistemic uncertainty in the technology valuation process.
The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for quantification of epistemic uncertainty on a large-scale problem, a combined-cycle power generation system, was selected. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher-order technology interactions and improved the predicted system performance.
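The interval bookkeeping behind evidence theory can be illustrated with a minimal sketch (not from the thesis; the hypotheses and mass values are hypothetical): Dempster's rule combines two experts' basic probability assignments, and belief/plausibility then bound the epistemic uncertainty about a hypothesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    # Normalize by the non-conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief_plausibility(m, hypothesis):
    """Belief = mass of subsets of the hypothesis; plausibility = mass of sets intersecting it."""
    bel = sum(v for s, v in m.items() if s <= hypothesis)
    pl = sum(v for s, v in m.items() if s & hypothesis)
    return bel, pl
```

Here the gap between belief and plausibility is exactly the interval of epistemic uncertainty that a single probability value would hide.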
NASA Astrophysics Data System (ADS)
Simpson, M. J.; Pisani, O.; Lin, L.; Lun, O.; Simpson, A.; Lajtha, K.; Nadelhoffer, K. J.
2015-12-01
The long-term fate of soil carbon reserves with global environmental change remains uncertain. Shifts in moisture, altered nutrient cycles, species composition, or rising temperatures may alter the proportions of above- and belowground biomass entering soil. However, it is unclear how long-term changes in plant inputs may alter the composition of soil organic matter (SOM) and soil carbon storage. Advanced molecular techniques were used to assess SOM composition in mineral soil horizons (0-10 cm) after 20 years of Detrital Input and Removal Treatment (DIRT) at the Harvard Forest. SOM biomarkers (solvent extraction, base hydrolysis and cupric(II) oxide oxidation) and both solid-state and solution-state nuclear magnetic resonance (NMR) spectroscopy were used to identify changes in SOM composition and stage of degradation. Microbial activity and community composition were assessed using phospholipid fatty acid (PLFA) analysis. Doubling aboveground litter inputs decreased soil carbon content, increased the degradation of labile SOM and enhanced the sequestration of aliphatic compounds in soil. The exclusion of belowground inputs (No roots and No inputs) resulted in a decrease in root-derived components and enhanced the degradation of leaf-derived aliphatic structures (cutin). Cutin-derived SOM has been hypothesized to be recalcitrant, but our results show that even this complex biopolymer is susceptible to degradation when inputs entering soil are altered. The PLFA data indicate that changes in soil microbial community structure favored the accelerated processing of specific SOM components with litter manipulation. These results collectively reveal that the quantity and quality of plant litter inputs alter the molecular-level composition of SOM and, in some cases, enhance the degradation of recalcitrant SOM. Our study also suggests that increased litterfall is unlikely to enhance soil carbon storage over the long term in temperate forests.
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Carson, John M., III
2006-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part, and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives lying in polytopes. An illustrative numerical example is also provided.
Resource Limitation Issues In Real-Time Intelligent Systems
NASA Astrophysics Data System (ADS)
Green, Peter E.
1986-03-01
This paper examines resource limitation problems that can occur in embedded AI systems which have to run in real-time. It does this by examining two case studies. The first is a system which acoustically tracks low-flying aircraft and has the problem of interpreting a high volume of often ambiguous input data to produce a model of the system's external world. The second is a robotics problem in which the controller for a robot arm has to dynamically plan the order in which to pick up pieces from a conveyer belt and sort them into bins. In this case the system starts with a continuously changing model of its environment and has to select which action to perform next. This latter case emphasizes the issues in designing a system which must operate in an uncertain and rapidly changing environment. The first system uses a distributed HEARSAY methodology running on multiple processors. It is shown, in this case, how the combinatorial growth of possible interpretations of the input data can require large and unpredictable amounts of computer resources for data interpretation. Techniques are presented which achieve real-time operation by limiting the combinatorial growth of alternate hypotheses and processing those hypotheses that are most likely to lead to a meaningful interpretation of the input data. The second system uses a decision tree approach to generate and evaluate possible plans of action. It is shown how the combinatorial growth of possible alternate plans can, as in the previous case, require large and unpredictable amounts of computer time to evaluate and select from amongst the alternatives. The use of approximate decisions to limit the amount of computer time needed is discussed. The concept of using incremental evidence is then introduced, and it is shown how this can be used as the basis of systems that can combine heuristic and approximate evidence in making real-time decisions.
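The hypothesis-limiting technique described above is essentially what is now called beam search: after each expansion step, only a fixed number of the best-rated hypotheses survive, bounding resource use at the cost of completeness. A minimal sketch (the hypothesis representation and scoring are illustrative, not from the paper):

```python
import heapq

def beam_search(initial, expand, score, beam_width, steps):
    """Bound the combinatorial growth of alternate hypotheses by keeping
    only the `beam_width` best-scoring ones after every expansion."""
    beam = [initial]
    for _ in range(steps):
        candidates = [child for hyp in beam for child in expand(hyp)]
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)
```

With `beam_width` fixed, each step touches at most `beam_width * branching` candidates, so the per-step cost is predictable, which is exactly the real-time property the paper is after.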
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODEs). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in the photosynthetic organism D. salina can be rendered identifiable by additional experiments with new readouts.
With data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way to determine the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used in a rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
NASA Astrophysics Data System (ADS)
Wu, Y.; Blodau, C.
2013-08-01
Elevated nitrogen deposition and climate change alter the vegetation communities and carbon (C) and nitrogen (N) cycling in peatlands. To address this issue we developed a new process-oriented biogeochemical model (PEATBOG) for analyzing coupled carbon and nitrogen dynamics in northern peatlands. The model consists of four submodels, which simulate: (1) daily water table depth and depth profiles of soil moisture, temperature and oxygen levels; (2) competition among three plant functional types (PFTs), plant production and litter production; (3) decomposition of peat; and (4) production, consumption, diffusion and export of dissolved C and N species in soil water. The model is novel in the integration of the C and N cycles, the explicit spatial resolution belowground, the consistent conceptualization of movement of water and solutes, the incorporation of stoichiometric controls on elemental fluxes and a consistent conceptualization of C and N reactivity in vegetation and soil organic matter. The model was evaluated for the Mer Bleue Bog, near Ottawa, Ontario, with regard to the simulation of soil moisture and temperature and the most important processes in the C and N cycles. Model sensitivity was tested for nitrogen input, precipitation, and temperature, and the choices of the most uncertain parameters were justified. A simulation of nitrogen deposition over 40 yr demonstrates the advantages of the PEATBOG model in tracking biogeochemical effects and vegetation change in the ecosystem.
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
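The Monte Carlo treatment of parameter uncertainty used in the case study can be sketched as follows; the toy "impact models" and the single uncertain factor are placeholders, not the paper's inventory data:

```python
import random

def mc_compare(impact_a, impact_b, samplers, n=10000, seed=1):
    """Propagate input-parameter uncertainty through two alternatives and
    estimate how often option A has the lower (better) impact score."""
    rng = random.Random(seed)
    a_lower = 0
    for _ in range(n):
        # Draw one realization of every uncertain input parameter
        params = {name: draw(rng) for name, draw in samplers.items()}
        if impact_a(**params) < impact_b(**params):
            a_lower += 1
    return a_lower / n
```

Because both options are evaluated on the same parameter draw, shared uncertainties cancel in the comparison, which is what makes paired Monte Carlo sampling suitable for deciding between alternatives.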
Deriving the expected utility of a predictive model when the utilities are uncertain.
Cooper, Gregory F; Visweswaran, Shyam
2005-01-01
Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
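Averaging a model's expected utility over uncertain utilities can be sketched directly; the outcome names and the binary treat/skip decision below are illustrative, not the paper's clinical setup:

```python
import random

def expected_utility(p_disease, utilities_sampler, treat, n=5000, seed=0):
    """Average the expected utility of a decision over sampled utility
    values, reflecting uncertainty about how the model will be used."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = utilities_sampler(rng)  # dict: outcome -> utility
        if treat:
            eu = p_disease * u['treat_sick'] + (1 - p_disease) * u['treat_well']
        else:
            eu = p_disease * u['skip_sick'] + (1 - p_disease) * u['skip_well']
        total += eu
    return total / n
```

Comparing the averaged expected utilities of the two actions then gives a decision-theoretic evaluation of the model even when the utilities themselves are only known as distributions.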
Liu, Yan-Jun; Tong, Shaocheng
2015-03-01
In the paper, an adaptive tracking control design is studied for a class of nonlinear discrete-time systems with dead-zone input. The considered systems are of the nonaffine pure-feedback form, and the dead-zone input appears nonlinearly in the systems. The contributions of the paper are that: 1) the control problem for this class of discrete-time systems with dead-zone is investigated for the first time; 2) there are major difficulties in stabilizing such systems, and in order to overcome them, the systems are transformed into an n-step-ahead predictor, although a nonaffine function is still present; and 3) an adaptive compensative term is constructed to compensate for the parameters of the dead-zone. Neural networks are used to approximate the unknown functions in the transformed systems. Based on Lyapunov theory, it is proven that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded and that the tracking error converges to a small neighborhood of zero. Two simulation examples are provided to verify the effectiveness of the proposed control approach.
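For reference, a dead-zone input nonlinearity of the kind considered can be written as a simple piecewise-linear map; the equal-slope form below is one common parameterization, not necessarily the paper's exact model:

```python
def dead_zone(v, br_minus, br_plus, m):
    """Dead-zone nonlinearity: zero output inside the band
    [br_minus, br_plus], linear with slope m outside it."""
    if v >= br_plus:
        return m * (v - br_plus)
    if v <= br_minus:
        return m * (v - br_minus)
    return 0.0
```

The band edges and slope are exactly the dead-zone parameters that the adaptive compensative term in the paper estimates online.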
NASA Astrophysics Data System (ADS)
White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John
2017-08-01
Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral
in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
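The behavioral screening described here can be sketched with the Nash-Sutcliffe efficiency as the single likelihood measure; a full GLUE implementation would typically also weight the retained realizations, which this minimal version omits:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / (sum of squared deviations
    of the observations about their mean). 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def glue_behavioral(obs, realizations, threshold=0.5):
    """Keep only realizations whose NSE exceeds the behavioral threshold (GLUE)."""
    return [sim for sim in realizations if nash_sutcliffe(obs, sim) > threshold]
```

Note that a simulation equal to the observed mean scores exactly zero, which is why NSE thresholds well above zero are used to declare a realization behavioral.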
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.
2010-12-01
Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. 
The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
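Away from the discontinuity, the orthogonal-projection PC coefficients take the form c_k = E[f(X) He_k(X)] / k! for a standard normal germ X. A minimal one-dimensional sketch using Gauss-Hermite quadrature (no discontinuity handling, which is the paper's actual contribution):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pc_coefficients(f, order, quad_pts=30):
    """Project f(x), x ~ N(0,1), onto probabilists' Hermite polynomials He_k:
    c_k = E[f(X) He_k(X)] / k!  (orthogonal projection via quadrature)."""
    x, w = hermegauss(quad_pts)   # nodes/weights for weight exp(-x^2/2)
    w = w / w.sum()               # normalize to the standard normal measure
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0            # select the single polynomial He_k
        he_k = hermeval(x, basis)
        coeffs.append(float(np.sum(w * fx * he_k) / math.factorial(k)))
    return coeffs
```

For f(x) = x^2 the exact expansion is He_0(x) + He_2(x), so the projection should return c_0 = c_2 = 1 with all odd coefficients zero.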
Reasoning in Young Children: Fantasy and Information Retrieval.
ERIC Educational Resources Information Center
Markovits, Henry; And Others
1996-01-01
A model of conditional reasoning predicted that children under 12 would respond correctly to questions of uncertain logical form if premises and context enabled them to access counterexamples from memory, and that children's performance with uncertain logical forms would decrease when empirically true premises are presented in a fantasy context.…
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
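Numerically, closing the uncertainty loop of an M-delta model is an upper linear fractional transformation. A small sketch with a real block-diagonal delta, assuming the standard LFT partitioning of M (the matrices below are illustrative):

```python
import numpy as np

def real_delta(deltas):
    """Block-diagonal uncertainty matrix for real scalar parameter variations."""
    return np.diag(deltas)

def upper_lft(M11, M12, M21, M22, Delta):
    """Close the uncertainty loop of an M-delta model:
    F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12."""
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.inv(np.eye(n) - M11 @ Delta) @ M12
```

Setting Delta to zero recovers the nominal map M22, and the size of Delta is exactly the order that the paper's minimality result seeks to reduce.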
NASA Astrophysics Data System (ADS)
Nemirsky, Kristofer Kevin
In this thesis, the history and evolution of rotor aircraft and simulated-annealing-based PID applications are reviewed, and quadcopter dynamics are presented. The dynamics of a quadcopter were then modeled, analyzed, and linearized. A cascaded loop architecture with PID controllers was used to stabilize the plant dynamics, which was improved upon through the application of simulated annealing (SA). A Simulink model was developed to test the controllers and verify the functionality of the proposed control system design. In addition, the data that the Simulink model provided were compared with flight data to establish the validity of the derived dynamics as a mathematical model representing the true dynamics of the quadcopter system. Then, the SA-based global optimization procedure was applied to obtain optimized PID parameters. It was observed that the gains tuned through the SA algorithm produced a better-performing PID controller than the original manually tuned one. Next, we investigated the uncertain dynamics of the quadcopter setup. After adding uncertainty to the gyroscopic effects associated with pitch-and-roll rate dynamics, the controllers were shown to be robust against the added uncertainty. A discussion follows to summarize the SA-based PID controller design and performance outcomes. Lastly, future work on SA applications to multi-input multi-output (MIMO) systems is briefly discussed.
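A simulated-annealing tuner of the kind applied to the PID gains can be sketched as follows; the cost function, step size, and geometric cooling schedule are illustrative choices, not the thesis's settings:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95,
                        iters=500, seed=42):
    """Minimize `cost` over a parameter vector (e.g. PID gains) by
    accepting worse moves with probability exp(-delta/T) while the
    temperature T decays geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        # Always accept improvements; accept worsening moves with Metropolis probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

In a tuning application, `cost` would run the closed-loop simulation for a candidate gain vector and return a tracking-error metric such as the integrated squared error.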
Robust stabilization of the Space Station in the presence of inertia matrix uncertainty
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang; Sunkel, John
1993-01-01
This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 not equal to 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in uncertain parameter space, while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 not equal to 0 is demonstrated.
Estimation for the Linear Model With Uncertain Covariance Matrices
NASA Astrophysics Data System (ADS)
Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat
2014-03-01
We derive a maximum a posteriori estimator for the linear observation model where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of the estimators is compared numerically to state-of-the-art estimators from the literature and shown to be favorable.
NASA Astrophysics Data System (ADS)
Mishra, H.; Karmakar, S.; Kumar, R.
2016-12-01
Risk assessment is no longer simple when it involves multiple uncertain variables. Uncertainties in risk assessment mainly result from (1) the lack of knowledge of input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for the simultaneous treatment of random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the Turbhe landfill (Navi Mumbai, India) activities for various time horizons. Further, the concentrations of six heavy metals in groundwater simulated by LandSim have been used in the health risk model. The water intake, exposure duration, exposure frequency, bioavailability and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and the uncertainty in non-carcinogenic human health risk is then estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the sum of the hazard quotients of the heavy metals Co, Cu, Mn, Ni, Zn and Fe, for the male and female populations has been quantified and found to be high (HI>1) for all the considered time horizons, which clearly indicates the possibility of adverse health effects on the population residing near the Turbhe landfill.
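The hybrid propagation can be sketched by carrying a fuzzy alpha-cut interval through each Monte Carlo realization. The hazard quotient below uses a simplified concentration x intake / (RfD x body weight) form, and all samplers and numbers are illustrative, not the study's data:

```python
import random

def hazard_quotient_interval(conc_sampler, bw_sampler, intake_cut,
                             rfd, n=2000, seed=7):
    """Hybrid uncertainty propagation: concentration and body weight are
    sampled probabilistically, while intake enters as a fuzzy alpha-cut
    interval [lo, hi]; returns an interval of mean hazard quotients."""
    rng = random.Random(seed)
    lo_in, hi_in = intake_cut
    lo_sum = hi_sum = 0.0
    for _ in range(n):
        c = conc_sampler(rng)    # probabilistic concentration draw
        bw = bw_sampler(rng)     # probabilistic body-weight draw
        lo_sum += c * lo_in / (rfd * bw)
        hi_sum += c * hi_in / (rfd * bw)
    return lo_sum / n, hi_sum / n
```

Repeating this for every alpha level reconstructs a fuzzy number for the hazard quotient whose spread reflects the non-random inputs, while the Monte Carlo averaging handles the random ones.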
Probabilistic Radiological Performance Assessment Modeling and Uncertainty
NASA Astrophysics Data System (ADS)
Tauxe, J.
2004-12-01
A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of the potential future risk to human receptors from the disposal of low-level radioactive waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating these through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that supports their decision making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious. It does not represent any particular site and is meant as a generic example.
A practitioner could, however, treat this model as a GoldSim template and, by adding site-specific features and parameter values (distributions), develop it into a real model for use in real decision making.
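The Monte Carlo structure of such a probabilistic PA can be sketched generically. The release/transport/dose chain and every distribution below are fictitious, in the same spirit as the generic model:

```python
import random
import statistics

random.seed(42)

DOSE_LIMIT = 0.25   # hypothetical performance objective, mSv/yr

def annual_dose():
    """One realization of a toy release -> transport -> dose chain."""
    release = random.lognormvariate(0.0, 0.6)      # source release, relative units
    dilution = random.lognormvariate(-4.0, 0.8)    # transport attenuation factor
    dose_factor = random.uniform(1.0, 3.0)         # mSv per unit intake
    return release * dilution * dose_factor

doses = [annual_dose() for _ in range(10_000)]
mean_dose = statistics.fmean(doses)
p95_dose = sorted(doses)[int(0.95 * len(doses))]
frac_compliant = sum(d <= DOSE_LIMIT for d in doses) / len(doses)
```

The output distribution, not a single number, is then compared against the performance objective; the gap between the mean and the 95th percentile is exactly the information a deterministic run would hide.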
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximations, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
Gao, Fangzheng; Yuan, Ye; Wu, Yuqiang
2016-09-01
This paper studies the problem of finite-time stabilization by state feedback for a class of uncertain nonholonomic systems in feedforward-like form subject to input saturation. Under a weaker homogeneous growth condition, a saturated finite-time control scheme is developed by exploiting the adding-a-power-integrator method, the homogeneous domination approach, and the nested saturation technique. Together with a novel switching control strategy, the designed saturated controller guarantees that the states of the closed-loop system are regulated to zero in finite time without violating the constraint. As an application of the proposed theoretical results, the problem of saturated finite-time control of a vertical wheel on a rotating table is solved. Simulation results are given to demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Li, Yuefeng; Wang, Guanglin
2017-07-01
The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC (parallel distributed compensation) structure and an H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, so that a nonlinear system can then be represented as a linearised LDI model. An LMI (linear matrix inequality) formulation is obtained for uncertain and disturbed linear systems. This formulation enables a solution to be obtained through an interior-point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.
2016-03-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.
2015-11-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential-fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
Jamieson, Terra S; Schiff, Sherry L; Taylor, William D
2013-02-01
Gas exchange can be a key component of the dissolved oxygen (DO) mass balance in aquatic ecosystems. Quantification of gas transfer rates is essential for the estimation of DO production and consumption rates, and determination of assimilation capacities of systems receiving organic inputs. Currently, the accurate determination of gas transfer rate is a topic of debate in DO modeling, and there are a wide variety of approaches that have been proposed in the literature. The current study investigates the use of repeated measures of stable isotopes of O₂ and DO and a dynamic dual mass-balance model to quantify gas transfer coefficients (k) in the Grand River, Ontario, Canada. Measurements were conducted over a longitudinal gradient that reflected watershed changes from agricultural to urban. Values of k in the Grand River ranged from 3.6 to 8.6 day⁻¹, over discharges ranging from 5.6 to 22.4 m³ s⁻¹, with one high-flow event of 73.1 m³ s⁻¹. The k values were relatively constant over the range of discharge conditions studied. The range in discharge observed in this study is generally representative of non-storm and summer low-flow events; a greater range in k might be observed under a wider range of hydrologic conditions. Overall, k values obtained with the dual model for the Grand River were found to be lower than predicted by the traditional approaches evaluated, highlighting the importance of determining site-specific values of k. The dual mass balance approach provides a more constrained estimate of k than using DO only, and is applicable to large rivers where other approaches would be difficult to use. The addition of an isotopic mass balance provides for a corroboration of the input parameter estimates between the two balances. Constraining the range of potential input values allows for a direct estimate of k in large, productive systems where other k-estimation approaches may be uncertain or logistically infeasible. Copyright © 2012 Elsevier Ltd. 
All rights reserved.
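The role of k in a dissolved-oxygen mass balance can be illustrated with a toy forward-Euler integration of dC/dt = k(C_sat - C) + P - R; the production and respiration rates below are hypothetical, with k chosen inside the reported 3.6-8.6 per day range:

```python
# Toy dissolved-oxygen (DO) mass balance with gas exchange:
#   dC/dt = k*(C_sat - C) + P - R
K = 5.0          # gas transfer coefficient, 1/day (within the reported range)
C_SAT = 9.0      # saturation DO, mg/L
P, R = 6.0, 8.0  # photosynthesis and respiration, mg/L/day (hypothetical)

dt, c = 0.001, 6.0            # time step (day) and initial DO (mg/L)
for _ in range(20_000):       # integrate 20 days with forward Euler
    c += dt * (K * (C_SAT - c) + P - R)

# Analytic equilibrium: gas exchange balances net community metabolism
steady_state = C_SAT + (P - R) / K
```

Because net metabolism (P - R) and k enter the equilibrium together, an error in k maps directly into biased production and respiration estimates, which is why site-specific k values matter.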
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random-interval structural natural frequency problem. In the proposed approach, a random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in such a way that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertain variables, are presented to demonstrate the computational applicability, accuracy, and efficiency of the proposed method.
Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.
Wang, Wei; Tong, Shaocheng
2018-02-01
This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining fuzzy approximation with dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from the existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniform ultimate boundedness of the closed-loop systems, and that the tracking error converges to a small neighborhood of the origin. Simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.
Energy balance for uranium recovery from seawater
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, E.; Lindner, H.
The energy return on investment (EROI) of an energy resource is the ratio of the energy it ultimately produces to the energy used to recover it. EROI is a key viability measure for a new recovery technology, particularly in its early stages of development when financial cost assessment would be premature or highly uncertain. This paper estimates the EROI of uranium recovery from seawater via a braid adsorbent technology. In this paper, the energy cost of obtaining uranium from seawater is assessed by breaking the production chain into three processes: adsorbent production, adsorbent deployment and mooring, and uranium elution and purification. Both direct and embodied energy inputs are considered. Direct energy is the energy used by the processes themselves, while embodied energy is used to fabricate their material, equipment or chemical inputs. If the uranium is used in a once-through fuel cycle, the braid adsorbent technology EROI ranges from 12 to 27, depending on still-uncertain performance and system design parameters. It is highly sensitive to the adsorbent capacity in grams of U captured per kg of adsorbent as well as to potential economies in chemical use. This compares to an EROI of ca. 300 for contemporary terrestrial mining. It is important to note that these figures only consider the mineral extraction step in the fuel cycle. At a reference performance level of 2.76 g U recovered per kg adsorbent immersed, the largest energy consumers are the chemicals used in adsorbent production (63%), anchor chain mooring system fabrication and operations (17%), and unit processes in the adsorbent production step (12%). (authors)
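The EROI bookkeeping can be sketched as a simple energy balance over the production-chain steps. Every number below is invented solely to reproduce the reported cost shares and an EROI inside the 12-27 range; none of it is the study's data:

```python
# EROI = lifetime energy output / (direct + embodied) energy inputs.
steps = {
    # step: (direct energy, embodied energy), arbitrary units (assumed)
    "adsorbent production":  (12.0, 51.0),   # dominated by embodied chemicals
    "deployment & mooring":  (10.0, 7.0),
    "elution & purification": (8.0, 4.0),
    "other (balance, assumed)": (8.0, 0.0),
}
energy_in = sum(direct + embodied for direct, embodied in steps.values())
energy_out = 1800.0            # assumed once-through lifetime output
eroi = energy_out / energy_in
share = {k: (d + e) / energy_in for k, (d, e) in steps.items()}
```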
NASA Astrophysics Data System (ADS)
Yan, Fuhan; Li, Zhaofeng; Jiang, Yichuan
2016-05-01
The issues of modeling and analyzing diffusion in social networks have been extensively studied in the last few decades. Recently, many studies have focused on uncertain diffusion processes. The uncertainty of a diffusion process means that the diffusion probability is unpredictable because of complex factors. For instance, the variety of individuals' opinions is an important factor that can cause uncertainty in the diffusion probability. In detail, the difference between opinions can influence the diffusion probability, and the evolution of opinions will then cause the diffusion probability to be uncertain. It is known that controlling the diffusion process is important in the context of viral marketing and political propaganda. However, previous methods are hardly feasible for controlling the uncertain diffusion process of individual opinions. In this paper, we present a suitable strategy to control this diffusion process based on an approximate estimation of the uncertain factors. We formulate a model in which the diffusion probability is influenced by the distance between opinions, and briefly discuss the properties of the diffusion model. Then, we present an optimization problem in the setting of voting to show how to control this uncertain diffusion process. In detail, it is assumed that each individual can choose one of two candidates or abstention based on his/her opinion. We then present a strategy to set suitable initiators and their opinions so that the advantage of one candidate is maximized at the end of diffusion. The results show that traditional influence maximization algorithms are not applicable to this problem, and that our algorithm can achieve the expected performance.
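A sketch of a diffusion model in which the activation probability decays with opinion distance, on a hypothetical six-node network; the decay law and all parameters are illustrative, not the paper's:

```python
import random

random.seed(7)

# Toy network: node -> neighbours, with a scalar opinion per node in [0, 1].
edges = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
opinion = {0: 0.9, 1: 0.7, 2: 0.6, 3: 0.4, 4: 0.3, 5: 0.1}

def cascade(seeds, p0=0.9):
    """Independent-cascade diffusion where the probability of activating a
    neighbour shrinks linearly with the opinion distance between endpoints."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in edges[u]:
            if v not in active:
                p = p0 * (1.0 - abs(opinion[u] - opinion[v]))
                if random.random() < p:
                    active.add(v)
                    frontier.append(v)
    return active

reached = cascade({0})
```

Because the edge probabilities depend on opinions, the optimal seed set depends on the seeds' opinions too, which is why classical influence maximization heuristics break down here.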
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework.
In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
NASA Astrophysics Data System (ADS)
Naseri Kouzehgarani, Asal
2009-12-01
Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms---that is, situations that would not develop into an actual conflict, or would resolve naturally in the appropriate time horizon---thereby introducing a measure of probabilistic uncertainty in any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can be observed only through another set of stochastic processes that generate the sequence of observations.
The problem of self separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as model the evolution of aircraft trajectories between communications, in the presence of probabilistic uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data is evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of probabilistic distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft are moving along streams and can perform cruise, accelerate, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
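The filtering side of such a model can be illustrated with the standard HMM forward algorithm on a two-mode (level vs. climb) toy, using invented transition and emission probabilities over discretized vertical-rate observations:

```python
# Forward algorithm for a two-mode HMM: flight modes "level" and "climb",
# observed only through noisy, discretized vertical-rate readings.
STATES = ("level", "climb")
INIT = {"level": 0.5, "climb": 0.5}
TRANS = {"level": {"level": 0.95, "climb": 0.05},
         "climb": {"level": 0.10, "climb": 0.90}}
EMIT = {"level": {"zero": 0.9, "up": 0.1},
        "climb": {"zero": 0.2, "up": 0.8}}

def forward(observations):
    """Return the filtered mode distribution after each observation."""
    belief = dict(INIT)
    history = []
    for obs in observations:
        # predict one step, then condition on the new observation
        unnorm = {s: EMIT[s][obs] * sum(belief[r] * TRANS[r][s] for r in STATES)
                  for s in STATES}
        z = sum(unnorm.values())
        belief = {s: unnorm[s] / z for s in STATES}
        history.append(belief)
    return history

beliefs = forward(["zero", "zero", "up", "up", "up"])
```

After a run of "up" observations the filtered probability mass shifts to the climb mode, which is the mode-switch detection the abstract describes (the full HHMM additionally carries continuous dynamics inside each mode).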
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has knowledge of all reference model states. The mismatch is estimated online for each subsystem, and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where θ denotes the uncertain, best input setting. Hence the statistical model is of the form $y=\eta(\theta)+\epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
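A compact sketch of emulator-accelerated calibration: an ensemble of "expensive" runs trains a cheap surrogate (here plain piecewise-linear interpolation standing in for a statistical response surface), which Metropolis MCMC then queries instead of the model itself. The toy model, noise level, and prior bounds are all invented:

```python
import math
import random

random.seed(0)

def eta(theta):
    """Stand-in for an expensive physics model (monotone toy response)."""
    return math.exp(0.5 * theta)

# Ensemble of "expensive" runs over a design of input settings in [-2, 2]
design = [i / 10 for i in range(-20, 21)]
runs = [(t, eta(t)) for t in design]

def emulator(theta):
    """Cheap surrogate: piecewise-linear interpolation of the ensemble."""
    for (t0, y0), (t1, y1) in zip(runs, runs[1:]):
        if t0 <= theta <= t1:
            w = (theta - t0) / (t1 - t0)
            return (1 - w) * y0 + w * y1
    return float("nan")

# Metropolis sampling of p(theta | y), with y = eta(theta*) + small error
theta_true, sigma = 1.2, 0.05
y = eta(theta_true) + 0.02
def log_post(t):
    # flat prior on [-2, 2]; Gaussian likelihood evaluated via the emulator
    return -0.5 * ((y - emulator(t)) / sigma) ** 2 if -2 <= t <= 2 else -math.inf

theta, samples = 0.0, []
for _ in range(20_000):
    prop = theta + random.gauss(0.0, 0.2)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post_mean = sum(samples[5000:]) / len(samples[5000:])
```

The 41 surrogate-training runs replace the tens of thousands of model evaluations the chain would otherwise require, which is the whole point of the calibration approach.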
Robust control synthesis for uncertain dynamical systems
NASA Technical Reports Server (NTRS)
Byun, Kuk-Whan; Wie, Bong; Sunkel, John
1989-01-01
This paper presents robust control synthesis techniques for uncertain dynamical systems subject to structured parameter perturbation. Both QFT (quantitative feedback theory) and H-infinity control synthesis techniques are investigated. Although most H-infinity-related control techniques are not concerned with the structured parameter perturbation, a new way of incorporating the parameter uncertainty in the robust H-infinity control design is presented. A generic model of uncertain dynamical systems is used to illustrate the design methodologies investigated in this paper. It is shown that, for a certain noncolocated structural control problem, use of both techniques results in nonminimum phase compensation.
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
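The selection criterion reduces to simple arithmetic: total error = model error + sampling error, where the sampling error shrinks like sigma/sqrt(N) and N is fixed by a compute budget. The costs and error magnitudes below are illustrative only, not the study's values:

```python
import math

BUDGET_HOURS = 100.0
SIGMA = 2.0   # sampling variability of the quantity of interest (assumed)

models = {
    # name: (model error, cost per realization in CPU-hours), illustrative
    "Richards":   (0.00, 1.00),   # high fidelity, expensive
    "Green-Ampt": (0.05, 0.01),   # reduced complexity, cheap
}

def total_error(model_err, cost):
    n = max(1, int(BUDGET_HOURS / cost))     # realizations the budget allows
    return model_err + SIGMA / math.sqrt(n)  # model error + Monte Carlo error

errors = {name: total_error(*spec) for name, spec in models.items()}
best = min(errors, key=errors.get)
```

With these numbers the cheap model wins: its fixed model error (0.05) is more than repaid by the hundredfold larger ensemble, and data assimilation, by shrinking the model error further, tilts the balance the same way.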
Assessing uncertain human exposure to ambient air pollution using environmental models in the Web
NASA Astrophysics Data System (ADS)
Gerharz, L. E.; Pebesma, E.; Denby, B.
2012-04-01
Ambient air quality can have a significant impact on human health by causing respiratory and cardiovascular diseases. Moreover, the pollutant concentration a person is exposed to can differ considerably between individuals, depending on their daily routines and movement patterns. Using a straightforward approach, this exposure can be estimated by integration of individual space-time paths and spatio-temporally resolved ambient air quality data. To allow a realistic exposure assessment, it is furthermore important to consider uncertainties due to input and model errors. In this work, we present a generic, web-based approach for estimating individual exposure by integration of uncertain position and air quality information, implemented as a web service. Following the Model Web initiative, which envisions an infrastructure for deploying, executing, and chaining environmental models as services, existing models and data sources, e.g. for air quality, can be used to assess exposure. The service therefore needs to deal with the different formats, resolutions, and uncertainty representations provided by model or data services. Potential mismatch can be accounted for by transformation of uncertainties and (dis-)aggregation of data under consideration of changes in the uncertainties, using components developed in the UncertWeb project. In UncertWeb, the Model Web vision is extended to an uncertainty-enabled Model Web, where services can process and communicate uncertainties in the data and models. The propagation of uncertainty to the exposure results is quantified using Monte Carlo simulation by combining different realisations of positions and ambient concentrations. Two case studies were used to evaluate the developed exposure assessment service. In the first study, GPS tracks with a positional uncertainty of a few metres, collected in the urban area of Münster, Germany, were used to assess exposure to PM10 (particulate matter smaller than 10 µm).
Air quality data were provided by an uncertainty-enabled air quality model system, which provided realisations of concentrations per hour on a 250 m x 250 m grid over Münster. The second case study uses modelled human trajectories in Rotterdam, The Netherlands. The trajectories were provided as realisations at 15-min resolution per four-digit postal code from an activity model. Air quality estimates were provided for different pollutants as ensembles by a coupled meteorology and air quality model system on a 1 km x 1 km grid with hourly resolution. Both case studies show the successful application of the service to different resolutions and uncertainty representations.
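The Monte Carlo exposure integration can be sketched as follows, combining concentration realisations with positional uncertainty along a toy space-time path; the field, the path, and the error rates are all invented:

```python
import random

random.seed(3)

# One hour of a toy space-time path: grid-cell index occupied per 15-min step
path = [0, 1, 1, 2]
DT_H = 0.25   # interval length in hours

def concentration_field():
    """One realization of uncertain PM10 concentrations per grid cell (ug/m3)."""
    return [random.lognormvariate(3.0, 0.4) for _ in range(3)]

def perturbed(cell):
    """Positional uncertainty: occasionally attribute a neighbouring cell."""
    if random.random() < 0.1:
        return min(2, max(0, cell + random.choice((-1, 1))))
    return cell

exposures = []
for _ in range(5_000):
    field = concentration_field()   # one air-quality realisation
    exposures.append(sum(field[perturbed(c)] * DT_H for c in path))
mean_exposure = sum(exposures) / len(exposures)
```

Each Monte Carlo draw pairs one position realisation with one concentration realisation, so the spread of `exposures` reflects both uncertainty sources jointly.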
Shetty, N; Shemko, M; Abbas, A
2004-03-01
The objectives were to study knowledge, attitudes, and practices (KAP) regarding tuberculosis (TB) among Somali subjects in inner London. We administered structured, fixed-response KAP questionnaires to 23 patients (culture-proven TB) and two groups of controls: 25 contacts (family members) and 27 lay controls (general Somali immigrant population). Responses were summed on a five-point scale. Most were aware of the infectious nature of TB but uncertain of other risk factors. Many were uncertain about coping with the disease and its effect on lifestyle. Belief in biomedicine for TB was unequivocal, with men having a significantly higher belief score than women (p = 0.02); the need to comply with TB medication was unambiguously understood. The Somalis interviewed were educated, multilingual, and aware of important health issues. Uncertainties in core TB knowledge need to be addressed with direct educational input, especially for women and recent entrants into the country. Volunteers from the established Somali community could play a valuable part as links in the community to fight TB.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
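The contrast drawn above — probability distributions over parameters versus interval bounds — can be illustrated with a minimal Monte Carlo sketch (not the paper's hybrid reliability method): the damping ratio of a standard second-order system is treated as a hypothetical Gaussian random variable, and the resulting overshoot distribution tells us how *likely* poor performance is, not merely its worst case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Damping ratio as a random variable instead of an interval bound:
# assumed Gaussian with mean 0.5, sd 0.05, clipped to the open unit interval.
zeta = np.clip(rng.normal(0.5, 0.05, size=100_000), 1e-3, 0.999)

# Percent overshoot of the unit-step response of a standard underdamped
# second-order system: OS = 100 * exp(-pi*zeta / sqrt(1 - zeta^2)).
overshoot = 100.0 * np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta ** 2))

# Probabilistic answer: mean behavior and the probability of a bad outcome,
# rather than weighting the worst case equally with the most likely one.
print(f"mean overshoot: {overshoot.mean():.1f}%")
print(f"P(overshoot > 25%): {(overshoot > 25.0).mean():.3f}")
```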
Wolowacz, Sorrel E; Briggs, Andrew; Belozeroff, Vasily; Clarke, Philip; Doward, Lynda; Goeree, Ron; Lloyd, Andrew; Norman, Richard
Cost-utility models are increasingly used in many countries to establish whether the cost of a new intervention can be justified in terms of health benefits. Health-state utility (HSU) estimates (the preference for a given state of health on a cardinal scale where 0 represents dead and 1 represents full health) are typically among the most important and uncertain data inputs in cost-utility models. Clinical trials represent an important opportunity for the collection of health-utility data. However, trials designed primarily to evaluate efficacy and safety often present challenges to the optimal collection of HSU estimates for economic models. Careful planning is needed to determine which of the HSU estimates may be measured in planned trials; to establish the optimal methodology; and to plan any additional studies needed. This report aimed to provide a framework for researchers to plan the collection of health-utility data in clinical studies so as to yield high-quality HSU estimates for economic modeling. Recommendations are made for early planning of health-utility data collection within a research and development program; design of health-utility data collection during protocol development for a planned clinical trial; design of prospective and cross-sectional observational studies and alternative study types; and statistical analyses and reporting. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Pan, Wei; Guo, Ying; Jin, Lei; Liao, ShuJie
2017-01-01
With the high accident rate of civil aviation, medical resource inventory becomes more important for emergency management at the airport. Meanwhile, medical products are usually time-sensitive and have short lifetimes. Moreover, we find that the optimal medical resource inventory depends on multiple factors such as different risk preferences and the material shelf life. Thus, it becomes very complex in a real-life environment. Accordingly, we construct a medical resource inventory decision model for emergency preparation at the airport. Our model is formulated in such a way as to simultaneously consider uncertain demand, stochastic occurrence time, and different risk preferences. For solving this problem, a new programming approach is developed. Finally, a numerical example is presented to illustrate the proposed method. The results show that it is effective for determining the optimal medical resource inventory for emergency preparation with uncertain demand and stochastic occurrence time while considering different risk preferences at the airport. PMID:28931007
A short circuit in thermohaline circulation: A cause for northern hemisphere glaciation?
Driscoll; Haug
1998-10-16
The cause of Northern Hemisphere glaciation about 3 million years ago remains uncertain. Closing the Panamanian Isthmus increased thermohaline circulation and enhanced moisture supply to high latitudes, but the accompanying heat would have inhibited ice growth. One possible solution is that enhanced moisture transported to Eurasia also enhanced freshwater delivery to the Arctic via Siberian rivers. Freshwater input to the Arctic would facilitate sea ice formation, increase the albedo, and isolate the high heat capacity of the ocean from the atmosphere. It would also act as a negative feedback on the efficiency of the "conveyor belt" heat pump.
A multi-period distribution network design model under demand uncertainty
NASA Astrophysics Data System (ADS)
Tabrizi, Babak H.; Razmi, Jafar
2013-05-01
Supply chain management is an inseparable component of satisfying customers' requirements. This paper deals with the distribution network design (DND) problem, a critical issue in achieving supply chain accomplishments. A capable DND can guarantee the success of the entire network performance. However, on the one hand, many factors can cause fluctuations in the input data determining market behaviour, with respect to short-term planning. On the other hand, network performance may be threatened by the changes that take place within operating periods, with respect to long-term planning. Thus, in order to bring both kinds of changes under control, we considered a new multi-period, multi-commodity, multi-source DND problem in circumstances where the network encounters uncertain demands. Fuzzy logic is applied here as an efficient tool for controlling the demand risk of potential customers. The defuzzifying framework allows practitioners and decision-makers to interact with the solution procedure continuously. The fuzzy model is then validated by a sensitivity analysis test, and a typical problem is solved in order to illustrate the implementation steps. Finally, the formulation is tested on problems of different sizes to show its overall performance.
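The fuzzy treatment of uncertain demand sketched above can be made concrete with a minimal example. The triangular membership function and centroid defuzzification below are common textbook choices, assumed here for illustration; the paper's actual formulation may differ:

```python
# Minimal sketch of representing an uncertain demand as a triangular fuzzy
# number (a, m, b) = (lower bound, most likely value, upper bound) and
# collapsing it to a crisp value via centroid defuzzification.

def centroid_triangular(a: float, m: float, b: float) -> float:
    """Centroid (center of gravity) of a triangular fuzzy number (a, m, b)."""
    return (a + m + b) / 3.0

# Hypothetical demand: "about 100 units, surely between 80 and 130".
demand = (80.0, 100.0, 130.0)
crisp_demand = centroid_triangular(*demand)
print(f"defuzzified demand: {crisp_demand:.1f} units")
```

A crisp value like this is what a deterministic optimizer would consume; keeping the full fuzzy number instead lets decision-makers re-run the solution procedure with different confidence levels, which is the interactive use the abstract describes.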
NASA Astrophysics Data System (ADS)
Rougier, Jonty; Cashman, Kathy; Sparks, Stephen
2016-04-01
We have analysed the Large Magnitude Explosive Volcanic Eruptions database (LaMEVE) for volcanoes that classify as stratovolcanoes. A non-parametric statistical approach is used to assess the global recording rate for large (M4+) eruptions. The approach imposes minimal structure on the shape of the recording rate through time. We find that the recording rates decline rapidly going backwards in time. Prior to 1600 they are below 50%, and prior to 1100 they are below 20%. Even in the recent past, e.g. the 1800s, they are likely to be appreciably less than 100%. The assessment for very large (M5+) eruptions is more uncertain, due to the scarcity of events. Having taken under-recording into account, the large-eruption rates of stratovolcanoes are modelled exchangeably, in order to derive an informative prior distribution as an input into a subsequent volcano-by-volcano hazard assessment. The statistical model implies that volcano-by-volcano predictions can be grouped by the number of recorded large eruptions. Further, it is possible to combine all volcanoes together into a global large-eruption prediction, with an M4+ rate computed from the LaMEVE database of 0.57/yr.
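The core under-recording correction can be illustrated with a toy calculation (invented era boundaries, counts, and recording rates, not the paper's non-parametric estimates): if only a fraction r of eruptions in each era is recorded, the naive rate n_recorded / years underestimates the true rate, and dividing each era's count by its recording rate corrects for this:

```python
# Toy illustration of correcting an eruption rate for under-recording.
# Each era: (years spanned, recorded M4+ eruptions, assumed recording rate r).
eras = [
    (500, 15, 0.15),   # pre-1600: recording rate well below 50% (assumed)
    (300, 40, 0.50),   # 1600-1900 (assumed)
    (120, 60, 0.90),   # post-1900: still possibly below 100% (assumed)
]

total_years = sum(years for years, _, _ in eras)

# Naive rate ignores under-recording; corrected rate inflates each era's
# count by 1/r, the expected number of true events per recorded event.
naive_rate = sum(n for _, n, _ in eras) / total_years
corrected_rate = sum(n / r for _, n, r in eras) / total_years

print(f"naive global rate:     {naive_rate:.2f} eruptions/yr")
print(f"corrected global rate: {corrected_rate:.2f} eruptions/yr")
```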
NASA Astrophysics Data System (ADS)
Pan, Wei; Wang, Xianjia; Zhong, Yong-guang; Yu, Lean; Jie, Cao; Ran, Lun; Qiao, Han; Wang, Shouyang; Xu, Xianhao
2012-06-01
Data communication services have an important influence on e-commerce. The key challenge for users is, ultimately, to select a suitable provider. In this article, however, we focus not on this aspect but on the viewpoint and decision-making of providers regarding order allocation and pricing policy when orders exceed service capacity. This is a multiple-criteria decision-making problem involving criteria such as profit and cancellation ratio. Meanwhile, in realistic situations much of the input information is uncertain, which makes the problem very complex in a real-life environment. In this situation, fuzzy set theory is a suitable tool for solving the problem. Our fuzzy model is formulated in such a way as to simultaneously consider the imprecision of information, price-sensitive demand, stochastic variables, cancellation fees, and the general membership function. For solving the problem, a new fuzzy programming approach is developed. Finally, a numerical example is presented to illustrate the proposed method. The results show that it is effective for determining the suitable order set and pricing policy of a provider in data communication service with different quality of service (QoS) levels.
Estimating economic losses from earthquakes using an empirical approach
Jaiswal, Kishor; Wald, David J.
2013-01-01
We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
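The GDP-scaled exposure idea can be sketched numerically. All numbers below (population per intensity bin, loss ratios, the multiplier alpha) are invented for illustration, not PAGER's calibrated values: economic exposure in each shaking-intensity bin is population × per-capita GDP × a country-specific multiplier, and loss is exposure × an intensity-dependent loss ratio:

```python
# Hedged sketch of a GDP-scaled rapid economic loss estimate.
per_capita_gdp = 9_000.0   # USD, hypothetical country
alpha = 3.0                # country-specific wealth-to-GDP multiplier (assumed)

# (population exposed, loss ratio) per shaking-intensity bin, e.g. MMI VI-IX;
# loss ratios grow with intensity (all values illustrative).
exposure_bins = [
    (200_000, 0.001),
    (80_000, 0.01),
    (20_000, 0.05),
    (3_000, 0.2),
]

# Loss = sum over bins of exposure (pop * GDP/capita * alpha) * loss ratio.
loss = sum(pop * per_capita_gdp * alpha * ratio for pop, ratio in exposure_bins)
print(f"estimated economic loss: ${loss / 1e6:.1f} million")
```

The country-specific factor alpha is what the abstract's calibration against Munich Re's historical loss catalog would constrain in practice.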
Astronomical pacing of the global silica cycle recorded in Mesozoic bedded cherts
Ikeda, Masayuki; Tada, Ryuji; Ozaki, Kazumi
2017-01-01
The global silica cycle is an important component of the long-term climate system, yet its controlling factors are largely uncertain due to poorly constrained proxy records. Here we present a ∼70 Myr-long record of early Mesozoic biogenic silica (BSi) flux from radiolarian chert in Japan. Average low-mid-latitude BSi burial flux in the superocean Panthalassa is ∼90% of that of the modern global ocean, and its relative amplitude varied by ∼20–50% over the 100 kyr to 30 Myr orbital cycles during the early Mesozoic. We hypothesize that BSi in chert was a major sink for oceanic dissolved silica (DSi), with fluctuations proportional to DSi input from chemical weathering on timescales longer than the residence time of DSi (<∼100 kyr). Chemical weathering rates estimated by the GEOCARBSULFvolc model support these hypotheses, excluding the volcanism-driven oceanic anoxic events of the Early-Middle Triassic and Toarcian that exceed model limits. We propose that the megamonsoon of the supercontinent Pangea nonlinearly amplified the orbitally paced chemical weathering that drove BSi burial during the early Mesozoic greenhouse world. PMID:28589958
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
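The one-step-ahead smoothing idea — using the observation at step k to correct the estimate at step k-1 — can be sketched in the simplest possible setting, a scalar linear-Gaussian model. This is a didactic reduction, not the paper's sCSKF (which adds covariance compression and handles nonlinear, high-dimensional states); the dynamics and noise variances are invented:

```python
# Scalar Kalman filter step followed by a one-step-ahead (fixed-lag 1)
# smoothing correction of the previous state.
A, H, Q, R = 1.0, 1.0, 0.01, 0.25   # dynamics, observation map, noise variances

def filter_and_smooth_one_step(x_prev, P_prev, y_k):
    # Standard predict step.
    x_pred = A * x_prev
    P_pred = A * P_prev * A + Q
    # Standard update with the new observation y_k.
    K = P_pred * H / (H * P_pred * H + R)
    x_filt = x_pred + K * (y_k - H * x_pred)
    P_filt = (1.0 - K * H) * P_pred
    # One-step-ahead smoothing: pull the *previous* state toward the new data
    # using the Rauch-Tung-Striebel gain G.
    G = P_prev * A / P_pred
    x_prev_smoothed = x_prev + G * (x_filt - x_pred)
    return x_filt, P_filt, x_prev_smoothed

x_filt, P_filt, x_sm = filter_and_smooth_one_step(x_prev=0.0, P_prev=1.0, y_k=1.0)
print(f"filtered x_k: {x_filt:.3f}, smoothed x_(k-1): {x_sm:.3f}")
```

Because the smoothed previous state has already seen the new observation, re-propagating it tends to avoid the overshooting that a purely filtered correction can produce at sharp fronts.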
Data-Conditioned Distributions of Groundwater Recharge Under Climate Change Scenarios
NASA Astrophysics Data System (ADS)
McLaughlin, D.; Ng, G. C.; Entekhabi, D.; Scanlon, B.
2008-12-01
Groundwater recharge is likely to be impacted by climate change, with changes in precipitation amounts altering moisture availability and changes in temperature affecting evaporative demand. This could have major implications for sustainable aquifer pumping rates and contaminant transport into groundwater reservoirs in the future, thus making predictions of recharge under climate change very important. Unfortunately, in dry environments where groundwater resources are often most critical, low recharge rates are difficult to resolve due to high sensitivity to modeling and input errors. Some recent studies on climate change and groundwater have considered recharge using a suite of general circulation model (GCM) weather predictions, an obvious and key source of uncertainty. This work extends beyond those efforts by also accounting for uncertainty in other land-surface model inputs in a probabilistic manner. Recharge predictions are made using a range of GCM projections for a rain-fed cotton site in the semi-arid Southern High Plains region of Texas. Results showed that model simulations using a range of unconstrained literature-based parameter values produce highly uncertain and often misleading recharge rates. Thus, distributional recharge predictions are found using soil and vegetation parameters conditioned on current unsaturated zone soil moisture and chloride concentration observations; assimilation of observations is carried out with an ensemble importance sampling method. Our findings show that the predicted distribution shapes can differ for the various GCM conditions considered, underscoring the importance of probabilistic analysis over deterministic simulations. The recharge predictions indicate that the temporal distribution (over seasons and rain events) of climate change will be particularly critical for groundwater impacts. 
Overall, changes in recharge amounts and intensity were often more pronounced than changes in annual precipitation and temperature, thus suggesting high susceptibility of groundwater systems to future climate change. Our approach provides a probabilistic sensitivity analysis of recharge under potential climate changes, which will be critical for future management of water resources.
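The ensemble importance sampling used above to condition parameters on observations can be sketched with a toy forward model (the linear map below stands in for the land-surface model; all distributions and targets are invented): parameter samples are weighted by how well their simulated observation matches the measurement, then resampled:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5_000
params = rng.uniform(0.1, 1.0, size=n)       # e.g. a soil parameter prior
simulated_obs = 2.0 * params                 # toy forward model
measured_obs, obs_sigma = 1.0, 0.1           # e.g. a chloride-derived target

# Gaussian likelihood weights, normalised to sum to one.
w = np.exp(-0.5 * ((simulated_obs - measured_obs) / obs_sigma) ** 2)
w /= w.sum()

# Resampling with these weights yields draws from the conditioned
# (data-constrained) parameter distribution.
posterior = rng.choice(params, size=n, replace=True, p=w)
print(f"prior mean:     {params.mean():.3f}")
print(f"posterior mean: {posterior.mean():.3f}")
```

Feeding the resampled parameters through the recharge model, once per GCM scenario, is what produces the distributional (rather than deterministic) recharge predictions the abstract argues for.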
Oglesby, Mary E; Schmidt, Norman B
2017-07-01
Intolerance of uncertainty (IU) has been proposed as an important transdiagnostic variable within mood- and anxiety-related disorders. The extant literature has suggested that individuals high in IU interpret uncertainty more negatively. Furthermore, theoretical models of IU posit that those elevated in IU may experience an uncertain threat as more anxiety provoking than a certain threat. However, no research to date has experimentally manipulated the certainty of an impending threat while utilizing an in vivo stressor. In the current study, undergraduate participants (N = 79) were randomized to one of two conditions: certain threat (participants were told that later on in the study they would give a 3-minute speech) or uncertain threat (participants were told that later on in the study they would flip a coin to determine whether or not they would give a 3-minute speech). Participants also completed self-report questionnaires measuring their baseline state anxiety, baseline trait IU, and prespeech state anxiety. Results indicated that trait IU was associated with greater state anticipatory anxiety when the prospect of giving a speech was made uncertain (i.e., the uncertain condition). Further, findings indicated no significant difference in anticipatory state anxiety among individuals high in IU when comparing an uncertain versus certain threat (i.e., the uncertain and certain threat conditions, respectively). In addition, no significant interaction emerged between condition and trait IU in predicting state anticipatory anxiety. This investigation is the first to test a crucial component of IU theory while utilizing an ecologically valid paradigm. Results of the present study are discussed in terms of theoretical models of IU and directions for future work. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Krämer, Stefan; Rohde, Sophia; Schröder, Kai; Belli, Aslan; Maßmann, Stefanie; Schönfeld, Martin; Henkel, Erik; Fuchs, Lothar
2015-04-01
The design of urban drainage systems with numerical simulation models requires long, continuous rainfall time series with high temporal resolution. However, suitable observed time series are rare. As a result, usual design concepts often use uncertain or unsuitable rainfall data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic rainfall data as input for urban drainage modelling are advanced, tested, and compared. Synthetic rainfall time series from three different precipitation model approaches - one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), and one downscaling approach from a regional climate model - are provided for three catchments with different sewer system characteristics in different climate regions of Germany: - Hamburg (northern Germany): maritime climate, mean annual rainfall: 770 mm; combined sewer system length: 1,729 km (city centre of Hamburg), storm water sewer system length (Hamburg-Harburg): 168 km - Brunswick (Lower Saxony, northern Germany): transitional climate from maritime to continental, mean annual rainfall: 618 mm; sewer system length: 278 km, connected impervious area: 379 ha, height difference: 27 m - Freiburg im Breisgau (southern Germany): Central European transitional climate, mean annual rainfall: 908 mm; sewer system length: 794 km, connected impervious area: 1,546 ha, height difference: 284 m. Hydrodynamic models are set up for each catchment to simulate rainfall-runoff processes in the sewer systems.
Long-term event time series are extracted from the three different synthetic rainfall time series (comprising up to 600 years of continuous rainfall) provided for each catchment and from observed gauge rainfall (reference rainfall) according to national hydraulic design standards. The synthetic and reference long-term event time series are used as rainfall input for the hydrodynamic sewer models. For comparison of the synthetic rainfall time series against the reference rainfall and against each other, the number of surcharged manholes, the number of surcharges per manhole, and the average surcharge volume per manhole are applied as hydraulic performance criteria. The results are discussed and assessed to answer the following questions: Are the synthetic rainfall approaches suitable for generating high-resolution rainfall series, and do they produce - in combination with numerical rainfall-runoff models - valid results for the design of urban drainage systems? What are the bounds of uncertainty in the runoff results depending on the synthetic rainfall model and on the climate region? The work is carried out within the SYNOPSE project, funded by the German Federal Ministry of Education and Research (BMBF).
Robust control of seismically excited cable stayed bridges with MR dampers
NASA Astrophysics Data System (ADS)
YeganehFallah, Arash; Khajeh Ahmad Attari, Nader
2017-03-01
In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructure subjected to seismic and wind loads. However, in order to have reliable active and semi-active control, there is a need to include information about uncertainties in the design of the controller. In the real world, for civil structures, parameters such as loading places, stiffness, mass, and damping are time-variant and uncertain. These uncertainties are in many cases modeled as parametric uncertainties. The motivation of this research is to design a robust controller for attenuating the vibrational responses of civil infrastructure while accounting for its dynamical uncertainties. Uncertainties in structural dynamics parameters are modeled as affine uncertainties in the state-space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and are assumed to be unknown but norm-bounded inputs to the system. The robust H∞ controller is designed for the decoupled system to regulate the evaluation outputs, and it is robust to the effects of uncertainties, disturbance, and sensor noise. The cable-stayed bridge benchmark, which is equipped with MR dampers, is considered for the numerical simulation. The simulated results show that the proposed robust controller can effectively mitigate the effects of undesired uncertainties on the system's response under seismic loading.
STEM Educators' Integration of Formative Assessment in Teaching and Lesson Design
NASA Astrophysics Data System (ADS)
Moreno, Kimberly A.
Air-breathing hypersonic vehicles, when fully developed, will offer travel in the atmosphere at unprecedented speeds. Capturing their physical behavior in analytical/numerical models is still a major challenge, which continues to limit the development of control technology for such vehicles. To study, in an exploratory manner, active control of air-breathing hypersonic vehicles, an analytical, simplified model of a generic hypersonic air-breathing vehicle in flight was developed by researchers at the Air Force Research Labs in Dayton, Ohio, along with control laws. Elevator deflection and fuel-to-air ratio were used as inputs. However, that model is very approximate, and the field of hypersonics still faces many unknowns. This thesis contributes to the study of control of air-breathing hypersonic vehicles in a number of ways. First, regarding control law synthesis, optimal gains are chosen for the previously developed control law alongside an alternate control law modified from the existing literature by minimizing the Lyapunov function derivative using Monte Carlo simulation. This is followed by analysis of the robustness of the control laws in the face of system parametric uncertainties using Monte Carlo simulations. The resulting statistical distributions of the commanded response are analyzed, and linear regression is used to determine, via sensitivity analysis, which uncertain parameters have the largest impact on the desired outcome.
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of the temporal evolution of two key risk indices, the volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
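The second model's idea — representing a quantity of interest as a polynomial of the random inputs so that scenarios become nearly free to evaluate — can be sketched with a one-parameter toy. The exponential response below stands in for the physics-based infiltration model and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def expensive_model(log_k):
    """Stand-in for the physics-based model: contaminated volume vs a
    (dimensionless) log-permeability-like input. Illustrative only."""
    return 100.0 * np.exp(0.8 * log_k)

# A handful of collocation-style runs of the expensive model...
train_x = np.linspace(-1.0, 1.0, 9)
train_y = expensive_model(train_x)

# ...fitted with a low-order polynomial surrogate of the random input.
coeffs = np.polyfit(train_x, train_y, deg=4)

# Cheap exploration of many contamination scenarios via the surrogate.
samples = rng.normal(0.0, 0.3, size=100_000)
volumes = np.polyval(coeffs, samples)
print(f"mean contaminated volume: {volumes.mean():.1f} (arbitrary units)")
```

Nine expensive runs buy 100,000 surrogate evaluations here; the same trade is what lets the paper's polynomial model sweep over arbitrary risk scenarios.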
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). 
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
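The nested subrange search that r.ranger performs can be sketched with a toy stand-in (the distance-based impact score below is not r.randomwalk's random-walk model, and the parameter names and ranges are invented): each parameter's total range is split into subranges, every combination is run, and the resulting impact pattern is scored by AUROC against an observed deposit:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def auroc(labels, scores):
    """Rank-based AUROC: probability a random impacted pixel outscores a
    random non-impacted one (ties scored as 0, a slight underestimate)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Toy "observed" impact pattern: pixels near the release area are impacted,
# with some noise so the classes are not perfectly separable.
dist = rng.uniform(0.0, 1.0, size=400)                       # pixel distances
observed = ((dist + rng.normal(0.0, 0.1, size=400)) < 0.4).astype(int)

def model_iii(mu, md):
    """Stand-in impact indicator score from friction mu and mass-to-drag md."""
    return np.clip(1.0 - dist * mu * 10.0 / md, 0.0, 1.0)

# Nested subrange search: evaluate every combination of parameter subranges
# (here at subrange midpoints) and keep the best-performing one.
mu_ranges = [(0.05, 0.15), (0.15, 0.25), (0.25, 0.35)]
md_ranges = [(1.0, 3.0), (3.0, 5.0)]
best = max(
    ((auroc(observed, model_iii(np.mean(mu), np.mean(md))), mu, md)
     for mu, md in itertools.product(mu_ranges, md_ranges)),
    key=lambda t: t[0],
)
print(f"best AUROC {best[0]:.2f} for mu in {best[1]}, md in {best[2]}")
```

In r.ranger the score per combination would be the full III pattern from multiple r.randomwalk runs, and a conservativeness factor would be tracked alongside AUROC.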
From point-wise stress data to a continuous description of the 3D crustal in situ stress state
NASA Astrophysics Data System (ADS)
Heidbach, O.; Ziegler, M.; Reiter, K.; Hergert, T.
2017-12-01
The in situ stress is a key parameter for the safe and sustainable management of geo-reservoirs or storage of waste and energy in deep geological repositories. It is also an essential initial condition for thermo-hydro-mechanical (THM) models that investigate man-made processes, e.g. seismicity due to fluid injection/extraction, reservoir depletion, or storage of heat-producing high-level radioactive waste. Without a reasonable assumption on the initial stress condition, it is not possible to assess whether a man-made process is pushing the system into a critical state. However, modelling the initial 3D stress state on reservoir scale is challenging since data are hardly available before drilling in the area of interest. This is in particular the case for stress magnitude data, which are a prerequisite for a reliable model calibration. Here, we present a multi-stage 3D geomechanical-numerical modelling approach to estimate the 3D in situ stress state for a reservoir-scale volume. First, we set up a large-scale model which is calibrated by stress data, and we subsequently use the modelled stress field to calibrate a small-scale model located within the large-scale model. The local model contains a significantly higher-resolution representation of the subsurface geometry around the boreholes of a projected geothermal power plant. This approach incorporates two models and is an alternative to the trade-off between resolution, computational cost, and calibration data that is inevitable for a single model; an extension to a three-stage approach would be straightforward. We exemplify the two-stage approach for the area around Munich in the German Molasse Basin. The results of the reservoir-scale model are presented in terms of values for slip tendency as a measure for the criticality of fault reactivation.
The model results show that variations due to uncertainties in the input data are mainly introduced by the uncertain material properties and by missing estimates of the magnitude of the maximum horizontal stress SHmax, which are needed for a more reliable model calibration. This leads to the conclusion that, at this stage, the model's reliability depends on the amount and quality of input data, such as available stress information, rather than on the modelling technique itself.
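Slip tendency, used above as the criticality measure for fault reactivation, is simply the ratio of resolved shear stress to normal stress on a fault plane. A minimal sketch, using an illustrative Andersonian stress state rather than the calibrated Munich model values:

```python
import numpy as np

def slip_tendency(stress: np.ndarray, normal: np.ndarray) -> float:
    """Slip tendency T_s = tau / sigma_n on a fault plane.

    stress: 3x3 symmetric in situ stress tensor (compression positive), MPa.
    normal: normal vector of the fault plane (need not be unit length).
    """
    n = normal / np.linalg.norm(normal)
    traction = stress @ n                                 # traction vector on the plane
    sigma_n = float(traction @ n)                         # normal stress component
    tau = float(np.linalg.norm(traction - sigma_n * n))   # resolved shear stress
    return tau / sigma_n

# Illustrative principal stresses: Sv = 60 MPa, SHmax = 50, Shmin = 30,
# and a fault dipping 60 degrees about the SHmax axis.
S = np.diag([50.0, 30.0, 60.0])
dip = np.radians(60.0)
n = np.array([0.0, np.cos(dip), np.sin(dip)])
print(round(slip_tendency(S, n), 3))
```

Planes with slip tendency approaching the frictional strength (often ~0.6) are flagged as critically stressed.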
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Flach, G. P.
2012-12-01
The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning probability distributions to the input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Baier-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculation of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that a subset of models can be selected to constrain the uncertainty quantification of PET, ET, and I.
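As a rough illustration of the aleatory part of such a UQ workflow, one can propagate sampled temperature inputs through one of the listed PET models. The sketch below uses the Hargreaves-Samani formula with made-up distribution parameters, not the actual SRS station records:

```python
import numpy as np

rng = np.random.default_rng(42)

def pet_hargreaves_samani(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference PET (mm/day); ra is extraterrestrial
    radiation expressed as equivalent evaporation (mm/day)."""
    return 0.0023 * ra * (t_mean + 17.8) * np.sqrt(t_max - t_min)

# Aleatory uncertainty: monthly temperatures sampled from normal
# distributions (illustrative means and spreads).
n = 10_000
t_mean = rng.normal(18.0, 1.5, n)
t_max = t_mean + rng.normal(8.0, 1.0, n)
t_min = t_mean - rng.normal(8.0, 1.0, n)
ra = 14.0  # mm/day equivalent, held fixed in this sketch

pet = pet_hargreaves_samani(t_mean, t_max, t_min, ra)
print(f"PET mean {pet.mean():.2f} mm/day, "
      f"90% interval [{np.quantile(pet, 0.05):.2f}, {np.quantile(pet, 0.95):.2f}]")
```

Repeating this over the full suite of PET models and aggregating the resulting distributions is what produces the p-box that bounds the epistemic (model) uncertainty.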
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Echolalic responses by a child with autism to four experimental conditions of sociolinguistic input.
Violette, J; Swisher, L
1992-02-01
Studies of the immediate verbal imitations (IVIs) of subjects with echolalia report that features of linguistic or social input alone affect the number of IVIs elicited. This experimental study of a child with echolalia and autism controlled each of these variables while introducing a systematic change in the other. The subject produced more (p < .05) IVIs in response to unknown lexical words presented with a high degree of directiveness (Condition D) than in response to three other conditions of stimulus presentation (e.g., unknown lexical words, minimally directive style). Thus, an interaction between the effects of linguistic and social input was demonstrated. IVIs were produced across all conditions, primarily during first presentations of lexical stimuli. Only the IVIs elicited by first presentations of the lexical stimuli during Condition D differed significantly (p < .05) from the number of IVIs elicited by first presentations of lexical stimuli in other conditions. Viewed together, these findings suggest that the occurrence of IVIs was related, at least for this child, to an uncertain or informative event, and that this response was significantly greater when the lexical stimuli were unknown and presented in a highly directive style.
Identifying Model-Based Reconfiguration Goals through Functional Deficiencies
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Trave-Massuyes, Louise
2004-01-01
Model-based diagnosis is now advanced to the point that autonomous systems can face some uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after a fault occurs, a process known as model-based reconfiguration. Given a prediction of the nominal behavior of the system and the result of the diagnosis operation after faults occur, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.
Processing uncertain RFID data in traceability supply chains.
Xie, Dong; Xiao, Jie; Guo, Guangjun; Jiang, Tong
2014-01-01
Radio Frequency Identification (RFID) is widely used to track and trace objects in traceability supply chains. However, the massive volumes of uncertain data produced by RFID readers cannot be used effectively or efficiently in RFID application systems. Following an analysis of the key features of RFID objects, this paper proposes a new framework for effectively and efficiently processing uncertain RFID data and supporting a variety of queries for tracking and tracing RFID objects. We adjust different smoothing windows according to different rates of uncertain data, employ different strategies to process uncertain readings, and distinguish ghost, missing, and incomplete data according to their apparent positions. We propose a comprehensive data model that is suitable for different application scenarios. In addition, a path coding scheme is proposed to significantly compress massive data by aggregating the path sequence, the position, and the time intervals. The scheme is also suitable for cyclic or long paths. Moreover, we further propose a processing algorithm for group and independent objects. Experimental evaluations show that our approach is effective and efficient in terms of compression and traceability queries.
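The idea of aggregating the path sequence, position, and time intervals can be illustrated by collapsing a raw reading stream into per-position stay intervals. This is only a simplified sketch of the compression idea, not the paper's actual coding scheme, and the names (`compress_path`, `Stay`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Stay:
    position: str
    t_start: int
    t_end: int

def compress_path(readings):
    """Collapse a raw reading stream [(position, timestamp), ...] into
    per-position stay intervals, aggregating the path sequence, the
    position, and the time intervals."""
    stays = []
    for pos, t in readings:
        if stays and stays[-1].position == pos:
            stays[-1].t_end = t             # extend the current stay
        else:
            stays.append(Stay(pos, t, t))   # new step on the path
    return stays

raw = [("dock", 1), ("dock", 2), ("dock", 3),
       ("belt", 4), ("belt", 5), ("truck", 9)]
print([(s.position, s.t_start, s.t_end) for s in compress_path(raw)])
# six raw readings collapse into three stay intervals
```

Traceability queries ("where was the object at time t?") can then run against the compact interval list instead of the raw reading stream.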
Efficient Portfolios of the Energy Technologies
NASA Astrophysics Data System (ADS)
Nikonov, Oleg I.; Medvedeva, Marina A.
2011-09-01
The goal of the research is to apply the methods of Portfolio Theory to a set of technologies instead of a set of securities on a stock market (as is the case in the original model). Assets on the stock market are objects that have risk and return, parameters that depend on uncertain factors and are thus themselves uncertain. The returns from the use of technologies also depend on uncertain factors, so each technology carries a certain amount of risk. The simultaneous use of several technologies can diversify the risks associated with them in the same way that diversification works on the stock market.
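The analogy can be made concrete with a Markowitz-style minimum-variance portfolio computed over technologies instead of securities. The expected returns and covariances below are illustrative numbers, not values from the paper:

```python
import numpy as np

# Hypothetical expected returns and return covariance for three
# energy technologies (illustrative only).
mu = np.array([0.08, 0.05, 0.12])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.02, 0.00],
                [0.00, 0.00, 0.09]])

# Global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1)
ones = np.ones(len(mu))
w = np.linalg.solve(cov, ones)
w /= w @ ones

print("weights:", np.round(w, 3))
print("expected return:", round(float(w @ mu), 4))
print("std dev:", round(float(np.sqrt(w @ cov @ w)), 4))
```

The mixed portfolio's standard deviation comes out below that of any single technology, which is exactly the diversification effect the abstract describes.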
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
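The conservative approximation mentioned above can be illustrated with the simplest robust counterpart from the Ben-Tal, El Ghaoui, and Nemirovski framework: a linear constraint whose coefficients live in an interval box holds for every realization exactly when it holds at the worst case. The coefficients below are made up for illustration:

```python
import numpy as np

def robust_feasible(x, a_nom, delta, b):
    """Check a'x <= b for every a in the box [a_nom - delta, a_nom + delta].
    The worst case is a_nom'x + delta'|x| <= b, the robust counterpart of
    an interval-uncertain linear constraint."""
    return a_nom @ x + delta @ np.abs(x) <= b

a_nom = np.array([2.0, 1.0])   # nominal constraint coefficients
delta = np.array([0.2, 0.1])   # interval half-widths on the coefficients
b = 10.0

x = np.array([3.0, 2.0])
print(robust_feasible(x, a_nom, delta, b))      # worst case 8.0 + 0.8 = 8.8 <= 10
x_bad = np.array([4.0, 2.0])
print(robust_feasible(x_bad, a_nom, delta, b))  # worst case 10.0 + 1.0 = 11.0 > 10
```

Since the worst-case term delta'|x| is convex and piecewise linear, the robust counterpart of an uncertain linear program remains a linear program after a standard reformulation, which is what makes the conservative approximation tractable.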
"Model age-based" and "copy when uncertain" biases in children's social learning of a novel task.
Wood, Lara A; Harrison, Rachel A; Lucas, Amanda J; McGuigan, Nicola; Burdett, Emily R R; Whiten, Andrew
2016-10-01
Theoretical models of social learning predict that individuals can benefit from using strategies that specify when and whom to copy. Here the interaction of two social learning strategies, model age-based biased copying and copy when uncertain, was investigated. Uncertainty was created via a systematic manipulation of demonstration efficacy (completeness) and efficiency (causal relevance of some actions). The participants, 4- to 6-year-old children (N=140), viewed both an adult model and a child model, each of whom used a different tool on a novel task. They did so in a complete condition, a near-complete condition, a partial demonstration condition, or a no-demonstration condition. Half of the demonstrations in each condition incorporated causally irrelevant actions by the models. Social transmission was assessed by first responses but also through children's continued fidelity, the hallmark of social traditions. Results revealed a bias to copy the child model both on first response and in continued interactions. Demonstration efficacy and efficiency did not affect choice of model at first response but did influence solution exploration across trials, with demonstrations containing causally irrelevant actions decreasing exploration of alternative methods. These results imply that uncertain environments can result in canalized social learning from specific classes of model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Global direct radiative forcing by process-parameterized aerosol optical properties
NASA Astrophysics Data System (ADS)
Kirkevåg, Alf; Iversen, Trond
2002-10-01
A parameterization of aerosol optical parameters is developed and implemented in an extended version of the community climate model version 3.2 (CCM3) of the U.S. National Center for Atmospheric Research. Direct radiative forcing (DRF) by monthly averaged calculated concentrations of non-sea-salt sulfate and black carbon (BC) is estimated. Inputs are production-specific BC and sulfate from [2002] and background aerosol size distribution and composition. The scheme interpolates between tabulated values to obtain the aerosol single scattering albedo, asymmetry factor, extinction coefficient, and specific extinction coefficient. The tables are constructed by full calculations of optical properties for an array of aerosol input values, for which size-distributed aerosol properties are estimated from theory for condensation and Brownian coagulation, an assumed distribution of cloud-droplet residuals from aqueous-phase oxidation, and prescribed properties of the background aerosols. Humidity swelling is estimated from the Köhler equation, and Mie calculations finally yield spectrally resolved aerosol optical parameters for 13 solar bands. The scheme is shown to give excellent agreement with nonparameterized DRF calculations for a wide range of situations. Using IPCC emission scenarios for the years 2000 and 2100, calculations with an atmospheric global climate model (AGCM) yield a global net anthropogenic DRF of -0.11 and 0.11 W m-2, respectively, when 90% of BC from biomass burning is assumed anthropogenic. In the 2000 scenario, the individual DRF due to sulfate and BC has separately been estimated at -0.29 and 0.19 W m-2, respectively. Our estimates of DRF by BC per BC mass burden are lower than earlier published estimates. Some sensitivity tests are included to investigate to what extent uncertain assumptions may influence these results.
Uncertain behaviours of integrated circuits improve computational performance.
Yoshimura, Chihiro; Yamaoka, Masanao; Hayashi, Masato; Okuyama, Takuya; Aoki, Hidetaka; Kawarabayashi, Ken-ichi; Mizuno, Hiroyuki
2015-11-20
Improvements to the performance of conventional computers have mainly been achieved through semiconductor scaling; however, scaling is reaching its limits. Natural phenomena, such as quantum superposition and stochastic resonance, have been introduced into new computing paradigms to improve performance beyond these limits. Here, we explain how the uncertain behaviours of devices that result from semiconductor scaling can improve the performance of computers. We prototyped an integrated circuit that performs a ground-state search of the Ising model. Bit errors of the memory cell devices holding the current state of the search occur probabilistically when fluctuations are inserted into the dynamic device characteristics, a behaviour that future scaled chips will exhibit. As a result, we observed improved solution accuracy compared with operation without fluctuations. Although uncertain device behaviours have so far been something to eliminate in conventional devices, we demonstrate that they can become the key to improving computational performance.
Boutkhoum, Omar; Hanine, Mohamed; Agouti, Tarik; Tikniouine, Abdessadek
2015-01-01
In this paper, we examine the issue of strategic industrial location selection in uncertain decision-making environments for siting a new industrial corporation. The industrial location issue is typically considered a crucial factor in business research, involving many calculations about natural resources, distributors, suppliers, customers, and other factors. Based on the integration of environmental, economic, and social decisive elements of sustainable development, this paper presents a hybrid decision-making model combining fuzzy multi-criteria analysis with the analytical capabilities that OLAP systems can provide for successful and optimal industrial location selection. The proposed model consists of three main stages. In the first stage, a decision-making committee is established to identify the evaluation criteria impacting the location selection process. In the second stage, we develop fuzzy AHP software based on the extent analysis method to assign importance weights to the selected criteria, which allows us to model linguistic vagueness, ambiguity, and incomplete knowledge. In the last stage, OLAP analysis integrated with multi-criteria analysis employs these weighted criteria as inputs to evaluate, rank, and select the strategic industrial location for siting a new business corporation in the region of Casablanca, Morocco. Finally, a sensitivity analysis is performed to evaluate the impact of the criteria weights and the preferences given by decision makers on the final rankings of strategic industrial locations.
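The second stage, weighting criteria by fuzzy AHP with the extent analysis method (Chang's approach), can be sketched as follows. The triangular fuzzy comparison matrix is illustrative, not the committee's actual judgments:

```python
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three hypothetical
# criteria, e.g. economic vs environmental vs social (illustrative).
M = np.array([
    [[1, 1, 1],       [1, 2, 3],       [2, 3, 4]],
    [[1/3, 1/2, 1],   [1, 1, 1],       [1, 2, 3]],
    [[1/4, 1/3, 1/2], [1/3, 1/2, 1],   [1, 1, 1]],
])

# Fuzzy synthetic extent: S_i = row_sum_i * (total_sum)^-1,
# using the reversed bounds of the total for fuzzy inversion.
row = M.sum(axis=1)
total = row.sum(axis=0)
S = np.column_stack([row[:, 0] / total[2],
                     row[:, 1] / total[1],
                     row[:, 2] / total[0]])

def possibility(a, b):
    """Degree of possibility V(a >= b) for triangular fuzzy numbers."""
    if a[1] >= b[1]:
        return 1.0
    if b[0] >= a[2]:
        return 0.0
    return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

# Crisp weight of each criterion: min possibility of dominating the others.
d = np.array([min(possibility(S[i], S[j]) for j in range(3) if j != i)
              for i in range(3)])
w = d / d.sum()
print(np.round(w, 3))
```

The normalized vector `w` is then fed to the OLAP-based ranking stage as the criteria weights.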
A new solar power output prediction based on hybrid forecast engine and decomposition model.
Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando
2018-06-12
Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks, and its uncertain nature, PV energy prediction has been pursued by researchers in recent decades. The problem directly affects operation of the power network and, due to the high volatility of the PV signal, an accurate prediction model is required. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in the EMD. The obtained output is then entered into a new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models. The obtained results confirm the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Non-Redfieldian Dynamics Explain Seasonal pCO2 Drawdown in the Gulf of Bothnia
NASA Astrophysics Data System (ADS)
Fransner, Filippa; Gustafsson, Erik; Tedesco, Letizia; Vichi, Marcello; Hordoir, Robinson; Roquet, Fabien; Spilling, Kristian; Kuznetsov, Ivan; Eilola, Kari; Mörth, Carl-Magnus; Humborg, Christoph; Nycander, Jonas
2018-01-01
High inputs of nutrients and organic matter make coastal seas places of intense air-sea CO2 exchange. Due to their complexity, the role of coastal seas in the global air-sea CO2 exchange is, however, still uncertain. Here, we investigate the role of phytoplankton stoichiometric flexibility and extracellular DOC production for the seasonal nutrient and CO2 partial pressure (pCO2) dynamics in the Gulf of Bothnia, Northern Baltic Sea. A 3-D ocean biogeochemical-physical model with variable phytoplankton stoichiometry is for the first time implemented in the area and validated against observations. By simulating non-Redfieldian internal phytoplankton stoichiometry, and a relatively large production of extracellular dissolved organic carbon (DOC), the model adequately reproduces observed seasonal cycles in macronutrients and pCO2. The uptake of atmospheric CO2 is underestimated by 50% if instead using the Redfield ratio to determine the carbon assimilation, as in other Baltic Sea models currently in use. The model further suggests, based on the observed drawdown of pCO2, that observational estimates of organic carbon production in the Gulf of Bothnia, derived with the 14C method, may be heavily underestimated. We conclude that stoichiometric variability and uncoupling of carbon and nutrient assimilation have to be considered in order to better understand the carbon cycle in coastal seas.
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often rely on uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics: one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. The series are then compared with regard to simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all four rainfall models are in principle suitable, but with different strengths and weaknesses regarding the rainfall and runoff characteristics considered.
GESAMP Working Group 38, The Atmospheric Input of Chemicals to the Ocean
NASA Astrophysics Data System (ADS)
Duce, Robert; Liss, Peter
2014-05-01
There is growing recognition of the impact of the atmospheric input of both natural and anthropogenic substances on ocean chemistry, biology, and biogeochemistry as well as climate. These inputs are closely related to a number of important global change issues. For example, the increasing input of anthropogenic nitrogen species from the atmosphere to much of the ocean may cause a low-level fertilization that could result in an increase in marine 'new' productivity of up to ~3% and thus impact carbon drawdown from the atmosphere. Similarly, much of the oceanic iron, which is a limiting nutrient in significant areas of the ocean, originates from the atmospheric input of minerals as a result of the long-range transport of mineral dust from continental regions. The increased supply of soluble phosphorus from atmospheric anthropogenic sources (through large-scale use of fertilizers) may also have a significant impact on surface-ocean biogeochemistry, but estimates of any effects are highly uncertain. There have been few assessments of the atmospheric inputs of sulfur and nitrogen oxides to the ocean and their impact on the rates of ocean acidification. These inputs may be particularly critical in heavily trafficked shipping lanes and in ocean regions proximate to highly industrialized land areas. Other atmospheric substances may also have an impact on the ocean, in particular lead, cadmium, and POPs. To address these and related issues, the United Nations Group of Experts on the Scientific Aspects of Marine Environmental Protection (GESAMP) initiated Working Group 38, The Atmospheric Input of Chemicals to the Ocean, in 2008. This Working Group has had four meetings. To date, four peer-reviewed papers have been produced from this effort, with at least eight others in the process of being written or published. This paper will discuss some of the results of the Working Group's deliberations and its plans for possible future work.
Feature inference with uncertain categorization: Re-assessing Anderson's rational model.
Konovalova, Elizaveta; Le Mens, Gaël
2017-09-18
A key function of categories is to support predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inference with uncertain categorization. A crucial property of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences, but that evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how the model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm, and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that it predicts participants' inferences better than two competing models: one assuming that inferences are based on just the most likely category, and one that is insensitive to categories but sensitive to the overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the most likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model that relaxes the conditional independence assumption would likely perform well in environments characterized by within-category feature correlation.
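The contrast between Anderson's rational model and the single-category heuristic can be stated in a few lines: the rational model averages the feature probability over all candidate categories, weighted by their posteriors, while the heuristic conditions only on the most likely category. The numbers below are illustrative:

```python
import numpy as np

# Two candidate categories, with posterior probabilities given the
# observed features of the object (illustrative values).
p_cat = np.array([0.7, 0.3])    # P(category | observed features)
p_feat = np.array([0.9, 0.2])   # P(target feature | category)

# Anderson's rational model: integrate over candidate categories.
p_anderson = p_cat @ p_feat

# Single-category heuristic: condition only on the most likely category.
p_single = p_feat[np.argmax(p_cat)]

print(round(float(p_anderson), 2), round(float(p_single), 2))  # 0.69 0.9
```

The gap between the two predictions (0.69 vs 0.9 here) is what lets the experiments discriminate whether the less likely category influences participants' inferences.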
Separation Control Over A Wall-Mounted Hump
NASA Technical Reports Server (NTRS)
Greenblatt, D.; Paschal, K. B.; Schaeffler, N. W.; Washburn, A. E.; Harris, J.; Yao, C. S.
2007-01-01
Separation control by means of steady suction or zero-efflux oscillatory jets is known to be effective in a wide variety of flows under different flow conditions. Control is effective when applied in a nominally two-dimensional manner, for example at the leading edge of a wing or at the shoulder of a deflected flap. Despite intuitive understanding of the flow, there is at present no accepted theoretical model that can adequately explain or describe the observed effects of the leading parameters, such as reduced suction rate, or frequency and momentum input. This difficulty stems partly from the turbulent nature of the flows combined with superimposed coherent structures, which are usually driven by at least one instability mechanism. The ever-increasing technological importance of these flows has spurred an urgent need to develop turbulence models with predictive capability. Present attempts to develop such models are hampered in one way or another by incomplete data sets, uncertain or undocumented inflow and boundary conditions, or inadequate flow-field measurements. This paper attempts to address these issues by conducting an experimental investigation of a low-speed separated flow over a wall-mounted hump model. The model geometry was designed by Seifert & Pack, who measured static and dynamic pressures on the model over a wide range of Reynolds and Mach numbers and control conditions. This paper describes the present experimental setup, as well as the types and range of data acquired. Sample data are presented and future work is discussed.
Towards a Fault-based SHA in the Southern Upper Rhine Graben
NASA Astrophysics Data System (ADS)
Baize, Stéphane; Reicherter, Klaus; Thomas, Jessica; Chartier, Thomas; Cushing, Edward Marc
2016-04-01
A brief look at a seismicity map of the Upper Rhine Graben area (roughly between Strasbourg and Basel) reveals that the region is seismically active. The area has recently been hit by shallow, moderate earthquakes, and historically strong earthquakes have damaged and devastated populated zones. Several authors have previously suggested, through preliminary geomorphological and geophysical studies, that active faults can be traced along the eastern margin of the graben. Thus, fault-based PSHA (probabilistic seismic hazard assessment) studies should be developed. Nevertheless, most of the input data in fault-based PSHA models are highly uncertain, based upon sparse or hypothetical data. Geophysical and geological data document the presence of post-Tertiary westward-dipping faults in the area. However, our first investigations suggest that the available surface fault maps do not provide a reliable record of Quaternary fault traces. The slip rate values currently usable in fault-based PSHA models rest on regional stratigraphic data, but these include neither detailed datings nor clear base-surface contours. Several hints of fault activity do exist, and relevant tools and techniques are now available to establish the activity of the faults of concern. Our preliminary analyses suggest that LiDAR topography can adequately image the fault segments and, through detailed geomorphological analysis, allows tracking of cumulative fault offsets. Because the fault models must currently be considered highly uncertain, our project for the next three years is to acquire and analyze such accurate topographical data, to trace the active faults, and to determine slip rates by dating relevant features. Eventually, we plan to find a key site for a paleoseismological trench, because this approach has proved worthwhile in the graben both to the north (Worms and Strasbourg) and to the south (Basel).
This would be done in order to definitely prove whether the faults ruptured the ground surface during the Quaternary, and in order to determine key fault parameters such as magnitude and age of large events.
A neural network model to predict the wastewater inflow incorporating rainfall events.
El-Din, Ahmed Gamal; Smith, Daniel W
2002-03-01
Under steady-state conditions, a wastewater treatment plant usually performs satisfactorily because these conditions are similar to design conditions. However, load variations constitute a large portion of the operating life of a treatment facility, and most of the observed problems in complying with permit requirements occur during these load transients. During storm events, upsets to the different physical and biological processes may take place in a wastewater treatment plant; therefore, the ability to predict the hydraulic load to a treatment facility during such events is very beneficial for the optimization of the treatment process. Most of the hydrologic and hydraulic models describing sewage collection systems are deterministic. Such models require detailed knowledge of the system and usually rely on a large number of parameters, some of which are uncertain or difficult to determine. Presented in this paper is an artificial neural network (ANN) model used to make short-term predictions of the wastewater inflow rate entering the Gold Bar Wastewater Treatment Plant (GBWWTP), the largest plant in the Edmonton area (Alberta, Canada). The neural model uses rainfall data, observed in the collection system discharging to the plant, as inputs. The model was built in a systematic way that allowed the identification of a parsimonious model able to learn (and not memorize) from past data and to generalize very well to unseen data used for validation. The neural network model gave excellent results. The potential of using the model as part of a real-time process control system is also discussed.
Utilizing a suite of satellite missions to address poorly constrained hydrological fluxes
NASA Astrophysics Data System (ADS)
Singh, A.; Behrangi, A.; Fisher, J.; Reager, J. T., II; Gardner, A. S.
2017-12-01
The amount of water stored in a given region (total water storage) changes in response to changes in the hydrologic balance (inputs minus outputs). Closing this balance is exceedingly difficult because of sparse field observations, large uncertainties in satellite-derived estimates, and model limitations. The reliability of individual hydrological parameters also differs by region: at high latitudes, precipitation is more uncertain than evapotranspiration (ET), while at low and middle latitudes the opposite is true. This study explores alternative estimates of regional hydrological fluxes by integrating total water storage estimated from the GRACE gravity fields with improved estimates of lake storage variation from Landsat-based land-water classification and satellite-altimetry-based water height measurements. In particular, an alternative ET estimate is generated for the Aral Sea region by integrating multi-sensor remote sensing data. In an endorheic lake like the Aral Sea, volumetric variations are predominantly governed by changes in inflow, evaporation from the water body, and precipitation on the lake. The Aral Sea water volume is estimated at a monthly time step by combining Landsat land-water classification with ocean radar altimetry (Jason-1 and Jason-2) observations using the truncated pyramid method. Treating gauge-based river runoff as a true observation, and given that there is little variability among multiple precipitation datasets (TRMM, GPCP, GPCC, and ERA), ET can be considered the most uncertain parameter in this region. The estimated lake volume then constrains ET, which is computed as the water-balance residual: inflow plus precipitation minus the change in storage. The estimated ET is compared with MODIS-based evaporation observations.
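The two computations at the core of this approach — the truncated-pyramid (frustum) volume between two observed lake surface areas, and ET as the residual of the endorheic water balance ΔV = inflow + P − ET — can be sketched as follows. The unit conventions are assumptions for illustration:

```python
def frustum_volume(a_lower, a_upper, dh):
    """Truncated-pyramid (frustum) volume between two lake surface
    areas (m^2) separated by a water-height difference dh (m)."""
    return dh / 3.0 * (a_lower + a_upper + (a_lower * a_upper) ** 0.5)

def et_residual(inflow, precip, dV):
    """Endorheic water balance dV = inflow + precip - ET, solved for ET.
    All terms are volumes over the same monthly time step."""
    return inflow + precip - dV
```

For a 2 m slab between surface areas of 120 and 100 km², the frustum formula gives a volume slightly below the simple trapezoidal estimate of 2 m × 110 km², reflecting the tapering of the basin.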
Utilizing a suite of satellite missions to address poorly constrained hydrological fluxes
NASA Astrophysics Data System (ADS)
Shukla, S.; Hobbins, M.; McEvoy, D.; Husak, G. J.; Dewes, C.; McNally, A.; Huntington, J. L.; Funk, C. C.; Verdin, J. P.
2016-12-01
Xu, Zeshui
2007-12-01
Interval utility values, interval fuzzy preference relations, and interval multiplicative preference relations are three common uncertain-preference formats used by decision-makers to provide their preference information in decision making under fuzziness. This paper is devoted to investigating multiple-attribute group-decision-making problems in which the attribute values are not precisely known but value ranges can be obtained, and the decision-makers provide their preference information over attributes in three different uncertain-preference formats: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first utilize some functions to normalize the uncertain decision matrix and then transform it into an expected decision matrix. We establish a goal-programming model to integrate the expected decision matrix with all three uncertain-preference formats, from which the attribute weights and the overall attribute values of the alternatives can be obtained. We then use the derived overall attribute values to rank the given alternatives and to select the best one(s). The model not only reflects both the subjective considerations of all decision-makers and the objective information, but also avoids losing and distorting the given objective and subjective decision information in the process of information integration. Furthermore, we establish some models to solve multiple-attribute group-decision-making problems with three different preference formats: 1) utility values; 2) fuzzy preference relations; and 3) multiplicative preference relations. Finally, we illustrate the applicability and effectiveness of the developed models with two practical examples.
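One common way to carry out the first step — turning an interval decision matrix into an expected (crisp) decision matrix — is the convex combination sketched below. The optimism index `alpha` is an assumption for illustration; the paper's exact normalization functions are not reproduced here.

```python
def expected_matrix(interval_matrix, alpha=0.5):
    """Map each interval attribute value (lo, hi) to its expected value
    (1 - alpha) * lo + alpha * hi; alpha = 0.5 gives the midpoint,
    alpha = 1 the most optimistic endpoint."""
    return [[(1 - alpha) * lo + alpha * hi for (lo, hi) in row]
            for row in interval_matrix]
```

The resulting crisp matrix is what the goal-programming model would then integrate with the three preference formats.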
NASA Astrophysics Data System (ADS)
Keener, V. W.; Finucane, M.; Brewington, L.
2014-12-01
For the last century, the island of Maui, Hawaii, has been the center of environmental, agricultural, and legal conflict with respect to surface and groundwater allocation. Planning for adequate future freshwater resources requires flexible and adaptive policies that emphasize partnerships and knowledge transfer between scientists and non-scientists. In 2012 the Hawai'i state legislature passed the Climate Change Adaptation Priority Guidelines law (Act 286), requiring county and state policy makers to include island-wide climate change scenarios in their planning processes. This research details ongoing work by researchers in the NOAA-funded Pacific RISA to support the development of Hawaii's first island-wide water use plan under the new climate adaptation directive. This integrated project combines several models with participatory future-scenario planning. The dynamically downscaled, triply nested Hawaii Regional Climate Model (HRCM) was modified from the WRF community model and calibrated to simulate the many microclimates of the Hawaiian archipelago. For the island of Maui, the HRCM was validated using 20 years of hindcast data, and daily projections were created at a 1 km scale to capture the steep topography and diverse rainfall regimes. Downscaled climate data are input into a USGS hydrological model to quantify groundwater recharge. This model was previously used for groundwater management and is being expanded using future climate projections, current land use maps, and future scenario maps informed by stakeholder input. Participatory scenario planning began in 2012, bringing together a diverse group of over 50 decision-makers in government, conservation, and agriculture to 1) determine the type of information they would find helpful in planning for climate change, and 2) develop a set of scenarios that represent alternative climate/management futures. This is an iterative process, resulting in flexible and transparent narratives at multiple scales.
The resulting climate, land use, and groundwater recharge maps give stakeholders a common set of future scenarios that they understand through the participatory scenario process, and identify the vulnerabilities, trade-offs, and adaptive priorities for different groundwater management and land uses in an uncertain future.
Quadratic stabilisability of multi-agent systems under switching topologies
NASA Astrophysics Data System (ADS)
Guan, Yongqiang; Ji, Zhijian; Zhang, Lin; Wang, Long
2014-12-01
This paper addresses the stabilisability of multi-agent systems (MASs) under switching topologies. Necessary and/or sufficient conditions are presented in terms of graph topology. These conditions explicitly reveal how the intrinsic dynamics of the agents, the communication topology, and the external control input jointly affect stabilisability. With an appropriate selection of the agents to which the external inputs are applied, and a suitable design of neighbour-interaction rules via a switching topology, an MAS is proved to be stabilisable even if each uncertain subsystem alone is not. In addition, a method is proposed to constructively design a switching rule for MASs with norm-bounded time-varying uncertainties. The switching rules designed via this method do not rely on the uncertainties, and the switched MAS is quadratically stabilisable via decentralised external self-feedback for all uncertainties. As applications of the stabilisability results, formation control and cooperative tracking control are addressed. Numerical simulations are presented to demonstrate the effectiveness of the proposed results.
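A minimal simulation of neighbour-interaction rules under a switching topology is sketched below for nominal single-integrator agents with no uncertainty — a far simpler setting than the paper's, shown only to illustrate the phenomenon: two graphs that are individually disconnected but jointly connected are alternated, and the agents still reach agreement.

```python
def laplacian(edges, n):
    """Graph Laplacian of an undirected graph on n nodes."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0; L[j][j] += 1.0
        L[i][j] -= 1.0; L[j][i] -= 1.0
    return L

def consensus_step(x, L, eps=0.2):
    """One discrete-time neighbour-interaction update: x <- x - eps * L x."""
    n = len(x)
    return [x[i] - eps * sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]

def run_switching(x, laplacians, steps):
    """Cycle through the Laplacians of a switching topology."""
    for k in range(steps):
        x = consensus_step(x, laplacians[k % len(laplacians)])
    return x
```

Since each Laplacian has zero column sums, the state average is preserved, and joint connectivity of the switching graphs drives the disagreement to zero.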
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration, and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
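The core Monte Carlo-over-intervals idea can be sketched as follows. The criteria, weights, and the "smaller is better" scoring rule are illustrative assumptions, not the MCITA formulation itself:

```python
import random

def mc_interval_ranking(alternatives, weights, n_samples=5000, seed=1):
    """alternatives: {name: [(lo, hi), ...]} with one interval per
    criterion (e.g. cost, concentration, risk), all 'smaller is better'.
    Samples every interval uniformly and counts how often each
    alternative attains the lowest weighted score."""
    rng = random.Random(seed)
    wins = {name: 0 for name in alternatives}
    for _ in range(n_samples):
        scores = {name: sum(w * rng.uniform(lo, hi)
                            for w, (lo, hi) in zip(weights, ivals))
                  for name, ivals in alternatives.items()}
        wins[min(scores, key=scores.get)] += 1
    return wins
```

The win counts give a simple picture of which management action is most frequently preferred once the interval uncertainty is propagated.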
THM and primordial nucleosynthesis: Results and perspectives
NASA Astrophysics Data System (ADS)
Pizzone, R. G.; Spartá, R.; Bertulani, C.; Spitaleri, C.; La Cognata, M.; Lamia, L.; Tumino, A.
2017-09-01
Big Bang Nucleosynthesis (BBN) requires several nuclear physics inputs and nuclear reaction rates. An up-to-date compilation of direct cross sections of the d(d,p)t, d(d,n)3He, and 3He(d,p)4He reactions is given, as these are among the most uncertain bare-nucleus cross sections. An intense experimental effort has been carried out over the last decade to apply the Trojan Horse Method (THM) to reactions of relevance for BBN and to measure their astrophysical S(E)-factors. The reaction rates and their relative errors for the four reactions of interest are then numerically calculated in the temperature ranges of relevance for BBN (0.01
Esralew, Rachel A; Flint, Lorraine; Thorne, James H; Boynton, Ryan; Flint, Alan
2016-07-01
Climate-change adaptation planning for managed wetlands is challenging under uncertain futures when the impact of historic climate variability on wetland response is unquantified. We assessed the vulnerability of Modoc National Wildlife Refuge (MNWR) using the Basin Characterization Model (BCM), a landscape hydrology model, and six global climate models representing projected wetter and drier conditions. We further developed a conceptual model that provides greater value for water managers by incorporating the BCM outputs into a conceptual framework that links modeled parameters to refuge management outcomes. This framework was used to identify landscape hydrology parameters that reflect refuge sensitivity to changes in (1) climatic water deficit (CWD) and recharge, and (2) the magnitude, timing, and frequency of water inputs. BCM outputs were developed for 1981-2100 to assess changes and forecast the probability of experiencing the wet and dry water-year types that have historically resulted in challenging conditions for refuge habitat management. We used a Yule's Q skill score to estimate the probability of modeled discharge that best represents historic water-year types. CWD increased in all models across 72.3-100 % of the water supply basin by 2100. Earlier timing of discharge, greater cool-season discharge, and less irrigation-season water supply were predicted by most models. Under the worst-case scenario, moderately dry years increased from 10-20 % to 40-60 % by 2100. MNWR could adapt by storing additional water during the cool season for later use and by prioritizing irrigation of habitats during dry years.
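Yule's Q, the skill score used above, is computed from a 2×2 contingency table of forecast versus observed categories (here, water-year types); a minimal sketch, with the table-cell naming an assumption:

```python
def yules_q(hits, false_alarms, misses, correct_negatives):
    """Yule's Q = (ad - bc) / (ad + bc) for a 2x2 contingency table.
    +1 indicates perfect association, 0 no skill, -1 perfect inverse
    association."""
    ad = hits * correct_negatives
    bc = false_alarms * misses
    return (ad - bc) / (ad + bc)
```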
Robust stability for uncertain stochastic fuzzy BAM neural networks with time-varying delays
NASA Astrophysics Data System (ADS)
Syed Ali, M.; Balasubramaniam, P.
2008-07-01
In this Letter, by utilizing a Lyapunov functional combined with the linear matrix inequality (LMI) approach, we analyze the global asymptotic stability of uncertain stochastic fuzzy Bidirectional Associative Memory (BAM) neural networks with time-varying delays, represented by Takagi-Sugeno (TS) fuzzy models. A new class of uncertain stochastic fuzzy BAM neural networks with time-varying delays is studied, and sufficient conditions are derived in stochastic settings. The developed results are more general than those reported in the earlier literature. In addition, numerical examples are provided to illustrate the applicability of the results using the LMI toolbox in MATLAB.
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast as one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solver. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS is found to give much better performance than its classical counterpart, nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
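The flavor of the constrained estimator can be illustrated with a projected-gradient solver for the nonnegativity-constrained least-squares subproblem. This is a generic sketch, not the paper's CRLS algorithm (which additionally handles the model-uncertainty term):

```python
def nnls_pg(A, b, iters=5000, lr=None):
    """Minimize ||Ax - b||^2 subject to x >= 0 by projected gradient
    descent. A is a dense m x n list-of-lists response matrix."""
    m, n = len(A), len(A[0])
    if lr is None:
        # conservative step size: 1 / ||A||_F^2 bounds 1 / (2 L)
        lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then projection onto the nonnegative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x
```

In the example below, the unconstrained minimizer has a negative component; the constraint clamps it to zero and shifts the remaining component, exactly the behavior that keeps recovered release histories physical.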
Robust Unit Commitment Considering Uncertain Demand Response
Liu, Guodong; Tomsovic, Kevin
2014-09-28
Although price-responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of power systems, the real response from the demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision, taking into account uncertainty of the price elasticity of demand. By optimizing the worst case under a proper robustness level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.
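The min-max principle behind the robust commitment can be shown on a toy example. The two "schedules" and their scenario costs below are invented numbers for illustration, not the paper's mixed-integer formulation:

```python
def robust_choice(options, scenarios, cost):
    """Return the option whose worst-case cost over the scenarios is lowest."""
    return min(options, key=lambda u: max(cost(u, s) for s in scenarios))

def expected_choice(options, scenarios, cost):
    """Return the option whose average cost over the scenarios is lowest."""
    return min(options,
               key=lambda u: sum(cost(u, s) for s in scenarios) / len(scenarios))
```

With a cheap schedule that is very exposed to one elasticity realization and a flexible schedule with steadier costs, the expected-value criterion and the worst-case criterion pick different commitments — the gap the robust model is designed to close.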
Altered brain activation and connectivity during anticipation of uncertain threat in trait anxiety.
Geng, Haiyang; Wang, Yi; Gu, Ruolei; Luo, Yue-Jia; Xu, Pengfei; Huang, Yuxia; Li, Xuebing
2018-06-08
In the research field of anxiety, previous studies have generally focused on emotional responses following threat. A recent model of anxiety proposes that altered anticipation prior to uncertain threat is related to the development of anxiety. Behavioral findings have established relationships between anxiety and distinct anticipatory processes, including attention, estimation of threat, and emotional responses. However, few studies have characterized the brain organization underlying anticipation of uncertain threat and its role in anxiety. In the present study, we used an emotional anticipation paradigm with functional magnetic resonance imaging (fMRI) to examine these questions through brain activation and generalized psychophysiological interaction (gPPI) analyses. In the activation analysis, we found that highly trait-anxious individuals showed significantly increased activation in the thalamus, middle temporal gyrus (MTG), and dorsomedial prefrontal cortex (dmPFC), as well as decreased activation in the precuneus, during anticipation of uncertain threat compared to the certain condition. In the gPPI analysis, key regions including the amygdala, dmPFC, and precuneus showed altered connections with distributed brain areas including the ventromedial prefrontal cortex (vmPFC), dorsolateral prefrontal cortex (dlPFC), inferior parietal sulcus (IPS), insula, para-hippocampus gyrus (PHA), thalamus, and MTG during anticipation of uncertain threat in anxious individuals. Taken together, our findings indicate that during the anticipation of uncertain threat, anxious individuals show altered activation and functional connectivity in widely distributed brain areas, which may be critical for abnormal perception, estimation, and emotional reactions to uncertain threat. © 2018 Wiley Periodicals, Inc.
Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4
NASA Astrophysics Data System (ADS)
Gasore, J.; Prinn, R. G.
2012-12-01
The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997; Cohen & Prinn 2011). However, its application has mainly been limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost as the number of uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has mainly been achieved by grouping the model grids into a few regions based on prior knowledge and expectations — urban versus rural, for instance. Because the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using Principal Component Analysis, which allows the uncertainty associated with the dimensional reduction to be quantified. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution with a priori fixed scale and location parameters is assumed in order to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of black carbon, organic carbon, and sulfate, and have carried out Monte Carlo sampling from the probability density functions of the six uncertain parameters using the reduced PCE model.
The global mean concentration of major tropospheric oxidants did not show a significant variation in response to the variation in input parameters. However, substantial variation at regional and temporal scales has been found. Tatang M. A., Pan W., Prinn R. G., McRae G. J., An efficient method for parametric uncertainty analysis of numerical geophysical models, J. Geophys. Res., 102, 21925-21932, 1997. Cohen, J. B., and R. G. Prinn, Development of a fast, urban chemistry metamodel for inclusion in global models, Atmos. Chem. Phys., 11, 7629-7656, doi:10.5194/acp-11-7629-2011, 2011. Emmons L. K., Walters S., Hess P. G., Lamarque J.-F., Pfister G. G., Fillmore D., Granier C., Guenther A., Kinnison D., Laepple T., Orlando J., Tie X., Tyndall G., Wiedinmyer C., Baughcum S. L., Kloster S., Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4), Geosci. Model Dev., 3, 43-67, 2010.
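The collocation idea at the heart of the PCM can be sketched in one dimension: evaluate the model at the roots of the next-higher-order orthogonal polynomial and solve for the polynomial-chaos coefficients. The sketch below uses a second-order probabilists'-Hermite expansion for a single standard-normal parameter; the model function is an arbitrary stand-in, not an aerosol parameter:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def pcm_surrogate(model):
    """Second-order Hermite PCE of model(x) for x ~ N(0, 1), fitted at
    the roots of He3(x) = x^3 - 3x (the collocation points)."""
    pts = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
    basis = lambda x: [1.0, x, x * x - 1.0]     # He0, He1, He2
    coeffs = gauss_solve([basis(p) for p in pts], [model(p) for p in pts])
    surrogate = lambda x: sum(c * b for c, b in zip(coeffs, basis(x)))
    return coeffs, surrogate
```

Because the higher Hermite terms have zero expectation, `coeffs[0]` is the surrogate's estimate of the model mean over the uncertain parameter, and the cheap surrogate can then be sampled by Monte Carlo in place of the full model.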
Insights from intercomparison of microbial and conventional soil models
NASA Astrophysics Data System (ADS)
Allison, S. D.; Li, J.; Luo, Y.; Mayes, M. A.; Wang, G.
2014-12-01
Changing the structure of soil biogeochemical models to represent coupling between microbial biomass and carbon substrate pools could improve predictions of carbon-climate feedbacks. So-called "microbial models" with this structure make very different predictions from conventional models based on first-order decay of carbon substrate pools. Still, the value of microbial models is uncertain because microbial physiological parameters are poorly constrained and model behaviors have not been fully explored. To address these issues, we developed an approach for intercomparing microbial and conventional models. We initially focused on soil carbon responses to microbial carbon use efficiency (CUE) and temperature. Three scenarios were implemented in all models at a common reference temperature (20 °C): constant CUE (held at 0.31), varied CUE (−0.016 °C⁻¹), and 50 % acclimated CUE (−0.008 °C⁻¹). Whereas the conventional model always showed soil carbon losses with increasing temperature, the microbial models each predicted a temperature threshold above which warming led to soil carbon gain. The location of this threshold depended on the CUE scenario, with higher temperature thresholds under the acclimated and constant scenarios. This result suggests that the temperature sensitivity of CUE and the structure of the soil carbon model together regulate the long-term soil carbon response to warming. Compared to the conventional model, all microbial models showed oscillatory behavior in response to perturbations and were much less sensitive to changing inputs. Oscillations were weakest in the most complex model with explicit enzyme pools, suggesting that multi-pool coupling might be a more realistic representation of the soil system. This study suggests that model structure and CUE parameterization should be carefully evaluated when scaling up microbial models to ecosystems and the globe.
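The biomass-substrate coupling that distinguishes microbial models from first-order models can be illustrated with a minimal two-pool sketch. The parameter values are arbitrary, chosen only so the equilibrium is easy to check; the intercompared models are considerably richer than this:

```python
def simulate_microbial(cue, inputs=1.0, vmax=8.0, km=200.0, kb=0.02,
                       c0=100.0, b0=2.0, dt=0.02, steps=150000):
    """Euler integration of a minimal microbial model:
        dC/dt = I - Vmax * B * C / (Km + C)          (substrate)
        dB/dt = CUE * Vmax * B * C / (Km + C) - kb*B (biomass)
    Returns the final (substrate, biomass) state."""
    C, B = c0, b0
    for _ in range(steps):
        uptake = vmax * B * C / (km + C)
        C += dt * (inputs - uptake)
        B += dt * (cue * uptake - kb * B)
    return C, B
```

At steady state the biomass settles at CUE·I/k_b, so the equilibrium depends directly on CUE — the sensitivity the temperature scenarios above exercise — while a first-order model has no such feedback.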
NASA Technical Reports Server (NTRS)
Vanderploeg, J. M.; Stewart, D. F.; Davis, J. R.
1986-01-01
Space motion sickness clinical characteristics, time course, prediction of susceptibility, and effectiveness of countermeasures were evaluated. Although there is wide individual variability, there appear to be typical patterns of symptom development. The duration of symptoms ranges from several hours to four days, with the majority of individuals being symptom-free by the end of the third day. The etiology of this malady remains uncertain, but evidence points to reinterpretation of otolith inputs as a key factor in the response of the neurovestibular system. Prediction of susceptibility and severity remains unsatisfactory. Countermeasures tried include medications, preflight adaptation, and autogenic feedback training. No countermeasure is entirely successful in eliminating or alleviating symptoms.
Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong
2014-01-01
The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value over the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The encoding design and the procedure of the algorithm are described. Results of an example demonstrate that the model and algorithm are effective. Compared with the expected-value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.
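The robust criterion — expected cost plus a deviation penalty over scenarios — can be sketched as below, with a brute-force choice among candidate locations standing in for the C-ACSA search. The candidate names and scenario costs are invented for illustration:

```python
def robust_objective(scenario_costs, lam=1.0):
    """Expected cost plus lam times the worst absolute deviation from
    the mean over all scenarios."""
    mean = sum(scenario_costs) / len(scenario_costs)
    dev = max(abs(c - mean) for c in scenario_costs)
    return mean + lam * dev

def best_location(candidates, lam=1.0):
    """candidates: {location: [cost per scenario]}; pick the robust minimizer."""
    return min(candidates, key=lambda loc: robust_objective(candidates[loc], lam))
```

Setting `lam = 0` recovers the plain expected-value choice; increasing `lam` trades average cost for protection against the disadvantageous scenarios.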
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2017-11-01
In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a great deal of attention. Rice farming is one such production process: it typically produces two types of outputs, one economically desirable and the other environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, the interval data approach has been found suitable for handling data uncertainty, as it is much simpler to model and needs less information about the underlying data distribution and membership function. The proposed model uses an enhanced DEA formulation that is based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs, and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah, and the results were compared with those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It was also found that the average efficiency of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. These results confirm the hypothesis, since farmers in the optimistic scenario operate in the best production situation and those in the pessimistic scenario in the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.
Liu, Shuguang; Bond-Lamberty, Ben; Hicke, Jeffrey A.; Vargas, Rodrigo; Zhao, Shuqing; Chen, Jing; Edburg, Steven L.; Hu, Yueming; Liu, Jinxun; McGuire, A. David; Xiao, Jingfeng; Keane, Robert; Yuan, Wenping; Tang, Jianwu; Luo, Yiqi; Potter, Christopher; Oeding, Jennifer
2011-01-01
Forest disturbances greatly alter the carbon cycle at various spatial and temporal scales. It is critical to understand disturbance regimes and their impacts to better quantify regional and global carbon dynamics. This review of the status of, and major challenges in, representing the impacts of disturbances in modeling carbon dynamics across North America revealed some major advances and challenges. First, significant advances have been made in the representation, scaling, and characterization of disturbances that should be included in regional modeling efforts. Second, there is a need to develop effective and comprehensive process-based procedures and algorithms to quantify the immediate and long-term impacts of disturbances on ecosystem succession, soils, microclimate, and the cycles of carbon, water, and nutrients. Third, our capability to simulate the occurrence and severity of disturbances is very limited. Fourth, scaling issues have rarely been addressed in continental-scale model applications, and it is not fully understood which finer-scale processes and properties need to be scaled to coarser spatial and temporal scales. Fifth, databases on disturbances at the continental scale are inadequate to support the quantification of their effects on the carbon balance in North America. Finally, procedures are needed to quantify the uncertainty of model inputs, model parameters, and model structures, and thus to estimate their impacts on overall model uncertainty. Working together, the scientific community interested in disturbance and its impacts can identify the most uncertain issues surrounding the role of disturbance in the North American carbon budget and develop working hypotheses to reduce that uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.
1997-12-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert-judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.
The development of acoustic experiments for off-campus teaching and learning
NASA Astrophysics Data System (ADS)
Wild, Graham; Swan, Geoff
2011-05-01
In this article, we show the implementation of a computer-based digital storage oscilloscope (DSO) and function generator (FG) using the computer's soundcard for off-campus acoustic experiments. The microphone input is used for the DSO, and a speaker jack is used as the FG. In an effort to reduce the cost of implementing the experiment, we examine software available for free online. A small number of applications were compared in terms of their interface and functionality, for both the DSO and the FG. The software was then used to investigate standing waves in pipes using the computer-based DSO. Standing wave theory taught in high school and in first-year physics is based on a one-dimensional model. With the use of the DSO's fast Fourier transform function, the experimental uncertainty alone was not sufficient to account for the difference observed between the measured and calculated frequencies. Hence the original experiment was expanded to include the end correction effect. The DSO was also used for other simple acoustics experiments, in areas such as the physics of music.
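As an aside, the end correction effect mentioned above is easy to demonstrate numerically. The sketch below (not from the article; the pipe dimensions are invented) computes the resonances of an open-closed pipe with and without the standard correction of roughly 0.3 times the diameter per open end, showing why the one-dimensional model overestimates the resonant frequencies:

```python
# Resonances of an open-closed pipe, with and without the end correction.
# All dimensions are illustrative assumptions, not the article's apparatus.
v = 343.0     # speed of sound in air at ~20 C, m/s
L = 0.30      # physical pipe length, m (assumed)
d = 0.03      # pipe internal diameter, m (assumed)

def f_open_closed(L_eff, n):
    """n-th resonance (n = 1, 3, 5, ...) of an open-closed pipe of acoustic length L_eff."""
    return n * v / (4.0 * L_eff)

# One open end: end correction ~ 0.3 * d added to the acoustic length.
L_corrected = L + 0.3 * d

for n in (1, 3, 5):
    f_ideal = f_open_closed(L, n)
    f_corr = f_open_closed(L_corrected, n)
    print(f"n={n}: ideal {f_ideal:.1f} Hz, end-corrected {f_corr:.1f} Hz")
```

The corrected frequencies are a few percent lower than the ideal ones, which is the size of discrepancy an FFT-based frequency measurement can resolve.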
Vidal, Claudia I; Armbrect, Eric A; Andea, Aleodor A; Bohlke, Angela K; Comfere, Nneka I; Hughes, Sarah R; Kim, Jinah; Kozel, Jessica A; Lee, Jason B; Linos, Konstantinos; Litzner, Brandon R; Missall, Tricia A; Novoa, Roberto A; Sundram, Uma; Swick, Brian L; Hurley, M Yadira; Alam, Murad; Argenyi, Zsolt; Duncan, Lyn M; Elston, Dirk M; Emanuel, Patrick O; Ferringer, Tammie; Fung, Maxwell A; Hosler, Gregory A; Lazar, Alexander J; Lowe, Lori; Plaza, Jose A; Prieto, Victor G; Robinson, June K; Schaffer, Andras; Subtil, Antonio; Wang, Wei-Lien
2018-04-21
Appropriate use criteria (AUC) provide physicians with guidance in test selection and can affect health care delivery, reimbursement policy, and physician decision-making. The American Society of Dermatopathology (ASDP), with input from the American Academy of Dermatology (AAD) and the College of American Pathologists (CAP), sought to develop AUC in dermatopathology. The RAND/UCLA appropriateness methodology, which combines evidence-based medicine, clinical experience, and expert judgment, was used to develop the AUC. With the number of ratings predetermined at 3, AUC were developed for 211 clinical scenarios (CS) involving 12 ancillary studies (AS). Consensus was reached for 188 (89%) CS, with 93 (44%) considered "usually appropriate", 52 (25%) "rarely appropriate", and 43 (20%) of "uncertain appropriateness". The methodology requires a focus on appropriateness without comparison between tests and irrespective of cost. The ultimate decision of when to order a specific test rests with the physician and should be one where the expected benefit exceeds the negative consequences. This publication outlines the appropriateness recommendations (AUC) for 12 tests used in dermatopathology. Importantly, these recommendations may change as new evidence emerges. Results deemed of "uncertain appropriateness", and those where consensus was not reached, may benefit from further research. Copyright © 2018. Published by Elsevier Inc.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
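To make the soft-constraint idea concrete, here is an illustrative sketch (plain Monte Carlo with a distribution-free Hoeffding bound, not the paper's hybrid method): sample the uncertain parameters, estimate the probability that an inequality constraint g(p) <= 0 is violated, and attach a conservative upper bound. The constraint g and the parameter bounds are made up for the example.

```python
# Monte Carlo estimate of a soft-constraint violation probability, plus a
# Hoeffding-style upper bound. g and the uncertainty model are hypothetical.
import math, random

random.seed(0)

def g(p1, p2):
    # hypothetical inequality constraint: violated when g > 0
    return p1**2 + 0.5 * p2 - 1.0

N = 20000
violations = sum(
    g(random.uniform(-1, 1), random.uniform(-1, 1)) > 0 for _ in range(N)
)
p_hat = violations / N

# Hoeffding: P(true p > p_hat + eps) <= delta with eps = sqrt(ln(1/delta)/(2N))
delta = 1e-3
eps = math.sqrt(math.log(1.0 / delta) / (2.0 * N))
print(f"estimated violation probability {p_hat:.4f}, "
      f"upper bound {p_hat + eps:.4f} at confidence {1 - delta}")
```

The paper's closed-form bounds are sharper than this generic sampling bound; the sketch only shows the shape of the computation.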
Intelligent diagnosis of jaundice with dynamic uncertain causality graph model.
Hao, Shao-Rui; Geng, Shi-Chao; Fan, Lin-Xiao; Chen, Jia-Jia; Zhang, Qin; Li, Lan-Juan
2017-05-01
Jaundice is a common and complex clinical symptom potentially occurring in hepatology, general surgery, pediatrics, infectious diseases, gynecology, and obstetrics, and it is fairly difficult to distinguish the cause of jaundice in clinical practice, especially for general practitioners in less developed regions. With collaboration between physicians and artificial intelligence engineers, a comprehensive knowledge base relevant to jaundice was created based on demographic information, symptoms, physical signs, laboratory tests, imaging diagnosis, medical histories, and risk factors. Then a diagnostic modeling and reasoning system using the dynamic uncertain causality graph was proposed. A modularized modeling scheme was presented to reduce the complexity of model construction, providing multiple perspectives and arbitrary granularity for disease causality representations. A "chaining" inference algorithm and weighted logic operation mechanism were employed to guarantee the exactness and efficiency of diagnostic reasoning under situations of incomplete and uncertain information. Moreover, the causal interactions among diseases and symptoms intuitively demonstrated the reasoning process in a graphical manner. Verification was performed using 203 randomly pooled clinical cases, and the accuracy was 99.01% and 84.73%, respectively, with or without laboratory tests in the model. The solutions were more explicable and convincing than common methods such as Bayesian Networks, further increasing the objectivity of clinical decision-making. The promising results indicated that our model could be potentially used in intelligent diagnosis and help decrease public health expenditure.
Assimilating uncertain, dynamic and intermittent streamflow observations in hydrological models
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon-Hurtado, Juan; Solomatine, Dimitri
2015-09-01
Catastrophic floods cause significant socio-economic losses. Non-structural measures, such as real-time flood forecasting, can potentially reduce flood risk. To this end, data assimilation methods have been used to improve flood forecasts by integrating static ground observations, and in some cases also remote sensing observations, within water models. Current hydrologic and hydraulic research considers assimilation of observations coming from traditional, static sensors. At the same time, low-cost mobile sensors and mobile communication devices are becoming increasingly available. The main goal and innovation of this study is to demonstrate the usefulness of assimilating uncertain streamflow observations that are dynamic in space and intermittent in time in the context of two different semi-distributed hydrological model structures. The developed method is applied to the Brue basin, where the dynamic observations are imitated by synthetic observations of discharge. The results of this study show how model structures and sensor locations affect the assimilation of streamflow observations in different ways. In addition, it proves how assimilation of such uncertain observations from dynamic sensors can provide model improvements similar to those from streamflow observations coming from a non-optimal network of static physical sensors. This is a potential application of recent efforts to build citizen observatories of water, which can make citizens active participants in information capturing, evaluation, and communication, while simultaneously helping to improve model-based flood forecasting.
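The core of assimilating an uncertain streamflow observation can be sketched with a scalar ensemble update. The toy example below (my own illustration, not the study's method or data: scalar state, direct observation, invented numbers) blends a forecast ensemble of discharge with one noisy observation using a Kalman gain:

```python
# Scalar ensemble Kalman-style update of a discharge forecast with one
# uncertain observation. All numbers are illustrative, not from the Brue basin.
import random
random.seed(1)

ensemble = [random.gauss(10.0, 2.0) for _ in range(500)]  # forecast discharge, m3/s
obs, obs_var = 12.0, 1.0**2   # uncertain (e.g. citizen-sourced) observation and its error variance

mean = sum(ensemble) / len(ensemble)
var = sum((x - mean) ** 2 for x in ensemble) / (len(ensemble) - 1)

K = var / (var + obs_var)     # Kalman gain: weight given to the observation
# perturbed-observation update: each member sees the obs plus sampled obs noise
analysis = [x + K * (obs + random.gauss(0, obs_var**0.5) - x) for x in ensemble]

a_mean = sum(analysis) / len(analysis)
print(f"forecast mean {mean:.2f}, analysis mean {a_mean:.2f}, gain {K:.2f}")
```

A noisier observation (larger `obs_var`) shrinks the gain, which is exactly how the uncertainty of intermittent, citizen-style observations is accounted for.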
Sense of agency is related to gamma band coupling in an inferior parietal-preSMA circuitry
Ritterband-Rosenbaum, Anina; Nielsen, Jens B.; Christensen, Mark S.
2014-01-01
In the present study we tested whether sense of agency (SoA) is reflected by changes in coupling between right medio-frontal/supplementary motor area (SMA) and inferior parietal cortex (IPC). Twelve healthy adult volunteers participated in the study. They performed a variation of a line-drawing task (Nielsen, 1963; Fourneret and Jeannerod, 1998), in which they moved a cursor on a digital tablet with their right hand without seeing the hand. Visual feedback displayed on a computer monitor was either in correspondence with or deviated from the actual movement. This made participants uncertain as to the agent of the movement, and they reported SoA in approximately 50% of trials when the movement was computer-generated. We tested whether IPC-preSMA coupling was associated with SoA, using dynamic causal modeling (DCM) for induced responses (Chen et al., 2008; Herz et al., 2012). Nine different DCMs were constructed for the early and late phases of the task, respectively. All models included two regions: a superior medial gyrus (preSMA) region and a right supramarginal gyrus (IPC) region. Bayesian model selection (Stephan et al., 2009) favored a model with input to IPC and modulation of the forward connection to preSMA in the late task phase, whereas a model with input to preSMA and modulation of the backward connection was favored for the early task phase. The analysis shows that IPC source activity in the 50–60 Hz range modulated preSMA source activity in the 40–70 Hz range in the presence of SoA compared with no SoA in the late task phase, but the test of the early task phase did not reveal any differences between presence and absence of SoA. We show that SoA is associated with a directionally specific between-frequency coupling from IPC to preSMA in the higher gamma (ɣ) band in the late task phase. This suggests that SoA is a retrospective perception, which is highly dependent on interpretation of the outcome of the performed action. PMID:25076883
Robot Path Planning in Uncertain Environments: A Language Measure-theoretic Approach
2014-01-01
Paper DS-14-1028, to appear in the Special Issue on Stochastic Models, Control and Algorithms in Robotics, ASME Journal of Dynamic Systems, Measurement and Control. Authors: Devesh K. Jha, Yue Li, Thomas A. Wettergren, Asok … The paper presents a path-planning algorithm, called ν⋆, formulated in the framework of probabilistic finite state automata (PFSA) and language measure from a control-theoretic perspective.
A Black-Scholes Approach to Satisfying the Demand in a Failure-Prone Manufacturing System
NASA Technical Reports Server (NTRS)
Chavez-Fuentes, Jorge R.; Gonzalez, Oscar R.; Gray, W. Steven
2007-01-01
The goal of this paper is to use a financial model and a hedging strategy in a systems application. In particular, the classical Black-Scholes model, which was developed in 1973 to find the fair price of a financial contract, is adapted to satisfy an uncertain demand in a manufacturing system when one of two production machines is unreliable. This financial model, together with a hedging strategy, is used to develop a closed-form formula for the production strategies of each machine. The strategy guarantees that the uncertain demand will be met in probability at the final time of the production process. It is assumed that the production efficiency of the unreliable machine can be modeled as a continuous-time stochastic process. Two simple examples illustrate the result.
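For reference, the classical Black-Scholes call-price formula the abstract builds on is shown below. The adaptation to production planning is in the paper and is not reproduced here; the parameters are purely financial and illustrative.

```python
# Classical Black-Scholes fair price of a European call option.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Call price: spot S, strike K, maturity T (years),
    risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# at-the-money call, 1 year, 5% rate, 20% volatility
print(f"call price: {bs_call(100, 100, 1.0, 0.05, 0.2):.2f}")  # -> 10.45
```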
A topo-graph model for indistinct target boundary definition from anatomical images.
Cui, Hui; Wang, Xiuying; Zhou, Jianlong; Gong, Guanzhong; Eberl, Stefan; Yin, Yong; Wang, Lisheng; Feng, Dagan; Fulham, Michael
2018-06-01
It can be challenging to delineate the target object in anatomical imaging when the object boundaries are difficult to discern due to low contrast or overlapping intensity distributions from adjacent tissues. We propose a topo-graph model to address this issue. The first step is to extract a topographic representation that reflects multiple levels of topographic information in an input image. We then define two types of node connections: nesting branches (NBs) and geodesic edges (GEs). NBs connect nodes corresponding to initial topographic regions, and GEs link the nodes at a detailed level. The weights for NBs are defined to measure the similarity of regional appearance, and weights for GEs are defined with geodesic and local constraints. NBs contribute to the separation of topographic regions, and the GEs assist the delineation of uncertain boundaries. Final segmentation is achieved by calculating the relevance of the unlabeled nodes to the labels through the optimization of a graph-based energy function. We tested our model on 47 low-contrast CT studies of patients with non-small cell lung cancer (NSCLC), 10 contrast-enhanced CT liver cases, and 50 breast and abdominal ultrasound images. The validation criteria are the Dice similarity coefficient and the Hausdorff distance. Student's t-tests show that our model outperformed graph models with pixel-only; pixel and regional; and neighboring and radial connections (p-values < 0.05). Our findings show that the topographic representation and topo-graph model provide improved delineation and separation of objects from adjacent tissues compared with the tested models. Copyright © 2018 Elsevier B.V. All rights reserved.
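The two validation criteria named in the abstract are standard and easy to state precisely; the sketch below computes both on toy binary masks (the segmentation model itself is not reproduced):

```python
# Dice similarity coefficient and symmetric Hausdorff distance on toy masks.
def dice(a, b):
    """Dice similarity coefficient between two sets of pixel coordinates."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (Euclidean)."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(u, v):
        return max(min(d(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

seg = [(0, 0), (0, 1), (1, 0), (1, 1)]        # predicted object pixels
ref = [(0, 0), (0, 1), (1, 0), (2, 2)]        # reference delineation
print(f"Dice {dice(seg, ref):.2f}, Hausdorff {hausdorff(seg, ref):.2f}")
```

Dice rewards volumetric overlap, while the Hausdorff distance penalizes the single worst boundary deviation, which is why the two are usually reported together.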
NASA Astrophysics Data System (ADS)
MU, J.; Antle, J. M.; Zhang, H.; Capalbo, S. M.; Eigenbrode, S.; Kruger, C.; Stockle, C.; Wolfhorst, J. D.
2013-12-01
Representative Agricultural Pathways (RAPs) are projections of plausible future biophysical and socio-economic conditions used to carry out climate impact assessments for agriculture. The development of RAPs is motivated by the fact that the various global and regional models used for agricultural climate change impact assessment have been implemented with individualized scenarios using various data and model structures, often without transparent documentation or public availability. These practices have hampered attempts at model inter-comparison, improvement, and synthesis of model results across studies. This paper aims to (1) present RAPs developed for the principal wheat-producing region of the Pacific Northwest, and (2) combine these RAPs with downscaled climate data, crop model simulations, and economic model simulations to assess climate change impacts on winter wheat production and farm income. This research was carried out as part of a project funded by the USDA known as Regional Approaches to Climate Change in the Pacific Northwest (REACCH). The REACCH study region encompasses the major winter wheat production area in the Pacific Northwest, and preliminary research shows that farmers producing winter wheat could benefit from future climate change. However, the future world is uncertain in many dimensions, including commodity and input prices, production technology, and policies, as well as an increased probability of disturbances (pests and diseases) associated with a changing climate. Many of these factors cannot be modeled, so they are represented in the regional RAPs. The regional RAPs are linked to global agricultural and shared socio-economic pathways, and used along with climate change projections to simulate future outcomes for wheat-based farms in the REACCH region.
NASA Astrophysics Data System (ADS)
Jin, Chenhao; Li, Jingcheng; Jang, Shinae; Sun, Xiaorong; Christenson, Richard
2015-03-01
Structural health monitoring has drawn significant attention in the past decades, with numerous methodologies and applications for civil structural systems. Although many researchers have developed analytical and experimental damage detection algorithms through vibration-based methods, these methods are not widely accepted for practical structural systems because of their sensitivity to uncertain environmental and operational conditions. The primary environmental factor that influences structural modal properties is temperature. The goal of this article is to analyze the natural frequency-temperature relationships and detect structural damage in the presence of operational and environmental variations using a modal-based method. For this purpose, correlations between natural frequency and temperature are analyzed to select proper independent variables and inputs for the multiple linear regression model and the neural network model. In order to capture the changes in natural frequency, confidence intervals for damage detection are generated for both models. A long-term structural health monitoring system was installed on an in-service highway bridge located in Meriden, Connecticut, to obtain vibration and environmental data. Experimental testing results show that the variability of measured natural frequencies due to temperature is captured, and that temperature-induced changes in natural frequencies have been accounted for prior to the establishment of the threshold in the damage warning system. This novel approach is applicable to structural health monitoring systems and helpful for assessing structural performance for bridge management and maintenance.
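The regression-plus-confidence-interval idea can be sketched in a few lines. The example below is a hedged illustration with synthetic data (not the bridge's measurements): fit frequency against temperature by least squares, then flag a new measurement that falls outside a band around the regression line.

```python
# Natural frequency vs. temperature: least-squares fit and a z-sigma
# damage-warning band. All data are synthetic and illustrative.
import math, random
random.seed(2)

temps = list(range(-10, 31, 2))                                    # deg C
freqs = [3.00 - 0.004 * t + random.gauss(0, 0.01) for t in temps]  # Hz

n = len(temps)
tm = sum(temps) / n
fm = sum(freqs) / n
b = sum((t - tm) * (f - fm) for t, f in zip(temps, freqs)) / \
    sum((t - tm) ** 2 for t in temps)                              # slope
a = fm - b * tm                                                    # intercept
resid_sd = math.sqrt(sum((f - (a + b * t)) ** 2
                         for t, f in zip(temps, freqs)) / (n - 2))

def is_anomalous(temp, freq, z=3.0):
    """Damage warning: frequency outside a z-sigma band at this temperature."""
    return abs(freq - (a + b * temp)) > z * resid_sd

print(f"fit: f = {a:.3f} {b:+.5f}*T, residual sd {resid_sd:.4f} Hz")
print(is_anomalous(20.0, 2.80))   # large frequency drop -> flagged
```

Because the temperature trend is removed before thresholding, a frequency shift is only flagged when it exceeds what temperature alone can explain, which is the point of the article's approach.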
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
In the last few years, the use of mathematical models of WasteWater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand advanced input for their implementation that must be evaluated by an extensive data-gathering campaign, which cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain. Quantification of the uncertainty is imperative. However, despite the importance of uncertainty quantification, only a few studies have been carried out in the wastewater treatment field, and those studies only included a few of the sources of model uncertainty. To advance the field, this paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has scarcely been applied in the wastewater field. The model was based on activated-sludge models 1 (ASM) and 2 (ASM2). Different approaches can be used for uncertainty analysis. The GLUE methodology requires a large number of Monte Carlo simulations in which a random sampling of individual parameters drawn from probability distributions is used to determine a set of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data were gathered. The analysis provided useful insights for WWTP modelling, identifying the crucial aspects where uncertainty is highest and where, therefore, more effort should be invested in both data gathering and modelling practice.
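The GLUE recipe described above (sample parameters from priors, score each run against observations, keep "behavioural" sets above a likelihood threshold, weight predictions) can be shown on a deliberately tiny stand-in model. The one-parameter linear "model", the Nash-Sutcliffe likelihood measure, and the threshold below are all illustrative choices, not the ASM/ASM2 setup of the paper.

```python
# Compact GLUE-style sketch on a toy one-parameter model.
import random
random.seed(3)

obs = [2.0, 4.1, 5.9, 8.2]                    # observed outputs
inputs = [1, 2, 3, 4]

def model(k, x):
    return k * x                              # stand-in for the real model

def nse(sim):
    """Nash-Sutcliffe efficiency as the informal likelihood measure."""
    om = sum(obs) / len(obs)
    return 1 - sum((s - o) ** 2 for s, o in zip(sim, obs)) / \
               sum((o - om) ** 2 for o in obs)

samples = [random.uniform(0.5, 4.0) for _ in range(5000)]     # prior on k
scored = [(k, nse([model(k, x) for x in inputs])) for k in samples]
behavioural = [(k, s) for k, s in scored if s > 0.9]          # threshold is a subjective GLUE choice

total = sum(s for _, s in behavioural)
k_mean = sum(k * s for k, s in behavioural) / total           # likelihood-weighted estimate
print(f"{len(behavioural)} behavioural sets, weighted k = {k_mean:.3f}")
```

Percentiles of the likelihood-weighted behavioural predictions would give the GLUE uncertainty bounds; with the many-parameter ASM models the same loop simply becomes much more expensive, which is why the paper stresses the large number of Monte Carlo runs.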
Permafrost carbon cycles under multifactor global change: a modeling analysis
NASA Astrophysics Data System (ADS)
Li, J.; Natali, S.; Schaedel, C.; Schuur, E. A.; Luo, Y.
2012-12-01
Carbon dioxide (CO2) and methane (CH4) emissions from permafrost zones are projected to be elevated under global change scenarios, but the magnitude and spatiotemporal variation of these greenhouse gas sources are still highly uncertain. Here we implement and evaluate the integration of a methane model into the Community Atmosphere-Biosphere Land Exchange model (CABLE v1.5 of CSIRO, Australia) in order to explore carbon emissions under warming, elevated CO2, and altered precipitation. Weather data were obtained from a tundra site, Eight Mile Lake in Alaska, and data from 2004-2009 were used to tune and validate the model. First, the measured data were transformed into the input format required by the model. Second, model parameters describing vegetation and soil were modified to accurately simulate the permafrost site. For example, we modified the soil resistivity in the model so that the modeled energy balance matched the observations. Currently, the modeled NPP is higher, and soil temperature lower, than the observations. Third, a new methane module is being integrated into the model. We simulate the methane production, oxidation, and emission processes (ebullition, diffusion, and plant-aided transport). We test new functions for soil pH and redox potential that impact microbial methane production and oxidation in soils. We link water table position (WTP) with the amount of decomposable carbon available to methanogens, in combination with spatially explicit simulation of soil temperature. We also validated the model and resolved the discrepancy between the model and observations. In this presentation, we will describe results of simulations to forecast CO2 and CH4 fluxes under climate change scenarios.
Two whisker motor areas in the rat cortex: evidence from thalamocortical connections.
Mohammed, Hisham; Jain, Neeraj
2014-02-15
In primates, the motor cortex consists of at least seven different areas, which are involved in movement planning, coordination, initiation, and execution. However, for rats, only the primary motor cortex has been well described. A rostrally located second motor area has been proposed, but its extent, organization, and even definitive existence remain uncertain. Only a rostral forelimb area (RFA) has been definitively described, besides a few reports of a rostral hindlimb area. We have previously proposed the existence of a second whisker area, which we termed the rostral whisker area (RWA), based on its differential response to intracortical microstimulation compared with the caudal whisker area (CWA) in animals under deep anesthesia (Tandon et al. [2008] Eur J Neurosci 27:228). To establish that RWA is distinct from the caudally contiguous CWA, we determined the sources of thalamic inputs to the two proposed whisker areas. Sources of inputs to RFA, the caudal forelimb area (CFA), and the caudal hindlimb region were determined for comparison. The results show that RWA and CWA can be distinguished based on differences in their thalamic inputs. RWA receives major projections from the mediodorsal and ventromedial nuclei, whereas the major projections to CWA are from the ventral anterior, ventrolateral, and posterior nuclei. Moreover, the thalamic nuclei that provide major inputs to RWA are the same as for RFA, and the nuclei projecting to CWA are the same as for CFA. The results suggest that rats have a second, rostrally located motor area with RWA and RFA as its constituents. Copyright © 2013 Wiley Periodicals, Inc.
Satoh, Akira; Makanae, Aki; Nishimoto, Yurie; Mitogawa, Kazumasa
2016-09-01
Urodele amphibians have a remarkable organ regeneration ability that is regulated by neural inputs. The identification of these neural inputs has been a challenge. Recently, Fibroblast growth factor (Fgf) and Bone morphogenetic protein (Bmp) were shown to substitute for nerve functions in limb and tail regeneration in urodele amphibians. However, direct evidence of Fgf and Bmp being secreted from nerve endings and regulating regeneration had not yet been shown. Thus, it remained uncertain whether they were the nerve factors responsible for successful limb regeneration. To gather experimental evidence, the technical difficulties involved in the use of axolotls had to be overcome. We achieved this by modifying the electroporation method. When Fgf8-AcGFP or Bmp7-AcGFP was electroporated into the axolotl dorsal root ganglia (DRG), GFP signals were detectable in the regenerating limb region. This suggested that Fgf8 and Bmp7 synthesized in neural cells in the DRG were delivered to the limbs through the long axons. Further knockdown experiments with double-stranded RNA interference resulted in impaired limb regeneration ability. These results strongly suggest that Fgf and Bmp are the major neural inputs that control the organ regeneration ability. Copyright © 2016 Elsevier Inc. All rights reserved.
A Method for Snow Reanalysis: The Sierra Nevada (USA) Example
NASA Technical Reports Server (NTRS)
Girotto, Manuela; Margulis, Steven; Cortes, Gonzalo; Durand, Michael
2017-01-01
This work presents a state-of-the-art methodology for constructing a snow water equivalent (SWE) reanalysis. The method comprises two main components: (1) a coupled land surface model and snow depletion curve model, which is used to generate an ensemble of predictions of SWE and snow cover area for a given set of (uncertain) inputs, and (2) a reanalysis step, which updates the estimation variables to be consistent with the satellite-observed depletion of the fractional snow cover time series. This method was applied over the Sierra Nevada (USA) based on the assimilation of remotely sensed fractional snow-covered area data from the Landsat 5-8 record (1985-2016). The verified dataset (based on a comparison with over 9000 station-years of in situ data) exhibited mean and root-mean-square errors of less than 3 and 13 cm, respectively, and correlation greater than 0.95 compared with in situ SWE observations. The method (fully Bayesian), resolution (daily, 90 meter), temporal extent (31 years), and accuracy provide a unique dataset for investigating snow processes. This presentation illustrates how the reanalysis dataset was used to provide a basic accounting of the stored snowpack water in the Sierra Nevada over the last 31 years and, ultimately, to improve real-time streamflow predictions.
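The reanalysis step can be caricatured as a particle-style Bayesian update: an ensemble of SWE predictions is reweighted by how well each member's fractional snow-covered area (fSCA) matches a satellite observation. The depletion curve, the numbers, and the single-observation setup below are all invented for the sketch and are much simpler than the paper's method.

```python
# Schematic Bayesian reweighting of an SWE ensemble by one fSCA observation.
import math, random
random.seed(4)

def fsca(swe):
    """Toy depletion curve: snow cover fraction saturates with SWE (m)."""
    return 1.0 - math.exp(-8.0 * swe)

ensemble = [max(0.0, random.gauss(0.25, 0.10)) for _ in range(2000)]  # prior SWE, m
obs_fsca, obs_sd = 0.80, 0.05                                         # Landsat-like obs

# Gaussian likelihood weight for each member, then a weighted posterior mean.
weights = [math.exp(-0.5 * ((fsca(s) - obs_fsca) / obs_sd) ** 2) for s in ensemble]
wsum = sum(weights)
posterior_swe = sum(w * s for w, s in zip(weights, ensemble)) / wsum
prior_swe = sum(ensemble) / len(ensemble)
print(f"prior SWE {prior_swe:.3f} m -> posterior SWE {posterior_swe:.3f} m")
```

Members whose depletion curve disagrees with the observed snow cover get negligible weight, pulling the SWE estimate toward observation-consistent values; in the real reanalysis this update is applied jointly to a whole season's fSCA time series.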
Signatures of Heavy Element Production in Neutron Star Mergers
NASA Astrophysics Data System (ADS)
Barnes, Jennifer
2018-06-01
Compact object mergers involving at least one neutron star have long been theorized to be sites of astrophysical nucleosynthesis via rapid neutron capture (the r-process). The observation in light and gravitational waves of the first neutron star merger (GW170817) this past summer provided a stunning confirmation of this theory. Electromagnetic emission powered by the radioactive decay of freshly synthesized nuclei from mergers encodes information about the composition burned by the r-process, including whether a particular merger event synthesized the heaviest nuclei along the r-process path or froze out at lower mass number. However, efforts to model the emission in detail must still contend with many uncertainties. For instance, the uncertain nuclear masses far from the valley of stability influence the final composition burned by the r-process, as will weak interactions operating in the merger’s immediate aftermath. This in turn can affect the color of the electromagnetic emission. Understanding the details of these transients’ spectra will also require a detailed accounting of the electronic transitions of r-process elements and ions, in order to identify the strong transitions that underlie spectral formation. This talk will provide an overview of our current understanding of radioactive transients from mergers, with an emphasis on the role of experiment in providing critical inputs for models and reducing uncertainty.
Uncertainties of fluxes and 13C / 12C ratios of atmospheric reactive-gas emissions
NASA Astrophysics Data System (ADS)
Gromov, Sergey; Brenninkmeijer, Carl A. M.; Jöckel, Patrick
2017-07-01
We provide a comprehensive review of the proxy data on the 13C / 12C ratios and uncertainties of emissions of reactive carbonaceous compounds into the atmosphere, with a focus on CO sources. Based on an evaluated set-up of the EMAC model, we derive the isotope-resolved data set of its emission inventory for the 1997-2005 period. Additionally, we revisit the calculus required for the correct derivation of uncertainties associated with isotope ratios of emission fluxes. The resulting δ13C of overall surface CO emission in 2000 of -(25. 2 ± 0. 7) ‰ is in line with previous bottom-up estimates and is less uncertain by a factor of 2. In contrast to this, we find that uncertainties of the respective inverse modelling estimates may be substantially larger due to the correlated nature of their derivation. We reckon the δ13C values of surface emissions of higher hydrocarbons to be within -24 to -27 ‰ (uncertainty typically below ±1 ‰), with an exception of isoprene and methanol emissions being close to -30 and -60 ‰, respectively. The isotope signature of ethane surface emission coincides with earlier estimates, but integrates very different source inputs. δ13C values are reported relative to V-PDB.
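The δ13C of a mixture of emission sources is a flux-weighted mean, and the correct uncertainty propagation the abstract alludes to follows from first-order error analysis. The sketch below uses hypothetical source fluxes and signatures (not the paper's inventory) and assumes uncorrelated errors, which, as the abstract notes, is precisely what fails for inverse-modelling estimates.

```python
# Flux-weighted mean d13C of a set of CO sources with first-order
# (uncorrelated-errors) uncertainty propagation. Values are hypothetical.
import math

# (flux Tg/yr, flux sd, d13C permil, d13C sd)
sources = [
    (500.0, 50.0, -27.5, 0.5),   # e.g. combustion-like source
    (300.0, 60.0, -32.0, 1.0),   # e.g. hydrocarbon-oxidation-like source
    (100.0, 30.0, -21.0, 2.0),   # e.g. minor enriched source
]

F = sum(f for f, _, _, _ in sources)
d_mix = sum(f * d for f, _, d, _ in sources) / F

# Partials: d(d_mix)/d(d_i) = f_i/F ; d(d_mix)/d(f_i) = (d_i - d_mix)/F
var = sum((f / F * sd_d) ** 2 + ((d - d_mix) / F * sd_f) ** 2
          for f, sd_f, d, sd_d in sources)
print(f"mixture d13C = {d_mix:.2f} +/- {math.sqrt(var):.2f} permil")
```

Note that the flux uncertainties enter weighted by each source's isotopic distance from the mixture, so a small but isotopically distinct source can dominate the error budget.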
Lee, S.-Y.; Barnes, C.G.; Snoke, A.W.; Howard, K.A.; Frost, C.D.
2003-01-01
Two groups of closely associated, peraluminous, two-mica granitic gneiss were identified in the area. The older, sparsely distributed unit is equigranular (EG) with initial εNd ≈ −8.8 and initial 87Sr/86Sr ≈ 0.7098. Its age is uncertain. The younger unit is Late Cretaceous (≈80 Ma), pegmatitic, and sillimanite-bearing (KPG), with εNd from −15.8 to −17.3 and initial 87Sr/86Sr from 0.7157 to 0.7198. The concentrations of Fe, Mg, Na, Ca, Sr, V, Zr, Zn and Hf are higher, and K, Rb and Th are lower, in the EG. Major- and trace-element models indicate that the KPG was derived by muscovite dehydration melting (<35 km depth) of Neoproterozoic metapelitic rocks that are widespread in the eastern Great Basin. The models are broadly consistent with anatexis of crust tectonically thickened during the Sevier orogeny; no mantle mass or heat contribution was necessary. As such, this unit represents one crustal end-member of regional Late Cretaceous peraluminous granites. The EG was produced by biotite dehydration melting at greater depths, with garnet stable in the residue. The source of the EG was probably Paleoproterozoic metagraywacke. Because EG magmatism probably pre-dated Late Cretaceous crustal thickening, it required heat input from the mantle or from mantle-derived magma.
NASA Astrophysics Data System (ADS)
Ataei-Esfahani, Armin
In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider robust control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the application of the SOS method, one can cast a stability analysis or control design problem as a convex optimization problem, which can be solved efficiently using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method provides a reliable and fast approach for stability analysis and control design for low-order systems defined over the space of relatively low-degree polynomials. However, the SOS method is not well suited for control problems involving uncertain systems, especially those with a relatively large number of uncertainties or with a non-affine uncertainty structure. To avoid the increased complexity of SOS problems for uncertain systems, we present an algorithm that transforms an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which can guarantee the feasibility of a given solution candidate with an a priori fixed probability of violation and a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties.
The first approach is based on a combination of the PEA and the SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches for parameter-dependent Lyapunov functions. The control design problem is investigated through a case study of a hypersonic aircraft model with parametric uncertainties. Through time-scale decomposition and a series of function approximations, the complexity of the aircraft model is reduced to fall within the capability of SDP solvers. The control design problem is then formulated as a convex problem using the dual of the Lyapunov theorem. A nonlinear robust controller is synthesized using the combined PEA/SOS method. The response of the uncertain aircraft model is evaluated for two sets of pilot commands. As the simulation results show, the aircraft remains stable under up to 50% uncertainty in the aerodynamic coefficients and can follow the pilot commands.
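The sampling idea behind a probabilistic feasibility check of this kind can be sketched in a few lines. The system matrix, the candidate Lyapunov matrix P, the uncertainty range, and the sample-size bound below are all illustrative assumptions, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

def lmi_holds(P, delta):
    # Toy uncertain system x' = A(delta) x with non-affine dependence on delta
    A = np.array([[0.0, 1.0],
                  [-2.0 - np.sin(delta), -3.0 + 0.5 * delta**2]])
    # Lyapunov LMI: A(delta)' P + P A(delta) must be negative definite
    M = A.T @ P + P @ A
    return np.max(np.linalg.eigvalsh(M)) < 0.0

eps, beta = 0.05, 1e-6                         # violation probability, confidence
N = int(np.ceil(np.log(1.0 / beta) / eps))     # Chernoff-style sample bound
P = np.array([[1.5, 0.5],
              [0.5, 0.5]])                     # candidate solution (hypothetical)

ok = all(lmi_holds(P, d) for d in rng.uniform(-0.5, 0.5, size=N))
print(N, ok)
```

A key attraction of randomized methods for non-affine uncertainty is that the sample size N grows only with the violation probability eps and confidence beta, not with the number of uncertain parameters.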
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by incorporating observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared with those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data alone. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
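As a minimal illustration of the updating step, the sketch below performs a grid-based Bayesian update of a single rate parameter of a toy linear-reservoir model from noisy discharge observations. The reservoir model, the prior, and the noise level are invented stand-ins for the distributed model and its autoregressive error model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_discharge(k, rain):
    # Toy linear-reservoir stand-in for the distributed model (hypothetical)
    q, store = [], 10.0
    for r in rain:
        store += r
        out = k * store        # outflow proportional to storage
        store -= out
        q.append(out)
    return np.array(q)

rain = rng.exponential(2.0, size=50)
k_true = 0.3
obs = simulate_discharge(k_true, rain) + rng.normal(0.0, 0.2, size=50)

# Coarse prior on the rate parameter k, updated on a grid (Gaussian error model)
k_grid = np.linspace(0.05, 0.95, 181)
prior = np.ones_like(k_grid) / k_grid.size
loglik = np.array([-0.5 * np.sum((obs - simulate_discharge(k, rain))**2) / 0.2**2
                   for k in k_grid])
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()

k_map = k_grid[np.argmax(post)]
print(round(k_map, 2))
```

The posterior concentrates near the true rate, mirroring how the discharge data narrowed the marginal distributions of the percolation-rate parameters in the study.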
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
Characterizing Topology of Probabilistic Biological Networks.
Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2013-09-06
Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in time polynomial in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power-law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
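The polynomial-time computation can be illustrated for a single node: when its incident edges exist independently, the node degree follows a Poisson-binomial distribution, computable by a simple dynamic program instead of enumerating all 2^n topologies. This sketch illustrates the flavor of the approach, not the authors' exact algorithm:

```python
import numpy as np

def degree_distribution(edge_probs):
    """Exact degree distribution of one node whose incident edges exist
    independently with the given probabilities (Poisson-binomial, O(n^2) DP)."""
    dist = np.array([1.0])            # P(degree = 0) before any edge is considered
    for p in edge_probs:
        new = np.zeros(dist.size + 1)
        new[:-1] += dist * (1.0 - p)  # edge absent: degree unchanged
        new[1:] += dist * p           # edge present: degree + 1
        dist = new
    return dist

# Node with three uncertain interactions
dist = degree_distribution([0.9, 0.5, 0.1])
print(dist.round(3))  # [0.045 0.455 0.455 0.045]
```

Each of the 2^3 = 8 topologies contributes to exactly one degree value, yet the loop touches each edge only once, which is what keeps the computation polynomial.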
Robust optimization modelling with applications to industry and environmental problems
NASA Astrophysics Data System (ADS)
Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman
2017-10-01
Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By definition, the robust counterpart depends strongly on how we choose the uncertainty set. As a consequence, we can meet this challenge only if this set is chosen in a suitable way. The development of RO has been fast: since 2004, a new approach called Adjustable Robust Optimization (ARO) has been available to handle uncertain problems in which the decision variables must be decided as "wait and see" variables, in contrast to classic RO, which models decision variables as "here and now". In ARO, uncertain problems can be considered multistage decision problems, and the decision variables involved become wait-and-see decision variables. In this paper we present applications of both RO and ARO, and we briefly present all results to underline the importance of RO and ARO in many real-life problems.
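A concrete case where the robust counterpart is tractable: a linear constraint whose coefficient row is only known to lie in a box. With nonnegative variables, the worst case is attained at the upper corner of the box, so the counterpart is again a single linear constraint. The numbers below are hypothetical, solved with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# max 3*x1 + 2*x2  s.t.  a' x <= 10,  x >= 0, where the row a is only known
# to lie in the box a0 - d <= a <= a0 + d (illustrative data)
c = np.array([3.0, 2.0])
a0 = np.array([2.0, 1.0])
d = np.array([0.5, 0.2])

nominal = linprog(-c, A_ub=[a0], b_ub=[10.0], bounds=[(0, None)] * 2)
# For x >= 0 the worst case in the box is a0 + d, so the robust
# counterpart of the uncertain constraint is again one linear constraint:
robust = linprog(-c, A_ub=[a0 + d], b_ub=[10.0], bounds=[(0, None)] * 2)

print(round(-nominal.fun, 2), round(-robust.fun, 2))
```

The robust optimum (about 16.67) is lower than the nominal one (20): the price paid for immunizing the solution against every coefficient realization in the box.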
An Analytical Framework for Soft and Hard Data Fusion: A Dempster-Shafer Belief Theoretic Approach
2012-08-01
fusion. Therefore, we provide a detailed discussion on uncertain data types, their origins and three uncertainty processing formalisms that are popular...suitable membership functions corresponding to the fuzzy sets. 3.2.3 DS Theory The DS belief theory, originally proposed by Dempster, can be thought of as...originated and various imperfections of the source. Uncertainty handling formalisms provide techniques for modeling and working with these uncertain data types
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are of significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
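The distinction between first- and second-order Monte Carlo can be sketched with a toy dose model (the model and every distribution below are hypothetical): the first-order loop samples only the aleatory variability, while the second-order (nested) loop also samples the epistemically uncertain moments, which widens the output distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def dose_factor(transfer):
    # Toy radioecological model: dose factor proportional to a transfer parameter
    return 1.0e-9 * transfer

# First order: aleatory variability only, with fixed lognormal moments
first = dose_factor(rng.lognormal(mean=0.0, sigma=0.5, size=100_000))

# Second order: the moments themselves are epistemically uncertain
second = []
for _ in range(200):                    # outer, epistemic loop
    mu = rng.normal(0.0, 0.3)           # uncertain mean of the log-parameter
    sigma = rng.uniform(0.3, 0.8)       # uncertain spread
    second.append(dose_factor(rng.lognormal(mu, sigma, size=500)))
second = np.concatenate(second)

spread = lambda x: np.percentile(x, 97.5) / np.percentile(x, 2.5)
print(spread(first) < spread(second))
```

The wider 95% band of the nested simulation mirrors the paper's finding that the solution space of second-order Monte Carlo is much larger than that of first-order Monte Carlo.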
Tian, Di; Hu, Yongtao; Wang, Yuhang; Boylan, James W; Zheng, Mei; Russell, Armistead G
2009-01-15
Biomass burning is a major and growing contributor to particulate matter with an aerodynamic diameter less than 2.5 μm (PM2.5). Such impacts (especially individual impacts from each burning source) are quantified using the Community Multiscale Air Quality (CMAQ) Model, a chemical transport model (CTM). Given the sensitivity of CTM results to uncertain emission inputs, simulations were conducted using three biomass burning inventories. Shortcomings in the burning emissions were also evaluated by comparing simulations with observations and results from a receptor model. Model performance improved significantly with the updated emissions and speciation profiles based on recent measurements for biomass burning: mean fractional bias is reduced from 22% to 4% for elemental carbon and from 18% to 12% for organic matter; mean fractional error is reduced from 59% to 50% for elemental carbon and from 55% to 49% for organic matter. Quantified impacts of biomass burning on PM2.5 during January, March, May, and July 2002 are 3.0, 5.1, 0.8, and 0.3 μg m-3 domainwide on average, with more than 80% of such impacts being from primary emissions. Impacts of prescribed burning dominate biomass burning impacts, contributing about 55% and 80% of PM2.5 in January and March, respectively, followed by land clearing and agriculture field burning. Significant impacts of wildfires in May and residential wood combustion in fireplaces and woodstoves in January are also found.
Accounting for system dynamics in reserve design.
Leroux, Shawn J; Schmiegelow, Fiona K A; Cumming, Steve G; Lessard, Robert B; Nagy, John
2007-10-01
Systematic conservation plans have only recently considered the dynamic nature of ecosystems. Methods have been developed to incorporate climate change, population dynamics, and uncertainty in reserve design, but few studies have examined how to account for natural disturbance. Considering natural disturbance in reserve design may be especially important for the world's remaining intact areas, which still experience active natural disturbance regimes. We developed a spatially explicit, dynamic simulation model, CONSERV, which simulates patch dynamics and fire, and used it to evaluate the efficacy of hypothetical reserve networks in northern Canada. We designed six networks based on conventional reserve design methods, with different conservation targets for woodland caribou habitat, high-quality wetlands, vegetation, water bodies, and relative connectedness. We input the six reserve networks into CONSERV and tracked the ability of each to maintain initial conservation targets through time under an active natural disturbance regime. None of the reserve networks maintained all initial targets, and some over-represented certain features, suggesting that both effectiveness and efficiency of reserve design could be improved through use of spatially explicit dynamic simulation during the planning process. Spatial simulation models of landscape dynamics are commonly used in natural resource management, but we provide the first illustration of their potential use for reserve design. Spatial simulation models could be used iteratively to evaluate competing reserve designs and select targets that have a higher likelihood of being maintained through time. Such models could be combined with dynamic planning techniques to develop a general theory for reserve design in an uncertain world.
NASA Astrophysics Data System (ADS)
Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan
2017-04-01
More accurate and reliable hydrologic simulations are important for many applications, such as water resource management, projections of future water availability, and predictions of extreme events. However, simulation of spatial and temporal variations in the critical water budget components, such as precipitation, snow, evaporation and runoff, is highly uncertain due to errors in, e.g., model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques to improve the predictability of continental-scale water fluxes, combining in-situ measurements with remotely sensed information to improve hydrologic predictions for water resource systems. The Community Land Model, version 3.5 (CLM), integrated with the Parallel Data Assimilation Framework (PDAF), was implemented at a spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with the high-resolution reanalysis COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ) and with ERA-Interim datasets for the period 1994-2014. A series of data assimilation experiments was conducted to assess the efficiency of assimilating various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements, into CLM-PDAF at regional to continental scales. This setup not only allows uncertainties to be quantified, but also improves streamflow predictions by simultaneously updating model states and parameters with observational information. The results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.
Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic
NASA Astrophysics Data System (ADS)
Haag, T.; Herrmann, J.; Hanss, M.
2010-10-01
For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based only on the output data of the transformation method. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to a specific frequency range where a solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
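The feedforward ingredient of such methods, propagating a fuzzy-valued parameter through the model by alpha-cuts as in the (reduced) transformation method of fuzzy arithmetic, can be sketched as follows. The triangular stiffness parameter and the natural-frequency model are invented for illustration:

```python
import numpy as np

def alpha_cuts(lo, peak, hi, levels):
    """Interval bounds of a triangular fuzzy number at each membership level."""
    return [(lo + a * (peak - lo), hi - a * (hi - peak)) for a in levels]

def propagate(f, cuts, samples=11):
    """Transformation-method-style propagation: evaluate f on points spanning
    each alpha-cut and keep the min/max per level (feedforward, no inversion)."""
    out = []
    for lo, hi in cuts:
        vals = f(np.linspace(lo, hi, samples))
        out.append((vals.min(), vals.max()))
    return out

# Fuzzy stiffness k propagated into a natural frequency f(k) = sqrt(k/m)/(2*pi)
m = 2.0
levels = np.linspace(0.0, 1.0, 5)
cuts = alpha_cuts(800.0, 1000.0, 1300.0, levels)
freq_cuts = propagate(lambda k: np.sqrt(k / m) / (2 * np.pi), cuts)

lo0, hi0 = freq_cuts[0]            # support (alpha = 0)
lo1, hi1 = freq_cuts[-1]           # core (alpha = 1)
print(round(lo0, 2), round(hi0, 2), round(lo1, 2), round(hi1, 2))  # 3.18 4.06 3.56 3.56
```

The inverse identification described in the abstract would run this forward map repeatedly and adjust the fuzzy parameter until the computed output cuts cover the reference output.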
NASA Astrophysics Data System (ADS)
Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team
2017-12-01
The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands reliable profile control routines in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits the knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based merely on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
Carbon Dioxide Physiological Forcing Dominates Projected Eastern Amazonian Drying
NASA Astrophysics Data System (ADS)
Richardson, T. B.; Forster, P. M.; Andrews, T.; Boucher, O.; Faluvegi, G.; Fläschner, D.; Kasoar, M.; Kirkevåg, A.; Lamarque, J.-F.; Myhre, G.; Olivié, D.; Samset, B. H.; Shawki, D.; Shindell, D.; Takemura, T.; Voulgarakis, A.
2018-03-01
Future projections of east Amazonian precipitation indicate drying, but they are uncertain and poorly understood. In this study we analyze the Amazonian precipitation response to individual atmospheric forcings using a number of global climate models. Black carbon is found to drive reduced precipitation over the Amazon due to temperature-driven circulation changes, but the magnitude is uncertain. CO2 drives reductions in precipitation concentrated in the east, mainly due to a robustly negative, but highly variable in magnitude, fast response. We find that the physiological effect of CO2 on plant stomata is the dominant driver of the fast response due to reduced latent heating and also contributes to the large model spread. Using a simple model, we show that CO2 physiological effects dominate future multimodel mean precipitation projections over the Amazon. However, in individual models temperature-driven changes can be large, but due to little agreement, they largely cancel out in the model mean.
High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.
Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie
2011-01-01
The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. The task is difficult to perform due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the great number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research a High-Order Sliding-Mode Controller is proposed. It is applied to two well-known models, the Bergman Minimal Model and the Sorensen Model, to test its robustness with respect to uncertain dynamics and patient parameter variability. The controller designed on the basis of the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.
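A standard member of the high-order sliding-mode family is the super-twisting algorithm. The toy simulation below applies it to a scalar uncertain integrator with a bounded disturbance (not the Bergman or Sorensen glucose models), using the classic gain choice k1 = 1.5*sqrt(L), k2 = 1.1*L for a disturbance-derivative bound L = 1:

```python
import numpy as np

# Super-twisting (second-order sliding mode) on a toy uncertain integrator
# e_dot = u + d(t) with |d_dot| <= L = 1; gains follow the classic tuning.
dt = 1e-3
k1, k2 = 1.5, 1.1
e, v = 2.0, 0.0                      # tracking error and integral term
hist = []
for i in range(8000):                # 8 s of simulated time at dt = 1e-3
    t = i * dt
    d = 0.5 * np.sin(2.0 * t)        # unknown, bounded disturbance
    u = -k1 * np.sqrt(abs(e)) * np.sign(e) + v
    v += -k2 * np.sign(e) * dt       # discontinuity hidden in the integral term
    e += (u + d) * dt
    hist.append(e)

print(abs(hist[-1]))                 # error driven into a small vicinity of 0
```

Because the sign function acts only on the derivative of the control, the applied input stays continuous, which is one reason this family is attractive for insulin infusion where chattering would be harmful.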
The large-scale freshwater cycle of the Arctic
NASA Astrophysics Data System (ADS)
Serreze, Mark C.; Barrett, Andrew P.; Slater, Andrew G.; Woodgate, Rebecca A.; Aagaard, Knut; Lammers, Richard B.; Steele, Michael; Moritz, Richard; Meredith, Michael; Lee, Craig M.
2006-11-01
This paper synthesizes our understanding of the Arctic's large-scale freshwater cycle. It combines terrestrial and oceanic observations with insights gained from the ERA-40 reanalysis and land surface and ice-ocean models. Annual mean freshwater input to the Arctic Ocean is dominated by river discharge (38%), inflow through Bering Strait (30%), and net precipitation (24%). Total freshwater export from the Arctic Ocean to the North Atlantic is dominated by transports through the Canadian Arctic Archipelago (35%) and via Fram Strait as liquid (26%) and sea ice (25%). All terms are computed relative to a reference salinity of 34.8. Compared to earlier estimates, our budget features larger import of freshwater through Bering Strait and larger liquid phase export through Fram Strait. While there is no reason to expect a steady state, error analysis indicates that the difference between annual mean oceanic inflows and outflows (~8% of the total inflow) is indistinguishable from zero. Freshwater in the Arctic Ocean has a mean residence time of about a decade. This is understood in that annual freshwater input, while large (~8500 km3), is an order of magnitude smaller than oceanic freshwater storage of ~84,000 km3. Freshwater in the atmosphere, as water vapor, has a residence time of about a week. Seasonality in Arctic Ocean freshwater storage is nevertheless highly uncertain, reflecting both sparse hydrographic data and insufficient information on sea ice volume. Uncertainties mask seasonal storage changes forced by freshwater fluxes. Of flux terms with sufficient data for analysis, Fram Strait ice outflow shows the largest interannual variability.
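The residence-time estimate quoted above is simple budget arithmetic, reproduced here with the paper's round numbers:

```python
# Arctic Ocean freshwater budget figures quoted above
inflow_fractions = {"river_discharge": 0.38, "bering_strait": 0.30, "net_precip": 0.24}
total_input_km3 = 8500.0        # annual mean freshwater input (~8500 km3)
storage_km3 = 84000.0           # oceanic freshwater storage (~84,000 km3)

# Storage is an order of magnitude larger than the annual input,
# hence a mean residence time of about a decade.
residence_time_yr = storage_km3 / total_input_km3
print(round(residence_time_yr, 1))  # about 10 years
```

The three listed inflow fractions sum to 92% rather than 100% because minor budget terms are omitted from the dominant-term breakdown.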
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée
2015-04-01
In numerical dosimetry, the recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming, and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC model is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC model appears to perform better than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
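The selection step of this idea, least-angle regression picking the most influential polynomial chaos terms, which would then serve as the trend (regression) functions of a universal Kriging model, can be sketched as follows. The one-dimensional Hermite basis, the toy "simulator", and the use of scikit-learn's Lars are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lars

rng = np.random.default_rng(3)

# Probabilists' Hermite polynomial-chaos basis in one standard-normal input
def pc_basis(x, degrees=(1, 2, 3)):
    return np.column_stack([hermeval(x, [0] * d + [1]) for d in degrees])

x = rng.normal(size=2000)
# Toy "simulator" output: only degrees 1 and 2 actually matter
y = 1.0 + 2.0 * x + 1.5 * (x**2 - 1.0) + rng.normal(0.0, 0.05, size=2000)

# Least-angle regression retains the most influential polynomials; the
# selected terms would then form the deterministic trend of the Kriging model.
sel = Lars(n_nonzero_coefs=2).fit(pc_basis(x), y)
active_degrees = np.flatnonzero(np.abs(sel.coef_) > 1e-10) + 1
print(sorted(active_degrees.tolist()))
```

In the full method, the number of retained polynomials would be chosen by leave-one-out cross validation rather than fixed in advance, and a Gaussian-process (Kriging) residual would be fitted on top of the selected trend.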
Microgravity Isolation Control System Design Via High-Order Sliding Mode Control
NASA Technical Reports Server (NTRS)
Shkolnikov, Ilya; Shtessel, Yuri; Whorton, Mark S.; Jackson, Mark
2000-01-01
Vibration isolation control system design for a microgravity experiment mount is considered. The controller design based on dynamic sliding manifold (DSM) technique is proposed to attenuate the accelerations transmitted to an isolated experiment mount either from a vibrating base or directly generated by the experiment, as well as to stabilize the internal dynamics of this nonminimum phase plant. An auxiliary DSM is employed to maintain the high-order sliding mode on the primary sliding manifold in the presence of uncertain actuator dynamics of second order. The primary DSM is designed for the closed-loop system in sliding mode to be a filter with given characteristics with respect to the input external disturbances.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing dynamic models of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
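The combination of a Bayesian (average-case) design criterion with PSO can be sketched on a toy identification problem; the exponential-decay model, the Fisher-information criterion, the parameter prior, and the constraint bounds are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.linspace(0.0, 5.0, 200)
theta_samples = rng.uniform(0.3, 0.7, size=50)   # prior on the unknown parameter

def fisher_info(A, w, theta):
    # Toy model y(t) = exp(-theta*t) * u(t) with input u = A*sin(w*t);
    # information about theta is the squared output sensitivity, summed.
    dy_dtheta = -t * np.exp(-theta * t) * A * np.sin(w * t)
    return np.sum(dy_dtheta**2)

def objective(x):
    A, w = x
    if not (0.0 <= A <= 1.0 and 0.1 <= w <= 10.0):   # input constraints
        return -np.inf
    return np.mean([fisher_info(A, w, th) for th in theta_samples])  # Bayesian average

# Minimal particle swarm over the design variables (amplitude A, frequency w)
n = 30
pos = rng.uniform([0.0, 0.1], [1.0, 10.0], size=(n, 2))
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([objective(p) for p in pos])
for _ in range(60):
    g = pbest[np.argmax(pval)]
    vel = 0.6 * vel + 1.5 * rng.random((n, 2)) * (pbest - pos) \
          + 1.5 * rng.random((n, 2)) * (g - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    better = vals > pval
    pbest[better], pval[better] = pos[better], vals[better]

best_A, best_w = pbest[np.argmax(pval)]
print(round(best_A, 2))
```

Since the information criterion grows with the input amplitude, the swarm pushes the amplitude toward its constraint, illustrating how input constraints shape the robust design.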
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
USDA-ARS?s Scientific Manuscript database
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
NASA Astrophysics Data System (ADS)
Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun
2017-07-01
In this paper, a non-fragile observer-based output feedback control problem for polytopic uncertain systems under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so online design time is saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller is taken into consideration to increase the robustness of the polytopic uncertain system. After that, a sufficient stability criterion is presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Changzheng; Oak Ridge National Lab.; Lin, Zhenhong
2016-12-08
Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and there is a substantial risk of low penetration in the early and midterm market. Top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Here, continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift distributions toward the right, i.e., increase the probability of great market success.
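The core of the study's uncertainty analysis, Monte Carlo sampling of uncertain consumer-choice parameters propagated through a logit market-share model, can be sketched as follows. A plain (non-nested) two-alternative logit and all numeric values are illustrative assumptions, not the study's actual model or data.

```python
import numpy as np

def logit_share(pev_price, energy_cost, beta_price, beta_cost):
    # Two-alternative logit: PEV vs. a conventional vehicle with fixed
    # attributes (price 25 k$, energy cost 1.5); all values are invented.
    u_pev = beta_price * pev_price + beta_cost * energy_cost
    u_cv = beta_price * 25.0 + beta_cost * 1.5
    e = np.exp([u_pev, u_cv])
    return e[0] / e.sum()

rng = np.random.default_rng(0)
shares = []
for _ in range(5000):                             # Monte Carlo over uncertain inputs
    beta_price = rng.normal(-0.10, 0.02)          # uncertain price sensitivity
    beta_cost = rng.normal(-0.50, 0.10)           # uncertain energy-cost sensitivity
    pev_price = rng.triangular(28.0, 32.0, 40.0)  # uncertain PEV price (k$)
    shares.append(logit_share(pev_price, 0.8, beta_price, beta_cost))
shares = np.array(shares)

# The spread of the share distribution quantifies market-penetration risk.
low, high = np.percentile(shares, [5, 95])
```

The distance between the 5th and 95th percentiles plays the role of the "spread of market share distributions" the abstract refers to; policies that shift inputs move the whole distribution rather than necessarily narrowing it.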
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i - iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Metz, P. A.
2014-12-01
Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. 
We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
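The unconstrained Monte Carlo approach with Latin Hypercube sampling described above can be sketched with a hand-rolled Latin Hypercube (in place of pyDOE) and a simplified monthly water budget; the uniform ranges and the residual form of the SWGW term are illustrative assumptions, not the study's calibrated inputs.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # One point per stratum in each dimension: permute the strata independently
    # per column, then jitter each sample within its stratum.
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

n = 2000
rng = np.random.default_rng(42)
u = latin_hypercube(n, 3, rng)

# Illustrative monthly water-budget terms (mm); the uniform ranges are invented.
rain = 80.0 + 40.0 * u[:, 0]      # rainfall: 80-120
et = 60.0 + 60.0 * u[:, 1]        # land-cover ET: 60-120 (widest range)
canal = 10.0 + 10.0 * u[:, 2]     # canal discharge: 10-20
storage_change = 5.0              # treated as known here

# SWGW exchange as the water-budget residual; its spread is the uncertainty.
swgw = storage_change + et + canal - rain
```

Because ET is given the widest range here, its variance dominates the spread of the residual, mirroring the study's finding that land-cover-based ET rates dominate the SWGW uncertainty.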
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Zhang, Hongbin
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
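The sensitivity measures named above (Pearson, Spearman, and partial correlation coefficients) can be sketched directly in NumPy; the synthetic MDNBR response below is a made-up stand-in for VERA-CS output, with coolant inlet temperature given the dominant effect.

```python
import numpy as np

def pearson(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def spearman(x, y):
    # Pearson correlation of the ranks (no tie handling; fine for continuous data).
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

def partial_corr(x, y, z):
    # Correlation of x and y after removing the linear effect of the columns of z.
    resid = lambda v: v - z @ np.linalg.lstsq(z, v, rcond=None)[0]
    return pearson(resid(x), resid(y))

# Synthetic stand-in for code output: MDNBR driven mostly by inlet temperature.
rng = np.random.default_rng(3)
n = 500
t_inlet = rng.normal(290.0, 2.0, n)     # coolant inlet temperature (deg C)
power = rng.normal(1.0, 0.02, n)        # relative assembly power
mdnbr = (2.5 - 0.05 * (t_inlet - 290.0) - 0.4 * (power - 1.0)
         + rng.normal(0.0, 0.02, n))

z = np.column_stack([np.ones(n), power])    # control for power (plus intercept)
pc_inlet = partial_corr(t_inlet, mdnbr, z)
```

With this construction the inlet-temperature correlation dominates the power correlation, which is the pattern the abstract reports for the real study.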
Adaptive Controller Effects on Pilot Behavior
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.; Hempley, Lucas E.
2014-01-01
Adaptive control provides robustness and resilience for highly uncertain, and potentially unpredictable, flight dynamics characteristics. Some recent pilot-in-the-loop flight experiences with an adaptive controller have exhibited unpredicted interactions. In retrospect, this is not surprising once it is realized that there are now two adaptive controllers interacting: the software adaptive control system and the pilot. An experiment was conducted to characterize the interactions between the pilot and an adaptive controller during control surface failures. One of the objectives of this experiment was to determine how the adaptation time of the controller affects pilots. The pitch and roll errors and stick input increased with increasing adaptation time and during the segment when the adaptive controller was adapting. Not surprisingly, altitude, cross-track and angle deviations, and vertical velocity also increased during the failure and then slowly returned to pre-failure levels. Subjects may change their behavior even as an adaptive controller is adapting, with additional stick inputs. Therefore, the adaptive controller should adapt as fast as possible to minimize flight track errors. This will minimize undesirable interactions between the pilot and the adaptive controller and maintain maneuvering precision.
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated by using the adaptive control method. Different from some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vámos, Tibor
The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainty and, consequently, a critical epistemic review of historical and recent approaches, computational methods, and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism, and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing models of uncertainty, e.g. the statistical and other physical and psychological backgrounds, we reach a pragmatic, model-related estimation perspective, a balanced application orientation for different problem areas. Data mining, game theories and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.
Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.
Herrero, David; Martínez, Humberto
2011-01-01
This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated from radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
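A minimal sketch of the fuzzy range-based localization idea: invert a log-distance path-loss model to get rough ranges, give each range a triangular membership whose width grows with distance, and pick the grid point with the best min-combined (fuzzy AND) support. The path-loss constants, membership widths, and grid are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-40.0, n_exp=2.5):
    # Inverse of the log-distance path-loss model (constants are invented).
    return 10.0 ** ((tx_power - np.asarray(rssi)) / (10.0 * n_exp))

def triangular_membership(x, center, half_width):
    return np.maximum(0.0, 1.0 - np.abs(x - center) / half_width)

def fuzzy_locate(anchors, rssi, grid):
    # Fuzzy AND (minimum) of "distance to anchor i is about d_i" over anchors;
    # the membership width grows with distance, reflecting noisier long ranges.
    d_est = rssi_to_distance(rssi)
    mu = np.ones(len(grid))
    for (ax, ay), d in zip(anchors, d_est):
        dist = np.hypot(grid[:, 0] - ax, grid[:, 1] - ay)
        mu = np.minimum(mu, triangular_membership(dist, d, 0.5 * d + 0.5))
    return grid[mu.argmax()], mu.max()

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([4.0, 6.0])
rng = np.random.default_rng(7)
rssi = [-40.0 - 25.0 * np.log10(np.hypot(true_pos[0] - x, true_pos[1] - y))
        + rng.normal(0.0, 1.0) for x, y in anchors]    # 1 dB measurement noise

xs = np.linspace(0.0, 10.0, 51)
grid = np.array([(x, y) for x in xs for y in xs])
est_pos, support = fuzzy_locate(anchors, rssi, grid)
```

The residual support value indicates how consistent the noisy ranges are with each other, which is the kind of graded confidence a crisp trilateration cannot provide.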
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that can best be described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and statistical distributions of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
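The Wishart construction for randomizing a system matrix while keeping it symmetric positive definite and centered on the nominal value can be sketched as below; the 2x2 "stiffness" matrix and dispersion level are illustrative, not the powertrain model's.

```python
import numpy as np

def wishart_perturb(m, dispersion, rng):
    # Random SPD matrix whose mean is the nominal SPD matrix m: draw a
    # normalized Wishart sample G G^T / p (mean = identity) with p set by the
    # dispersion level, then map it back through the Cholesky factor of m.
    n = m.shape[0]
    p = max(n + 1, int(round(n / dispersion ** 2)))
    L = np.linalg.cholesky(m)
    G = rng.standard_normal((n, p))
    return L @ (G @ G.T / p) @ L.T

rng = np.random.default_rng(0)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])      # toy nominal stiffness matrix
samples = [wishart_perturb(K, 0.05, rng) for _ in range(200)]
K_mean = np.mean(samples, axis=0)
```

Each draw remains a physically admissible (SPD) matrix, which is the property that makes the Wishart model attractive for randomizing mass, damping, or stiffness matrices.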
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in the results was observed when plotted against any of the uncertain parameters, with no parameter manifesting a dominant effect on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity-norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
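For the hyper-rectangular uncertainty model, one concrete (if brute-force) way to certify a hard constraint is vertex enumeration: when the constraint is affine or componentwise monotone in the parameters, its worst case over the box occurs at a corner. This is a simplified illustration of the worst-case idea, not the paper's analytical formulation.

```python
import numpy as np
from itertools import product

def worst_case_on_box(g, lo, hi):
    # For a constraint g(p) <= 0 that is affine (or monotone in each component),
    # the worst case over a hyper-rectangle is attained at one of its 2^d
    # vertices, so enumerating corners certifies hard-constraint feasibility.
    worst, arg = -np.inf, None
    for corner in product(*zip(lo, hi)):
        val = g(np.array(corner))
        if val > worst:
            worst, arg = val, np.array(corner)
    return worst, arg

a, b0 = np.array([1.0, -2.0]), -3.0
g = lambda p: a @ p + b0                        # affine constraint g(p) <= 0
worst, p_star = worst_case_on_box(g, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
feasible = worst <= 0.0                         # hard constraint holds on the box
```

The returned corner `p_star` is exactly the "critical combination of uncertain parameters" the abstract refers to; the paper's optimization-based formulations avoid the exponential corner enumeration used here for illustration.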
Robust Economic Control Decision Method of Uncertain System on Urban Domestic Water Supply.
Li, Kebai; Ma, Tianyi; Wei, Guo
2018-03-31
As China quickly urbanizes, urban domestic water use generally presents the circumstances of both a rising tendency and seasonal cycle fluctuation. A robust economic control decision method for dynamic uncertain systems is proposed in this paper. It is developed based on the internal model principle and the pole allocation method, and it is applied to an urban domestic water supply system with a rising tendency and seasonal cycle fluctuation. To achieve this goal, first a multiplicative model is used to describe the urban domestic water demand. Then, a capital stock and a labor stock are selected as the state vector, and the investment and labor are designed as the control vector. Next, the compensator subsystem is devised in light of the internal model principle. Finally, by using the state feedback control strategy and the pole allocation method, the multivariable robust economic control decision method is implemented. The implementation with this model can accomplish the urban domestic water supply control goal, with robustness to parameter variations. The methodology presented in this study may be applied to water management systems in other parts of the world, provided all data used in this study are available. The robust control decision method in this paper is also applicable to tracking control problems as well as stabilization control problems of other general dynamic uncertain systems.
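The pole-allocation step of such a state-feedback design can be sketched with Ackermann's formula for a single-input system; the two-state "capital/labor stock" dynamics and the chosen poles below are toy values, not the paper's identified model.

```python
import numpy as np

def ackermann(A, b, poles):
    # SISO pole placement: gain k with eig(A - b k) = poles (Ackermann's formula),
    # k = [0 ... 0 1] C^{-1} phi(A) with C the controllability matrix and phi the
    # desired characteristic polynomial evaluated at A.
    n = A.shape[0]
    C = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.real(np.poly(poles))            # desired characteristic polynomial
    phi = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_last = np.zeros(n)
    e_last[-1] = 1.0
    return e_last @ np.linalg.inv(C) @ phi

A = np.array([[1.02, 0.1], [0.0, 1.01]])        # toy two-stock dynamics (unstable)
b = np.array([1.0, 0.5])
k = ackermann(A, b, [0.9, 0.8])                 # place stable closed-loop poles
cl_eigs = np.sort(np.linalg.eigvals(A - np.outer(b, k)).real)
```

In the paper's setting the internal-model compensator states would be appended to the plant before the poles are placed; the formula itself is unchanged.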
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
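A stripped-down version of the PCE surrogate idea, with a full (not MIP-selected) Hermite basis fit by least squares and a cheap analytic function standing in for the reservoir simulator; the degree, sample sizes, and "simulator" are illustrative assumptions.

```python
import numpy as np

def hermite_design(x, degree):
    # Probabilists' Hermite polynomials He_0..He_degree via the recurrence
    # He_{k+1}(x) = x He_k(x) - k He_{k-1}(x).
    H = [np.ones_like(x), x]
    for k in range(1, degree):
        H.append(x * H[k] - k * H[k - 1])
    return np.column_stack(H[: degree + 1])

# Cheap analytic function standing in for the expensive reservoir simulator.
simulator = lambda xi: np.exp(0.3 * xi) + 0.1 * xi ** 2

rng = np.random.default_rng(5)
xi_train = rng.standard_normal(200)             # uncertain input ~ N(0, 1)
Phi = hermite_design(xi_train, 4)
coef, *_ = np.linalg.lstsq(Phi, simulator(xi_train), rcond=None)

# Monte Carlo on the closed-form surrogate instead of the simulator.
xi_mc = rng.standard_normal(50_000)
y_mc = hermite_design(xi_mc, 4) @ coef
```

The MIP basis-selection step in the paper would replace the full `Phi` with the best small subset of its columns; the Monte Carlo step over the closed-form polynomial is unchanged.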
Robust Economic Control Decision Method of Uncertain System on Urban Domestic Water Supply
Li, Kebai; Ma, Tianyi; Wei, Guo
2018-01-01
As China quickly urbanizes, urban domestic water use generally presents the circumstances of both a rising tendency and seasonal cycle fluctuation. A robust economic control decision method for dynamic uncertain systems is proposed in this paper. It is developed based on the internal model principle and the pole allocation method, and it is applied to an urban domestic water supply system with a rising tendency and seasonal cycle fluctuation. To achieve this goal, first a multiplicative model is used to describe the urban domestic water demand. Then, a capital stock and a labor stock are selected as the state vector, and the investment and labor are designed as the control vector. Next, the compensator subsystem is devised in light of the internal model principle. Finally, by using the state feedback control strategy and the pole allocation method, the multivariable robust economic control decision method is implemented. The implementation with this model can accomplish the urban domestic water supply control goal, with robustness to parameter variations. The methodology presented in this study may be applied to water management systems in other parts of the world, provided all data used in this study are available. The robust control decision method in this paper is also applicable to tracking control problems as well as stabilization control problems of other general dynamic uncertain systems. PMID:29614749
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
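The perturbed-observation ESMDA update has a compact linear-algebra form, sketched below; the one-parameter linear "model" is a toy stand-in for the distributed hydrological model, and the ensemble size, inflation schedule, and noise levels are illustrative assumptions.

```python
import numpy as np

def esmda_update(X, Y, d_obs, R, alpha, rng):
    # One ES-MDA step: Kalman-like update of the parameter ensemble X
    # (n_par x n_ens) from predicted observations Y (n_obs x n_ens), with the
    # observation-error covariance R inflated by alpha and matching
    # sqrt(alpha)-scaled observation perturbations.
    ne = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xc @ Yc.T / (ne - 1)
    Cyy = Yc @ Yc.T / (ne - 1)
    K = Cxy @ np.linalg.inv(Cyy + alpha * R)
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), R, ne).T
    d_pert = d_obs[:, None] + np.sqrt(alpha) * noise
    return X + K @ (d_pert - Y)

rng = np.random.default_rng(11)
G = np.array([[1.0], [2.0]])                    # toy linear "hydrological model"
x_true = np.array([3.0])
R = 0.01 * np.eye(2)
d_obs = G @ x_true + rng.multivariate_normal(np.zeros(2), R)

X = rng.normal(0.0, 2.0, (1, 50))               # prior ensemble of the parameter
n_a = 4                                         # 4 assimilations, alpha = n_a each
for _ in range(n_a):
    X = esmda_update(X, G @ X, d_obs, R, float(n_a), rng)
```

The constant inflation `alpha = n_a` keeps the sum of `1/alpha` equal to one, which is the standard ES-MDA schedule; the square-root variant mentioned in the abstract replaces the perturbed observations with a deterministic transform.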
Hydrological simulation of the Brahmaputra basin using global datasets
NASA Astrophysics Data System (ADS)
Bhattacharya, Biswa; Conway, Crystal; Craven, Joanne; Masih, Ilyas; Mazzolini, Maurizio; Shrestha, Shreedeepy; Ugay, Reyne; van Andel, Schalk Jan
2017-04-01
The Brahmaputra River flows through China, India and Bangladesh to the Bay of Bengal and is one of the largest rivers of the world, with a catchment size of about 580,000 km2. The catchment is largely hilly and/or forested, with a sparse population and limited urbanisation and economic activity. The catchment experiences heavy monsoon rainfall leading to very high flood discharges. Large inter-annual variations of discharge leading to flooding, erosion and morphological changes are among the major challenges. The catchment is largely ungauged; moreover, the limited availability of hydro-meteorological data limits the possibility of carrying out evidence-based research, which could provide trustworthy information for managing and, when needed, controlling the basin processes by the riparian countries for overall basin development. The paper presents initial results of a current research project on the Brahmaputra basin. A set of hydrological and hydraulic models (SWAT, HMS, RAS) are developed by employing publicly available datasets of DEM, land use and soil, and simulated using satellite-based rainfall products and evapotranspiration and temperature estimates. Remotely sensed data are compared with sporadically available ground data. The set of models is able to produce catchment-wide hydrological information that can potentially be used in the future for managing the basin's water resources. The model predictions should be used with caution due to the high level of uncertainty, because the semi-calibrated models are developed with an uncertain physical representation (e.g. cross-sections) and simulated with global meteorological forcing (e.g. TRMM) with limited validation. Major scientific challenges are seen in producing robust information that can be reliably used in managing the basin.
The information generated by the models is uncertain and, as a result, instead of being used per se, it is used to improve the understanding of the catchment; by running several scenarios with varying catchment conditions, the catchment dynamics are explored. Objectives are set that suit the data availability. For example, patterns (e.g., variation of rainfall in the lower basin) and aggregates/averages (seasonal averages) are preferred over point information. Instead of simulating instantaneous flood propagation, the flood extent corresponding to a given frequency is used. As satellite rainfall products may be erroneous, a variety of satellite-based products are used as ensemble input. Satellite rainfall estimates are corrected for bias and different rainfall products are aggregated in a data fusion framework. Finally, the linkages between catchment erosion, hydrology and morphological changes are investigated and validated with remote sensing imageries. Keywords: Brahmaputra, hydrology, TRMM, data fusion, ungauged basin.
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping
2018-03-01
This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming at reducing the complexity of nonlinear approximations, only two least-square-support-vector-machine-based (LS-SVM-based) approximators with two design parameters are required through a norm form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. To avoid the tedious analysis and repeated differentiations of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only the first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, the "explosion of terms" brought by the recursive virtual controller design, is conquered. Moreover, an auxiliary system is designed to compensate for the control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.
Zhang, Kejiang; Achari, Gopal; Pei, Yuansheng
2010-10-01
Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs together, decision makers cannot fully take advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. Uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be properly represented as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables according to their nature. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain. The propagation of hybrid uncertainties is then carried out in the same domain. This methodology can use the original site information directly as much as possible. The case study shows that this systematic methodology provides more reasonable results. © 2010 SETAC.
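PROMETHEE's net outranking flow, the quantity the methodology ultimately ranks sites by, can be sketched as below for crisp scores; the handling of 2-tuple linguistic and possibilistic inputs is omitted, and the scores, weights, and thresholds are invented for illustration.

```python
import numpy as np

def promethee_ii(scores, weights, q=0.05, p=0.5):
    # scores[i, j]: performance of alternative i on criterion j (higher is better).
    # Linear preference function with indifference threshold q and preference
    # threshold p; returns the net outranking flow phi (higher ranks first).
    n, m = scores.shape
    phi = np.zeros(n)
    for j in range(m):
        d = scores[:, j][:, None] - scores[:, j][None, :]   # pairwise differences
        pref = np.clip((d - q) / (p - q), 0.0, 1.0)
        phi += weights[j] * (pref.sum(axis=1) - pref.sum(axis=0)) / (n - 1)
    return phi

scores = np.array([
    [0.2, 0.9, 0.5],    # site A
    [0.8, 0.4, 0.6],    # site B
    [0.9, 0.8, 0.9],    # site C
])
weights = np.array([0.5, 0.3, 0.2])
phi = promethee_ii(scores, weights)
ranking = np.argsort(-phi)              # descending net flow = priority order
```

Sorting the net flows in descending order gives the risk-based priority ranking of sites; in the paper, the pairwise preference degrees would be computed in the 2-tuple linguistic domain instead of on crisp scores.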
Imprecision and Uncertainty in the UFO Database Model.
ERIC Educational Resources Information Center
Van Gyseghem, Nancy; De Caluwe, Rita
1998-01-01
Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address health monitoring of system parameters in the Bond Graph (BG) modeling framework by exploiting its structural and causal properties. The system, operating in a feedback control loop, is considered globally uncertain, with parametric uncertainty modeled in interval form. A system parameter is undergoing degradation (the prognostic candidate), and its degradation model is assumed known a priori. Detection of degradation commencement is performed in a passive manner, using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs); the latter form an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach wherein the fault model is constructed from the statistical degradation model of the prognostic candidate. The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the state of health (the state of the prognostic candidate) and the associated hidden, time-varying degradation-progression parameters are estimated in probabilistic terms. A simplified variance-adaptation scheme is proposed. The uncertainties arising from noisy measurements, the parametric degradation process, environmental conditions, etc., are effectively managed by the PF. This allows effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
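A bootstrap particle filter for joint state-parameter estimation of the kind described can be sketched on a toy linear degradation model. The model, noise levels, priors, and jitter scheme below are illustrative assumptions, not the paper's I-ARR-based observation equation:

```python
import math
import random

def particle_filter(ys, n=2000, q=0.05, r=0.2):
    """Bootstrap particle filter for joint state-parameter estimation.

    Toy model (an illustrative stand-in for the paper's setup):
        health state:  x_k = x_{k-1} - theta + w_k,  w_k ~ N(0, q)
        observation:   y_k = x_k + v_k,              v_k ~ N(0, r)
    Each particle carries (x, theta); theta is the hidden degradation rate.
    """
    parts = [(random.gauss(10.0, 1.0), random.uniform(0.0, 1.0))
             for _ in range(n)]
    estimates = []
    for y in ys:
        # propagate every particle through the degradation model
        parts = [(x - th + random.gauss(0.0, q), th) for x, th in parts]
        # weight by the Gaussian likelihood of the new measurement
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x, _ in parts]
        # resample (multinomial, for brevity), then jitter theta slightly
        # to keep the parameter particles diverse
        parts = random.choices(parts, weights=ws, k=n)
        parts = [(x, th + random.gauss(0.0, 0.005)) for x, th in parts]
        estimates.append((sum(x for x, _ in parts) / n,
                          sum(th for _, th in parts) / n))
    return estimates

# Synthetic run: true degradation rate 0.3, initial health 10.
random.seed(1)
truth = [10.0 - 0.3 * k for k in range(1, 31)]
ys = [x + random.gauss(0.0, 0.2) for x in truth]
est_x, est_theta = particle_filter(ys)[-1]
print(round(est_x, 2), round(est_theta, 2))  # near truth[-1] = 1.0 and 0.3
```

Once theta is estimated, remaining useful life follows by extrapolating the degradation model to a failure threshold, with the particle spread supplying the confidence bounds.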
Mackinger, Barbara; Jonas, Eva; Mühlberger, Christina
2017-01-01
When making financial decisions, bank customers are confronted with two types of uncertainty: first, return on investments is uncertain and there is a risk of losing money. Second, customers cannot be certain about their financial advisor's true intentions. This may decrease customers' willingness to cooperate with advisors. However, the uncertainty management model and fairness heuristic theory predict that in uncertain situations customers are willing to cooperate with financial advisors when they perceive fairness. In the current study, we investigated how perceived fairness in these doubly uncertain situations increased people's intended future cooperation with an advisor. We asked customers of financial consultancies about their experienced uncertainty regarding both the investment decision and the advisor's intentions. Moreover, we asked them about their perceived fairness, as well as their intention to cooperate with the advisor in the future. A three-way moderation analysis showed that customers who faced high uncertainty regarding the investment decision and high uncertainty regarding the advisor's true intentions indicated the lowest intended cooperation with the advisor, but high fairness increased their cooperation. Interestingly, when people were only uncertain about the advisor's intentions (but certain about the decision) they indicated less cooperation than when they were only uncertain about the decision (but certain about the advisor's intentions). A mediated moderation analysis revealed that this relationship was explained by customers' lower trust in their advisors.
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
Bartha, Erzsebet; Davidson, Thomas; Brodtkorb, Thor-Henrik; Carlsson, Per; Kalman, Sigridur
2013-07-09
A randomized, controlled trial, intended to include 460 patients, is currently studying peroperative goal-directed hemodynamic treatment (GDHT) of aged hip-fracture patients. Interim efficacy analysis performed on the first 100 patients was statistically uncertain; thus, the trial is continuing in accordance with the trial protocol. This raised the present investigation's main question: Is it reasonable to continue to fund the trial to decrease uncertainty? To answer this question, a previously developed probabilistic cost-effectiveness model was used. That model depicts (1) a choice between routine fluid treatment and GDHT, given uncertainty of current evidence and (2) the monetary value of further data collection to decrease uncertainty. This monetary value, that is, the expected value of perfect information (EVPI), could be used to compare future research costs. Thus, the primary aim of the present investigation was to analyze EVPI of an ongoing trial with interim efficacy observed. A previously developed probabilistic decision analytic cost-effectiveness model was employed to compare the routine fluid treatment to GDHT. Results from the interim analysis, published trials, the meta-analysis, and the registry data were used as model inputs. EVPI was predicted using (1) combined uncertainty of model inputs; (2) threshold value of society's willingness to pay for one, quality-adjusted life-year; and (3) estimated number of future patients exposed to choice between GDHT and routine fluid treatment during the expected lifetime of GDHT. If a decision to use GDHT were based on cost-effectiveness, then the decision would have a substantial degree of uncertainty. Assuming a 5-year lifetime of GDHT in clinical practice, the number of patients who would be subject to future decisions was 30,400. EVPI per patient would be €204 at a €20,000 threshold value of society's willingness to pay for one quality-adjusted life-year. 
Given a future population of 30,400 individuals, total EVPI would be €6.19 million. If future trial costs are below EVPI, further data collection is potentially cost-effective. When applying a cost-effectiveness model, statements such as 'further research is needed' are replaced with 'further research is cost-effective' and 'further funding of a trial is justified'. ClinicalTrials.gov NCT01141894.
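The population EVPI follows from simple arithmetic: per-patient EVPI times the number of future patients exposed to the decision. (The small discrepancy with the reported €6.19 million suggests the €204 per-patient figure is itself rounded.)

```python
# Population EVPI = per-patient EVPI x number of future patients affected.
evpi_per_patient = 204      # EUR, at a EUR 20,000 per QALY threshold
future_patients = 30_400    # assuming a 5-year lifetime of GDHT in practice
total_evpi = evpi_per_patient * future_patients
print(total_evpi)  # 6201600, i.e. about EUR 6.2 million
```

Any future trial whose cost stays below this ceiling is, by this criterion, potentially worth funding.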
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method in which the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
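The textbook pick-freeze estimator of a first-order Sobol index, which the paper extends to stochastic reaction channels, can be sketched as follows; the additive toy model and sample size are illustrative:

```python
import random

def sobol_first_order(f, d, idx, n=50000, seed=0):
    """Pick-freeze estimate of the first-order Sobol index S_idx.

    f maps a list of d independent U(0,1) inputs to a scalar output.
    This is the textbook estimator; the paper's extension to stochastic
    reaction channels is not reproduced here.
    """
    rng = random.Random(seed)
    cross = mean_a = sq_a = 0.0
    for _ in range(n):
        a = [rng.random() for _ in range(d)]
        b = [rng.random() for _ in range(d)]
        b[idx] = a[idx]              # freeze coordinate idx
        ya, yb = f(a), f(b)
        cross += ya * yb             # E[Y_A * Y_B^idx] accumulator
        mean_a += ya
        sq_a += ya * ya
    mu = mean_a / n
    var = sq_a / n - mu * mu
    return (cross / n - mu * mu) / var

# Additive toy model y = x0 + 0.5*x1: the variance splits 4:1, so the
# first-order indices are 0.8 and 0.2 analytically.
f = lambda x: x[0] + 0.5 * x[1]
print(round(sobol_first_order(f, 2, 0), 2),
      round(sobol_first_order(f, 2, 1), 2))
```

For a stochastic simulator, the extra step the paper takes is to treat the driving Poisson processes themselves as inputs over which indices are computed.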
Bakker, Alexander M R; Wong, Tony E; Ruckert, Kelsey L; Keller, Klaus
2017-06-20
There is a growing awareness that uncertainties surrounding future sea-level projections may be much larger than typically perceived. Recently published projections appear widely divergent and highly sensitive to non-trivial model choices. Moreover, the West Antarctic ice sheet (WAIS) may be much less stable than previously believed, enabling a rapid disintegration. Here, we present a set of probabilistic sea-level projections that approximates the deeply uncertain WAIS contributions. The projections aim to inform robust decisions by clarifying the sensitivity to non-trivial or controversial assumptions. We show that the deeply uncertain WAIS contribution can dominate other uncertainties within decades. These deep uncertainties call for the development of robust adaptive strategies. These decision-making needs, in turn, require mission-oriented basic science, for example about potential signposts and the maximum rate of WAIS-induced sea-level changes.
Path planning in uncertain flow fields using ensemble method
NASA Astrophysics Data System (ADS)
Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.
2016-10-01
An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
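The ensemble statistics of travel time can be illustrated with a drastically simplified one-dimensional stand-in for the BVP-based planner: each ensemble member supplies one current realization, each realization yields one travel time, and the planner reasons over the resulting statistics. All numbers below are invented:

```python
import random
import statistics

def travel_time(u, v_boat=2.0, length=10.0):
    """Transit time across a channel of the given length with a constant
    along-track current u; a 1-D toy, not the minimum-time BVP solution."""
    return length / (v_boat + u)

# Ensemble of current realizations standing in for an operational forecast.
random.seed(0)
ensemble = [random.gauss(0.0, 0.3) for _ in range(1000)]
times = [travel_time(u) for u in ensemble]

# Statistics over the ensemble inform the risk-aware path choice.
print(round(statistics.mean(times), 2), round(statistics.stdev(times), 2))
```

In the actual method each ensemble member requires solving the minimum-time boundary value problem, but the statistical post-processing has the same shape.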
UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS
While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...
ESTIMATING UNCERTAINTIES IN FACTOR ANALYTIC MODELS
When interpreting results from factor analytic models as used in receptor modeling, it is important to quantify the uncertainties in those results. For example, if the presence of a species on one of the factors is necessary to interpret the factor as originating from a certain ...
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
Variance component estimation (VCE) based on a semi-parametric estimator with a data-depth weight matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command). The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the data-depth weight matrix. With the model error and outliers thus restricted, the VCE can be improved and used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out for combined space and ground TT&C. The results show that the new VCE, based on model-error compensation, can determine rational weights for the multi-source heterogeneous data and restrain outlier data.
NASA Astrophysics Data System (ADS)
Ahmadian, A.; Ismail, F.; Salahshour, S.; Baleanu, D.; Ghaemi, F.
2017-12-01
The analysis of the behaviors of physical phenomena is important for discovering significant features of the character and structure of mathematical models. Frequently, the unknown parameters involved in the models are assumed to be unvarying over time. In reality, some of them are uncertain and implicitly depend on several factors. In this study, to account for such uncertainty in the variables of the models, they are characterized using the fuzzy notion. We propose a new model based on fractional calculus to deal with the Kelvin-Voigt (KV) equation and a non-Newtonian fluid behavior model with fuzzy parameters. A new and accurate numerical algorithm, using a spectral tau technique based on the generalized fractional Legendre polynomials (GFLPs), is developed to solve these problems under uncertainty. Numerical simulations are carried out, and the analysis of the results highlights the significant features of the new technique in comparison with previous findings. A detailed error analysis is also carried out and discussed.
NASA Astrophysics Data System (ADS)
Liu, Ming; Zhao, Lindu
2012-08-01
Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to the epidemic areas in an early rescue cycle affect the demand later. In this article, an integrated, dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. The application of the optimisation model, together with a short sensitivity analysis of the key parameters in the time-varying demand forecast model, is then presented. The results show that both the model and the solution algorithm are useful in practice, and that both objectives, inventory level and emergency rescue cost, can be controlled effectively. The model can thus provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers a useful reference for issues pertaining to bioterrorism.
Application of a predictive Bayesian model to environmental accounting.
Anex, R P; Englehardt, J D
2001-03-30
Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. The well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was applied to assess the correlations among all equivalent parameters. Tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size, and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively).
All equivalent pharmacokinetic parameters, except for ve, were correlated between the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (either a dual-input extended Tofts model or a dual-input 2CXM) can be used to assess microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
Real-time determination of the worst tsunami scenario based on Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya
2016-04-01
In recent years, real-time tsunami inundation forecasting has been developed with the advances of dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic, and tsunami data is the most effective way to avoid underestimation, but it requires additional time to acquire the observations, which makes it difficult to complete real-time tsunami inundation forecasting quickly enough. Rather than waiting for precise tsunami observations, we aim, from a disaster management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW), which is available immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter, and scaling laws, we determine possible multiple tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e., time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g., Tsushima et al., 2014). The scenario analysis consists of the following two steps. (1) Searching the worst-scenario range by calculating 90 scenarios with various strikes and fault positions. From the maximum tsunami heights of these 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern.
(2) Calculating 900 scenarios with different strike, dip, length, width, depth, and fault position, where strike is limited to the range obtained from the 90-scenario calculation. From these 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time and the highest water level. The method was applied to a hypothetical earthquake and verified to check whether it can effectively search for the worst tsunami source scenario in real time, to be used as an input for real-time tsunami inundation forecasting.
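The superposition of pre-computed tsunami Green's functions is linear, so a candidate scenario's waveform at a gauge is simply a slip-weighted sum of unit-source responses. A toy sketch with invented numbers:

```python
# Pre-computed Green's functions: tsunami height time series at one offshore
# gauge for each 2-D Gaussian unit source (toy numbers, purely illustrative).
greens = {
    "src_A": [0.0, 0.1, 0.4, 0.2],
    "src_B": [0.0, 0.0, 0.3, 0.5],
}

def scenario_waveform(slip):
    """Superpose unit-source responses weighted by the slip assigned to each
    source; linearity is the assumption behind the Green's function approach."""
    n = len(next(iter(greens.values())))
    return [sum(slip[s] * g[i] for s, g in greens.items()) for i in range(n)]

# One candidate scenario; the worst case maximizes peak height at the gauge.
wave = scenario_waveform({"src_A": 2.0, "src_B": 1.0})
print(max(wave))  # peak offshore height for this candidate scenario
```

Because each scenario evaluation is just a weighted sum, hundreds of candidate sources can be screened in real time, which is what makes the 90- and 900-scenario searches feasible.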
Understanding the Yellowstone magmatic system using 3D geodynamic inverse models
NASA Astrophysics Data System (ADS)
Kaus, B. J. P.; Reuber, G. S.; Popov, A.; Baumann, T.
2017-12-01
The Yellowstone magmatic system is one of the largest magmatic systems on Earth. Recent seismic tomography suggests that two distinct magma chambers exist: a shallow, presumably felsic chamber and a deeper, much larger, partially molten chamber above the Moho. Why melt stalls at different depth levels above the Yellowstone plume, whereas dikes cross-cut the whole lithosphere in the nearby Snake River Plain, is unclear. This is partly due to our incomplete understanding of lithospheric-scale melt ascent from the upper mantle to the shallow crust, which requires better constraints on the mechanics and material properties of the lithosphere. Here, we employ lithospheric-scale 2D and 3D geodynamic models adapted to Yellowstone to better understand magmatic processes in active arcs. The models have a number of (uncertain) input parameters, such as the temperature and viscosity structure of the lithosphere and the geometry and melt fraction of the magmatic system, while the melt content and rock densities are obtained by consistent thermodynamic modelling of whole-rock data from the Yellowstone stratigraphy. As all of these parameters affect the dynamics of the lithosphere, we use the simulations to derive testable model predictions such as gravity anomalies, surface deformation rates, and lithospheric stresses, and compare them with observations. We incorporate these predictions within an inversion method and perform 3D geodynamic inverse models of the Yellowstone magmatic system. An adjoint-based method is used to derive the key model parameters and the factors that affect the stress field around the Yellowstone plume, the locations of enhanced diking, and melt accumulations. Results suggest that the plume and the magma chambers are connected and that magma-chamber overpressure is required to explain the surface displacement during phases of high activity above the Yellowstone magmatic system.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes, and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination, or model discrimination are becoming more popular. To illustrate the theoretical and practical challenges of multimodel analysis, we use an example from microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions; thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon, the soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah characterized by pulsed precipitation. These five models evolve step-wise such that the first model includes none of the three mechanisms, while the fifth includes all three. The basic component of Bayesian multimodel analysis is the estimation of the marginal likelihood to rank the candidate models based on their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between the five candidate models. The second part discusses some theoretical and practical challenges, mainly the effect of the likelihood function selection and of the marginal likelihood estimation method on both model ranking and Bayesian model averaging.
The study shows that making valid inference from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files in the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of probability density functions and moments, optimization problems that pursue performance-, robustness-, and reliability-based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability; implementing such designs can indeed have catastrophic consequences.
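A Monte Carlo stand-in for the probabilistic design loop described, propagating an uncertain input through a closed-form output and ranking designs by a mean-plus-three-sigma robustness index, can be sketched as follows; the response function, the index, and all numbers are purely illustrative:

```python
import random
import statistics

def output(x, d):
    """Toy closed-form response y = d*x + x**2 of a design variable d and
    an uncertain input x; stands in for a system with a known output map."""
    return d * x + x * x

random.seed(0)
xs = [random.gauss(1.0, 0.2) for _ in range(5000)]  # uncertain-input samples

def robustness_index(d):
    """Mean-plus-three-sigma of the output distribution: one example of a
    probabilistic performance index over the propagated output density."""
    ys = [output(x, d) for x in xs]
    return statistics.mean(ys) + 3 * statistics.stdev(ys)

# Pick the design with the best (smallest) index among a few candidates.
best_d = min([-3.0, -2.0, -1.0, 0.0, 1.0], key=robustness_index)
print(best_d)  # -3.0 for these illustrative numbers
```

A purely deterministic design would rank candidates by the nominal output at x = 1 alone; the probabilistic index additionally penalizes designs whose output spread is large, which is the paper's central point.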
Trojan Horse cross section measurements and their impact on primordial nucleosynthesis
NASA Astrophysics Data System (ADS)
Pizzone, R. G.; Spartá, R.; Bertulani, C.; Spitaleri, C.; La Cognata, M.; Lamia, L.; Mukhamedzhanov, A.; Tumino, A.
2018-01-01
Big Bang Nucleosynthesis (BBN) requires several nuclear physics inputs and, among them, an important role is played by nuclear reaction rates. They are among the most important inputs for a quantitative description of the early Universe. An up-to-date compilation of direct cross sections of the d(d,p)t, d(d,n)3He and 3He(d,p)4He reactions is given, as these are among the most uncertain bare-nucleus cross sections. An intense experimental effort has been carried out in the last decade to apply the Trojan Horse Method (THM) to study reactions of relevance for BBN and measure their astrophysical S(E)-factor. The results of these recent measurements are reviewed and compared with the available direct data. The reaction rates and the relative error for the four reactions of interest are then numerically calculated in the temperature ranges of relevance for BBN (0.01
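The way an S(E)-factor feeds a reaction-rate calculation can be illustrated with a minimal non-resonant thermonuclear rate integral in arbitrary units: the cross section is written as σ(E) = S(E)/E · exp(−b/√E), and the rate follows from folding it with a Maxwell-Boltzmann distribution. The constant S-factor, Coulomb parameter b, and reduced mass below are placeholder values, not THM results:

```python
import numpy as np

def reaction_rate(kT, S=1.0, b=1.0, mu=1.0):
    """Non-resonant <sigma v> in arbitrary units:
    <sigma v> ~ sqrt(8/(pi*mu)) * (kT)**(-3/2)
                * integral S(E) * exp(-E/kT - b/sqrt(E)) dE
    The integrand peaks in the Gamow window between the Maxwell-Boltzmann
    tail exp(-E/kT) and the Coulomb penetrability exp(-b/sqrt(E))."""
    E = np.linspace(1e-4, 50.0 * kT, 20_000)       # energy grid (arb. units)
    integrand = S * np.exp(-E / kT - b / np.sqrt(E))
    integral = integrand.sum() * (E[1] - E[0])     # uniform-grid quadrature
    return np.sqrt(8.0 / (np.pi * mu)) * kT**-1.5 * integral

# the rate rises steeply with temperature as the Gamow window opens
rates = [reaction_rate(kT) for kT in (0.01, 0.1, 1.0)]
print(rates)
```

The steep temperature dependence is why the low-energy S(E)-factor, where direct data are most uncertain and THM measurements help, dominates the BBN rate error budget.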
Molina-Navarro, Eugenio; Andersen, Hans E; Nielsen, Anders; Thodsen, Hans; Trolle, Dennis
2018-04-15
Water pollution and water scarcity are among the main environmental challenges faced by the European Union, and multiple stressors compromise the integrity of water resources and ecosystems. Particularly in lowland areas of northern Europe, high population density, flood protection and, especially, intensive agriculture, are important drivers of water quality degradation. In addition, future climate and land use changes may interact, with uncertain consequences for water resources. Modelling approaches have become essential to address water issues and to evaluate ecosystem management. In this work, three multi-stressor future storylines combining climatic and socio-economic changes, defined at European level, have been downscaled for the Odense Fjord catchment (Denmark), giving three scenarios: High-Tech agriculture (HT), Agriculture for Nature (AN) and Market-Driven agriculture (MD). The impacts of these scenarios on water discharge and inorganic and organic nutrient loads to the streams have been simulated using the Soil and Water Assessment Tool (SWAT). The results revealed that the scenario-specific climate inputs were most important when simulating hydrology, increasing river discharge in the HT and MD scenarios (which followed representative concentration pathway RCP 8.5, a high-emission scenario), while remaining stable in the AN scenario (RCP 4.5). Moreover, discharge was the main driver of changes in organic nutrient and inorganic phosphorus loads, which consequently increased in the high-emission scenarios. Nevertheless, both land use (via inputs of fertilizer) and climate changes affected the nitrate transport. Different levels of fertilization yielded a decrease in the nitrate load in AN and an increase in MD. In HT, however, nitrate losses remained stable because the fertilization decrease was counteracted by a flow increase.
Thus, our results suggest that N loads will ultimately depend on future land use and management in interaction with climate change, and this knowledge is of utmost importance for the achievement of European environmental policy goals. Copyright © 2017 Elsevier B.V. All rights reserved.
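The counteraction reported for the HT scenario (less fertilizer, more flow) comes down to load ≈ discharge × concentration. The relative changes below are invented round numbers chosen only to mimic the qualitative pattern of the three scenarios, not SWAT output:

```python
# relative (discharge, concentration) multipliers per scenario -- hypothetical
scenarios = {
    "AN": (1.00, 0.80),   # stable flow, less fertilizer  -> load falls
    "MD": (1.12, 1.10),   # more flow, more fertilizer    -> load rises
    "HT": (1.12, 0.90),   # more flow, less fertilizer    -> roughly stable
}

# nitrate load relative to baseline: load = Q * C, so multipliers just multiply
relative_load = {name: dq * dc for name, (dq, dc) in scenarios.items()}
for name, load in relative_load.items():
    print(f"{name}: relative nitrate load {load:.2f}")
```

The HT row shows the mechanism: a ~10% concentration drop multiplied by a ~12% flow rise leaves the product near 1, i.e. a stable load despite reduced fertilization.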
Improving the Effect and Efficiency of FMD Control by Enlarging Protection or Surveillance Zones
Halasa, Tariq; Toft, Nils; Boklund, Anette
2015-01-01
An epidemic of foot-and-mouth disease (FMD) in a FMD-free country with large exports of livestock and livestock products would result in profound economic damage. This could be reduced by rapid and efficient control of the disease spread. The objectives of this study were to estimate the economic impact of a hypothetical FMD outbreak in Denmark based on changes to the economic assumptions of the model, and to investigate whether the control of an FMD epidemic can be improved by combining the enlargement of protection or surveillance zones with pre-emptive depopulation or emergency vaccination. The stochastic spatial simulation model DTU-DADS was used to simulate the spread of FMD in Denmark. The control strategies were the basic EU and Danish strategy, pre-emptive depopulation, suppressive or protective vaccination, enlarging protection or surveillance zones, and a combination of pre-emptive depopulation or emergency vaccination with enlarged protection or surveillance zones. Herds were detected either through the appearance of clinical signs or through surveillance in the control zones. The economic analyses consisted of direct costs and export losses. Sensitivity analysis was performed on uncertain and potentially influential input parameters. Enlarging the surveillance zones from 10 to 15 km, combined with pre-emptive depopulation over a 1-km radius around detected herds resulted in the lowest total costs. This was still the case even when the different input parameters were changed in the sensitivity analysis. Changing the resources for clinical surveillance did not affect the epidemic consequences. In conclusion, an FMD epidemic in Denmark would have a larger economic impact on the agricultural sector than previously anticipated. Furthermore, the control of a potential FMD outbreak in Denmark may be improved by combining pre-emptive depopulation with an enlarged protection or surveillance zone. PMID:26664996
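The trade-off the study varies, the radius around a detected herd within which neighbours are depopulated, can be caricatured with a toy distance-kernel spread-and-cull simulation. The herd layout, transmission kernel, rates, and instant same-day detection below are all invented for illustration and bear no relation to the DTU-DADS parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 400
pos = rng.uniform(0.0, 50.0, size=(n, 2))   # hypothetical herd coordinates, km

def run(cull_radius, days=60, beta=0.3):
    """Toy epidemic: infection probability decays with distance to the
    nearest infected herd; every detected herd is depopulated together
    with all herds inside cull_radius. Returns total herds culled."""
    state = np.zeros(n, dtype=int)          # 0 susceptible, 1 infected, 2 culled
    state[0] = 1                            # index case
    for _ in range(days):
        infected = np.flatnonzero(state == 1)
        if infected.size == 0:
            break
        # distance from every herd to its nearest infected herd
        d = np.linalg.norm(pos[:, None, :] - pos[None, infected, :],
                           axis=2).min(axis=1)
        newly = (state == 0) & (rng.random(n) < beta * np.exp(-d / 2.0))
        # cull each detected herd plus its neighbours within the zone radius
        for i in infected:
            state[np.linalg.norm(pos - pos[i], axis=1) <= cull_radius] = 2
        state[newly & (state == 0)] = 1     # infections seeded before culling
    return int((state == 2).sum())

culled_small, culled_large = run(1.0), run(3.0)
print(culled_small, culled_large)
```

The point of such a sketch is the qualitative tension the paper quantifies properly: a larger cull radius removes more herds per detection but can shorten the epidemic, so total losses are not monotone in the radius.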