Robust stabilization of the Space Station in the presence of inertia matrix uncertainty
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang; Sunkel, John
1993-01-01
This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 not equal to 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in the uncertain parameter space while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 not equal to 0 is demonstrated.
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we develop a unified framework for designing an adaptive complex scalar controller that renders this class of CVCSs asymptotically stable, and for selecting complex update laws to estimate the unknown complex parameters. In particular, by combining Lyapunov functions dependent on complex-valued vectors with the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness, an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or may vary naturally, e.g., between different individuals or experimental conditions (often called variability); this has prevented a wider application of SPNs, which require accurate parameters. Considering the strength of fuzzy sets in dealing with uncertain information, we apply a specific type of stochastic Petri net, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both the randomness and the fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
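The alpha-cut construction that links a fuzzy kinetic parameter to crisp simulation inputs can be sketched in a few lines. This is a minimal illustration, assuming a triangular membership function and example rate bounds that are not taken from the paper:

```python
# Sketch of the fuzzy-parameter idea behind FSPNs: a kinetic rate is a
# triangular fuzzy number, and each alpha-cut yields an interval of
# plausible crisp rates that a stochastic simulation can then sample.
# The triangular shape and the example bounds are illustrative assumptions.

def triangular_membership(x, a, m, b):
    """Membership degree of x in the triangular fuzzy number (a, m, b)."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

def alpha_cut(alpha, a, m, b):
    """Interval of rates whose membership is at least alpha."""
    lo = a + alpha * (m - a)
    hi = b - alpha * (b - m)
    return lo, hi

# A fuzzy kinetic rate "about 0.5", known only to lie in [0.3, 0.8].
lo, hi = alpha_cut(0.5, 0.3, 0.5, 0.8)   # rates with membership >= 0.5
```

Sweeping alpha from 0 to 1 recovers the nested family of intervals over which output uncertainty bands can be computed.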
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity-norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
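For the hyper-rectangular uncertainty model, the worst case of an affine hard constraint over the box is attained at a vertex, which is what makes an infinity-norm formulation tractable. A minimal sketch under that assumption (the constraint coefficients and bounds below are illustrative, not from the paper):

```python
from itertools import product

def worst_case_affine(c0, c, p_nom, delta):
    """Closed-form worst case of g(p) = c0 + c.p over the hyper-rectangle
    |p_i - p_nom_i| <= delta_i: for an affine constraint the maximizer sits
    at a vertex, so the infinity-norm bound below is exact."""
    return c0 + sum(ci * pi for ci, pi in zip(c, p_nom)) \
              + sum(abs(ci) * di for ci, di in zip(c, delta))

def worst_case_brute(c0, c, p_nom, delta):
    """Enumerate all 2^n vertices of the box as a cross-check."""
    best = float("-inf")
    for signs in product((-1.0, 1.0), repeat=len(c)):
        g = c0 + sum(ci * (pi + s * di)
                     for ci, pi, di, s in zip(c, p_nom, delta, signs))
        best = max(best, g)
    return best
```

If the closed-form worst case stays below zero, the hard constraint g(p) <= 0 is feasible over the whole box without any sampling of the parameter space.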
Robust root clustering for linear uncertain systems using generalized Lyapunov theory
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1993-01-01
Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state-space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman and Jury (1981) using the generalized Lyapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that, for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
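The filtering-versus-smoothing distinction at the heart of the scheme above can be shown on a scalar linear-Gaussian model: a standard Kalman update, plus a one-step-ahead smoothing gain that lets the current observation correct the previous state. This is a toy sketch, not the sCSKF algorithm itself; all numbers are illustrative:

```python
# Scalar model x_k = a x_{k-1} + w (process variance q),
# observation y_k = x_k + v (observation variance r).

def kalman_step(x, P, y, a, q, r):
    """One predict-update step, plus a one-step-ahead smoothed estimate
    of the *previous* state using the current observation y."""
    x_pred = a * x                  # prior mean
    P_pred = a * a * P + q          # prior variance
    K = P_pred / (P_pred + r)       # Kalman gain
    x_new = x_pred + K * (y - x_pred)
    P_new = (1.0 - K) * P_pred
    # One-step-ahead smoothing gain (scalar RTS form): correct x_{k-1} with y_k.
    C = P * a / P_pred
    x_prev_smoothed = x + C * (x_new - x_pred)
    return x_new, P_new, x_prev_smoothed

x_new, P_new, x_prev = kalman_step(0.0, 1.0, 1.0, a=1.0, q=0.5, r=1.0)
```

The smoothed previous-state estimate `x_prev` is the scalar analogue of using current data to revise states and parameters one step back in time.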
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1990-01-01
A game theoretic controller is developed for a linear time-invariant system with parameter uncertainties in the system and input matrices. The input-output decomposition modeling for the plant uncertainty is adopted. The uncertain dynamic system is represented as an internal feedback loop in which the system is assumed to be forced by a fictitious disturbance caused by the parameter uncertainty. By considering the input and the fictitious disturbance as two noncooperative players, a differential game problem is constructed. It is shown that the resulting time-invariant controller stabilizes the uncertain system for a prescribed uncertainty bound. This game theoretic controller is applied to the momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Inclusion of the external disturbance torque in the design procedure results in a dynamical feedback controller consisting of conventional PID control and a cyclic disturbance rejection filter. It is shown that, compared with LQR or pole-placement designs, the game theoretic design improves the stability robustness with respect to inertia variations.
Time-delayed chameleon: Analysis, synchronization and FPGA implementation
NASA Astrophysics Data System (ADS)
Rajagopal, Karthikeyan; Jafari, Sajad; Laarem, Guessas
2017-12-01
In this paper we report a time-delayed chameleon-like chaotic system which can belong to different families of chaotic attractors depending on the choice of parameters. Such a characteristic of self-excited and hidden chaotic flows in a simple 3D system with time delay has not been reported earlier. The dynamics of the proposed time-delayed system are analysed in time-delay space and parameter space. A novel adaptive modified functional projective lag synchronization algorithm is derived for synchronizing identical time-delayed chameleon systems with uncertain parameters. The proposed time-delayed systems and the synchronization algorithm with controllers and parameter estimates are then implemented in FPGA using hardware-software co-simulation, and the results are presented.
Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2000-01-01
Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.
Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang
2017-08-28
Specific emitter identification plays an important role in contemporary military affairs. However, most existing specific emitter identification methods do not account for the processing of uncertain information. This paper therefore proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which can deal with uncertain information in the process of specific emitter identification. Each radar generates a body of evidence based on the information it obtains, and the main task is to fuse the multiple bodies of evidence into a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between bodies of evidence, and a quantum-mechanical approach based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and reach a reasonable recognition result.
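The core fusion operation in Dempster-Shafer theory, the rule of combination, can be sketched as follows. The frame of discernment and the mass assignments are illustrative; the paper's correlation coefficient and quantum-mechanical weighting are not reproduced here:

```python
# Dempster's rule of combination for two mass functions whose focal
# elements are frozensets over a common frame of discernment.

def dempster_combine(m1, m2):
    combined = {}
    conflict = 0.0
    for A, wa in m1.items():
        for B, wb in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb      # mass falling on the empty set
    k = 1.0 - conflict                   # normalisation constant
    return {A: w / k for A, w in combined.items()}

# Two radars each report belief over candidate emitter types {e1, e2}.
m1 = {frozenset({"e1"}): 0.6, frozenset({"e1", "e2"}): 0.4}
m2 = {frozenset({"e1"}): 0.5, frozenset({"e2"}): 0.3,
      frozenset({"e1", "e2"}): 0.2}
fused = dempster_combine(m1, m2)
```

After fusion most of the mass concentrates on {e1}, reflecting the agreement between the two bodies of evidence.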
Flight control application of new stability robustness bounds for linear uncertain systems
NASA Technical Reports Server (NTRS)
Yedavalli, Rama K.
1993-01-01
This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than existing bounds, since the technique is designed for real parameter perturbations only (in contrast to specializing the complex-variation case to the real-parameter case). The proposed theory is illustrated with application to several flight control examples.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization, and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Although simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
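A minimal space-filling Latin Hypercube sampler of the kind used above can be written in a few lines: each of n strata per dimension receives exactly one sample. Pure-Python sketch; a real study of a 46-parameter space would use a library implementation:

```python
import random

def latin_hypercube(n, dims, rng):
    """n points in [0, 1)^dims with one point per stratum per dimension."""
    # For each dimension, a random permutation of the n strata.
    perms = [rng.sample(range(n), n) for _ in range(dims)]
    samples = []
    for i in range(n):
        # Jitter each point uniformly within its assigned stratum.
        point = [(perms[d][i] + rng.random()) / n for d in range(dims)]
        samples.append(point)
    return samples

rng = random.Random(0)
pts = latin_hypercube(10, 3, rng)
```

Each unit-interval axis is cut into 10 strata, and projecting the 10 points onto any single parameter axis hits every stratum exactly once, which is what gives the design its space-filling property at low sample counts.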
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
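The association test above can be illustrated with a brute-force stand-in for the optimization step: compute the minimum Mahalanobis distance between discretised admissible regions and gate it against a threshold. Diagonal covariance, the sample points, and the gate value are all illustrative assumptions:

```python
def mahalanobis_sq(x, mu, var):
    """Squared Mahalanobis distance for a diagonal covariance (variances var)."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))

def associated(region_a, region_b, var, threshold):
    """Declare association if the minimum pairwise squared distance between
    two discretised admissible regions falls inside the gate threshold
    (a simple stand-in for the paper's binary hypothesis test)."""
    d2_min = min(mahalanobis_sq(p, q, var)
                 for p in region_a for q in region_b)
    return d2_min <= threshold, d2_min

a = [(0.0, 0.0), (1.0, 1.0)]
b = [(1.2, 1.0), (5.0, 5.0)]
ok, d2 = associated(a, b, var=(0.04, 0.04), threshold=9.21)  # ~99% gate, 2 dof
```

In the paper the minimum is found by optimization over a reduced-dimension state space rather than by pairwise enumeration; the gating logic is the same.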
Cluster synchronization transmission of different external signals in discrete uncertain network
NASA Astrophysics Data System (ADS)
Li, Chengren; Lü, Ling; Chen, Liansong; Hong, Yixuan; Zhou, Shuang; Yang, Yiming
2018-07-01
We investigate cluster synchronization transmission of different external signals in a discrete uncertain network. Based on the Lyapunov theorem, the network controller and the identification law for the uncertain adjustment parameter are designed, and they are used efficiently to achieve cluster synchronization and identification of the uncertain adjustment parameter. In our scheme, the network nodes in each cluster and the transmitted external signals can be different, and uncertain parameters are allowed to be present in the network. In particular, the cluster topologies, the number of clusters and the number of nodes in each cluster can be chosen freely.
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.
Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu
2015-01-01
This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piece-wise linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning vehicle safety and mechanical constraints. An augmented neural network (NN) is developed to compensate online for the unknown nonlinearities, and a novel adaptive law is developed to estimate both the NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability of the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes.
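The prescribed performance function idea can be sketched directly: the tracking error is required to stay inside a tube that decays from an initial bound to a steady-state bound, which encodes convergence rate, overshoot and steady-state error in one function. The exponential form is the standard choice in the PPF literature; the constants below are illustrative, not from the paper:

```python
import math

def ppf(t, rho0=1.0, rho_inf=0.05, l=2.0):
    """Performance bound rho(t) = (rho0 - rho_inf) * exp(-l * t) + rho_inf:
    decays from rho0 at rate l toward the steady-state bound rho_inf."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def within_bound(error, t):
    """Check the prescribed-performance constraint |e(t)| < rho(t)."""
    return abs(error) < ppf(t)
```

A controller designed against this bound guarantees, by construction, that the error converges no slower than exp(-l*t) and never exceeds rho_inf in steady state.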
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables (e.g., aerodynamic coefficients) can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer improves significantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
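The variance-based global sensitivity analysis mentioned above ranks parameters by first-order Sobol indices, S_i = Var(E[Y | X_i]) / Var(Y). A toy illustration on a dense grid with an additive test function, for which the indices are known analytically (the function Y = X1 + 2*X2 is an illustrative stand-in, not the scramjet model):

```python
def first_order_indices(f, n=200):
    """First-order Sobol indices of f(x1, x2) with x1, x2 independent and
    uniform on [0, 1], computed on a midpoint grid."""
    xs = [(i + 0.5) / n for i in range(n)]          # midpoint grid on [0, 1]
    ys = [f(x1, x2) for x1 in xs for x2 in xs]
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    indices = []
    for dim in range(2):
        cond_means = []                             # E[Y | X_dim = a]
        for a in xs:
            if dim == 0:
                cond_means.append(sum(f(a, x2) for x2 in xs) / n)
            else:
                cond_means.append(sum(f(x1, a) for x1 in xs) / n)
        v = sum((c - mean) ** 2 for c in cond_means) / n
        indices.append(v / var)
    return indices

s1, s2 = first_order_indices(lambda x1, x2: x1 + 2.0 * x2)
# For this additive model, S1 = 1/5 and S2 = 4/5 analytically.
```

Parameters with small indices contribute little output variance and can be fixed, which is how sensitivity analysis reduces the stochastic dimension before the expensive uncertainty propagation.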
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, parameters with a known probability distribution are modeled as random variables, whereas parameters with only lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in the input and output terms, is constructed. A modified hybrid uncertain analysis method (MHUAM) is then proposed. In the MHUAM, the dynamic response expression of the LSOAAC is developed based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series. Moreover, the extrema of the bounds of the dynamic response are characterized by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and the interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated in depth. Numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder, F, is larger than that made by the gravity of the weight in suspension, Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder, a, is larger than that made by the length of the lifting arm, L.
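The interval side of the hybrid model above rests on two simple ingredients: interval arithmetic for propagating bounded parameters, and the observation that for a response monotonic in every parameter the output bounds come from the endpoint combinations. A sketch under those assumptions; the response function below is an illustrative stand-in for the crane model, not taken from the paper:

```python
def interval_add(a, b):
    """Sum of two intervals (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Product of two intervals: extrema occur at endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def monotone_bounds(f, intervals):
    """Bounds of a response monotonically increasing in every parameter:
    evaluate only at the lower and upper endpoint vectors."""
    lo = f(*(iv[0] for iv in intervals))
    hi = f(*(iv[1] for iv in intervals))
    return lo, hi

# Toy response increasing in both a thrust-like and a load-like parameter.
bounds = monotone_bounds(lambda F, Q: 2.0 * F + 0.5 * Q,
                         [(10.0, 12.0), (4.0, 6.0)])
```

For non-monotonic responses the endpoint evaluation is no longer valid, which is why the paper combines monotonic analysis with perturbation expansions rather than relying on endpoints alone.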
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a middle-sized Japanese catchment. We also compare performance results of the DUS combined with various SMC methods, such as SIR, ASIR and RPF.
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis-Gibbs sampling for environmental parameters and heat-bath Gibbs sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
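The core idea of quantifying localization by a posterior integrated over uncertain environmental parameters can be sketched on a grid for a hypothetical one-datum toy problem. The real method uses Gibbs sampling over far richer acoustic models; the forward model `g`, the grids, and the noise level below are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy forward model: datum d = g(range, env) + noise; env is the uncertain
# environmental parameter to be integrated out (stand-in for seabed properties)
g = lambda r, e: np.sin(3 * r) + 0.5 * e * r
r_true, e_true, sigma = 1.2, 0.8, 0.1
d = g(r_true, e_true) + rng.normal(0, sigma)

r_grid = np.linspace(0.0, 2.0, 201)      # source range candidates
e_grid = np.linspace(0.0, 1.6, 161)      # environmental parameter candidates
R, E = np.meshgrid(r_grid, e_grid, indexing="ij")

like = np.exp(-0.5 * ((d - g(R, E)) / sigma) ** 2)   # Gaussian likelihood, flat priors
post = like / like.sum()                              # joint posterior on the grid
marginal_r = post.sum(axis=1)                         # integrate out the environment
```

The marginal over range is typically broader (and possibly multimodal) compared to a posterior conditioned on a known environment, which is exactly the localization-uncertainty inflation the abstract quantifies.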
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Robust control synthesis for uncertain dynamical systems
NASA Technical Reports Server (NTRS)
Byun, Kuk-Whan; Wie, Bong; Sunkel, John
1989-01-01
This paper presents robust control synthesis techniques for uncertain dynamical systems subject to structured parameter perturbation. Both QFT (quantitative feedback theory) and H-infinity control synthesis techniques are investigated. Although most H-infinity-related control techniques are not concerned with the structured parameter perturbation, a new way of incorporating the parameter uncertainty in the robust H-infinity control design is presented. A generic model of uncertain dynamical systems is used to illustrate the design methodologies investigated in this paper. It is shown that, for a certain noncolocated structural control problem, use of both techniques results in nonminimum phase compensation.
Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity
NASA Astrophysics Data System (ADS)
Li, Dunzhu; Gurnis, Michael; Stadler, Georg
2017-04-01
We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
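The first-order Taylor interval step at the heart of such a method can be sketched generically: bounds on a response under interval inputs follow from the midpoint value and the gradient magnitudes. The gradient is taken by finite differences here, and the response function in the usage line is a toy stand-in, not the structural-acoustic FE model.

```python
def taylor_interval(f, x0, radii, h=1e-6):
    """First-order Taylor interval bound:
    f(x) in [f(x0) - s, f(x0) + s] with s = sum_i |df/dx_i| * r_i
    for inputs x_i in [x0_i - r_i, x0_i + r_i]."""
    f0 = f(x0)
    spread = 0.0
    for i, r in enumerate(radii):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        grad_i = (f(xp) - f(xm)) / (2 * h)   # central finite difference
        spread += abs(grad_i) * r
    return f0 - spread, f0 + spread

# toy response: exact bounds for a linear function, so the Taylor bound is tight
lo, hi = taylor_interval(lambda x: 3 * x[0] - 2 * x[1], [1.0, 1.0], [0.1, 0.2])
```

The subinterval technique mentioned in the abstract would split each interval into pieces, apply the same bound on each piece, and take the union, tightening the result for strongly nonlinear responses.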
Liu, Fei; Zhang, Xi; Jia, Yan
2015-01-01
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
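A compact sketch of the MOAT elementary-effects computation on a hypothetical 4-input toy model (the grid levels, trajectory count, and model below are illustrative; the real screening wraps actual CAM runs). Each trajectory perturbs the inputs one at a time, so the cost is r*(k+1) runs, linear in the number of inputs.

```python
import numpy as np

def morris_moat(f, k, r=20, levels=4, seed=0):
    """Morris one-at-a-time screening: r random trajectories on a grid in
    [0, 1]^k, each perturbing the k inputs one at a time by delta.
    Returns mu* (mean absolute elementary effect) per input."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random grid start chosen so that x + delta stays inside [0, 1]
        x = rng.integers(0, levels // 2, k) / (levels - 1)
        fx = f(x)
        for i in rng.permutation(k):
            x2 = x.copy()
            x2[i] += delta
            fx2 = f(x2)
            effects[i].append((fx2 - fx) / delta)
            x, fx = x2, fx2
    return np.array([np.mean(np.abs(e)) for e in effects])

# toy model: input 0 strong, input 1 weak, inputs 2 and 3 interact nonlinearly
f = lambda x: 10 * x[0] + 0.1 * x[1] + 5 * x[2] * x[3]
mu_star = morris_moat(f, 4)
```

Because each trajectory visits different corners of the hypercube, the interacting pair (inputs 2 and 3) receives a large mu* even though a single EOAT sweep about a fixed base point could miss it.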
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which opens the possibility of using a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
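Basic Latin hypercube sampling can be sketched as below; the "optimal" variant used in the paper additionally optimizes a space-filling criterion over such samples, which is omitted here. Normally distributed uncertainties are then obtained by pushing the uniform values through an inverse normal CDF.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Basic Latin hypercube sample of n points in [0, 1)^dims: each dimension
    is split into n equal strata, and each stratum is hit exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)                          # random stratum order
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))                        # n points of length dims

pts = latin_hypercube(10, 3)
```

Compared with plain Monte Carlo, the stratification guarantees coverage of each parameter's whole range even with few samples, which is what makes cheap response-surface fits feasible.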
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
An imprecise probability approach for squeal instability analysis based on evidence theory
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-01-01
An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. All these uncertain parameters are usually involved with imprecise data such as incomplete information and conflicting information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to squeal problems that do not involve too many parameters. It can be considered as a potential method for squeal instability analysis, which will act as the first step to reduce squeal noise of uncertain brakes with imprecise information.
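The belief/plausibility evaluation over interval-valued evidence can be sketched as follows; the focal elements and masses below are invented numbers, not brake data.

```python
def belief_plausibility(focal, event):
    """Dempster-Shafer measures over interval focal elements.

    focal: list of ((lo, hi), mass) pairs, masses summing to 1.
    event: the interval of interest (e.g. a parameter range associated with
           instability). Bel sums masses of focal elements contained in the
           event; Pl sums masses of those merely intersecting it, so Bel <= Pl
           brackets the imprecise probability of the event."""
    elo, ehi = event
    bel = sum(m for (lo, hi), m in focal if elo <= lo and hi <= ehi)
    pl = sum(m for (lo, hi), m in focal if lo <= ehi and elo <= hi)
    return bel, pl

# e.g. conflicting expert intervals for a friction coefficient (hypothetical)
focal = [((0.3, 0.5), 0.5), ((0.4, 0.7), 0.3), ((0.6, 0.9), 0.2)]
bel, pl = belief_plausibility(focal, (0.35, 0.75))
```

A wide [Bel, Pl] gap signals that the available evidence is too imprecise to pin down the instability probability, which is the kind of conclusion the proposed analysis model draws for the brake system.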
Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty
NASA Astrophysics Data System (ADS)
Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time-persisting statistical characteristics of a given variable as inferred from historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated to any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design, thus alleviating the risk of mis-adaptation, namely the design of a solution fully adapted to a scenario that is different from the one that will actually realize.
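The state-augmentation idea can be sketched with a toy reservoir and a binned fitted Q-iteration; the actual sFQI uses richer regressors and a real lake model, and every number below (inflow scenarios, actions, rewards) is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

ACTIONS = np.array([0.0, 0.2])            # release decisions
SCENARIOS = np.array([0.05, 0.15])        # uncertain mean-inflow scenarios

def step(s, u, mean_inflow):
    """Toy reservoir: storage in [0, 1]; reward penalizes distance from s = 0.5."""
    s2 = float(np.clip(s + mean_inflow + rng.normal(0, 0.02) - u, 0.0, 1.0))
    return s2, -(s2 - 0.5) ** 2

# batch of transitions over the AUGMENTED state (storage, scenario), as in sFQI
data = []
for _ in range(5000):
    s, m, a = rng.random(), rng.choice(SCENARIOS), int(rng.integers(2))
    s2, r = step(s, ACTIONS[a], m)
    data.append((s, m, a, r, s2))
s_arr, m_arr, a_arr, r_arr, s2_arr = map(np.array, zip(*data))
m_idx = (m_arr == SCENARIOS[1]).astype(int)
a_arr = a_arr.astype(int)

# fitted Q-iteration with a piecewise-constant (binned) regressor
n_bins, gamma = 20, 0.9
bins = lambda s: np.minimum((s * n_bins).astype(int), n_bins - 1)
Q = np.zeros((n_bins, len(SCENARIOS), len(ACTIONS)))
for _ in range(50):
    target = r_arr + gamma * Q[bins(s2_arr), m_idx].max(axis=1)
    num, cnt = np.zeros_like(Q), np.zeros_like(Q)
    np.add.at(num, (bins(s_arr), m_idx, a_arr), target)
    np.add.at(cnt, (bins(s_arr), m_idx, a_arr), 1.0)
    Q = np.where(cnt > 0, num / np.maximum(cnt, 1.0), Q)
```

Because the scenario index is part of the state, a single learned Q-function yields a different (adapted) release policy for each inflow scenario, which is the mechanism behind the single-learning-process claim.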
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
NASA Astrophysics Data System (ADS)
Akita, T.; Takaki, R.; Shima, E.
2012-04-01
An adaptive estimation method for a spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both temperature and uncertain thermal characteristic parameters are considered as the state variables. In the method, the thermal characteristic parameters are automatically estimated as the outputs of the filtered state variables, whereas in the usual thermal model correlation they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
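A heavily simplified sketch of the sample-then-surrogate workflow: it keeps the protection-sphere rejection and the surrogate-based POF readout, but omits the hyperplane sampling, uses a fixed sphere radius instead of Lipschitz-based radii, and replaces the Voronoi surrogate with nearest-neighbor labels. The limit-state function in the usage line is a toy with a known answer.

```python
import math
import random

def pof_sketch(g, dim, budget=150, radius=0.04, n_surrogate=20_000, seed=0):
    """Simplified POF-Darts-flavored estimate on [0, 1]^dim. Candidate samples
    falling inside an existing protection sphere are rejected; the final POF is
    read off a nearest-neighbor surrogate built on the evaluated points.
    Convention: g(x) <= 0 marks failure."""
    rng = random.Random(seed)
    pts, fail, attempts = [], [], 0
    while len(pts) < budget and attempts < 100_000:   # guard against saturation
        attempts += 1
        x = [rng.random() for _ in range(dim)]
        if any(math.dist(x, p) < radius for p in pts):
            continue                      # inside a protection sphere: skip
        pts.append(x)
        fail.append(g(x) <= 0.0)
    hits = 0
    for _ in range(n_surrogate):          # exhaustive sampling of the surrogate
        x = [rng.random() for _ in range(dim)]
        j = min(range(len(pts)), key=lambda i: math.dist(x, pts[i]))
        hits += fail[j]                   # surrogate = label of nearest sample
    return hits / n_surrogate

# toy limit state: failure when x0 + x1 > 1.5 (true POF = 0.125)
pof = pof_sketch(lambda x: 1.5 - (x[0] + x[1]), dim=2)
```

The sphere rejection spreads the expensive evaluations over the domain, so the cheap surrogate pass, rather than the simulation itself, absorbs the Monte Carlo sampling cost.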
Reduced-order dynamic output feedback control of uncertain discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Morais, Cecília F.; Braga, Márcio F.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2017-11-01
This paper deals with the problem of designing reduced-order robust dynamic output feedback controllers for discrete-time Markov jump linear systems (MJLS) with polytopic state space matrices and uncertain transition probabilities. Starting from a full order, mode-dependent and polynomially parameter-dependent dynamic output feedback controller, sufficient linear matrix inequality based conditions are provided for the existence of a robust reduced-order dynamic output feedback stabilising controller with complete, partial or none mode dependency assuring an upper bound to the H2 or the H-infinity norm of the closed-loop system. The main advantage of the proposed method when compared to the existing approaches is the fact that the dynamic controllers are exclusively expressed in terms of the decision variables of the problem. In other words, the matrices that define the controller realisation do not depend explicitly on the state space matrices associated with the modes of the MJLS. As a consequence, the method is specially suitable to handle order reduction or cluster availability constraints in the context of H2 or H-infinity dynamic output feedback control of discrete-time MJLS. Additionally, as illustrated by means of numerical examples, the proposed approach can provide less conservative results than other conditions in the literature.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
Robust fuel- and time-optimal control of uncertain flexible space structures
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken
1993-01-01
The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces make it possible to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost, high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia
NASA Astrophysics Data System (ADS)
Mather, B.; Moresi, L. N.; Rayner, P. J.
2017-12-01
The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters depend on the geophysical observations and a priori information available to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that are assumed to be homogeneous. However, the boundaries of these lithologies are themselves uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high-dimensional probability spaces, compared to probabilistic inversion techniques that operate in the `forward' mode, but at the cost of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis that perturbs our observations and a priori information within their probability distributions to estimate the posterior uncertainty of thermo-chemical parameters in the crust.
Robust on-off pulse control of flexible space vehicles
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi
1993-01-01
The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as the orbital transfer vehicles, orbital maneuvering vehicles, and space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is applied to a three-mass spring model of flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for feedback sliding mode controller design with bounded control inputs, and robustness to parameter variations is investigated.
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte Carlo approaches to exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters, non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction, with the posterior uncertainty due to insufficient data quantified. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
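The curse of dimensionality mentioned above is easy to make concrete: a total-degree-p PC expansion in d uncertain inputs has C(d + p, p) basis terms, so the term count explodes well before d reaches the 65 parameters varied in this study. A back-of-envelope stdlib sketch (not the authors' code):

```python
from math import comb

def pc_basis_size(d, p):
    """Number of terms in a total-degree-p Polynomial Chaos
    expansion with d uncertain inputs: C(d + p, p)."""
    return comb(d + p, p)

small = pc_basis_size(5, 3)    # 56 terms: easily fit by regression
large = pc_basis_size(65, 3)   # 50116 terms: infeasible without sparsity
```

Compressive sensing sidesteps this blow-up by seeking the few coefficients that matter among those tens of thousands of candidate basis terms.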
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables, and the structural mass, fatigue damage, and negative of the natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming a proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic processes, and multiple objectives.
Synchronization transmission of laser pattern signal within uncertain switched network
NASA Astrophysics Data System (ADS)
Lü, Ling; Li, Chengren; Li, Gang; Sun, Ao; Yan, Zhe; Rong, Tingting; Gao, Yan
2017-06-01
We propose a new technology for synchronization transmission of laser pattern signals within an uncertain network with controllable topology. During the synchronization process, the connections of the dynamic network can vary at any time according to different demands. In particular, we construct the Lyapunov function of the network by designing a special semi-positive definite function, so that the synchronization transmission of laser pattern signals within the uncertain network with controllable topology can be realized; this effectively avoids the complicated calculation of the second-largest eigenvalue of the coupling matrix of the dynamic network otherwise needed to obtain the network synchronization condition. At the same time, the uncertain parameters in the dynamic equations of the network nodes can be identified accurately via designed identification laws. In addition, the new technology places no limitations on the synchronization target of the network; in other words, the target can be either a state variable signal of an arbitrary node within the network or an exterior signal.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Zheng, Mingwen; Li, Shudong; Wang, Weiping
2018-03-01
Some existing papers have focused on finite-time parameter identification and synchronization, but provided incomplete theoretical analyses. Such works incorporated conflicting constraints for parameter identification; therefore, their practical significance could not be fully demonstrated. To overcome such limitations, this paper presents new results on parameter identification and synchronization for uncertain complex dynamical networks with impulsive effects and stochastic perturbation based on finite-time stability theory. Novel parameter identification and synchronization control criteria are obtained in finite time by utilizing Lyapunov functions and linear matrix inequalities, respectively. Finally, numerical examples are presented to illustrate the effectiveness of our theoretical results.
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts.
Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining the individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
NASA Astrophysics Data System (ADS)
Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza
2017-09-01
This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To this end, different scenarios are generated to represent the degree of uncertainty in the investigated elements, namely the active and reactive load consumption and the active power generation of the wind power units. Specifically, a normal Probability Density Function (PDF) is divided, according to the desired accuracy, into several class intervals for each uncertain parameter. In addition, the Weibull PDF is utilised to model wind generators and capture the variation of their power production. The proposed problem is solved using Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme during the multi-objective DFR. Moreover, this paper proposes two new mutation methods and adjusts the inertia weight of PSO by fuzzy rules to enhance its global searching ability within the entire search space.
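The scenario construction described above can be sketched with the standard library. All numbers here are illustrative assumptions, not values from the article: a 1.0 p.u. mean load with 10% standard deviation, and a Weibull wind-speed model with scale 8.0 and shape 2.0.

```python
import math
import random

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def class_interval_scenarios(mu, sigma, edges):
    """Discretise a normal PDF into class intervals, returning
    (interval midpoint, interval probability) pairs."""
    return [((lo + hi) / 2.0,
             normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))
            for lo, hi in zip(edges[:-1], edges[1:])]

# Six class intervals covering +/- 3 sigma around the mean load
edges = [0.7 + 0.1 * k for k in range(7)]
scenarios = class_interval_scenarios(1.0, 0.1, edges)

# Wind power variation modelled by a Weibull PDF (scale 8.0, shape 2.0)
rng = random.Random(1)
wind_speeds = [rng.weibullvariate(8.0, 2.0) for _ in range(5)]
```

Each (midpoint, probability) pair is one load scenario; combining them across the uncertain elements yields the scenario set over which the DFR objective is evaluated.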
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; ...
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to the failure or non-failure region, and surround it with a protection-sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, the regions not covered by spheres shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction step, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
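As a rough stdlib-only illustration of the final step, estimating the POF by exhaustively sampling a surrogate built from a limited budget of true function evaluations, the sketch below substitutes a plain nearest-neighbour surrogate and uniform random samples for the paper's disk-packing, protection spheres, and Voronoi machinery:

```python
import random

def estimate_pof(f, threshold, dim, n_train=200, n_mc=2000, seed=0):
    """P[f(x) > threshold] over the unit hypercube: spend the
    evaluation budget on n_train random points, then exhaustively
    sample a cheap nearest-neighbour surrogate of f."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_train)]
    vals = [f(p) for p in pts]

    def surrogate(x):
        # value of the closest evaluated point (squared Euclidean distance)
        i = min(range(n_train),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(x, pts[j])))
        return vals[i]

    hits = sum(1 for _ in range(n_mc)
               if surrogate([rng.random() for _ in range(dim)]) > threshold)
    return hits / n_mc

# The failure region {x + y > 1} covers exactly half of the unit square
pof = estimate_pof(lambda p: p[0] + p[1], 1.0, 2)
```

POF-Darts improves on this kind of baseline by placing samples where they are most informative and by sizing protection spheres from the local Lipschitz constant, so fewer true evaluations are needed for the same accuracy.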
A Verification-Driven Approach to Control Analysis and Tuning
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2008-01-01
This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and the union of these two spaces. In regard to control analysis, we propose strategies for bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for improving the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed when results were plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
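Latin Hypercube Sampling itself is compact enough to sketch directly (generic stdlib Python, not the study's MELCOR tooling): each of the n realizations draws one value from each of n equal-probability strata per parameter, with the strata randomly paired across parameters.

```python
import random

def latin_hypercube(n, dim, seed=0):
    """n samples in [0, 1)^dim with exactly one sample falling in
    each of the n equal-probability strata of every dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * dim for _ in range(n)]
    for d in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n  # jitter within stratum
    return samples

# 40 realizations of 10 uncertain parameters, as in the Sequoyah study
design = latin_hypercube(40, 10)
```

Each unit-interval column would then be pushed through the inverse CDF of that parameter's user-specified likelihood distribution before being handed to the code.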
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
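The variance-based approach referred to above can be sketched with a Saltelli-style Monte Carlo estimator (generic stdlib Python; the study runs the equivalent computation on the statistical emulator of VIC rather than on the model itself). The first-order index S_i is the share of output variance attributable to input i alone; the toy function below depends only on its first input, so S_1 should be near 1 and S_2 near 0.

```python
import random

def sobol_first_order(f, dim, n=4096, seed=0):
    """First-order Sobol indices via the Jansen A/B-matrix estimator:
    S_i = V_i / Var with V_i = Var - (1/2n) * sum_j (f(B_j) - f(AB_i_j))^2."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA, fB = [f(a) for a in A], [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # A with column i swapped in from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        Vi = var - sum((yb - yi) ** 2 for yb, yi in zip(fB, fABi)) / (2 * n)
        indices.append(Vi / var)
    return indices

S = sobol_first_order(lambda x: x[0], 2)  # toy model ignoring input 2
```

Replacing f with an emulator, as in the study, is what makes the thousands of required evaluations affordable for a model as expensive as VIC.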
Compensatory parameters of intracranial space in giant hydrocephalus.
Cieślicki, Krzysztof; Czepko, Ryszard
2009-01-01
The main goal of the present study is to examine compensatory parameters of intracranial space in giant hydrocephalus. We also assess the early and late outcome and analyse complications in shunted cases. Nine cases of giant hydrocephalus, characterised by an Evans ratio > 0.5, a ventricular index > 1.5, and a width of the third ventricle > 20 mm, were considered. Using the lumbar infusion test and developed software, we analysed the intracranial compensatory parameters typical for hydrocephalus. Based on the Marmarou model, a method was used that depends on a repeated search for the best-fitting curve corresponding to the progress of the test. Eight of the nine patients were therefore shunted. Patients were followed up for 9 months. Five of the eight shunted patients undoubtedly improved within a few days after surgery (62%). Complications (subdural hygromas/haematomas and intracerebral haematoma) developed in 5 (62%) cases over the longer follow-up. A definite improvement was noted in 4 of the 8 operated cases (50%). To obtain stable values of the compensatory parameters, the duration of the infusion test must be at least double the inflexion time of the test curve. All but one of the considered cases of giant hydrocephalus were characterised by a lack of intracranial space reserve, a significantly reduced rate of CSF secretion, and various degrees of elevated resistance to outflow. Due to the significant number of complications and uncertain long-term improvement, great caution has to be taken in the decision to shunt.
A robust momentum management and attitude control system for the space station
NASA Technical Reports Server (NTRS)
Speyer, J. L.; Rhee, Ihnseok
1991-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
Robust momentum management and attitude control system for the Space Station
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1992-01-01
A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The aspect of controller design for improving the ride quality of aircraft in terms of damping ratio and natural frequency specifications on the short-period dynamics is addressed. The controller is designed to be robust with respect to uncertainties in the real parameters of the control design model, such as uncertainties in the dimensional stability derivatives, imperfections in actuator/sensor locations, and possible variations in flight conditions. The design is based on a new robust root clustering theory developed by the author by extending the nominal root clustering theory of Gutman and Jury to perturbed matrices. The proposed methodology yields an explicit relationship between the parameters of the root clustering region and the uncertainty radius of the parameter space. The current literature on robust stability becomes a special case of this unified theory. The bounds derived on the parameter perturbations for robust root clustering are then used in selecting the robust controller.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yanlong
2017-09-01
Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with periodically arched shape are studied by Monte Carlo Simulation (MCS) and by interval analysis based on the Taylor series. Given random variations of the overall uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of uncertainty in the geometrical parameter on the band gap of the undulated beam is stronger than that of the material parameter. This conclusion is also confirmed by the interval analysis based on the Taylor series. Our methodology offers a strategy to reduce the errors between the designed and practical values of the band gaps by improving the accuracy of specifically selected uncertain design variables of periodic structures.
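At first order, the Taylor-series interval analysis reduces to g(p0) ± Σ_i |∂g/∂p_i| Δp_i, which can be cross-checked against Monte Carlo scatter bounds. Below is a generic stdlib sketch with an illustrative response function g (a stand-in, not the beam's actual dispersion relation):

```python
import random

def taylor_interval(g, p0, dp, h=1e-6):
    """First-order interval bounds g(p0) +/- sum_i |dg/dp_i| * dp_i,
    with derivatives taken by central finite differences."""
    radius = 0.0
    for i, delta in enumerate(dp):
        up, dn = list(p0), list(p0)
        up[i] += h
        dn[i] -= h
        radius += abs((g(up) - g(dn)) / (2.0 * h)) * delta
    return g(p0) - radius, g(p0) + radius

def mcs_interval(g, p0, dp, n=20000, seed=0):
    """Monte Carlo scatter bounds, each parameter uniform in p0 +/- dp."""
    rng = random.Random(seed)
    vals = [g([c + rng.uniform(-d, d) for c, d in zip(p0, dp)])
            for _ in range(n)]
    return min(vals), max(vals)

g = lambda p: p[0] ** 0.5 * p[1]       # illustrative band-gap edge frequency
t_lo, t_hi = taylor_interval(g, [1.0, 2.0], [0.1, 0.1])
m_lo, m_hi = mcs_interval(g, [1.0, 2.0], [0.1, 0.1])
```

For narrow intervals the two estimates agree closely; for wide intervals the curvature neglected at first order makes the Taylor bounds drift from the true range, which is exactly when the MCS cross-check matters.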
An online spatiotemporal prediction model for dengue fever epidemic in Kaohsiung (Taiwan).
Yu, Hwa-Lung; Angulo, José M; Cheng, Ming-Hung; Wu, Jiaping; Christakos, George
2014-05-01
The emergence and re-emergence of disease epidemics is a complex question that may be influenced by diverse factors, including the space-time dynamics of human populations, environmental conditions, and associated uncertainties. This study proposes a stochastic framework to integrate space-time dynamics in the form of a Susceptible-Infected-Recovered (SIR) model, together with uncertain disease observations, into a Bayesian maximum entropy (BME) framework. The resulting model (BME-SIR) can be used to predict space-time disease spread. Specifically, it was applied to obtain a space-time prediction of the dengue fever (DF) epidemic that took place in Kaohsiung City (Taiwan) during 2002. In implementing the model, the SIR parameters were continually updated and information on new cases of infection was incorporated. The results obtained show that the proposed model is robust to user-specified initial values of the unknown model parameters, that is, the transmission and recovery rates. In general, this model provides a good characterization of the spatial diffusion of the DF epidemic, especially in the city districts proximal to the location of the outbreak. Prediction performance may be affected by various factors, such as virus serotypes and human intervention, which can change the space-time dynamics of disease diffusion. The proposed BME-SIR disease prediction model can provide government agencies with a valuable reference for the timely identification, control, and prevention of DF spread in space and time.
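The deterministic core of BME-SIR is the classical SIR system ds/dt = -βsi, di/dt = βsi - γi, dr/dt = γi. A minimal explicit-Euler sketch with illustrative rates follows; the actual model updates β and γ as new case data arrive and wraps these dynamics in the BME spatiotemporal estimator.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One explicit-Euler step of the normalised SIR model
    (s + i + r = 1): new infections beta*s*i, recoveries gamma*i."""
    new_inf = beta * s * i * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

# Illustrative outbreak: transmission 0.4/day, recovery 0.1/day (R0 = 4)
s, i, r = 0.999, 0.001, 0.0
history = [(s, i, r)]
for _ in range(120):
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.1)
    history.append((s, i, r))
peak_infected = max(h[1] for h in history)
```

With these rates the epidemic peaks after a few weeks and burns out, leaving only a small susceptible fraction; in BME-SIR, each district would carry its own such trajectory, conditioned on the uncertain case observations.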
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" of the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that, using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or multiple highly correlated uncertain parameters. The approach is shown not to suffer from the computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
Internationalization for an Uncertain Future: Tensions, Paradoxes, and Possibilities
ERIC Educational Resources Information Center
Stein, Sharon
2017-01-01
As higher education is increasingly called upon to play a central role in addressing the challenges and crises of today's complex, uncertain, and volatile world, internationalization efforts are intensifying. Emphasizing higher education as a space for critically-informed, socially accountable, and open-ended conversations about alternative…
NASA Astrophysics Data System (ADS)
Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina
2016-04-01
The World Climate Research Program's SPARC initiative has a new international activity, "Stratospheric Sulphur and its Role in Climate" (SSiRC), to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC is an intercomparison, ISA-MIP, of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once the emulator is trained on the ensemble, Monte Carlo simulation with the fast Gaussian emulator enables a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emission types and structural model parameters to be compared. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we will give an outline of the PoEMS analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
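A minimal sketch of the emulate-then-sample workflow is given below, with a hand-rolled squared-exponential Gaussian-process interpolant standing in for the full emulation machinery of Lee et al., and a toy two-parameter "forcing" model in place of an ISA simulation; all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(theta):
    # Hypothetical response to two uncertain parameters (stand-in for an
    # interactive stratospheric aerosol simulation).
    so2, height = theta[..., 0], theta[..., 1]
    return np.sin(np.pi * so2) + 0.3 * height**2

# Perturbed-physics ensemble: a small space-filling design in [0, 1]^2.
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = model(X)

# Gaussian-process emulator with a squared-exponential kernel.
def kernel(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) / ell) ** 2
    return np.exp(-0.5 * d2.sum(-1))

K = kernel(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def emulate(Xs):
    return kernel(Xs, X) @ alpha

# Monte Carlo through the cheap emulator gives a pdf of the prediction...
samples = rng.uniform(0.0, 1.0, size=(20000, 2))
pred = emulate(samples)

# ...and a crude first-order sensitivity index per parameter: the variance
# of the conditional mean E[y | theta_i], estimated by binning.
def first_order(i, nbins=20):
    bins = np.digitize(samples[:, i], np.linspace(0, 1, nbins + 1)[1:-1])
    cond_means = np.array([pred[bins == b].mean() for b in range(nbins)])
    return cond_means.var() / pred.var()

S = [first_order(0), first_order(1)]
```

In this toy problem the sine term dominates the output variance, so the first parameter receives by far the larger sensitivity index, which is the kind of attribution the PoEMS analysis performs per model.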
Global sensitivity analysis of multiscale properties of porous materials
NASA Astrophysics Data System (ADS)
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and the effective sorption rate. Our analysis is formulated in terms of a solute diffusing through a fluid-filled pore space while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous-media phenomena that are amenable to homogenization.
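Propagating geometric uncertainty through a homogenization-type map can be illustrated as follows. The Bruggeman-style relation D_eff = D_m * phi^1.5 is a generic textbook stand-in for the paper's actual pore-to-Darcy map, and the uniform porosity distribution is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Uncertain pore geometry: porosity phi treated as a random variable.
phi = rng.uniform(0.2, 0.4, size=100_000)

# Generic homogenization-type map from pore-scale geometry to a
# Darcy-scale effective diffusion coefficient (Bruggeman-style relation,
# used purely as an illustrative stand-in).
D_mol = 1.0e-9             # molecular diffusion coefficient, m^2/s
D_eff = D_mol * phi**1.5   # effective coefficient inherits phi's uncertainty

mean, std = D_eff.mean(), D_eff.std()
cv_in = phi.std() / phi.mean()    # input coefficient of variation
cv_out = std / mean               # output coefficient of variation
```

The nonlinear exponent amplifies the relative uncertainty, so the effective coefficient's coefficient of variation exceeds that of the porosity, which is exactly the kind of input-to-output uncertainty map the probabilistic descriptors summarize.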
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
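The first two steps above (a buckling-load probability density and per-variable sensitivity factors) can be sketched with plain Monte Carlo; the E*t^3 plate-buckling scaling, the scatter values, and the 100 kN calibration below are hypothetical, not the paper's panel model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Naturally-occurring scatter in uncertain variables (illustrative values).
E_f  = rng.normal(230e9, 0.05 * 230e9, n)    # fiber modulus, 5% c.o.v.
t    = rng.normal(2.0e-3, 0.03 * 2.0e-3, n)  # thickness, 3% c.o.v.
load = rng.normal(80e3, 0.10 * 80e3, n)      # applied load, 10% c.o.v.

# Toy first-buckling-load model with plate-buckling scaling P_cr ~ E * t^3,
# calibrated so the nominal buckling load is 100 kN.
c = 100e3 / (230e9 * (2.0e-3) ** 3)
P_cr = c * E_f * t**3

# Failure probability: chance the applied load exceeds the buckling load.
pf = np.mean(P_cr < load)

# Crude probabilistic sensitivity factors: correlation of each uncertain
# variable with the buckling load (thickness dominates via the cube).
sens_E = abs(np.corrcoef(E_f, P_cr)[0, 1])
sens_t = abs(np.corrcoef(t, P_cr)[0, 1])
```

The cubic dependence makes the thickness the dominant sensitivity even though its scatter is smaller, mirroring how sensitivity factors guide which design parameters are worth controlling.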
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violation increases as the reliability level is reduced. Thus, ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
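The ensemble-surrogate reliability idea can be sketched in miniature: bootstrap an ensemble of cheap fits to noisy simulator output, then score a candidate pumping rate by the fraction of ensemble members that keep salinity within the limit. The quadratic salinity curve and all numbers below are hypothetical stand-ins for FEMWATER and genetic programming surrogates:

```python
import numpy as np

rng = np.random.default_rng(4)

# "Truth": salinity at a control location as a function of total pumping
# (stand-in for the expensive flow-and-transport simulator).
def salinity(q):
    return 0.5 + 0.08 * q + 0.01 * q**2

# Training data; the noise stands in for uncertain conductivity/recharge
# realizations.
q_train = rng.uniform(0.0, 10.0, 60)
s_train = salinity(q_train) + rng.normal(0.0, 0.1, 60)

# Ensemble of surrogates: quadratic fits on bootstrap resamples.
n_models = 50
ensemble = []
for _ in range(n_models):
    idx = rng.integers(0, 60, 60)
    ensemble.append(np.polyfit(q_train[idx], s_train[idx], 2))

def reliability(q, s_max=2.0):
    # Fraction of ensemble members predicting salinity within the limit.
    preds = np.array([np.polyval(coef, q) for coef in ensemble])
    return float(np.mean(preds <= s_max))
```

A small pumping rate is judged reliable by essentially all ensemble members, while an aggressive one fails the salinity constraint in most of them; a multi-objective optimizer would then trade pumping volume against this reliability score.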
Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near-optimal RBN weights is designed such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.
Switching State-Feedback LPV Control with Uncertain Scheduling Parameters
NASA Technical Reports Server (NTRS)
He, Tianyi; Al-Jiboory, Ali Khudhair; Swei, Sean Shan-Min; Zhu, Guoming G.
2017-01-01
This paper presents a new method to design Robust Switching State-Feedback Gain-Scheduling (RSSFGS) controllers for Linear Parameter Varying (LPV) systems with uncertain scheduling parameters. The domain of the scheduling parameters is divided into several overlapping subregions to undergo hysteresis switching among a family of simultaneously designed LPV controllers over the corresponding subregion with guaranteed H-infinity performance. The synthesis conditions are given in terms of Parameterized Linear Matrix Inequalities that guarantee both stability and performance in each subregion and on the associated switching surfaces. Switching stability is ensured by a descent parameter-dependent Lyapunov function on the switching surfaces. By solving the optimization problem, an RSSFGS controller can be obtained for each subregion. A numerical example is given to illustrate the effectiveness of the proposed approach over non-switching controllers.
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
NASA Instep/mdmsc Jitter Suppression Experiment (JITTER)
NASA Technical Reports Server (NTRS)
White, Edward V.
1992-01-01
The objectives are the following: (1) to develop and demonstrate in-space performance of both passive and active damping systems that suppress micro-amplitude vibration on an actual application structure and operate despite uncertain dynamics and uncertain disturbance characteristics; and (2) to correlate ground and in-space performance - the performance metric is vibration attenuation. The goals are to achieve vibration suppression equivalent to 5 percent passive damping in selected modes and 15 percent active damping in selected modes. Various aspects of this experiment are presented in viewgraph form.
Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables
NASA Astrophysics Data System (ADS)
Bubnicki, Z.
2006-06-01
The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem consisting in finding the decision maximizing the certainty index that the requirement given by a user is satisfied. The main part is devoted to the description of the optimization problem with the given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.
Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models
NASA Astrophysics Data System (ADS)
Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas
2017-02-01
A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model, and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level.
We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally, locally and un-identifiable model classes, and then to model updating of a two degree-of-freedom nonlinear structure with Duffing nonlinearities in its interstory force-deflection relationship.
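The nested data-approximating regions can be illustrated with plain ABC rejection on a toy problem: a shrinking tolerance sequence mimics the nested regions, while the real ABC-SubSim instead uses Subset Simulation with component-wise Metropolis moves. The Gaussian "model" and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed output from a toy system: a noisy constant, a minimal stand-in
# for the hierarchical state-space model response.
theta_true = 1.5
y_obs = theta_true + rng.normal(0.0, 0.2, 50)

def discrepancy(theta):
    # Simulate the model at theta and compare summary statistics.
    y_sim = theta + rng.normal(0.0, 0.2, 50)
    return abs(y_sim.mean() - y_obs.mean())

# ABC rejection with a decreasing sequence of tolerances: each level keeps
# only parameters whose simulated output falls in a tighter
# data-approximating region, echoing ABC-SubSim's nested regions.
theta = rng.uniform(0.0, 3.0, 50_000)
for eps in [0.5, 0.2, 0.05]:
    d = np.array([discrepancy(t) for t in theta])
    theta = theta[d <= eps]

posterior_mean = theta.mean()
```

As the tolerance shrinks, the surviving samples concentrate around the true parameter; rejection becomes hopelessly wasteful at small tolerances, which is precisely the inefficiency Subset Simulation is designed to avoid.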
Charting the parameter space of the global 21-cm signal
NASA Astrophysics Data System (ADS)
Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan
2017-12-01
The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.
Finite-time master-slave synchronization and parameter identification for uncertain Lurie systems.
Wang, Tianbo; Zhao, Shouwei; Zhou, Wuneng; Yu, Weiqin
2014-07-01
This paper investigates the finite-time master-slave synchronization and parameter identification problem for uncertain Lurie systems based on finite-time stability theory and the adaptive control method. Finite-time master-slave synchronization means that the state of a slave system follows that of a master system in finite time, which is more practical in applications than asymptotic synchronization. The uncertainties include unknown parameters and noise disturbances. An adaptive controller and update laws are constructed which ensure that synchronization and parameter identification are achieved in finite time. Finally, two numerical examples are given to show the effectiveness of the proposed method.
Synchronization between uncertain nonidentical networks with quantum chaotic behavior
NASA Astrophysics Data System (ADS)
Li, Wenlin; Li, Chong; Song, Heshan
2016-11-01
Synchronization between uncertain nonidentical networks with quantum chaotic behavior is investigated. The identification laws for unknown parameters in the state equations of the network nodes, and the adaptive laws for configuration matrix elements and outer coupling strengths, are determined based on the Lyapunov theorem. The conditions for realizing synchronization between uncertain nonidentical networks are discussed and obtained. Further, the Jaynes-Cummings model from physics is taken as the node dynamics of the two networks, and simulation results show that the synchronization performance between the networks is very stable.
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1985-01-01
Robustness properties are investigated for two types of controllers for large flexible space structures, which use collocated sensors and actuators. The first type is an attitude controller which uses negative definite feedback of measured attitude and rate, while the second type is a damping enhancement controller which uses only velocity (rate) feedback. It is proved that collocated attitude controllers preserve closed-loop global asymptotic stability when linear actuator/sensor dynamics satisfying certain phase conditions are present, or when monotonically increasing nonlinearities are present. For velocity feedback controllers, global asymptotic stability is proved under much weaker conditions. In particular, they have a 90-degree phase margin and can tolerate nonlinearities belonging to the (0, infinity) sector in the actuator/sensor characteristics. The results significantly enhance the viability of both types of collocated controllers, especially when the available information about the large space structure (LSS) parameters is inadequate or inaccurate.
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several (N) uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
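The core step of the technique, finding the optimum spatial shift between two field calculations, can be sketched with a lag scan over a normalized cross-correlation. The synthetic interference pattern below is a hypothetical stand-in; no propagation model is run:

```python
import numpy as np

# Two "field calculations" at different values of an uncertain parameter:
# an acoustic-style interference pattern and a spatially shifted copy.
r = np.linspace(1.0, 8.0, 2000)       # range, km

def field(rr):
    return np.abs(np.sin(2 * np.pi * rr) + 0.5 * np.sin(5 * np.pi * rr))

shift_true = 0.15
field_a = field(r)
field_b = field(r - shift_true)

# Optimum shift: scan integer lags and maximize the centered,
# length-normalized overlap correlation.
n = len(r)
dr = r[1] - r[0]

def score(lag):
    if lag >= 0:
        a, b = field_a[:n - lag], field_b[lag:]
    else:
        a, b = field_a[-lag:], field_b[:n + lag]
    return np.dot(a - a.mean(), b - b.mean()) / len(a)

lags = np.arange(-150, 151)
shift_est = dr * lags[np.argmax([score(L) for L in lags])]
```

Once such shifts are known per parameter, the paper's method maps each input parameter distribution through the shifted field to build PDF(A) without further expensive field calculations; this sketch covers only the shift-estimation step.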
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
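The stochastic collocation idea can be illustrated for a single uncertain parameter using Gauss-Hermite quadrature: statistical moments of a nonlinear response are computed from a handful of deterministic model evaluations. The exponential "energy" map below is a hypothetical stand-in for the radiation-diffusion model:

```python
import numpy as np

# Deposited-energy surrogate: a nonlinear map of one uncertain parameter
# (a stand-in for the model's response to contrast-agent concentration).
def energy(c):
    return np.exp(1.5 * c)   # nonlinearity magnifies input variability

mu, sigma = 1.0, 0.1         # uncertain concentration ~ N(mu, sigma^2)

# Stochastic collocation: Gauss-Hermite quadrature in the standard normal
# variable xi, with c = mu + sigma * sqrt(2) * xi.
nodes, weights = np.polynomial.hermite.hermgauss(7)
c_nodes = mu + sigma * np.sqrt(2.0) * nodes
vals = energy(c_nodes)
mean_sc = np.sum(weights * vals) / np.sqrt(np.pi)
var_sc = np.sum(weights * vals**2) / np.sqrt(np.pi) - mean_sc**2

# Exact lognormal moments for comparison.
mean_exact = np.exp(1.5 * mu + 0.5 * (1.5 * sigma) ** 2)
cv_in = sigma / mu
cv_out = np.sqrt(var_sc) / mean_sc
```

Seven model evaluations reproduce the exact mean to near machine precision, and the output coefficient of variation exceeds the input one, illustrating how a nonlinear forward map magnifies parametric uncertainty; for three uncertain parameters the tensorized node count grows quickly, which is where the GSA-based parameter reduction pays off.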
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences, of each of the selected uncertain parameters both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
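The ESMDA update (Emerick and Reynolds, 2012) can be sketched for a one-parameter linear toy problem: the same observation is assimilated several times with inflated error variance, so the product of the inflated updates equals one full Bayesian update. The linear forward map and all numbers are hypothetical stand-ins for the hydrological model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Forward model: observation is a linear function of one uncertain
# parameter (a stand-in for the distributed hydrological model).
def forward(k):
    return 2.0 * k

k_true = 3.0
d_obs = forward(k_true) + 0.1   # single noisy measurement
sigma_d = 0.1                   # measurement error standard deviation

# Prior ensemble around a poor first guess.
n_ens = 50
k_ens = rng.normal(1.0, 1.0, n_ens)

# ES-MDA: assimilate the same data n_a times with error variance inflated
# by alpha, where sum(1/alpha) = 1 over the assimilation steps.
n_a = 4
for alpha in [n_a] * n_a:
    d_ens = forward(k_ens)
    d_pert = d_obs + np.sqrt(alpha) * sigma_d * rng.normal(0.0, 1.0, n_ens)
    c_kd = np.cov(k_ens, d_ens)[0, 1]          # parameter-data covariance
    c_dd = d_ens.var(ddof=1)                   # data covariance
    gain = c_kd / (c_dd + alpha * sigma_d**2)  # Kalman-type gain
    k_ens = k_ens + gain * (d_pert - d_ens)

posterior_mean = k_ens.mean()
```

For this linear-Gaussian case the ensemble mean lands near the exact posterior mean (about 3.05) and the ensemble spread collapses well below the prior spread, mirroring the uncertainty reduction reported for the sensitive parameters.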
Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol
2008-01-01
This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.
Parameter identification for structural dynamics based on interval analysis algorithm
NASA Astrophysics Data System (ADS)
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method for structural dynamics using an interval analysis algorithm is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the center estimates of the parameters but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated against three identification criteria.
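The set-membership idea, obtaining a center estimate together with guaranteed error bounds from bounded (non-probabilistic) noise, can be illustrated on a scalar regression; the linear model and bound values below are hypothetical, far simpler than the paper's recursive structural-dynamics formulation:

```python
import numpy as np

rng = np.random.default_rng(9)

# Data: y = a*x + e with unknown a and bounded (not statistical) noise.
a_true, eps = 2.5, 0.2
x = rng.uniform(0.5, 2.0, 30)
e = rng.uniform(-eps, eps, 30)
y = a_true * x + e

# Set-membership identification: each sample constrains a to an interval
# [(y_i - eps)/x_i, (y_i + eps)/x_i]; the feasible set is the intersection,
# yielding a center estimate and a guaranteed error bound.
lo = np.max((y - eps) / x)
hi = np.min((y + eps) / x)
a_center = 0.5 * (lo + hi)
a_halfwidth = 0.5 * (hi - lo)
```

By construction the true parameter always lies inside [lo, hi], and each additional sample can only tighten the interval, which is the sense in which the method delivers both center estimates and error bounds without any probabilistic assumption.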
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
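The squared-bias-plus-model-variance decomposition of MSEP_uncertain(X) can be checked numerically; the synthetic "hindcast" data and ensemble of model variants below are hypothetical stand-ins for a crop-model ensemble:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hindcast truths at n_sites situations, and an ensemble of model
# variants (different structures/parameter vectors).
n_sites, n_models = 200, 12
truth = rng.normal(10.0, 2.0, n_sites)
# Each variant = truth + its own bias + situation-specific error.
bias = rng.normal(0.5, 0.3, n_models)
preds = truth[None, :] + bias[:, None] + rng.normal(0, 1.0, (n_models, n_sites))

# MSEP_fixed: one particular model with fixed structure and parameters.
msep_fixed = np.mean((preds[0] - truth) ** 2)

# MSEP_uncertain(X): averaged over the distribution of model variants;
# per situation it splits exactly into squared bias of the ensemble mean
# plus the between-model variance (ddof=0 makes the identity exact).
ens_mean = preds.mean(axis=0)
sq_bias = np.mean((ens_mean - truth) ** 2)       # squared-bias term
model_var = preds.var(axis=0, ddof=0).mean()     # model-variance term
msep_uncertain = sq_bias + model_var
```

The two terms recover the total mean squared error over the ensemble exactly, which is the decomposition the random-effects ANOVA estimates from hindcasts and simulation experiments.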
NASA Astrophysics Data System (ADS)
Wang, Jian; Zhang, Xiangming; Wang, Ping; Wang, Xiang; Farris, Alton B.; Wang, Ya
2016-06-01
Unlike terrestrial ionizing radiation, space radiation, especially galactic cosmic rays (GCR), contains high energy charged (HZE) particles with high linear energy transfer (LET). Due to a lack of epidemiologic data for high-LET radiation exposure, it is highly uncertain how high the carcinogenesis risk is for astronauts following exposure to space radiation during space missions. Therefore, using mouse models is necessary to evaluate the risk of space radiation-induced tumorigenesis; however, which mouse model is better for these studies remains uncertain. Since lung tumorigenesis is the leading cause of cancer death among both men and women, and low-LET radiation exposure increases human lung carcinogenesis, evaluating space radiation-induced lung tumorigenesis is critical to enable safe Mars missions. Here, by comparing lung tumorigenesis obtained from different mouse strains, as well as miR-21 in lung tissue/tumors and serum, we believe that wild type mice with a low spontaneous tumorigenesis background are ideal for evaluating the risk of space radiation-induced lung tumorigenesis, and circulating miR-21 from such mice model might be used as a biomarker for predicting the risk.
Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles
NASA Astrophysics Data System (ADS)
Wilcox, Zachary Donald
The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model, despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time-varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems, and associated control tools have been applied, such as gain scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFTs), and μ-analysis. However, as flight environments and trajectories become more demanding, traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and the flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature-dependent, parameter-varying state-space representation with added disturbances. The model includes an uncertain parameter-varying state matrix, an uncertain parameter-varying non-square (column-deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference-model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles, and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature-dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examining the interplay between structural/thermal protection design and control development, and has application to future HSV design and control implementation.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions are used and the results are validated against direct Monte Carlo simulations.
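The first-order cut-HDMR idea can be sketched on a toy two-variable function (an illustrative sketch, not the paper's fuzzy finite element implementation): the model is approximated by its value at a cut point plus one-variable corrections, so the cost grows linearly in the number of variables rather than exponentially.

```python
import numpy as np

def f(x):  # toy model: additive terms plus a weak interaction
    return x[0] ** 2 + 2 * x[1] + 0.1 * x[0] * x[1]

c = np.array([0.5, 0.5])   # cut point (e.g. midpoints of alpha-cut intervals)
f0 = f(c)

def hdmr1(x):
    """First-order cut-HDMR: f0 + sum_i [ f(x_i, c_{-i}) - f0 ].
    Needs only n one-dimensional sweeps instead of a full n-D grid."""
    total = f0
    for i in range(len(x)):
        xi = c.copy()
        xi[i] = x[i]       # vary one coordinate, hold the rest at the cut point
        total += f(xi) - f0
    return total

x = np.array([0.9, 0.2])
print(abs(hdmr1(x) - f(x)))  # small residual: only the weak interaction is missed
```

The residual is exactly the interaction term the first-order truncation drops, which is why the approach works well when low-order correlations dominate.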
NASA Technical Reports Server (NTRS)
Yunis, Isam S.; Carney, Kelly S.
1993-01-01
A new aerospace application of structural reliability techniques is presented, where the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom Photovoltaic Arrays. When the space shuttle berths with Space Station Freedom it must brake and maneuver towards the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented which assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis for this class of problem. The results show that the probability of failure of the current array mast design, during its 15-year life, is minute.
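The load-versus-strength reliability calculation can be sketched with a minimal Monte Carlo simulation. The distribution shapes and numbers below are hypothetical illustrations, not the Freedom array data: failure occurs whenever a sampled load exceeds a sampled strength.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical distributions: plume-impingement load effect and mast strength
load = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # skewed load distribution
strength = rng.normal(loc=3.0, scale=0.3, size=n)   # resistance distribution

p_fail = np.mean(load > strength)                    # P(load exceeds strength)
print(p_fail)                                        # a minute probability
```

This is the simplest structural-reliability estimator; the deterministic alternative would compare a single worst-case load against a single allowable, losing the probability information entirely.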
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters may vary with the operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, the air excess ratio and the stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
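How particle swarm optimization locates an operating set point can be sketched with a generic PSO on an invented efficiency surrogate. This is plain PSO, not the paper's two-space variant, and the objective, bounds and peak location are all hypothetical:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100, seed=3):
    """Minimal particle swarm optimizer. bounds = (lo, hi) arrays."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, bounds)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles inside the bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Hypothetical surrogate: efficiency peaks at some fuel-utilization / air-ratio pair
eff = lambda p: -((p[0] - 0.85) ** 2 + (p[1] - 2.0) ** 2)
opt, val = pso_minimize(lambda p: -eff(p),
                        (np.array([0.5, 1.0]), np.array([1.0, 4.0])))
print(opt)  # converges near the surrogate's optimum
```

The returned optimum would then serve as the set point handed to the downstream controllers.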
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on the interval analysis method. First, intervals are used to quantify the friction coefficient in the boundary condition. Then the inverse problem of material parameter identification under an uncertain friction coefficient is transformed into two types of deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of the material parameters can be easily acquired without requiring a large number of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
Robust adaptive uniform exact tracking control for uncertain Euler-Lagrange system
NASA Astrophysics Data System (ADS)
Yang, Yana; Hua, Changchun; Li, Junpeng; Guan, Xinping
2017-12-01
This paper offers a solution to the robust adaptive uniform exact tracking control problem for uncertain nonlinear Euler-Lagrange (EL) systems. An adaptive finite-time tracking control algorithm is designed by proposing a novel nonsingular integral terminal sliding-mode surface. Moreover, a new adaptive parameter tuning law is developed by making good use of the system tracking errors and the adaptive parameter estimation errors. Thus, both trajectory tracking and parameter estimation can be achieved simultaneously within a guaranteed time that can be adjusted arbitrarily based on practical demands. Additionally, the control result for the EL system proposed in this paper can be extended easily to high-order nonlinear systems. Finally, a test-bed 2-DOF robot arm is set up to demonstrate the performance of the new control algorithm.
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has knowledge of all reference model states. The mismatch is estimated online for each subsystem, and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.
On the robust optimization to the uncertain vaccination strategy problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id
2014-02-21
In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, we seek the smallest vaccination coverage that still confines the epidemic to the small number of people already infected. In this paper, we discuss the vaccination strategy problem of minimizing vaccination coverage when the basic reproduction number is assumed to be an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the obtained result can be achieved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
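For a single linear constraint, the ellipsoidal robust counterpart has a closed form: a'x ≤ b for all a = ā + Pu with ||u||₂ ≤ 1 is equivalent to ā'x + ||Pᵀx||₂ ≤ b. A small numeric sketch (all data hypothetical) checks this deterministic reformulation:

```python
import numpy as np

def robust_feasible(x, a_bar, P, b):
    """Robust counterpart of a'x <= b under ellipsoidal uncertainty
    a = a_bar + P u, ||u||_2 <= 1: worst case is a_bar'x + ||P^T x||_2 <= b."""
    return a_bar @ x + np.linalg.norm(P.T @ x) <= b

a_bar = np.array([1.0, 1.0])
P = 0.1 * np.eye(2)            # semi-axes of the uncertainty ellipsoid
b = 2.0

print(robust_feasible(np.array([0.9, 0.9]), a_bar, P, b))  # feasible for all a
print(robust_feasible(np.array([1.0, 1.0]), a_bar, P, b))  # nominal-feasible only
```

The second point satisfies the nominal constraint with equality but fails robustly, which is exactly the conservatism the ellipsoidal model buys in exchange for tractability.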
Biehler, J; Wall, W A
2018-02-01
If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model.
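The qualitative difference between the two probabilistic models can be shown with a crude 1D surrogate, where "stress" scales as the inverse of thickness and only the field model lets the peak occur at the locally thinnest spot. All statistics are hypothetical, and the spatial correlation here is a crude neighbor average rather than a proper random field:

```python
import numpy as np

rng = np.random.default_rng(4)
n_mc, n_elem = 20_000, 50
mu, sd = 2.0, 0.25                 # hypothetical wall-thickness mean and sd (mm)

# Random variable model: one thickness value for the whole wall per realization
t_rv = rng.normal(mu, sd, n_mc)
peak_rv = 1.0 / t_rv               # stress surrogate ~ 1/t, uniform wall

# Random field model: spatially varying thickness; peak stress at the thinnest element
z = rng.normal(size=(n_mc, n_elem))
# crude correlated field: averaging neighbours induces spatial correlation,
# scaled so the marginal variance stays sd**2
field = mu + sd * (z + np.roll(z, 1, axis=1) + np.roll(z, -1, axis=1)) / np.sqrt(3)
peak_field = (1.0 / field).max(axis=1)

print(peak_field.mean() > peak_rv.mean())  # field model predicts higher peak stress
```

The field model systematically shifts the peak-stress distribution upward because every realization contains a weakest spot, which a single random variable cannot represent.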
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system, in a feedback control loop, is considered globally uncertain. Parametric uncertainty is modeled in interval form. The system parameter is undergoing degradation (the prognostic candidate) and its degradation model is assumed to be known a priori. The detection of degradation commencement is done in a passive manner involving interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs). The latter forms an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the estimation of the state of health (state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters is achieved in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties, which arise out of noisy measurements, the parametric degradation process, environmental conditions etc., are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
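A bootstrap particle filter for joint state-parameter estimation of a degrading quantity can be sketched as follows. This is a generic PF with artificial parameter evolution, not the paper's BG-based observation model, and all noise levels and the linear degradation model are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_p = 60, 2000
theta_true = 0.05                                  # hidden degradation rate
x_true = np.cumsum(np.full(T, theta_true)) + rng.normal(0, 0.01, T)
y = x_true + rng.normal(0, 0.05, T)                # noisy health observations

# Particles carry both the health state and the degradation-rate parameter
x = rng.normal(0.0, 0.1, n_p)
theta = rng.uniform(0.0, 0.2, n_p)
for t in range(T):
    theta = theta + rng.normal(0, 0.002, n_p)      # artificial parameter dynamics
    x = x + theta + rng.normal(0, 0.01, n_p)       # propagate the degradation model
    w = np.exp(-0.5 * ((y[t] - x) / 0.05) ** 2)    # Gaussian likelihood weights
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)           # multinomial resampling
    x, theta = x[idx], theta[idx]

print(theta.mean())  # posterior mean should be close to the true rate
```

Once the rate posterior has concentrated, remaining useful life follows by extrapolating each particle's degradation model to a failure threshold, which yields the confidence bounds mentioned above.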
Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.
Herrero, David; Martínez, Humberto
2011-01-01
This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated from radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottom-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs of future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
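A mass-based CER of the common power-law form, cost = a·mass^b, can be fitted by regression in log space. The mission data below are invented for illustration; a real CER would be fitted to an actual cost-history database:

```python
import numpy as np

# Hypothetical historical missions: (dry mass in kg, cost in $M)
mass = np.array([300, 650, 1200, 2500, 4000])
cost = np.array([150, 260, 410, 700, 980])

# Fit the power law cost = a * mass**b by linear regression in log-log space
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

# Estimate a new mission and a rough multiplicative uncertainty band
new_mass = 1800.0
estimate = a * new_mass ** b
resid = np.log(cost) - (log_a + b * np.log(mass))
spread = np.exp(1.96 * resid.std())          # ~95% band factor from log residuals
print(estimate, estimate / spread, estimate * spread)
```

Reporting the band alongside the point estimate is what turns the CER into a "cost range and most probable cost" rather than a single number.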
Efficient Portfolios of the Energy Technologies
NASA Astrophysics Data System (ADS)
Nikonov, Oleg I.; Medvedeva, Marina A.
2011-09-01
The goal of the research is to apply the methods of portfolio theory to a set of technologies instead of a set of securities on a stock market (as is the case in the original model). Assets on the stock market are objects that have risk and return, parameters that depend on uncertain factors and are thus themselves uncertain. The returns from the use of technologies also depend on uncertain factors, and thus each technology carries a certain amount of risk. The simultaneous use of several technologies can diversify the risks associated with them in the same way that diversification works on the stock market.
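The diversification argument carries over directly to the Markowitz minimum-variance portfolio. The technology returns and covariances below are hypothetical; the closed-form fully-invested minimum-variance weights are w = C⁻¹1 / (1ᵀC⁻¹1):

```python
import numpy as np

# Hypothetical expected returns and covariance of three energy technologies
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])

# Minimum-variance fully-invested portfolio: w = C^-1 1 / (1' C^-1 1)
w = np.linalg.solve(cov, np.ones(3))
w /= w.sum()

port_var = w @ cov @ w
print(port_var < cov.diagonal().min())  # below any single technology's risk
```

The portfolio variance falls below the risk of even the safest single technology, which is the diversification effect the abstract describes.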
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the sensitivity of the influence of the probability distribution function shape (empirical distribution functions and fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters, are studied. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter is described here using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results, as well as only in the region of the upper percentiles of the model parameters.
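Second-order (nested) Monte Carlo separates the two uncertainty types by sampling the distribution's moments in an outer loop and the aleatory variability in an inner loop. The sketch below uses invented moment ranges for a lognormal parameter and reports the resulting spread of an upper percentile:

```python
import numpy as np

rng = np.random.default_rng(6)
n_outer, n_inner = 500, 1000

# Epistemic uncertainty: the lognormal's moments are themselves uncertain
means = rng.normal(0.0, 0.3, n_outer)      # uncertain log-mean
sigmas = rng.uniform(0.4, 0.8, n_outer)    # uncertain log-sd

# Inner loop: aleatory variability of the radioecological parameter
p95_per_outer = np.array([
    np.percentile(rng.lognormal(m, s, n_inner), 95)
    for m, s in zip(means, sigmas)
])

# Second-order MC yields a *distribution* of percentiles, not a single value
print(p95_per_outer.min(), p95_per_outer.max())
```

A first-order analysis would collapse this into one 95th percentile; the wide spread of the outer-loop percentiles is the larger solution space the abstract attributes to epistemic uncertainty.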
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on some parameters which are measurable in real time. The case of inexact parameter measurements is considered, which is close to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and uncertainty induced by the inexactly measured parameters. The error dynamics and the original system constitute an uncertain system due to inconsistencies between the real and measured values of the parameters. The robust estimation of the system states and the faults is then achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
Impedance learning for robotic contact tasks using natural actor-critic algorithm.
Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul
2010-04-01
Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a learning strategy of motor skill for robotic contact tasks based on a human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory and reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-square filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of the contact tasks in uncertain conditions of the environment.
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the first group is retained while the other is eliminated for a given scientific study. However, these approaches ignore the disappearance of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In future work, it will be necessary to explore alternative SA methods that emphasize parameter interactions.
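The interaction effect the authors point to can be seen in a crude variance-based SA sketch: a parameter with a near-zero first-order index can still matter through interactions, so eliminating it changes the remaining parameters' total effects. The toy model and binning-based estimator below are illustrative, not Sobol'/FAST:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

def model(x1, x2, x3):
    return x1 + 0.5 * x2 + x1 * x3   # x3 acts only through its interaction with x1

X = rng.uniform(-1, 1, (n, 3))
y = model(X[:, 0], X[:, 1], X[:, 2])
var_y = y.var()

def first_order(i, bins=50):
    """Crude first-order index S_i = Var(E[y|x_i]) / Var(y) via binning."""
    edges = np.linspace(-1, 1, bins + 1)
    idx = np.digitize(X[:, i], edges[1:-1])
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.average((cond_means - y.mean()) ** 2, weights=counts) / var_y

s = [first_order(i) for i in range(3)]
print(s)  # x3's first-order index is near zero despite its interaction role
```

A first-order screen would discard x3, yet fixing x3 removes the x1·x3 variance contribution entirely, which is exactly the pitfall motivating DGSAM.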
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. By taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe are the same for the three analogy methods, while the standard deviations differ. The distributions of the standard deviation differ greatly at different radial coordinate locations, and the larger standard deviations occur mainly in the phase-change area. The temperatures computed with the random variable and stochastic process methods differ greatly from the measured data, while those computed with the random field method agree well with the measurements. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
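The simplest of the three treatments, the random-variable case, can be illustrated with a minimal Monte Carlo sketch: propagate an uncertain thermal diffusivity through a 1-D semi-infinite conduction solution and collect the mean and standard deviation of the temperature. The analytic profile and all parameter values are hypothetical stand-ins for the paper's freezing-pipe model.

```python
import numpy as np
from math import erf, sqrt

def freezing_temp(x, t, alpha, T0=5.0, Ts=-30.0):
    """1-D semi-infinite conduction: surface held at Ts, soil initially at T0."""
    return Ts + (T0 - Ts) * erf(x / (2.0 * sqrt(alpha * t)))

def mc_thermal(x=0.5, t=30*86400.0, n=20000, seed=0):
    """Propagate an uncertain (random-variable) diffusivity by Monte Carlo."""
    rng = np.random.default_rng(seed)
    alpha = np.clip(rng.normal(7e-7, 1e-7, n), 1e-8, None)  # m^2/s, hypothetical
    T = np.array([freezing_temp(x, t, a) for a in alpha])
    return T.mean(), T.std()

mean_T, std_T = mc_thermal()
```

The stochastic-process and random-field treatments from the paper would replace the scalar `alpha` draw with a correlated draw over time or space; the MC loop itself is unchanged.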
NASA Astrophysics Data System (ADS)
Ataei-Esfahani, Armin
In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider the case of robust aircraft control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the application of the SOS method, one can pose a stability analysis or control design problem as a convex optimization problem, which can be solved efficiently using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method provides a reliable and fast approach to stability analysis and control design for low-order systems defined over the space of relatively low-degree polynomials. However, the SOS method is not well suited for control problems involving uncertain systems, especially those with a relatively large number of uncertainties or a non-affine uncertainty structure. To avoid the increased complexity of SOS problems for uncertain systems, we present an algorithm that transforms an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which guarantees the feasibility of a given solution candidate with an a priori fixed probability of violation and a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties.
The first approach is based on a combination of PEA and SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches for parameter-dependent Lyapunov functions. The control design problem is investigated through a case study of a hypersonic aircraft model with parametric uncertainties. Through time-scale decomposition and a series of function approximations, the complexity of the aircraft model is reduced to fall within the capability of SDP solvers. The control design problem is then formulated as a convex problem using the dual of the Lyapunov theorem. A nonlinear robust controller is searched using the combined PEA/SOS method. The response of the uncertain aircraft model is evaluated for two sets of pilot commands. As the simulation results show, the aircraft remains stable under up to 50% uncertainty in aerodynamic coefficients and can follow the pilot commands.
Development and Evaluation of Fault-Tolerant Flight Control Systems
NASA Technical Reports Server (NTRS)
Song, Yong D.; Gupta, Kajal (Technical Monitor)
2004-01-01
The research is concerned with developing a new approach to enhancing the fault tolerance of flight control systems. The original motivation for fault-tolerant control comes from the need for safe operation of control elements (e.g., actuators) in the event of hardware failures in high-reliability systems. One such example is a modern space vehicle subject to actuator/sensor impairments. A major task in flight control is to revise the control policy to balance impairment detectability and achieve sufficient robustness. This involves careful selection of the types and parameters of the controllers and the impairment-detecting filters used. It also involves a decision, upon the identification of some failures, on whether and how a control reconfiguration should take place in order to maintain a certain system performance level. In this project, a new flight dynamic model under uncertain flight conditions is considered, in which the effects of both ramp and jump faults are reflected. Stabilization algorithms based on neural networks and adaptive methods are derived. The control algorithms are shown to be effective in dealing with uncertain dynamics due to external disturbances and unpredictable faults. The overall strategy is easy to set up, and the computation involved is much less than in other strategies. Computer simulation software was developed, and a series of simulation studies has been conducted under varying flight conditions.
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Structural reliability methods: Code development status
NASA Technical Reports Server (NTRS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-01-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been used to address these problems, notably the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations in the Bag-of-Words representation by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we develop a parameter learning algorithm for the factor graph model based on gradient descent and the Loopy Sum-Product algorithm. Experimental results on the Graz 02 dataset show that the recognition performance of our method, in precision and recall, is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
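The split between parametric and inherent variability can be illustrated on the birth-death model with a plain between/within ANOVA decomposition: for each sampled rate, run several SSA replicates, then compare the variance explained by the rate to the total variance. This simple nested estimator stands in for the paper's Sobol/random-time-change machinery; the rate range, horizon, and sample sizes are our assumptions.

```python
import numpy as np

def birth_death(rate, seed, t_end=5.0, death=1.0, x0=10):
    """Gillespie SSA for 0 -> X (rate) and X -> 0 (death * X)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a_birth, a_death = rate, death * x
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)
        if t > t_end:
            return x
        x += 1 if rng.random() < a_birth / a_total else -1

def variance_split(m=100, k=30, seed=1):
    """Fraction of output variance explained by the uncertain birth rate."""
    rng = np.random.default_rng(seed)
    rates = rng.uniform(5.0, 15.0, m)       # uncertain kinetic parameter
    X = np.array([[birth_death(r, rng.integers(2**31)) for _ in range(k)]
                  for r in rates], dtype=float)
    return X.mean(axis=1).var() / X.var()   # between-group / total variance

frac = variance_split()
```

Near stationarity the conditional mean is rate/death and the conditional variance is roughly Poisson, so with rates uniform on [5, 15] the parametric share should come out near 100/12 divided by (100/12 + 10), i.e. roughly 0.45.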
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-23
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
NASA Astrophysics Data System (ADS)
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-01
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of the model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
Adaptive identifier for uncertain complex nonlinear systems based on continuous neural networks.
Alfaro-Ponce, Mariel; Cruz, Amadeo Argüelles; Chairez, Isaac
2014-03-01
This paper presents the design of a complex-valued differential neural network identifier for uncertain nonlinear systems defined in the complex domain. This design includes the construction of an adaptive algorithm to adjust the parameters included in the identifier. The algorithm is obtained based on a special class of controlled Lyapunov functions. The quality of the identification process is characterized using the practical stability framework. Indeed, the region where the identification error converges is derived by the same Lyapunov method. This zone is defined by the power of uncertainties and perturbations affecting the complex-valued uncertain dynamics. Moreover, this convergence zone is reduced to its lowest possible value using ideas related to the so-called ellipsoid methodology. Two simple but informative numerical examples are developed to show how the identifier proposed in this paper can be used to approximate uncertain nonlinear systems valued in the complex domain.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-entropy probability model of the prediction error plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in updating the structural model, especially in the presence of modeling errors. To date, three ways of handling prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples from the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
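Strategy 3, treating the prediction-error variance as an uncertain parameter alongside the model parameters, can be sketched with a scalar random-walk Metropolis sampler (rather than the paper's Transitional MCMC) on a toy model where the observable is the square root of an uncertain stiffness. The model, priors, and step sizes below are our assumptions.

```python
import numpy as np

def metropolis(y, n_iter=20000, seed=0):
    """Joint posterior over stiffness k and prediction-error variance s2.
    Assumed model: y_i = sqrt(k) + e_i, e_i ~ N(0, s2), Jeffreys prior on s2."""
    rng = np.random.default_rng(seed)
    def logpost(k, s2):
        if k <= 0.0 or s2 <= 0.0:
            return -np.inf
        r = y - np.sqrt(k)
        return -0.5*len(y)*np.log(s2) - 0.5*np.sum(r**2)/s2 - np.log(s2)
    k, s2 = 1.0, 1.0
    lp = logpost(k, s2)
    out = np.empty((n_iter, 2))
    for i in range(n_iter):
        k_p = k + 0.1*rng.normal()
        s2_p = s2*np.exp(0.3*rng.normal())      # log-normal proposal on s2
        lp_p = logpost(k_p, s2_p)
        # Hastings correction for the multiplicative s2 proposal
        if np.log(rng.random()) < lp_p - lp + np.log(s2_p) - np.log(s2):
            k, s2, lp = k_p, s2_p, lp_p
        out[i] = k, s2
    return out[n_iter//2:]                      # discard burn-in

rng = np.random.default_rng(1)
y = 2.0 + 0.1*rng.normal(size=50)               # data from sqrt(k_true) = 2
post = metropolis(y)
```

The sampled `s2` concentrates near the true noise variance, which is the mechanism by which strategy 3 lets the data, rather than the analyst, set the trade-off between data-fit and robustness.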
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extremes, and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, the six with the greatest influence on global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter changes. The influence of individual parameters does not depend on the sampling approach or on the concomitant parameters selected. Generally, the GLM is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in mid-latitude continental regions, but very little in tropical continental regions.
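The two sampling designs mentioned, Latin hypercube and quasi-Monte Carlo, are readily generated with `scipy.stats.qmc`. The parameter count and bounds below are hypothetical placeholders, not CAM5 values.

```python
import numpy as np
from scipy.stats import qmc

d = 4                                        # hypothetical number of parameters
lower = np.array([0.1, 1e-4, 0.5, 10.0])     # made-up bounds, not CAM5 values
upper = np.array([1.0, 1e-2, 2.0, 500.0])

lhs = qmc.LatinHypercube(d=d, seed=0).random(n=256)
sob = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=8)  # 2**8 = 256

X_lhs = qmc.scale(lhs, lower, upper)         # map [0, 1)^d to parameter bounds
X_sob = qmc.scale(sob, lower, upper)
```

Latin hypercube guarantees exactly one point per marginal stratum in each dimension, while the scrambled Sobol' sequence targets low discrepancy of the joint distribution; either way, each row of the scaled matrix becomes one perturbed-parameter model run.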
Orbit control of a stratospheric satellite with parameter uncertainties
NASA Astrophysics Data System (ADS)
Xu, Ming; Huo, Wei
2016-12-01
When a stratospheric satellite travels by prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to maintain a constant-latitude orbital flight. To design the orbit control system, a 6 degree-of-freedom (DOF) model of the satellite is established based on the second Lagrangian formulation. It is proven that input/output feedback linearization theory cannot be directly applied to the orbit control with this model; thus, three subsystem models are derived from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies high-precision requirements for the orbital flight of the satellite.
High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.
Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie
2011-01-01
The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. This is a difficult task due to the complexity of the glucose-insulin regulatory system. The variety of existing models reflects the large number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research, a High-Order Sliding-Mode Controller is proposed. It is applied to two well-known models, the Bergman Minimal Model and the Sorensen Model, to test its robustness with respect to uncertain dynamics and patient parameter variability. The controller designed on the basis of the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.
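The robustness argument behind sliding-mode control can be illustrated with a scalar super-twisting (second-order sliding mode) regulator on a first-order plant with an unknown bounded disturbance. This generic sketch is not the Bergman or Sorensen glucose model, and the plant, disturbance, and gains are all hypothetical.

```python
import numpy as np

def simulate(target=1.0, t_end=10.0, dt=1e-3, a=-1.0):
    """Super-twisting regulation of x' = a*x + u + d(t), with d(t) unknown."""
    k1, k2 = 2.2, 2.5                # gains sized for a bounded d'; hypothetical
    x, v = 0.0, 0.0
    for t in np.arange(0.0, t_end, dt):
        d = 0.5*np.sin(2.0*t) + 0.3  # unknown bounded disturbance
        s = x - target               # sliding variable
        u = -k1*np.sqrt(abs(s))*np.sign(s) + v
        v += -k2*np.sign(s)*dt       # integral (second-order) term
        x += (a*x + u + d)*dt        # Euler step of the uncertain plant
    return x

x_final = simulate()
```

The integral term continuously estimates and cancels the disturbance, so the discontinuity acts on the control derivative rather than the control itself, which is what makes higher-order sliding modes attractive for smooth actuators such as insulin pumps.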
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in the caprock, and a shallow aquifer, each modeled with sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of the reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are passed to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Uncertainties are thus propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, contributing to the risk profiles. To quantify the uncertainties and assess leakage-related risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined in terms of CO2 plume development for pH value and total dissolved solids (TDS) relative to the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to identify the parameters to which the risk profiles are most sensitive. The resulting uncertainty of the pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of the fault permeability. Subsequently, high-resolution, reduced-order models of the risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in the caprock, and a shallow aquifer, each modeled with sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of the reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are passed to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Uncertainties are thus propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, contributing to the risk profiles. To quantify the uncertainties and assess leakage-related risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined in terms of CO2 plume development for pH value and total dissolved solids (TDS) relative to the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to identify the parameters to which the risk profiles are most sensitive. The resulting uncertainty of the pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of the fault permeability. Subsequently, high-resolution, reduced-order models of the risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion.
It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
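The regression-based ANOVA idea can be sketched in its simplest additive form: fit variance-normalized Legendre polynomials per parameter by least squares, then read the first-order variance contributions off the squared coefficients. The full method adds interaction terms, adaptivity, and the covariance-based indices for correlated effects; the toy model below is ours, not the combustion system.

```python
import numpy as np
from numpy.polynomial import legendre

def additive_design(X, degree):
    """Design matrix of variance-normalized Legendre polynomials, per parameter."""
    n, d = X.shape
    cols, owner = [np.ones(n)], []
    for j in range(d):
        for p in range(1, degree + 1):
            c = np.zeros(p + 1)
            c[p] = 1.0
            cols.append(legendre.legval(X[:, j], c) * np.sqrt(2*p + 1))
            owner.append(j)
    return np.column_stack(cols), owner

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (2000, 3))
y = 4.0*X[:, 0] + X[:, 1]**2 + 0.05*rng.normal(size=2000)  # toy "simulator"

A, owner = additive_design(X, degree=3)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
var_j = np.zeros(3)
for j, c in zip(owner, coef[1:]):
    var_j[j] += c**2              # orthonormal basis: variances simply add
S = var_j / var_j.sum()           # first-order sensitivity shares
```

Because the basis is orthonormal under the uniform input measure, each squared coefficient is exactly that term's contribution to the output variance, which is the property the PDD expansion exploits.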
Metropolitan open-space protection with uncertain site availability
Robert G. Haight; Stephanie A. Snyder; Charles S. Revelle
2005-01-01
Urban planners acquire open space to protect natural areas and provide public access to recreation opportunities. Because of limited budgets and dynamic land markets, acquisitions take place sequentially depending on available funds and sites. To address these planning features, we formulated a two-period site selection model with two objectives: maximize the...
Robust Task Space Trajectory Tracking Control of Robotic Manipulators
NASA Astrophysics Data System (ADS)
Galicki, M.
2016-08-01
This work deals with the problem of accurate task-space trajectory tracking with finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator while the end-effector tracks the trajectory. Furthermore, the movement is to be accomplished in such a way as to reduce both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and Lyapunov stability theory, we propose a class of chattering-free robust controllers, based on estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. Numerical simulations carried out for a redundant manipulator of SCARA type, consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers as well as comparisons with other well-known control schemes.
Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.
Zheng, Song
2015-09-01
In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems with parameter perturbations and delayed impulses is considered. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties, together with state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Sometimes lifetime testing is performed on an actual COPV in service in an effort to validate the reliability model that is the basis for certifying the continued flight worthiness of its sisters. Currently, testing of such a Kevlar49®/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test, while the latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely it is that the more optimistic stress ratio is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty in the Weibull shape parameter for lifetime, since testing several vessels would be necessary.
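The core Bayesian updating idea in the abstract above, that surviving longer favors the optimistic stress-ratio hypothesis, can be sketched with a two-hypothesis Weibull survival model. All numbers below (prior, shape, scale parameters) are illustrative assumptions, not values from the study:

```python
import math

def survival(t, shape, scale):
    # Weibull survival function S(t) = exp(-(t / scale)**shape)
    return math.exp(-((t / scale) ** shape))

def posterior_optimistic(t, prior=0.5, shape=0.5, scale_opt=50.0, scale_pess=5.0):
    # Bayes update: P(optimistic stress ratio | vessel survived to time t).
    # The optimistic hypothesis implies a larger Weibull scale (longer lives).
    like_opt = survival(t, shape, scale_opt)
    like_pess = survival(t, shape, scale_pess)
    num = prior * like_opt
    return num / (num + (1.0 - prior) * like_pess)
```

At t = 0 the data are uninformative and the posterior equals the prior; as the survival time grows, the posterior probability of the optimistic hypothesis rises toward one, matching the qualitative conclusion of the abstract.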
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2013-05-01
We develop a hierarchical Bayesian model to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency, and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g. ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes; 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatiotemporal distribution of ferrous iron, sulfate, and sulfide, together with the associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.
Kim, Hea-Jung
2014-01-01
This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty, regarding the interval constraint, accounted for by using the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in HSGM. A Bayesian estimation procedure for HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.
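One ingredient of such an uncertainty measure can be sketched as the prior mass that a normal prior assigns to the constraint interval; this is a hypothetical simplification for illustration, not the paper's HSGM construction:

```python
from statistics import NormalDist

def interval_mass(mu0, tau, a, b):
    # Prior probability that the normal mean lies in the constraint
    # interval [a, b], under a N(mu0, tau**2) prior on the mean.
    nd = NormalDist(mu0, tau)
    return nd.cdf(b) - nd.cdf(a)
```

When this mass is near one, the prior effectively respects the interval restriction; when it is small, the restriction is highly informative relative to the prior, which is the kind of tension an uncertainty measure for the constraint must quantify.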
NASA Technical Reports Server (NTRS)
Kinne, S.; Wiscombe, Warren; Einaudi, Franco (Technical Monitor)
2001-01-01
Understanding the effect of aerosol on cloud systems is one of the major challenges in atmospheric and climate research. Local studies suggest a multitude of influences on cloud properties. Yet the overall effect on cloud albedo, a critical parameter in climate simulations, remains uncertain. NASA's Triana mission will provide, from its EPIC multi-spectral imager, simultaneous data on aerosol properties and cloud reflectivity. With Triana's unique position in space these data will be available not only globally but also over the entire daytime, well suited to accommodate the often short lifetimes of aerosol and investigations around diurnal cycles. This pilot study explores the ability to detect relationships between aerosol properties and cloud reflectivity with sophisticated statistical methods. Sample results using data from the EOS Terra platform to simulate Triana are presented.
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
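The "largest feasible uncertainty set" idea can be sketched by bisecting on the scale of a hyper-rectangle centered at the nominal parameter. The sketch below checks the hard constraint only at box vertices, which is valid for, e.g., linear or componentwise-monotone requirement functions; that restriction is an assumption of this illustration, not the paper's general framework:

```python
import itertools

def largest_feasible_scale(g, nominal, half_widths, s_max=10.0, tol=1e-6):
    # Find the largest s such that the hard constraint g(p) <= 0 holds at
    # every vertex of the box nominal +/- s * half_widths (bisection on s).
    def feasible(s):
        corners = itertools.product(*[(n - s * h, n + s * h)
                                      for n, h in zip(nominal, half_widths)])
        return all(g(p) <= 0.0 for p in corners)
    lo, hi = 0.0, s_max
    if feasible(hi):
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

Comparing the returned scale against the actual uncertainty model (scale 1) then gives the analytically verifiable robustness assessment described above: a scale above one means the hard constraint holds over the whole uncertainty model.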
Guymon, Gary L.; Yen, Chung-Cheng
1990-01-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
NASA Astrophysics Data System (ADS)
Guymon, Gary L.; Yen, Chung-Cheng
1990-07-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
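A minimal sketch of a two-point probability estimate (Rosenblueth-type, for a single uncertain variable) illustrates the idea behind cascading a deterministic model with a two-point model; the study's two-layer, multi-variable formulation is more elaborate:

```python
def two_point_stats(f, mu, sigma):
    # Rosenblueth-type two-point estimate of the mean and variance of f(X)
    # for a single uncertain input X with mean mu and standard deviation sigma:
    # evaluate the deterministic model only at mu - sigma and mu + sigma.
    lo, hi = f(mu - sigma), f(mu + sigma)
    mean = 0.5 * (lo + hi)
    var = 0.25 * (hi - lo) ** 2
    return mean, var
```

The estimate is exact for linear models and only two model runs are needed per uncertain variable, which is why lumping the uncertain inputs into three variables keeps the computational effort modest.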
Robust linear quadratic designs with respect to parameter uncertainty
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1992-01-01
The authors derive a linear quadratic regulator (LQR) which is robust to parametric uncertainty by using the overbounding method of I. R. Petersen and C. V. Hollot (1986). The resulting controller is determined from the solution of a single modified Riccati equation. It is shown that, when applied to a structural system, the controller gains add robustness by minimizing the potential energy of uncertain stiffness elements, and minimizing the rate of dissipation of energy through uncertain damping elements. A worst-case disturbance in the direction of the uncertainty is also considered. It is proved that performance robustness has been increased with the robust LQR when compared to a mismatched LQR design where the controller is designed on the nominal system, but applied to the actual uncertain system.
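For context, the gain computation in a standard (non-robust) LQR reduces to solving one algebraic Riccati equation; the robust design described above modifies that equation via overbounding. A sketch of the scalar case, which has a closed-form stabilizing solution:

```python
import math

def scalar_lqr_gain(a, b, q, r):
    # Scalar LQR for x' = a*x + b*u with cost integral of q*x**2 + r*u**2.
    # The algebraic Riccati equation 2*a*P - (b**2/r)*P**2 + q = 0 has the
    # stabilizing (positive) root below; the optimal gain is K = (b/r)*P.
    disc = math.sqrt(a * a + (b * b / r) * q)
    P = (a + disc) * r / (b * b)
    return (b / r) * P
```

The closed-loop pole is a - b*K = -sqrt(a**2 + (b**2/r)*q), always strictly stable; the robust variant replaces the single Riccati equation with a modified one that accounts for the parametric uncertainty bounds.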
Adaptive suboptimal second-order sliding mode control for microgrids
NASA Astrophysics Data System (ADS)
Incremona, Gian Paolo; Cucuzzella, Michele; Ferrara, Antonella
2016-09-01
This paper deals with the design of adaptive suboptimal second-order sliding mode (ASSOSM) control laws for grid-connected microgrids. Due to the presence of the inverter, of unpredicted load changes, of switching among different renewable energy sources, and of electrical parameter variations, the microgrid model is usually affected by uncertain terms which are bounded, but with unknown upper bounds. To theoretically frame the control problem, the class of second-order systems in Brunovsky canonical form, characterised by the presence of matched uncertain terms with unknown bounds, is first considered. Four adaptive strategies are designed, analysed and compared to select the most effective ones to be applied to the microgrid case study. In the first two strategies, the control amplitude is continuously adjusted so as to dominate the effect of the uncertainty on the controlled system. When a suitable control amplitude is attained, the origin of the state space of the auxiliary system becomes attractive. In the other two strategies, a suitable blend between two components, one mainly working during the reaching phase, the other being the predominant one in a vicinity of the sliding manifold, is generated, so as to reduce the control amplitude in steady state. The microgrid system in a grid-connected operation mode, controlled via the selected ASSOSM control strategies, exhibits appreciable stability properties, as proved theoretically and shown in simulation.
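The first pair of adaptive strategies, increasing the control amplitude until it dominates the uncertainty, can be illustrated on a toy first-order plant with a first-order sliding variable (not the suboptimal second-order algorithm itself); the plant, disturbance, and tuning constants below are assumptions chosen for illustration:

```python
import math

def simulate_adaptive_smc(steps=20000, dt=1e-3, gamma=50.0, eps=0.05):
    # Toy plant x' = u + d(t), with matched disturbance d whose bound is
    # treated as unknown. Sliding variable s = x; control u = -k*sign(s);
    # the gain k is adapted upward while |s| lies outside a small boundary
    # layer, until it dominates the disturbance and s is driven toward zero.
    x, k = 1.0, 0.0
    for i in range(steps):
        d = 0.8 * math.sin(2.0 * i * dt)      # bounded disturbance, bound "unknown"
        s = x
        u = -k * ((s > 0) - (s < 0))          # sign function
        if abs(s) > eps:
            k += gamma * abs(s) * dt          # adaptive gain law
        x += (u + d) * dt                     # explicit Euler step
    return x, k
```

The state is driven into a small neighborhood of the origin once the adapted gain exceeds the disturbance amplitude; the blended strategies in the paper additionally shrink the control amplitude once the sliding manifold is reached.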
NASA Astrophysics Data System (ADS)
Shishebori, Davood; Babadi, Abolghasem Yousefi
2018-03-01
This study investigates the reliable multi-configuration capacitated logistics network design problem (RMCLNDP) under system disturbances, which involves locating facilities, establishing transportation links, and allocating their limited capacities to customers so as to satisfy demand at the minimum expected total cost (including location costs, link construction costs, and the expected costs under normal and disturbance conditions). In addition, two types of risk are considered: (I) an uncertain environment and (II) system disturbances. A two-level mathematical model is proposed to formulate the problem. Because of the uncertain parameters of the model, an efficacious possibilistic robust optimization approach is utilized. To evaluate the model, a drug supply chain network (SCN) design is studied. Finally, an extensive sensitivity analysis is performed on the critical parameters. The results show that the proposed approach is efficient and worthwhile for analyzing real practical problems.
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
NASA Astrophysics Data System (ADS)
Li, Jian; Zhang, Qingling; Ren, Junchao; Zhang, Yanhao
2017-10-01
This paper studies the problem of robust stability and stabilisation for uncertain large-scale interconnected nonlinear descriptor systems via proportional plus derivative state feedback or proportional plus derivative output feedback. The basic idea of this work is to use the well-known differential mean value theorem to deal with the nonlinear model such that the considered nonlinear descriptor systems can be transformed into linear parameter varying systems. By using a parameter-dependent Lyapunov function, a decentralised proportional plus derivative state feedback controller and a decentralised proportional plus derivative output feedback controller are designed, respectively, such that the closed-loop system is quadratically normal and quadratically stable. Finally, a hypersonic vehicle practical simulation example and a numerical example are given to illustrate the effectiveness of the results obtained in this paper.
Robust passivity analysis for discrete-time recurrent neural networks with mixed delays
NASA Astrophysics Data System (ADS)
Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu
2015-02-01
This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays consist of both discrete time-varying and distributed time-delays in a given range, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on a new bounding technique and an appropriate type of Lyapunov functional, a sufficient condition is investigated to guarantee the existence of the desired robust passivity condition for the DRNNs, which can be derived in terms of a family of linear matrix inequalities (LMIs). Some free-weighting matrices are introduced to reduce the conservatism of the criterion by using the bounding technique. A numerical example is given to illustrate the effectiveness and applicability of the proposed approach.
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
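The Bernstein expansion approach relies on the enclosure property of Bernstein coefficients: for a polynomial on the unit interval, the minimum and maximum Bernstein coefficients bound its range. A one-dimensional sketch of that property (the paper works with multivariate polynomial requirement functions over hyper-rectangles):

```python
from math import comb

def bernstein_bounds(a):
    # a: power-basis coefficients of p(x) = a[0] + a[1]*x + ... on [0, 1].
    # The Bernstein coefficients b_i = sum_{j<=i} [C(i,j)/C(n,j)] * a[j]
    # enclose the range of p on [0, 1]: min(b) <= p(x) <= max(b).
    n = len(a) - 1
    b = [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
         for i in range(n + 1)]
    return min(b), max(b)
```

For p(x) = x(1 - x), whose true range on [0, 1] is [0, 0.25], the sketch returns the enclosure [0, 0.5]; the bounds tighten under subdivision, which is what makes the approach usable for sizing readily computable subsets of the failure and safe domains.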
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.; Juang, Jer-Nan
1997-01-01
Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.
Geo-Statistical Approach to Estimating Asteroid Exploration Parameters
NASA Technical Reports Server (NTRS)
Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles
2011-01-01
NASA's vision for space exploration calls for a human visit to a near earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper two approaches to formulation and scheduling of human exploration activities are compared given uncertain information regarding the asteroid prior to visit. In the first approach a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach or baseline plan where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.
NASA Astrophysics Data System (ADS)
2007-07-01
WE RECOMMEND
God: The Failed Hypothesis. A book that applies scientific logic to the search for a creator.
Go with the Flow. This CD-ROM proves a great resource for teaching fluids.
Collins GCSE Student Book for EdExcel 360 Additional Science. An attractive update that will sit well in modern classrooms.
The Rough Guide to Climate Change. This book contains a thorough study of the must-teach subject.
InspireData. Presentation software ideal for analysing data in the field.
WORTH A LOOK
Uncertain Science...Uncertain World. A book to persuade the public that unanswered questions are not a failure of science or scientists.
Fisher Space Pen. An interesting teaching resource and a nice bit of stationery.
HANDLE WITH CARE
Russian Space Pen. A joke gift at best (no physics here).
IGCSE Physics for EdExcel. Dull, old-fashioned approach to teaching the new qualification.
WEB WATCH
How news headlines can prove a valuable tool to get pupils interested in a subject.
Kinematically Optimal Robust Control of Redundant Manipulators
NASA Astrophysics Data System (ADS)
Galicki, M.
2017-12-01
This work deals with the problem of robust optimal task space trajectory tracking subject to finite-time convergence. Kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when tracking the trajectory by the end-effector. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. The numerical simulations, carried out for a redundant manipulator of SCARA type consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers and provide comparisons with other well-known control schemes.
Benefits estimates of highway capital improvements with uncertain parameters.
DOT National Transportation Integrated Search
2006-01-01
This report warrants consideration in the development of goals, performance measures, and standard cost-benefit methodology required of transportation agencies by the Virginia 2006 Appropriations Act. The Virginia Department of Transportation has beg...
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out.
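The marginalization step relies on Markov-chain Monte Carlo sampling of the PPD. A generic random-walk Metropolis sampler for a one-dimensional toy posterior gives the flavor (the paper's samplers and high-dimensional parameter space are far more elaborate):

```python
import math
import random

def metropolis(logpost, x0, steps, scale, seed=1):
    # Random-walk Metropolis: propose x' = x + N(0, scale**2), accept with
    # probability min(1, exp(logpost(x') - logpost(x))).
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = logpost(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

Histogramming the retained samples (after discarding burn-in) approximates the marginal posterior, which is how joint marginal distributions for source range and depth are obtained in practice.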
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
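The two posterior metrics, the positive predictive value P(A|B) and negative predictive value P(¬A|¬B), can be computed directly from paired forecast/observation inundation masks. A minimal sketch over flattened boolean grids:

```python
def predictive_values(forecast, observed):
    # forecast, observed: equal-length boolean sequences, one entry per grid
    # cell (True = cell inundated). Returns (PPV, NPV):
    #   PPV = P(observed inundated | forecast inundated)
    #   NPV = P(observed dry       | forecast dry)
    tp = sum(1 for f, o in zip(forecast, observed) if f and o)
    fp = sum(1 for f, o in zip(forecast, observed) if f and not o)
    tn = sum(1 for f, o in zip(forecast, observed) if not f and not o)
    fn = sum(1 for f, o in zip(forecast, observed) if not f and o)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    return ppv, npv
```

Unlike the Jaccard coefficient, these conditional probabilities separate over-prediction (low PPV) from under-prediction (low NPV), which is why they test forecast skill more directly.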
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification is dependent on the reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed and is impractical given the cost and limited flight hardware for certification testing purposes. One approach to confirm the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar49 (Dupont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure, and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty on the Weibull shape parameter for lifetime since testing several vessels would be necessary.
Encoding dependence in Bayesian causal networks
USDA-ARS?s Scientific Manuscript database
Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.
2014-10-01
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
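The composition idea can be illustrated on the simplest possible example, x' = λx with an uncertain rate λ, where the short-time flow map is known exactly and composing k of them reproduces the long-time map; the gPC polynomial basis is omitted here, and a midpoint-rule average over the uncertain parameter stands in for the spectral expansion:

```python
import math

def short_map(x, lam, dt):
    # Exact short-time flow map of x' = lam * x over one interval of length dt.
    return x * math.exp(lam * dt)

def compose(x0, lam, dt, k):
    # Long-time map built by composing k short-time maps (flow map composition).
    x = x0
    for _ in range(k):
        x = short_map(x, lam, dt)
    return x

def param_mean(x0, lam_lo, lam_hi, dt, k, n=1000):
    # Mean of the long-time state over lam ~ Uniform[lam_lo, lam_hi],
    # via a deterministic midpoint-rule average (a crude stand-in for the
    # gPC expansion over the uncertain parameter).
    total = 0.0
    for i in range(n):
        lam = lam_lo + (lam_hi - lam_lo) * (i + 0.5) / n
        total += compose(x0, lam, dt, k)
    return total / n
```

For this linear example the composed map agrees with the direct long-time map to machine precision; in the general method each short-time map is only a low-degree polynomial approximation, and composition is what keeps the error spectral over long horizons.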
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
Modeling transport phenomena and uncertainty quantification in solidification processes
NASA Astrophysics Data System (ADS)
Fezi, Kyle S.
Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water cooled mold followed by secondary cooling with a water jet spray and free falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed-outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and in those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been quantified in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model.
The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.
Polynomial chaos expansion with random and fuzzy variables
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.
2016-06-01
A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
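The claim that a uniform variable (and, via its α-cuts, a fuzzy variable) can be expanded in Legendre polynomial chaos can be illustrated with a minimal sketch, assuming a scalar response g of a single U(-1, 1) variable:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

g = np.exp                      # response of xi ~ U(-1, 1)
deg = 8
nodes, weights = leggauss(deg + 1)

# Projection coefficients c_k = <g, P_k> / <P_k, P_k>, with the uniform
# density 1/2 on [-1, 1], so <P_k, P_k> = 1 / (2k + 1).
coeffs = np.array([
    0.5 * np.sum(weights * g(nodes) * Legendre.basis(k)(nodes)) * (2 * k + 1)
    for k in range(deg + 1)
])

mean = coeffs[0]                                # E[g] is the zeroth mode
ks = np.arange(1, deg + 1)
var = np.sum(coeffs[1:] ** 2 / (2 * ks + 1))    # variance from higher modes
```

For a fuzzy variable, the same coefficients would be post-processed into response intervals per membership level rather than into statistical moments.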
Uncertainty Quantification in Aeroelasticity
NASA Astrophysics Data System (ADS)
Beran, Philip; Stanford, Bret; Schrock, Christopher
2017-01-01
Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
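The control-variate idea can be sketched in a few lines (an illustrative scalar example with a known control mean, not the repository-assessment setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Control variates: estimate E[exp(U)] for U ~ U(0, 1) (exact value e - 1),
# using U itself as the control since E[U] = 0.5 is known exactly.
n = 10_000
u = rng.random(n)
g = np.exp(u)

plain = g.mean()                            # plain Monte Carlo estimate

beta = np.cov(g, u)[0, 1] / u.var(ddof=1)   # optimal coefficient
cv = (g - beta * (u - 0.5)).mean()          # control-variate estimate

exact = np.e - 1.0
```

The correction term has zero mean, so the estimator stays unbiased while the correlation between g and the control removes most of the sampling variance.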
NASA Astrophysics Data System (ADS)
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1 year records of half-hourly net ecosystem CO2 exchange (NEE) observations of four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance to the original sites. Latent variables (multipliers) were used to explicitly treat uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
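A minimal sketch of the underlying Bayesian machinery (plain random-walk Metropolis on a toy linear model; DREAM(zs) adds adaptive, multi-chain differential-evolution proposals, and the CLM likelihood is far richer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = a*x + Gaussian noise, with the slope a to be inferred.
a_true, sigma = 2.0, 0.3
x = np.linspace(0, 1, 50)
y = a_true * x + rng.normal(0, sigma, x.size)

def log_post(a):
    # Flat prior, Gaussian likelihood.
    return -0.5 * np.sum((y - a * x) ** 2) / sigma**2

# Plain random-walk Metropolis; DREAM(zs) replaces the fixed proposal with
# adaptive moves across multiple chains.
a, lp, samples = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = a + rng.normal(0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        a, lp = prop, lp_prop
    samples.append(a)

posterior = np.array(samples[1000:])          # discard burn-in
```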
Hydraulic fracture propagation modeling and data-based fracture identification
NASA Astrophysics Data System (ADS)
Zhou, Jing
Successful shale gas and tight oil production is enabled by the engineering innovation of horizontal drilling and hydraulic fracturing. Hydraulically induced fractures will most likely deviate from the bi-wing planar pattern and generate complex fracture networks due to mechanical interactions and reservoir heterogeneity, both of which render the conventional fracture simulators insufficient to characterize the fractured reservoir. Moreover, in reservoirs with ultra-low permeability, the natural fractures are widely distributed, which will result in hydraulic fractures branching and merging at the interface and consequently lead to the creation of more complex fracture networks. Thus, developing a reliable hydraulic fracturing simulator, including both mechanical interaction and fluid flow, is critical in maximizing hydrocarbon recovery and optimizing fracture/well design and completion strategy in multistage horizontal wells. A novel fully coupled reservoir flow and geomechanics model based on the dual-lattice system is developed to simulate multiple nonplanar fractures' propagation in both homogeneous and heterogeneous reservoirs with or without pre-existing natural fractures. Initiation, growth, and coalescence of the microcracks will lead to the generation of macroscopic fractures, which is explicitly mimicked by failure and removal of bonds between particles from the discrete element network. This physics-based modeling approach leads to realistic fracture patterns without using the empirical rock failure and fracture propagation criteria required in conventional continuum methods. Based on this model, a sensitivity study is performed to investigate the effects of perforation spacing, in-situ stress anisotropy, rock properties (Young's modulus, Poisson's ratio, and compressive strength), fluid properties, and natural fracture properties on hydraulic fracture propagation. 
In addition, since reservoirs are buried thousands of feet below the surface, the parameters used in the reservoir flow simulator have large uncertainty. Those biased and uncertain parameters will result in misleading oil and gas recovery predictions. The Ensemble Kalman Filter is used to estimate and update both the state variables (pressure and saturations) and uncertain reservoir parameters (permeability). In order to directly incorporate spatial information such as fracture location and formation heterogeneity into the algorithm, a new covariance matrix method is proposed. This new method has been applied to a simplified single-phase reservoir and a complex black oil reservoir with complex structures to prove its capability in calibrating the reservoir parameters.
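The Ensemble Kalman Filter's joint state-parameter update can be sketched on a toy augmented vector (illustrative numbers only; the actual application updates pressure, saturation, and permeability fields). The ensemble cross-covariance is what lets an observation of the state correct the uncertain parameter:

```python
import numpy as np

rng = np.random.default_rng(2)

n_ens = 200
# Prior ensemble for the augmented vector [state, parameter]: the state
# depends on the uncertain parameter, creating a cross-covariance.
param = rng.normal(0.0, 0.5, n_ens)
state = 5.0 + 2.0 * param + rng.normal(0, 0.5, n_ens)
ens = np.vstack([state, param])          # shape (2, n_ens)

obs, obs_var = 7.0, 0.25                 # noisy observation of the state only
H = np.array([[1.0, 0.0]])               # observation operator

P = np.cov(ens)                          # ensemble covariance (2x2)
K = P @ H.T / (H @ P @ H.T + obs_var)    # Kalman gain, shape (2, 1)

# Perturbed-observation analysis: the parameter row of K updates the
# parameter even though only the state is observed.
perturbed = obs + rng.normal(0, np.sqrt(obs_var), n_ens)
ens_a = ens + K @ (perturbed - ens[0])[None, :]
```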
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
Leveraging modern climatology to increase adaptive capacity across protected area networks
Davison, J.E.; Graumlich, L.J.; Rowland, E.L.; Pederson, G.T.; Breshears, D.D.
2012-01-01
Human-driven changes in the global environment pose an increasingly urgent challenge for the management of ecosystems that is made all the more difficult by the uncertain future of both environmental conditions and ecological responses. Land managers need strategies to increase regional adaptive capacity, but relevant and rapid assessment approaches are lacking. To address this need, we developed a method to assess regional protected area networks across biophysically important climatic gradients often linked to biodiversity and ecosystem function. We plot the land of the southwestern United States across axes of historical climate space, and identify landscapes that may serve as strategic additions to current protected area portfolios. Considering climate space is straightforward, and it can be applied using a variety of relevant climate parameters across differing levels of land protection status. The resulting maps identify lands that are climatically distinct from existing protected areas, and may be utilized in combination with other ecological and socio-economic information essential to collaborative landscape-scale decision-making. Alongside other strategies intended to protect species of special concern, natural resources, and other ecosystem services, the methods presented herein provide another important hedging strategy intended to increase the adaptive capacity of protected area networks. © 2011 Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Prakash, OM, II
1991-01-01
Three linear controllers are designed to regulate the end effector of the Space Shuttle Remote Manipulator System (SRMS) operating in Position Hold Mode. In this mode of operation, jet firings of the Orbiter can be treated as disturbances while the controller tries to keep the end effector stationary in an orbiter-fixed reference frame. The three design techniques used include: the Linear Quadratic Regulator (LQR), H2 optimization, and H-infinity optimization. The nonlinear SRMS is linearized by modelling the effects of the significant nonlinearities as uncertain parameters. Each regulator design is evaluated for robust stability in light of the parametric uncertainties using both the small gain theorem with an H-infinity norm and the less conservative μ-analysis test. All three regulator designs offer significant improvement over the current system on the nominal plant. Unfortunately, even after dropping performance requirements and designing exclusively for robust stability, robust stability cannot be achieved. The SRMS suffers from lightly damped poles with real parametric uncertainties. Such a system renders the μ-analysis test, which allows for complex perturbations, too conservative.
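Of the three techniques, LQR is the most compact to sketch. A minimal continuous-time regulator for a double integrator (a generic stand-in for one linearized axis, not the SRMS model itself):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR for a double integrator: minimize the integral of x'Qx + u'Ru.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])                 # state weighting
R = np.array([[1.0]])                    # control effort weighting

P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain u = -Kx

# The closed loop A - B K must be Hurwitz (all eigenvalues in the left half-plane).
eig = np.linalg.eigvals(A - B @ K)
```

Robust-stability checks such as the μ-analysis test would then be applied to this closed loop under the parametric uncertainty description.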
Forecasting seasonal influenza with a state-space SIR model.
Osthus, Dave; Hickmann, Kyle S; Caragea, Petruţa C; Higdon, Dave; Del Valle, Sara Y
2017-03-01
Seasonal influenza is a serious public health and societal problem due to its consequences resulting from absenteeism, hospitalizations, and deaths. The overall burden of influenza is captured by the Centers for Disease Control and Prevention's influenza-like illness network, which provides invaluable information about the current incidence. This information is used to provide decision support regarding prevention and response efforts. Despite the relatively rich surveillance data and the recurrent nature of seasonal influenza, forecasting the timing and intensity of seasonal influenza in the U.S. remains challenging because the form of the disease transmission process is uncertain, the disease dynamics are only partially observed, and the public health observations are noisy. Fitting a probabilistic state-space model motivated by a deterministic mathematical model [a susceptible-infectious-recovered (SIR) model] is a promising approach for forecasting seasonal influenza while simultaneously accounting for multiple sources of uncertainty. A significant finding of this work is the importance of thoughtfully specifying the prior, as results critically depend on its specification. Our conditionally specified prior allows us to exploit known relationships between latent SIR initial conditions and parameters and functions of surveillance data. We demonstrate advantages of our approach relative to alternatives via a forecasting comparison using several forecast accuracy metrics.
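The deterministic SIR skeleton underneath the state-space model can be sketched with a simple forward-Euler recursion (illustrative parameters, not fitted influenza values):

```python
import numpy as np

# Discrete-time SIR dynamics: the deterministic skeleton that a state-space
# model would wrap with process and observation noise.
def sir_step(s, i, r, beta, gamma, dt=1.0):
    new_inf = beta * s * i * dt      # new infections
    new_rec = gamma * i * dt         # new recoveries
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.99, 0.01, 0.0            # initial susceptible/infectious/recovered
beta, gamma = 0.4, 0.2               # basic reproduction number R0 = beta/gamma = 2
peak = i
for _ in range(200):
    s, i, r = sir_step(s, i, r, beta, gamma)
    peak = max(peak, i)
```

In the probabilistic version, the latent (s, i, r) trajectory and the parameters (beta, gamma) are jointly inferred from noisy surveillance counts, with the prior tying initial conditions and parameters to functions of the surveillance data.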
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.
Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min
2013-12-01
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics systems that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirement in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidature search results to support rapid interaction for active learning while minimizing watching videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance.
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
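The core of any uncertainty-weighted least-squares scheme is inverse-variance weighting. A minimal sketch (a toy regression with known, unequal noise levels, not the iterative IUWLS procedure itself):

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse-variance weighting: half the observations are precise, half noisy;
# weighted least squares downweights the noisy half when fitting y = b*x.
n = 200
x = rng.uniform(0, 1, n)
sigma = np.where(np.arange(n) < n // 2, 0.1, 1.0)
y = 3.0 * x + rng.normal(0, sigma)

X = x[:, None]
W = 1.0 / sigma**2                       # weights = inverse error variance

# b_wls = (X' W X)^{-1} X' W y, versus ordinary least squares
beta_wls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

In IUWLS the weights would additionally reflect the uncertainty of the pumping inputs and be re-adjusted during parameter optimization.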
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
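The nominal baseline the authors compare against, a Kalman filter for a random-walk phase, can be sketched in a few lines (the guaranteed-cost filter would instead fix its gain against the worst case of an uncertain model parameter such as the process noise level):

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar Kalman filter tracking a random-walk phase from noisy measurements.
q, r = 0.01, 0.1              # process and measurement noise variances
phase, est, P = 0.0, 0.0, 1.0
errs = []
for _ in range(2000):
    phase += rng.normal(0, np.sqrt(q))        # true phase evolves
    z = phase + rng.normal(0, np.sqrt(r))     # noisy measurement
    P += q                                    # predict
    K = P / (P + r)                           # Kalman gain
    est += K * (z - est)                      # update
    P *= (1 - K)
    errs.append(est - phase)

rmse = float(np.sqrt(np.mean(np.square(errs))))
```

The filter's error is well below the raw measurement noise; a guaranteed-cost design accepts a slightly larger nominal error in exchange for a bounded worst-case error when q or r is only known to lie in an interval.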
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.
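The contrast with interval methods can be made concrete: sampling an uncertain damping ratio yields a full distribution of step-response overshoot, whereas interval analysis reports only the worst corner (illustrative numbers, not the missile model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Second-order step-response overshoot as a function of the damping ratio.
def overshoot(zeta):
    return np.exp(-np.pi * zeta / np.sqrt(1 - zeta**2))

# Probabilistic description: zeta ~ N(0.5, 0.05), truncated to a valid range.
zeta = np.clip(rng.normal(0.5, 0.05, 20_000), 0.05, 0.95)
os = overshoot(zeta)

mean_os = os.mean()                 # most-likely behavior, not just bounds
p99 = np.quantile(os, 0.99)         # a likely worst case
worst_interval = overshoot(0.35)    # interval analysis at the corner zeta = 0.35
```

The 99th-percentile overshoot is noticeably smaller than the interval-corner value, which is the extra information a probabilistic analysis provides.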
Xu, Shidong; Sun, Guanghui; Sun, Weichao
2017-01-01
In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Different from most existing strategies, a T-S fuzzy approximation approach is used to model the nonlinear dynamics of flexible spacecraft. Simultaneously, the physical constraints of the system, like input delay, input saturation, and parameter uncertainties, are also taken care of in the fuzzy model. By employing the Lyapunov-Krasovskii method and convex optimization technique, a novel robust controller is proposed to implement rest-to-rest attitude maneuver for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example integrated with simulation results is provided to confirm the applicability and merits of the developed control strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Khazaee, Mostafa; Markazi, Amir H D; Omidi, Ehsan
2015-11-01
In this paper, a new Adaptive Fuzzy Predictive Sliding Mode Control (AFP-SMC) is presented for nonlinear systems with uncertain dynamics and unknown input delay. The control unit consists of a fuzzy inference system to approximate the ideal linearization control, together with a switching strategy to compensate for the estimation errors. Also, an adaptive fuzzy predictor is used to estimate the future values of the system states to compensate for the time delay. The adaptation laws are used to tune the controller and predictor parameters, which guarantee the stability based on a Lyapunov-Krasovskii functional. To evaluate the method effectiveness, the simulation and experiment on an overhead crane system are presented. According to the obtained results, AFP-SMC can effectively control the uncertain nonlinear systems, subject to input delays of known bound. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Sensitivity studies for a space-based methane lidar mission
NASA Astrophysics Data System (ADS)
Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.
2011-10-01
Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantify the emissions at the Earth's surface in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurement along the sub-satellite track with a precision of about 1% over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. 
In contrast to current passive remote sensors, measurements in Polar Regions will be possible and biases due to aerosol layers and thin ice clouds will be minimised.
Sensitivity studies for a space-based methane lidar mission
NASA Astrophysics Data System (ADS)
Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.
2011-06-01
Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantify the emissions at the Earth's surface in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in Polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurement along the sub-satellite track with a precision of about 1 % over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. 
In contrast to current passive remote sensors, measurements in polar regions will be possible, and biases due to aerosol layers and thin ice clouds will be minimised.
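The quoted ~1% precision over 50 km follows from shot averaging. As a rough illustration (assuming statistically independent shot-to-shot random errors, and with illustrative numbers rather than the MERLIN design values), the precision of the averaged column improves with the square root of the number of shots:

```python
import math

def averaged_precision(sigma_shot, prf_hz, ground_speed_m_s, avg_length_m):
    """Relative random error of an along-track averaged IPDA column
    measurement, assuming independent shot-to-shot noise so the error
    shrinks with the square root of the number of averaged shots."""
    n_shots = prf_hz * avg_length_m / ground_speed_m_s
    return sigma_shot / math.sqrt(n_shots)

# Illustrative numbers only (not the MERLIN design values): 20 Hz shot
# rate, 7 km/s ground speed, 50 km averaging, 12% single-shot precision.
print(averaged_precision(0.12, 20.0, 7.0e3, 50.0e3))  # about 0.01, i.e. ~1%
```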
NASA Technical Reports Server (NTRS)
1975-01-01
High-purity tungsten, which is used for targets in X-ray tubes, was considered for space processing. The demand for X-ray tubes was calculated using the growth rates for dental and medical X-ray machines. It is concluded that the cost benefits are uncertain.
Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Rodney O.; Passalacqua, Alberto
2016-02-01
Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models have been proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low-order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter.
The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effect of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed is examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experiment results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
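The quadrature-based propagation step at the heart of QBUQ can be illustrated in one dimension. The sketch below is a toy stand-in, not the MFIX implementation: a single Gaussian-uncertain input and a cheap analytic model in place of the CFD solver, propagated through 3-point Gauss-Hermite quadrature to get the mean and variance of the response:

```python
import math

def qbuq_moments(model, mean, std):
    """Quadrature-based UQ sketch: propagate one Gaussian-uncertain input
    through a model with 3-point Gauss-Hermite quadrature and return the
    mean and variance of the response (exact for polynomial responses up
    to degree 5). Each model evaluation stands in for one solver run."""
    nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]   # probabilists' nodes
    weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
    samples = [model(mean + std * x) for x in nodes]  # "solver" runs
    m1 = sum(w * y for w, y in zip(weights, samples))
    m2 = sum(w * y * y for w, y in zip(weights, samples))
    return m1, m2 - m1 * m1

# Toy "model": a response quadratic in an uncertain particle size.
mu, var = qbuq_moments(lambda d: 2.0 * d * d + d, mean=1.0, std=0.1)
print(mu, var)  # 3.02 and 0.2508 (matches the analytic moments)
```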
Cox, Louis Anthony (Tony)
2006-12-01
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
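The portfolio idea above, biologically independent mechanisms combining multiplicatively, can be sketched as follows. All numbers here are illustrative placeholders, not the article's estimates, and `prob_reduction_at_least` is a hypothetical helper showing only the Monte Carlo logic:

```python
import random

def combined_rr(factor_rrs):
    """Portfolio view: independent mechanisms combine as a product of
    relative risk factors (values here are illustrative, not the paper's)."""
    rr = 1.0
    for f in factor_rrs:
        rr *= f
    return rr

def prob_reduction_at_least(threshold, n_draws=100_000, seed=1):
    """Hypothetical Monte Carlo over uncertain per-mechanism factors: each
    draw samples an assumed RR in [0.9, 1.0] for three mechanisms and asks
    whether removing the constituent cuts combined risk by >= threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        rr = combined_rr(rng.uniform(0.90, 1.0) for _ in range(3))
        if 1.0 - rr >= threshold:
            hits += 1
    return hits / n_draws

print(combined_rr([0.95, 0.97, 0.96]))  # ~0.885, i.e. ~11.5% risk reduction
```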
On the formulation of a minimal uncertainty model for robust control with structured uncertainty
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1991-01-01
In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop.
The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
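The convenience noted above, statistics read straight off the chaos coefficients with no further model runs, can be shown in one dimension. This sketch assumes a probabilists' Hermite chaos in a single standard-normal input; it is not the paper's multi-parameter implementation:

```python
import math

def pce_mean_variance(coeffs):
    """For a probabilists' Hermite chaos u(Z) = sum_k c_k He_k(Z) with
    Z ~ N(0,1), the statistics follow from the coefficients alone:
    mean = c_0 and variance = sum_{k>=1} c_k^2 * k! (the squared norms
    of the He_k). No additional model evaluations are needed."""
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k >= 1)
    return mean, var

# u(Z) = 1 + 2*He_1(Z) + 0.5*He_2(Z):
# mean = 1, variance = 2^2 * 1! + 0.5^2 * 2! = 4.5
m, v = pce_mean_variance([1.0, 2.0, 0.5])
print(m, v)
```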
NASA Langley's Approach to the Sandia's Structural Dynamics Challenge Problem
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kenny, Sean P.; Crespo, Luis G.; Elliott, Kenny B.
2007-01-01
The objective of this challenge is to develop a data-based probabilistic model of uncertainty to predict the behavior of subsystems (payloads) by themselves and while coupled to a primary (target) system. Although this type of analysis is routinely performed and representative of issues faced in real-world system design and integration, there are still several key technical challenges that must be addressed when analyzing uncertain interconnected systems. For example, one key technical challenge is related to the fact that there is limited data on target configurations. Moreover, it is typical to have multiple data sets from experiments conducted at the subsystem level, but often sample sizes are not sufficient to compute high-confidence statistics. In this challenge problem additional constraints are placed as ground rules for the participants. One such rule is that mathematical models of the subsystem are limited to linear approximations of the nonlinear physics of the problem at hand. Also, participants are constrained to use these models and the multiple data sets to make predictions about the target system response under completely different input conditions. Our approach initially involved the screening of several different methods. Three of the ones considered are presented herein. The first is based on the transformation of the modal data to an orthogonal space where the mean and covariance of the data are matched by the model. The other two approaches work in physical space, where the uncertain parameter set is made of masses, stiffnesses and damping coefficients; one matches confidence intervals of low-order moments of the statistics via optimization, while the second uses a kernel density estimation approach. The paper will touch on all the approaches, lessons learned, validation metrics and their comparison, data quantity restrictions, and assumptions/limitations of each approach.
Keywords: Probabilistic modeling, model validation, uncertainty quantification, kernel density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevenson, Simon; Ohme, Frank; Fairhurst, Stephen, E-mail: simon.stevenson@ligo.org
2015-09-01
The coalescence of compact binaries containing neutron stars or black holes is one of the most promising signals for advanced ground-based laser interferometer gravitational-wave (GW) detectors, with the first direct detections expected over the next few years. The rate of binary coalescences and the distribution of component masses is highly uncertain, and population synthesis models predict a wide range of plausible values. Poorly constrained parameters in population synthesis models correspond to poorly understood astrophysics at various stages in the evolution of massive binary stars, the progenitors of binary neutron star and binary black hole systems. These include effects such as supernova kick velocities, parameters governing the energetics of common envelope evolution and the strength of stellar winds. Observing multiple binary black hole systems through GWs will allow us to infer details of the astrophysical mechanisms that lead to their formation. Here we simulate GW observations from a series of population synthesis models including the effects of known selection biases, measurement errors and cosmology. We compare the predictions arising from different models and show that we will be able to distinguish between them with observations (or the lack of them) from the early runs of the advanced LIGO and Virgo detectors. This will allow us to narrow down the large parameter space for binary evolution models.
An Epoch of Reionization simulation pipeline based on BEARS
NASA Astrophysics Data System (ADS)
Krause, Fabian; Thomas, Rajat M.; Zaroubi, Saleem; Abdalla, Filipe B.
2018-10-01
The quest to unlock the mysteries of the Epoch of Reionization (EoR) is well poised, with many experiments at diverse wavelengths beginning to gather data. Despite these efforts, we remain uncertain about the various factors that influence the EoR, which include the nature of the sources, their spectral characteristics (blackbody temperatures, power-law indices), clustering properties, efficiency, duty cycle, etc. Given these physical uncertainties that define the EoR, we need fast and efficient computational methods to model and analyze the data in order to provide confidence bounds on the parameters that influence the brightness temperature at 21 cm. Towards this goal we developed a pipeline that combines dark matter-only N-body simulations with exact 1-dimensional radiative transfer computations to approximate exact 3-dimensional radiative transfer. Because these simulations are about two to three orders of magnitude faster than the exact 3-dimensional methods, they can be used to explore the parameter space of the EoR systematically. A fast scheme like this pipeline could be incorporated into a Bayesian framework for parameter estimation. In this paper we detail the construction of the pipeline and describe how to use the software, which is being made publicly available. We show the results of running the pipeline for four test cases of sources with various spectral energy distributions and compare their outputs using various statistics.
Planning spatial sampling of the soil from an uncertain reconnaissance variogram
NASA Astrophysics Data System (ADS)
Lark, R. Murray; Hamilton, Elliott M.; Kaninga, Belinda; Maseka, Kakoma K.; Mutondo, Moola; Sakala, Godfrey M.; Watts, Michael J.
2017-12-01
An estimated variogram of a soil property can be used to support a rational choice of sampling intensity for geostatistical mapping. However, it is known that estimated variograms are subject to uncertainty. In this paper we address two practical questions. First, how can we make a robust decision on sampling intensity, given the uncertainty in the variogram? Second, what are the costs incurred in terms of oversampling because of uncertainty in the variogram model used to plan sampling? To achieve this we show how samples of the posterior distribution of variogram parameters, from a computational Bayesian analysis, can be used to characterize the effects of variogram parameter uncertainty on sampling decisions. We show how one can select a sample intensity so that a target value of the kriging variance is not exceeded with some specified probability. This will lead to oversampling, relative to the sampling intensity that would be specified if there were no uncertainty in the variogram parameters. One can estimate the magnitude of this oversampling by treating the tolerable grid spacing for the final sample as a random variable, given the target kriging variance and the posterior sample values. We illustrate these concepts with some data on total uranium content in a relatively sparse sample of soil from agricultural land near mine tailings in the Copperbelt Province of Zambia.
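The decision logic described above can be sketched numerically. In the toy code below, a closed-form stand-in replaces the real kriging-variance computation (the exponential-variogram expression here is an assumed simplification, not the paper's method), and the robust spacing is a low quantile of the per-draw tolerable spacings so the target is met with the stated probability:

```python
import math

def tolerable_spacing(nugget, sill, range_param, target_kv):
    """Largest grid spacing whose (stand-in) kriging variance stays below
    target. The monotone toy model kv(s) = nugget + sill*(1 - exp(-s/range))
    replaces a real kriging-variance computation; solved in closed form."""
    excess = (target_kv - nugget) / sill
    if excess <= 0.0:
        return 0.0           # target unattainable: sample as densely as possible
    if excess >= 1.0:
        return float("inf")  # target met at any spacing
    return -range_param * math.log(1.0 - excess)

def robust_spacing(posterior_draws, target_kv, assurance=0.9):
    """Spacing meeting the target kriging variance with probability
    `assurance` over posterior variogram-parameter uncertainty: a low
    quantile of the per-draw tolerable spacings."""
    spacings = sorted(tolerable_spacing(n, s, r, target_kv)
                      for n, s, r in posterior_draws)
    return spacings[int((1.0 - assurance) * len(spacings))]

# Three (nugget, sill, range) posterior draws -- illustrative values only.
draws = [(0.1, 1.0, 200.0), (0.15, 0.9, 150.0), (0.05, 1.2, 250.0)]
print(robust_spacing(draws, target_kv=0.5))  # the most pessimistic draw governs
```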
Optimal Regulation of Structural Systems with Uncertain Parameters.
1981-02-02
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth-weighted matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command) systems. The uncertain model error is estimated with the semi-parametric estimator, and outliers are suppressed by the data-depth weighting. With the model error and outliers thus restrained, the improved VCE can be used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out for combined space- and ground-based TT&C. The results show that the new VCE based on model error compensation determines rational weights for the multi-source heterogeneous data and suppresses outliers.
Dealing with uncertainty in modeling intermittent water supply
NASA Astrophysics Data System (ADS)
Lieb, A. M.; Rycroft, C.; Wilkening, J.
2015-12-01
Intermittency in urban water supply affects hundreds of millions of people in cities around the world, impacting water quality and infrastructure. Building on previous work to dynamically model the transient flows in water distribution networks undergoing frequent filling and emptying, we now consider the hydraulic implications of uncertain input data. Water distribution networks undergoing intermittent supply are often poorly mapped, and household metering frequently ranges from patchy to nonexistent. In the face of uncertain pipe material, pipe slope, network connectivity, and outflow, we investigate how uncertainty affects dynamical modeling results. We furthermore identify which parameters exert the greatest influence on uncertainty, helping to prioritize data collection.
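A minimal way to "identify which parameters exert the greatest influence", as described above, is one-at-a-time screening. The sketch below is generic and hypothetical: the toy model stands in for a full network hydraulic simulation, and the parameter names are illustrative, not the authors' parameterization:

```python
def oat_sensitivity(model, base, spans):
    """One-at-a-time screening sketch: sweep each uncertain input (pipe
    roughness, slope, demand, ...) across its plausible span while holding
    the others at base values, and rank inputs by the induced output range.
    `model` is a stand-in for a full network hydraulic simulation."""
    ranking = {}
    for name, (lo, hi) in spans.items():
        out_lo = model({**base, name: lo})
        out_hi = model({**base, name: hi})
        ranking[name] = abs(out_hi - out_lo)
    return sorted(ranking.items(), key=lambda kv: -kv[1])

# Toy model: outflow grows with demand, shrinks with roughness.
toy = lambda p: 10.0 * p["demand"] - 2.0 * p["roughness"] + 0.5 * p["slope"]
base = {"demand": 1.0, "roughness": 1.0, "slope": 1.0}
spans = {"demand": (0.5, 1.5), "roughness": (0.5, 2.0), "slope": (0.9, 1.1)}
print(oat_sensitivity(toy, base, spans))  # demand dominates in this toy
```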
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As the complexity of experiments in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect the maximum stress on the surrounding cladding. The difficulty of quantifying the impact of input uncertainty in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain a good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them.
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
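The Total Degree and Hyperbolic Cross truncations mentioned above admit a compact sketch. The definitions below are the standard ones (|k|_1 <= L for Total Degree; prod(k_i + 1) <= L + 1 for Hyperbolic Cross), though the thesis may use slightly different normalizations:

```python
from itertools import product

def total_degree(dim, level):
    """Total Degree index set: all multi-indices k with sum(k) <= level."""
    return [k for k in product(range(level + 1), repeat=dim)
            if sum(k) <= level]

def hyperbolic_cross(dim, level):
    """Hyperbolic Cross index set: prod(k_i + 1) <= level + 1, which keeps
    high univariate orders but prunes mixed high-order interaction terms."""
    out = []
    for k in product(range(level + 1), repeat=dim):
        p = 1
        for ki in k:
            p *= ki + 1
        if p <= level + 1:
            out.append(k)
    return out

# In 2 dimensions at level 3: TD keeps 10 indices, HC only 8.
print(len(total_degree(2, 3)), len(hyperbolic_cross(2, 3)))
```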
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, our absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified from various equilibrium points of view in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
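For reference, the Popov test invoked in this abstract can be stated in its textbook single-loop form (the paper's parametric robust extension generalizes this to interval plant parameters): for a Lur'e system with linear part G(s) and a memoryless sector nonlinearity φ ∈ [0, k], absolute stability holds if some q ≥ 0 satisfies

```latex
% Popov criterion for a Lur'e system with sector nonlinearity
% \varphi \in [0, k] (textbook form; the paper extends this to
% uncertain interval plant parameters).
\mathrm{Re}\!\left[(1 + j\omega q)\, G(j\omega)\right] + \frac{1}{k} > 0
\quad \text{for all } \omega \ge 0 .
```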
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
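The best-subset idea can be illustrated with a brute-force stand-in for the MIP: enumerate small subsets of candidate basis terms, fit each by least squares, and keep the subset with the smallest residual. Everything below is a toy sketch (monomial basis, synthetic data), not the authors' formulation:

```python
from itertools import combinations

def fit_subset(xs, ys, basis, subset):
    """Least-squares fit of y on the chosen basis terms via normal
    equations solved by Gaussian elimination (no pivoting; adequate for
    this tiny, well-conditioned demo). Returns (coefficients, residual)."""
    cols = [[basis[j](x) for j in subset] for x in xs]
    m = len(subset)
    A = [[sum(r[i] * r[j] for r in cols) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * y for r, y in zip(cols, ys)) for i in range(m)]
    for i in range(m):
        for r in range(i + 1, m):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, m))) / A[i][i]
    resid = sum((y - sum(c * basis[j](x) for c, j in zip(coef, subset))) ** 2
                for x, y in zip(xs, ys))
    return coef, resid

def best_subset(xs, ys, basis, size):
    """Enumerate all subsets of `size` basis terms and keep the one with the
    smallest residual -- the brute-force analogue of the MIP term selection."""
    return min((fit_subset(xs, ys, basis, s)[1], s)
               for s in combinations(range(len(basis)), size))[1]

basis = [lambda x: 1.0, lambda x: x, lambda x: x * x, lambda x: x ** 3]
xs = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
ys = [2.0 + 3.0 * x * x for x in xs]   # truth uses only terms {1, x^2}
print(best_subset(xs, ys, basis, 2))   # recovers the indices (0, 2)
```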
Zhao, Pengxiang; Zhou, Suhong
2018-01-01
Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals’ activity space. First, a survey was conducted to collect individuals’ daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment. PMID:29439392
Water Footprint and Water Consumption for the Main Crops and Biofuels Produced in Brazil
NASA Astrophysics Data System (ADS)
Uncertainty Quantification and Risk Mitigation of CO2 Leakage in Groundwater Aquifers
NASA Astrophysics Data System (ADS)
Sun, Y.; Tong, C.; Mansoor, K.; Carroll, S.
2013-12-01
The risk of CO2 leakage into shallow aquifers through various pathways such as faults and abandoned wells is a concern of CO2 geological sequestration. If a leak is detected in an aquifer system, a contingency plan is required to manage the CO2 storage and to protect the groundwater source. Among many remediation and mitigation strategies, the simplest is to stop CO2 leakage at a wellbore. Therefore, it is necessary to address whether and when the CO2 leaks should be sealed, and how much risk can be mitigated. In the presence of various uncertainties, including geological-structure uncertainty and parametric uncertainty, the risk of CO2 leakage into an aquifer needs to be assessed with probabilistic distributions of uncertain parameters. In this study, we developed an integrated model to simulate multiphase flow of CO2 and brine in a deep storage reservoir, through a leaky well at an uncertain location, and subsequently multicomponent reactive transport in a shallow aquifer. Each sub-model covers its domain-specific physics. Uncertainties of geological structure and parameters are considered together with decision variables (CO2 injection rate and mitigation time) for risk assessment of leakage-impacted aquifer volume. High-resolution and less-expensive reduced-order models (ROMs) of risk profiles are approximated as polynomial functions of decision variables and all uncertain parameters. These reduced-order models are then used in the place of computationally-expensive numerical models for future decision-making on if and when the leaky well is sealed. The tradeoff between CO2 storage capacity in the reservoir and the leakage-induced risk in the aquifer is evaluated. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have shifted from covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integration of information about a reservoir model.
A study of crystal growth by solution technique. [triglycine sulfate single crystals
NASA Technical Reports Server (NTRS)
Lal, R. B.
1979-01-01
The advantages and mechanisms of crystal growth from solution are discussed, as well as the effects of impurity adsorption on the kinetics of crystal growth. Uncertainties regarding crystal growth in a low-gravity environment are examined. Single crystals of triglycine sulfate were grown using a low-temperature solution technique. Small components were fabricated and assembled for future space flights. A space processing experiment proposal accepted by NASA for the Spacelab-3 mission is included.
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
Iqbal, Muhammad; Rehan, Muhammad; Hong, Keum-Shik
2018-01-01
This paper exploits the dynamical modeling, behavior analysis, and synchronization of a network of four different FitzHugh–Nagumo (FHN) neurons with unknown parameters linked in a ring configuration under direction-dependent coupling. The main purpose is to investigate a robust adaptive control law for the synchronization of uncertain and perturbed neurons, communicating in a medium of bidirectional coupling. The neurons are assumed to be different and interconnected in a ring structure. The strength of the gap junctions is taken to be different for each link in the network, owing to the inter-neuronal coupling medium properties. Robust adaptive control mechanism based on Lyapunov stability analysis is employed and theoretical criteria are derived to realize the synchronization of the network of four FHN neurons in a ring form with unknown parameters under direction-dependent coupling and disturbances. The proposed scheme for synchronization of dissimilar neurons, under external electrical stimuli, coupled in a ring communication topology, having all parameters unknown, and subject to directional coupling medium and perturbations, is addressed for the first time as per our knowledge. To demonstrate the efficacy of the proposed strategy, simulation results are provided. PMID:29535622
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS
While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
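The conditional-sampling idea can be illustrated with a minimal sketch. The failure criterion, the bounding set, and the uniform prior below are all hypothetical stand-ins, not the authors' application; the point is only that sampling inside an analytically characterized bounding set and rescaling concentrates the sample budget where failures actually occur.

```python
import random

random.seed(0)

def fails(x0, x1):
    # Hypothetical failure criterion on a 2-D uncertain parameter with a
    # uniform prior on [0, 1]^2: the system fails when x0 + x1 > 1.8.
    return x0 + x1 > 1.8

# A simple bounding set containing the failure set: B = [0.8, 1]^2.
# (If x0 + x1 > 1.8 with both coordinates in [0, 1], both exceed 0.8.)
# Its probability under the prior is known analytically.
p_bound = 0.2 * 0.2                       # P(B) = 0.04

# Conditional sampling: draw only inside B, then rescale by P(B).
n = 100_000
hits = sum(fails(random.uniform(0.8, 1.0), random.uniform(0.8, 1.0))
           for _ in range(n))
p_fail = p_bound * hits / n
# Exact answer: area of the corner triangle = 0.5 * 0.2**2 = 0.02.
# Naive sampling of the full square would waste ~96% of its samples
# outside B and need far more points for the same accuracy.
```

Since the same number of effective failure-region samples is obtained with a 25x smaller budget here, the confidence interval on `p_fail` shrinks accordingly, which is the effect the abstract describes.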
NASA Astrophysics Data System (ADS)
Ahmadian, A.; Ismail, F.; Salahshour, S.; Baleanu, D.; Ghaemi, F.
2017-12-01
The analysis of the behaviors of physical phenomena is important to discover significant features of the character and the structure of mathematical models. Frequently, the unknown parameters involved in such models are assumed to be unvarying over time. In reality, some of them are uncertain and implicitly depend on several factors. In this study, to account for such uncertainty, the model variables are characterized using fuzzy set theory. We propose here a new model based on fractional calculus to deal with the Kelvin-Voigt (KV) equation and a non-Newtonian fluid behavior model with fuzzy parameters. A new and accurate numerical algorithm using a spectral tau technique based on the generalized fractional Legendre polynomials (GFLPs) is developed to solve those problems under uncertainty. Numerical simulations are carried out, and the analysis of the results highlights the significant features of the new technique in comparison with the previous findings. A detailed error analysis is also carried out and discussed.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of the uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided the coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
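The two-point estimate method can be sketched in a few lines. The toy "water table" response below is a hypothetical nonlinear function of conductivity and storage, not the paper's aquifer model; it shows the 2^n-point Rosenblueth scheme (evaluate the model at mean ± one standard deviation in every combination) agreeing with a far more expensive Monte Carlo run when the coefficients of variation are small.

```python
import itertools
import random

random.seed(1)

def head(K, S):
    # Hypothetical water-table response: nonlinear in conductivity K,
    # linear in storage coefficient S (illustrative only).
    return 10.0 / K + 2.0 * S

# Means and standard deviations of the uncertain inputs (small CVs).
mu  = {"K": 2.0, "S": 0.5}
sig = {"K": 0.1, "S": 0.05}

# Rosenblueth two-point estimate: evaluate the model at mu +/- sigma in
# every combination (2^n points, equal weights for uncorrelated,
# symmetrically distributed inputs).
points = [head(mu["K"] + sK * sig["K"], mu["S"] + sS * sig["S"])
          for sK, sS in itertools.product((-1, 1), repeat=2)]
tpe_mean = sum(points) / len(points)
tpe_var = sum(p * p for p in points) / len(points) - tpe_mean ** 2

# Reference: plain Monte Carlo with 200,000 normal samples.
mc = [head(random.gauss(mu["K"], sig["K"]), random.gauss(mu["S"], sig["S"]))
      for _ in range(200_000)]
mc_mean = sum(mc) / len(mc)
```

Four model evaluations versus two hundred thousand: this is the efficiency gap the abstract refers to, and it holds only while the model stays nearly linear over a ±1 sigma box, which is why large coefficients of variation break the method.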
Where There’s Smoke, There’s Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, Allison; Dubey, Manvendra
Cloaking urban areas and wildfire zones, tiny smoke particles suspended in the atmosphere have a sizeable effect on our climate. But the exact effect of many of these aerosols, such as how much sunlight they absorb (warming Earth) or reflect back to space (cooling Earth), is very uncertain.
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. 
Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
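The core of the Bayesian forecasting procedure, stripped to one trait parameter, looks as follows. The linear impairment model, the grid posterior, and the noise level are all simplified stand-ins for the two-process model and PVT data; what carries over is the mechanism: each new measurement updates an individualized posterior, and predictions ahead are made from the current estimate.

```python
import random

random.seed(2)

# Toy trait parameter (e.g. a homeostatic build-up rate): performance
# impairment grows as y = a*t + noise with hours awake t; 'a' differs
# per individual and is unknown a priori.
true_a, noise_sd = 0.8, 1.0
grid = [i / 100 for i in range(1, 201)]      # candidate values of a
log_post = [0.0] * len(grid)                 # flat prior (log scale)

def map_estimate():
    # Maximum-a-posteriori value of the trait parameter so far.
    return grid[max(range(len(grid)), key=lambda i: log_post[i])]

# Bayesian forecasting: assimilate each new measurement with a Gaussian
# likelihood, sharpening the individualized posterior over time.
for t in range(1, 25):
    y = true_a * t + random.gauss(0, noise_sd)
    for i, a in enumerate(grid):
        log_post[i] += -((y - a * t) ** 2) / (2 * noise_sd ** 2)

a_hat = map_estimate()
forecast_24h = a_hat * (24 + 24)   # prediction 24 h past the last datum
```

As in the study, early predictions reflect mostly the population prior (here, flat), and the estimate converges to the individual's true trait as data accumulate, which is what shrinks the 95% confidence intervals.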
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
USDA-ARS?s Scientific Manuscript database
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
On Non-Linear Sensitivity of Marine Biological Models to Parameter Variations
2007-01-01
Robust H ∞ Control for Spacecraft Rendezvous with a Noncooperative Target
Wu, Shu-Nan; Zhou, Wen-Ya; Tan, Shu-Jun; Wu, Guo-Qiang
2013-01-01
The robust H ∞ control for spacecraft rendezvous with a noncooperative target is addressed in this paper. The relative motion of the chaser and the noncooperative target is first modeled as an uncertain system containing uncertain orbit parameters and mass. Then the H ∞ performance and finite-time performance are proposed, and a robust H ∞ controller is developed to drive the chaser to rendezvous with the noncooperative target in the presence of control input saturation, measurement error, and thrust error. Linear matrix inequality techniques are used to derive a sufficient condition for the proposed controller. An illustrative example is finally provided to demonstrate the effectiveness of the controller. PMID:24027446
Feedforward/feedback control synthesis for performance and robustness
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang
1990-01-01
Both feedforward and feedback control approaches for uncertain dynamical systems are investigated. The control design objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant modeling uncertainty. Preshaping of an ideal, time-optimal control input using a 'tapped-delay' filter is shown to provide a rapid maneuver with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. The proposed feedforward/feedback control approach is robust for a certain class of uncertain dynamical systems, since the control input command computed for a given desired output does not depend on the plant parameters.
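The tapped-delay preshaping idea can be sketched numerically. The single undamped mode, its frequency, and the simple two-tap (zero-vibration) shaper below are illustrative assumptions, not the paper's spacecraft model: splitting a step command into two half-steps separated by half the vibration period makes the two induced oscillations cancel, leaving almost no residual vibration.

```python
import math

def simulate(u, w=2.0, dt=0.001, t_end=10.0):
    # Undamped flexible mode x'' + w^2 * x = w^2 * u(t), integrated with
    # semi-implicit Euler; returns the residual oscillation amplitude
    # about the final setpoint.
    x, v = 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        a = w * w * (u(t) - x)
        v += a * dt
        x += v * dt
    return math.hypot(u(t_end) - x, v / w)

step = lambda t: 1.0 if t >= 0 else 0.0

# Two-tap "tapped-delay" shaper: half the command now, half delayed by
# half the vibration period (pi/w).
T_half = math.pi / 2.0            # half period for w = 2 rad/s
shaped = lambda t: 0.5 * step(t) + 0.5 * step(t - T_half)

res_step = simulate(step)         # raw step: large residual vibration
res_shaped = simulate(shaped)     # shaped step: near-zero residual
```

The raw step leaves an oscillation of amplitude about 1 around the setpoint, while the shaped command nearly eliminates it; robustness to modeling error then comes from adding more taps, which flattens the shaper's sensitivity to errors in the assumed frequency.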
Stability margin of linear systems with parameters described by fuzzy numbers.
Husek, Petr
2011-10-01
This paper deals with the linear systems with uncertain parameters described by fuzzy numbers. The problem of determining the stability margin of those systems with linear affine dependence of the coefficients of a characteristic polynomial on system parameters is studied. Fuzzy numbers describing the system parameters are allowed to be characterized by arbitrary nonsymmetric membership functions. An elegant solution, graphical in nature, based on generalization of the Tsypkin-Polyak plot is presented. The advantage of the presented approach over the classical robust concept is demonstrated on a control of the Fiat Dedra engine model and a control of the quarter car suspension model.
Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D
2009-11-01
While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphics cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
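The screening loop itself is simple; the expense lives inside the evaluation. The sketch below uses a hypothetical three-parameter search space and a cheap synthetic scoring function in place of the GPU/Cell-accelerated model evaluations: sample many random instantiations, score them, keep the top few for closer analysis.

```python
import random

random.seed(3)

# Hypothetical search space, loosely analogous to the model families in
# the abstract (layer counts, pooling sizes, normalization exponents).
SPACE = {
    "n_layers": [1, 2, 3],
    "pool_size": [3, 5, 7, 9],
    "norm_exp": [0.5, 1.0, 2.0],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SPACE.items()}

def screen_score(arch):
    # Stand-in for an expensive object-recognition evaluation; in the
    # real pipeline this is the hardware-accelerated model run.
    return (arch["n_layers"] * 0.2
            - abs(arch["pool_size"] - 5) * 0.05
            + (1.0 if arch["norm_exp"] == 1.0 else 0.0)
            + random.gauss(0, 0.01))          # evaluation noise

# High-throughput screening: evaluate many random instantiations and
# keep only the most promising for further analysis.
candidates = [sample_architecture() for _ in range(500)]
ranked = sorted(candidates, key=screen_score, reverse=True)
top5 = ranked[:5]
```

Because the synthetic score strongly rewards `norm_exp == 1.0`, the screen reliably surfaces that trait; in the real setting the analogous payoff is discovering which architectural "parts" matter without hand-tuning each one.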
Stochastic Ocean Predictions with Dynamically-Orthogonal Primitive Equations
NASA Astrophysics Data System (ADS)
Subramani, D. N.; Haley, P., Jr.; Lermusiaux, P. F. J.
2017-12-01
The coastal ocean is a prime example of multiscale nonlinear fluid dynamics. Ocean fields in such regions are complex and intermittent, with nonstationary heterogeneous statistics. Due to limited measurements, there are multiple sources of uncertainty, including the initial conditions, boundary conditions, forcing, parameters, and even the model parameterizations and equations themselves. For efficient and rigorous quantification and prediction of these uncertainties, the stochastic Dynamically Orthogonal (DO) PDEs for a primitive-equation ocean modeling system with a nonlinear free surface are derived, and numerical schemes for their space-time integration are obtained. Detailed numerical studies with idealized-to-realistic regional ocean dynamics are completed. These include consistency checks for the numerical schemes and comparisons with ensemble realizations. As an illustrative example, we simulate the 4-d multiscale uncertainty in the Middle Atlantic/New York Bight region during the months of Jan to Mar 2017. To provide initial conditions for the uncertainty subspace, uncertainties in the region were objectively analyzed using historical data. The DO primitive equations were subsequently integrated in space and time. The probability distribution function (pdf) of the ocean fields is compared to in-situ, remote sensing, and opportunity data collected during the coincident POSYDON experiment. Results show that our probabilistic predictions had skill and are 3 to 4 orders of magnitude faster than classic ensemble schemes.
Forecasting seasonal influenza with a state-space SIR model
Osthus, Dave; Hickmann, Kyle S.; Caragea, Petruţa C.; ...
2017-04-08
Seasonal influenza is a serious public health and societal problem due to its consequences resulting from absenteeism, hospitalizations, and deaths. The overall burden of influenza is captured by the Centers for Disease Control and Prevention’s influenza-like illness network, which provides invaluable information about the current incidence. This information is used to provide decision support regarding prevention and response efforts. Despite the relatively rich surveillance data and the recurrent nature of seasonal influenza, forecasting the timing and intensity of seasonal influenza in the U.S. remains challenging because the form of the disease transmission process is uncertain, the disease dynamics are only partially observed, and the public health observations are noisy. Fitting a probabilistic state-space model motivated by a deterministic mathematical model [a susceptible-infectious-recovered (SIR) model] is a promising approach for forecasting seasonal influenza while simultaneously accounting for multiple sources of uncertainty. A significant finding of this work is the importance of thoughtfully specifying the prior, as results critically depend on its specification. Our conditionally specified prior allows us to exploit known relationships between latent SIR initial conditions and parameters and functions of surveillance data. We demonstrate advantages of our approach relative to alternatives via a forecasting comparison using several forecast accuracy metrics.
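A stripped-down version of the state-space idea can be sketched as follows. The discrete-time SIR step, the noise level, and the grid posterior over the transmission rate are simplified assumptions (the paper's model is richer and fully Bayesian), but the structure is the same: a mechanistic SIR backbone generates latent dynamics, noisy surveillance observations weight candidate parameter values, and the posterior identifies the transmission rate.

```python
import random

random.seed(4)

def sir_step(s, i, beta, gamma=0.5):
    # One discrete-time step of the susceptible-infectious-recovered
    # model; returns updated (s, i) and the new-infection incidence.
    new_inf = beta * s * i
    return s - new_inf, i + new_inf - gamma * i, new_inf

# Synthetic "surveillance" data from a true but unknown transmission rate.
true_beta, obs_sd = 1.2, 0.002
s, i = 0.99, 0.01
obs = []
for _ in range(30):
    s, i, new_inf = sir_step(s, i, true_beta)
    obs.append(max(new_inf + random.gauss(0, obs_sd), 0.0))

# Grid posterior over beta: run the deterministic SIR for each candidate
# and score it by the Gaussian likelihood of the noisy observations.
grid = [0.6 + 0.002 * k for k in range(600)]       # beta in [0.6, 1.8)
loglik = []
for beta in grid:
    s, i, ll = 0.99, 0.01, 0.0
    for y in obs:
        s, i, new_inf = sir_step(s, i, beta)
        ll += -((y - new_inf) ** 2) / (2 * obs_sd ** 2)
    loglik.append(ll)
beta_hat = grid[max(range(len(grid)), key=lambda k: loglik[k])]
```

With informative mid-epidemic observations the likelihood is sharply peaked near the true rate, illustrating why the prior matters most early in a season, when the data alone constrain the latent parameters weakly.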
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
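The Monte Carlo plus regression-style sensitivity workflow can be sketched in miniature. The one-compartment flux formula and the lognormal uncertainty ranges below are hypothetical (the paper's model is transient and three-phase); the sketch shows the ranking step: sample the uncertain inputs, propagate them through the model, and rank inputs by how strongly they correlate with the output, a simple proxy for stepwise linear regression.

```python
import math
import random

random.seed(5)

def flux(D, K, L):
    # Hypothetical steady-state permeation flux through a skin layer,
    # J = D * K / L with unit donor concentration; a stand-in for the
    # paper's multiphase transport model.
    return D * K / L

# Monte Carlo over uncertain inputs: diffusivity D, partition
# coefficient K, and diffusion path length L (lognormal, with D given
# the widest uncertainty and L the narrowest).
n = 5_000
X = [(math.exp(random.gauss(0, 0.5)),
      math.exp(random.gauss(0, 0.2)),
      math.exp(random.gauss(0, 0.1))) for _ in range(n)]
logJ = [math.log(flux(*x)) for x in X]

def corr(xs, ys):
    # Pearson correlation coefficient.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

# Rank inputs by |correlation| of each log-input with log-flux.
sens = [abs(corr([math.log(x[c]) for x in X], logJ)) for c in range(3)]
# Expect the most uncertain input (D, index 0) to dominate the output
# uncertainty and the least uncertain (L, index 2) to matter least.
```

The ranking here is driven entirely by the input uncertainty widths because the toy model is log-linear; in the real transient model the rankings can change over time, which is why the paper repeats the analysis at different times.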
Robust control of seismically excited cable stayed bridges with MR dampers
NASA Astrophysics Data System (ADS)
YeganehFallah, Arash; Khajeh Ahamd Attari, Nader
2017-03-01
In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructure subjected to seismic and wind loads. However, in order to have reliable active and semi-active control, information about uncertainties must be included in the design of the controller. In the real world, parameters of civil structures such as loading locations, stiffness, mass and damping are time-variant and uncertain. These uncertainties are in many cases modeled as parametric uncertainties. The motivation of this research is to design a robust controller for attenuating the vibrational responses of civil infrastructure in the presence of dynamical uncertainties. Uncertainties in the structural dynamics parameters are modeled as affine uncertainties in the state-space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and are treated as unknown but norm-bounded inputs to the system. A robust H ∞ controller is designed for the decoupled system to regulate the evaluation outputs, and it is robust to the effects of uncertainties, disturbances and sensor noise. The cable-stayed bridge benchmark, which is equipped with MR dampers, is considered for the numerical simulation. The simulated results show that the proposed robust controller can effectively mitigate the undesired effects of uncertainties on the system's response under seismic loading.
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other eye-in-hand visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
NASA Astrophysics Data System (ADS)
Heimbach, P.; Bugnion, V.
2008-12-01
We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. MacAyeal (1992) introduced adjoints in the context of applying control theory to estimate basal sliding parameters (basal shear stress, basal friction) of an ice stream model which minimize a least-squares model vs. observation misfit. Since then, this method has become widespread for fitting ice stream models to the increasing number and diversity of satellite observations, and for estimating uncertain model parameters. However, no attempt has been made to extend this method to comprehensive ice sheet models. Here, we present a first step toward moving beyond limiting the use of control theory to ice stream models. We have generated an adjoint of the three-dimensional thermo-mechanical ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated using the automatic differentiation (AD) tool TAF. TAF generates exact source code representing the tangent linear and adjoint model of the parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic or "cost function" with respect to the controls, and can be efficiently calculated via the adjoint. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. To gain insight into the adjoint solutions, we explore various cost functions, such as local and domain-integrated ice temperature, total ice volume or the velocity of ice at the margins of the ice sheet. Elements of our control space include initial cold ice temperatures, surface mass balance, as well as parameters such as those appearing in Glen's flow law, or in the surface degree-day or basal sliding parameterizations.
Sensitivity maps provide a comprehensive view and allow quantification of where, and to which variables, the ice sheet model is most sensitive. The model used in the present study includes simplifications in the model physics and parameterizations that rely on uncertain empirical constants, and it is unable to capture fast ice streams. Nevertheless, as a proof of concept, this method can readily be extended to incorporate higher-order physics or parameterizations (or be applied to other models). It also opens the door to ice sheet state estimation: using the model's physics jointly with field and satellite observations to produce a best estimate of the state of the ice sheets.
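The adjoint sensitivity computation described above can be demonstrated on a toy scalar dynamical model; the recursion and cost function below are purely illustrative (nothing like SICOPOLIS), but they show the defining property of the adjoint: one reverse sweep yields the exact gradient of a scalar cost with respect to a parameter, verifiable against finite differences.

```python
import numpy as np

def forward(a, p, x0, T):
    # toy dynamics x_{t+1} = a*x_t + p, states recorded for t = 0..T
    xs = [x0]
    for _ in range(T):
        xs.append(a * xs[-1] + p)
    return np.array(xs)

def cost(a, p, x0, T):
    # scalar "cost function" J = 0.5 * sum_t x_t^2
    return 0.5 * np.sum(forward(a, p, x0, T) ** 2)

def adjoint_grad(a, p, x0, T):
    # reverse (adjoint) sweep: lam_t = x_t + a * lam_{t+1}; dJ/dp = sum_t lam_t
    xs = forward(a, p, x0, T)
    lam, dJdp = 0.0, 0.0
    for t in range(T, 0, -1):
        lam = xs[t] + a * lam
        dJdp += lam
    return dJdp

a, p, x0, T = 0.9, 0.3, 1.0, 20
g = adjoint_grad(a, p, x0, T)
fd = (cost(a, p + 1e-6, x0, T) - cost(a, p - 1e-6, x0, T)) / 2e-6
print(f"adjoint {g:.5f} vs finite difference {fd:.5f}")
```

AD tools such as TAF or OpenAD generate the reverse sweep mechanically from the forward code; the hand-written version above just makes the structure visible.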
While there is a high potential for exposure of humans and ecosystems to chemicals released from hazardous waste sites, the degree to which this potential is realized is often uncertain. Conceptually divided among parameter, model, and modeler uncertainties imparted during simula...
Adaptive proximate time-optimal servomechanisms - Continuous time case
NASA Technical Reports Server (NTRS)
Workman, M. L.; Kosut, R. L.; Franklin, G. F.
1987-01-01
A Proximate Time-Optimal Servo (PTOS) is developed, along with conditions for its stability. An algorithm is proposed for adapting the PTOS (APTOS) to improve performance in the face of uncertain plant parameters. Under ideal conditions APTOS is shown to be uniformly asymptotically stable. Simulation results demonstrate the predicted performance.
Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.
2013-08-01
Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
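A minimal version of the surrogate workflow can be sketched as follows, assuming a hypothetical two-input "fractal dimension" response and a quadratic response surface fitted by least squares (the paper's actual objective comes from high-fidelity fracturing simulations):

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):
    # stand-in for a high-fidelity fracturing simulation: a hypothetical
    # "fractal dimension" responding to two normalized design inputs
    return 1.3 + 0.4 * x[..., 0] - 0.2 * x[..., 1] ** 2 + 0.1 * x[..., 0] * x[..., 1]

X = rng.uniform(-1, 1, size=(200, 2))       # training designs
y = expensive_model(X)

# Quadratic response-surface surrogate fitted by least squares
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Validate the surrogate on held-out designs before trusting it in optimization
Xtest = rng.uniform(-1, 1, size=(50, 2))
err = np.max(np.abs(features(Xtest) @ coef - expensive_model(Xtest)))
print(f"max surrogate error: {err:.2e}")
```

Here the error is near machine precision only because the toy truth lies in the feature span; validating a surrogate against independent high-fidelity runs, as the abstract describes, is what justifies replacing the expensive model inside the optimizer.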
NASA Astrophysics Data System (ADS)
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. 
The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic parameters of the system.
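First-order Sobol' indices of the kind used in the study above can be estimated with the classical pick-freeze (Saltelli-style) scheme. The additive test function below is invented so that the exact indices (0.8 and 0.2) are known; it is not the pumping-test model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def model(x):
    # additive test function with known first-order indices S1 = 0.8, S2 = 0.2
    return 2.0 * x[:, 0] + 1.0 * x[:, 1]

A = rng.uniform(size=(n, 2))   # two independent input sample matrices
B = rng.uniform(size=(n, 2))
yA = model(A)
V = yA.var()

S = []
for i in range(2):
    C = B.copy()
    C[:, i] = A[:, i]          # "freeze" column i at the A sample
    S.append((np.mean(yA * model(C)) - yA.mean() ** 2) / V)

print(f"S1 ~ {S[0]:.2f}, S2 ~ {S[1]:.2f}")  # approx 0.80 and 0.20
```

The same estimator applied to drawdown or gravity-change outputs, with the flow parameters as inputs, yields the variance-based rankings reported in the abstract; higher-moment indices require additional estimators.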
NASA Astrophysics Data System (ADS)
Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.
2017-12-01
Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. 
We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter is mainly caused by the large-scale transport associated with the jet stream that carries the negative biogenic CO2 signals to the northeastern coast. We apply comprehensive statistics to eliminate outliers. We generate a set of flux perturbations based on pre-calibrated flux ensemble members and apply them to the simulations.
Sediment load from major rivers into Puget Sound and its adjacent waters
Czuba, Jonathan A.; Magirl, Christopher S.; Czuba, Christiana R.; Grossman, Eric E.; Curran, Christopher A.; Gendaszek, Andrew S.; Dinicola, Richard S.
2011-01-01
Each year, an estimated load of 6.5 million tons of sediment is transported by rivers to Puget Sound and its adjacent waters—enough to cover a football field to the height of six Space Needles. This estimated load is highly uncertain because sediment studies and available sediment-load data are sparse and historically limited to specific rivers, short time frames, and a narrow range of hydrologic conditions. The largest sediment loads are carried by rivers with glaciated volcanoes in their headwaters. Research suggests 70 percent of the sediment load delivered to Puget Sound is from rivers and 30 percent is from shoreline erosion, but the magnitude of specific contributions is highly uncertain. Most of a river's sediment load occurs during floods.
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
NASA Astrophysics Data System (ADS)
Azizi, S.; Torres, L. A. B.; Palhares, R. M.
2018-01-01
The regional robust stabilisation by means of linear time-invariant state feedback control for a class of uncertain MIMO nonlinear systems with parametric uncertainties and control input saturation is investigated. The nonlinear systems are described in a differential algebraic representation and the regional stability is handled considering the largest ellipsoidal domain-of-attraction (DOA) inside a given polytopic region in the state space. A novel set of sufficient Linear Matrix Inequality (LMI) conditions with new auxiliary decision variables are developed aiming to design less conservative linear state feedback controllers with corresponding larger DOAs, by considering the polytopic description of the saturated inputs. A few examples are presented showing favourable comparisons with recently published similar control design methodologies.
Damage identification using inverse methods.
Friswell, Michael I
2007-02-15
This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.
Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. As such, an adaptive fuzzy logic controller was developed in which model structure as well as parameter constraints are not required for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to the traditional PID controllers. Software has also been introduced to aid in the development of fuzzy logic rule-bases. The advantages of using fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base and that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that need to be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed requiring no system model or model structure. The rule-base is defined to approximate a state-feedback controller, while a second fuzzy logic algorithm varies the parameters of the defining controller on-line. Results indicate the approach is viable for on-line adaptive control of systems when the model is too complex or uncertain for application of other more classical control techniques.
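A minimal static fuzzy controller (triangular memberships, singleton outputs, centroid defuzzification) illustrates the rule-base idea referred to above; the membership breakpoints and output singletons are invented, and the sketch omits the adaptive layer that is the report's actual contribution:

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function with support [a, c] and peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_control(error):
    # three rules on the error signal (valid for error in [-2, 2]):
    # Negative -> push up, Zero -> hold, Positive -> push down
    mu = np.array([tri(error, -2, -1, 0),
                   tri(error, -1, 0, 1),
                   tri(error, 0, 1, 2)])
    u_singletons = np.array([1.0, 0.0, -1.0])
    # centroid defuzzification of the singleton rule outputs
    return float(mu @ u_singletons / mu.sum())

print(fuzzy_control(-0.5), fuzzy_control(0.0), fuzzy_control(0.5))  # -> 0.5 0.0 -0.5
```

An adaptive scheme of the kind the report describes would add a second rule set that adjusts the membership parameters or singletons on-line from closed-loop performance, with no plant model required.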
Towards quantifying uncertainty in Greenland's contribution to 21st century sea-level rise
NASA Astrophysics Data System (ADS)
Perego, M.; Tezaur, I.; Price, S. F.; Jakeman, J.; Eldred, M.; Salinger, A.; Hoffman, M. J.
2015-12-01
We present recent work towards developing a methodology for quantifying uncertainty in Greenland's 21st century contribution to sea-level rise. While we focus on uncertainties associated with the optimization and calibration of the basal sliding parameter field, the methodology is largely generic and could be applied to other (or multiple) sets of uncertain model parameter fields. The first step in the workflow is the solution of a large-scale, deterministic inverse problem, which minimizes the mismatch between observed and computed surface velocities by optimizing the two-dimensional coefficient field in a linear-friction sliding law. We then expand the deviation in this coefficient field from its estimated "mean" state using a reduced basis of Karhunen-Loeve Expansion (KLE) vectors. A Bayesian calibration is used to determine the optimal coefficient values for this expansion. The prior for the Bayesian calibration can be computed using the Hessian of the deterministic inversion or using an exponential covariance kernel. The posterior distribution is then obtained using Markov Chain Monte Carlo run on an emulator of the forward model. Finally, the uncertainty in the modeled sea-level rise is obtained by performing an ensemble of forward propagation runs. We present and discuss preliminary results obtained using a moderate-resolution model of the Greenland Ice sheet. As demonstrated in previous work, the primary difficulty in applying the complete workflow to realistic, high-resolution problems is that the effective dimension of the parameter space is very large.
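The Karhunen-Loeve machinery mentioned above can be sketched on a 1-D grid, assuming the exponential covariance kernel option (the paper's basal sliding coefficient field is two-dimensional, and the correlation length here is invented):

```python
import numpy as np

n, ell = 100, 0.3
x = np.linspace(0.0, 1.0, n)

# Exponential covariance kernel, one option the abstract mentions for the prior
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# KLE basis = eigenpairs of the covariance matrix, sorted by decreasing variance
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

k = 10                                   # truncation rank
energy = vals[:k].sum() / vals.sum()
print(f"variance captured by {k} KLE modes: {energy:.1%}")

# one realization of the random field from the truncated expansion
rng = np.random.default_rng(3)
field = vecs[:, :k] @ (np.sqrt(vals[:k]) * rng.standard_normal(k))
```

Truncating to the leading modes is what reduces the Bayesian calibration from a field-valued problem to a handful of KLE coefficients, though, as the abstract notes, the effective dimension can still be large at high resolution.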
Effective techniques for the identification and accommodation of disturbances
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1989-01-01
The successful control of dynamic systems such as space stations, or launch vehicles, requires a controller design methodology that acknowledges and addresses the disruptive effects caused by external and internal disturbances that inevitably act on such systems. These disturbances, technically defined as uncontrollable inputs, typically vary with time in an uncertain manner and usually cannot be directly measured in real time. A relatively new non-statistical technique for modeling, and (on-line) identification, of those complex uncertain disturbances that are not as erratic and capricious as random noise is described. This technique applies to multi-input cases and to many of the practical disturbances associated with the control of space stations, or launch vehicles. Then, a collection of smart controller design techniques is described that allows controlled dynamic systems, with possibly multi-input controls, to accommodate (cope with) such disturbances with extraordinary effectiveness. These new smart controllers are designed by non-statistical techniques and typically turn out to be unconventional forms of dynamic linear controllers (compensators) with constant coefficients. The simplicity and reliability of linear, constant coefficient controllers is well-known in the aerospace field.
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by the advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most studies use well-testing methods, which involve many unrealistic simplifications and assumptions. In addition, no systematic work has considered all relevant transport mechanisms, and there are very few sensitivity studies of uncertain parameters using realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual-porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms have been considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGRs' production performance with MsFHW has been conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters have been reviewed. Sensitivity analyses have been conducted to analyze the effect of the above factors on the production performance of SGRs.
Comparison shows that the hydraulic fracture parameters are more sensitive than the reservoir parameters. The reservoir parameters mainly affect the later production period, whereas the hydraulic fracture parameters have a significant effect on gas production from the early period onward. The results of this study can be used to improve the efficiency of the history-matching process, and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
Huang, Tingwen; Li, Chuandong; Duan, Shukai; Starzyk, Janusz A
2012-06-01
This paper focuses on the hybrid effects of parameter uncertainty, stochastic perturbation, and impulses on the global stability of delayed neural networks. By using the Itô formula, a Lyapunov function, and the Halanay inequality, we establish several mean-square stability criteria from which the feasible bounds of impulses can be estimated, provided that the parameter uncertainty and stochastic perturbations are well constrained. Moreover, the present method can also be applied to general differential systems with stochastic perturbation and impulses.
Research in space commercialization, technology transfer, and communications
NASA Technical Reports Server (NTRS)
1982-01-01
Research and internship programs in technology transfer, space commercialization, and information and communications policy are described. The intern's activities are reviewed. On-campus research involved work on the costs of conventional telephone technology in rural areas, an investigation of the lag between the start of a research and development project and the development of new technology, using NASA patent and patent waiver data, studies of the financial impact and economic prospects of a space operation center, a study of the accuracy of expert forecasts of uncertain quantities and a report on frequency coordination in the fixed and fixed satellite services at 4 and 6 GHz.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons: first, it allows updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; second, it involves a novel construction of the likelihood function. In general, the likelihood is predefined using a known pdf.
Here, it is constructed in a new and different manner, using the modeled system itself.
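The Bayes update of a uniform prior from fundamental-frequency data can be illustrated with a grid-based calculation; the linear tension-to-frequency map, noise level, and synthetic measurements below are invented for illustration and are not the paper's mechanical model or its likelihood construction:

```python
import numpy as np

rng = np.random.default_rng(4)

def f0(q):
    # hypothetical forward map: fundamental frequency (Hz) vs. tension parameter q
    return 100.0 + 50.0 * q

q_grid = np.linspace(0.0, 1.0, 1001)
prior = np.ones_like(q_grid)             # uniform prior on the tension parameter

# synthetic "measurements" around a true tension of 0.6
sigma = 2.0
obs = f0(0.6) + rng.normal(0.0, sigma, size=20)

# Gaussian log-likelihood on the grid; subtract the max for numerical stability
ll = (-0.5 * ((obs[:, None] - f0(q_grid)[None, :]) / sigma) ** 2).sum(axis=0)
post = prior * np.exp(ll - ll.max())
post /= post.sum() * (q_grid[1] - q_grid[0])   # normalize to a density

q_map = q_grid[np.argmax(post)]
print(f"posterior mode of the tension parameter: {q_map:.2f}")
```

The posterior concentrates near the true value even though the tension parameter is never measured directly, which is the essential point of the paper's indirect-updating strategy.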
V473 Lyr, a modulated, period-doubled Cepheid, and U TrA, a double-mode Cepheid, observed by MOST
NASA Astrophysics Data System (ADS)
Molnár, L.; Derekas, A.; Szabó, R.; Matthews, J. M.; Cameron, C.; Moffat, A. F. J.; Richardson, N. D.; Csák, B.; Dózsa, Á.; Reed, P.; Szabados, L.; Heathcote, B.; Bohlsen, T.; Cacella, P.; Luckas, P.; Sódor, Á.; Skarka, M.; Szabó, Gy. M.; Plachy, E.; Kovács, J.; Evans, N. R.; Kolenberg, K.; Collins, K. A.; Pepper, J.; Stassun, K. G.; Rodriguez, J. E.; Siverd, R. J.; Henden, A.; Mankiewicz, L.; Żarnecki, A. F.; Cwiek, A.; Sokolowski, M.; Pál, A.; Guenther, D. B.; Kuschnig, R.; Rowe, J.; Rucinski, S. M.; Sasselov, D.; Weiss, W. W.
2017-04-01
Space-based photometric measurements first revealed low-amplitude irregularities in the pulsations of Cepheid stars, but their origins and how commonly they occur remain uncertain. To investigate this phenomenon, we present MOST space telescope photometry of two Cepheids. V473 Lyrae is a second-overtone, strongly modulated Cepheid, while U Trianguli Australis is a Cepheid pulsating simultaneously in the fundamental mode and first overtone. The nearly continuous, high-precision photometry reveals alternations in the amplitudes of cycles in V473 Lyr, the first case of period doubling detected in a classical Cepheid. In U TrA, we tentatively identify one peak as the fX or 0.61-type mode often seen in conjunction with the first radial overtone in Cepheids, but given the short length of the data, we cannot rule out that it is a combination peak instead. Ground-based photometry and spectroscopy were obtained to follow two modulation cycles in V473 Lyr and to better specify its physical parameters. The simultaneous data yield the phase lag parameter (the phase difference between maxima in luminosity and radial velocity) of a second-overtone Cepheid for the first time. We find no evidence for a period change in U TrA or an energy exchange between the fundamental mode and the first overtone during the last 50 yr, contrary to earlier indications. Period doubling in V473 Lyr provides a strong argument that mode interactions do occur in some Cepheids and we may hypothesize that it could be behind the amplitude modulation, as recently proposed for Blazhko RR Lyrae stars.
3C 57 as an atypical radio-loud quasar: implications for the radio-loud/radio-quiet dichotomy
NASA Astrophysics Data System (ADS)
Sulentic, J. W.; Martínez-Carballo, M. A.; Marziani, P.; del Olmo, A.; Stirpe, G. M.; Zamfir, S.; Plauchu-Frayn, I.
2015-06-01
Lobe-dominated radio-loud (LD RL) quasars occupy a restricted domain in the 4D Eigenvector 1 (4DE1) parameter space which implies restricted geometry/physics/kinematics for this subclass compared to the radio-quiet (RQ) majority of quasars. We discuss how this restricted domain for the LD RL parent population supports the notion for a RQ-RL dichotomy among type 1 sources. 3C 57 is an atypical RL quasar that shows both uncertain radio morphology and falls in a region of 4DE1 space where RL quasars are rare. We present new radio flux and optical spectroscopic measures designed to verify its atypical optical/UV spectroscopic behaviour and clarify its radio structure. The former data confirms that 3C 57 falls off the 4DE1 quasar `main sequence' with both extreme optical Fe II emission (R_{Fe II} ˜ 1) and a large C IV λ1549 profile blueshift (˜-1500 km s-1). These parameter values are typical of extreme Population A sources which are almost always RQ. New radio measures show no evidence for flux change over a 50+ year time-scale consistent with compact steep-spectrum (or young LD) over core-dominated morphology. In the 4DE1 context where LD RL are usually low L/LEdd quasars, we suggest that 3C 57 is an evolved RL quasar (i.e. large blackhole mass) undergoing a major accretion event leading to a rejuvenation reflected by strong Fe II emission, perhaps indicating significant heavy metal enrichment, high bolometric luminosity for a low-redshift source and resultant unusually high Eddington ratio giving rise to the atypical C IV λ1549.
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.
2016-12-01
The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of weighted PCA combined with a gradient-based approach using automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
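As a rough illustration of the reduced-dimension idea (not the authors' geophysical setup), the sketch below uses kernel PCA to parameterize an ensemble of field realizations and then runs L-BFGS over the handful of feature-space coefficients; the ensemble, kernel settings, and misfit function are all placeholder assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.decomposition import KernelPCA

# Hypothetical ensemble of material-parameter field realizations (flattened 1-D grids)
rng = np.random.default_rng(0)
n_real, n_grid = 200, 64
ensemble = rng.normal(size=(n_real, n_grid)).cumsum(axis=1)  # smooth-ish random fields

# Kernel PCA furnishes a low-dimensional basis for the uncertain field
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.01, fit_inverse_transform=True)
kpca.fit(ensemble)

# Toy objective: misfit between a reconstructed field and a synthetic "observation"
target = ensemble[0]

def misfit(z):
    field = kpca.inverse_transform(z.reshape(1, -1))[0]
    return float(np.sum((field - target) ** 2))

# Optimize over 5 feature-space coefficients instead of 64 grid values
res = minimize(misfit, x0=np.zeros(5), method="L-BFGS-B")
```

In the paper's setting the misfit would instead be evaluated through the forward elasticity model, with gradients supplied by the adjoint method rather than finite differences.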
NASA Astrophysics Data System (ADS)
Bonev, Boncho P.; Hansen, Gary B.; Glenar, David A.; James, Philip B.; Bjorkman, Jon E.
2008-02-01
It is uncertain whether the residual (perennial) south polar cap on Mars is a transitory or a permanent feature in the current Martian climate. While there is no firm evidence for complete disappearance of the cap in the past, clearly observable changes have been documented. Observations suggest that the perennial cap lost more CO2 material in the spring/summer season prior to the Mariner 9 mission than in those same seasons monitored by Viking and Mars Global Surveyor. In this paper we examine one process that may contribute to these changes - the radiative effects of a planet encircling dust storm that starts during late Martian southern spring on the stability of the perennial south polar cap. To approach this, we model the radiative transfer through a dusty planetary atmosphere bounded by a sublimating CO2 surface. A critical parameter for this modeling is the surface albedo spectrum from the near-UV to the thermal-IR, which was determined from both spacecraft and Earth-based observations covering multiple wavelength regimes. Such a multi-wavelength approach is highly desirable since one spectral band by itself cannot tightly constrain the three-parameter space for polar surface albedo models, namely photon "scattering length" in the CO2 ice and the amounts of intermixed water and dust. Our results suggest that a planet-encircling dust storm with onset near solstice can affect the perennial cap's stability, leading to advanced sublimation in a "dusty" year. Since the total amount of solid CO2 removed by a single storm may be less than the total CO2 thickness, a series of dust storms would be required to remove the entire residual CO2 ice layer from the south perennial cap.
Tahoun, A H
2017-01-01
In this paper, the stabilization problem for uncertain chaotic systems subject to actuator saturation is investigated via an adaptive PID control method. The PID control parameters are auto-tuned adaptively via adaptive control laws. A multi-level augmented error is designed to account for the extra terms arising from the use of PID control and saturation. The proposed control technique uses both the state-feedback and the output-feedback methodologies. Based on Lyapunov's stability theory, new anti-windup adaptive controllers are proposed. Demonstrative examples with MATLAB simulations are studied. The simulation results show the efficiency of the proposed adaptive PID controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Hempler, Daniela; Schmidt, Martin U; van de Streek, Jacco
2017-08-01
More than 600 molecular crystal structures with correct, incorrect and uncertain space-group symmetry were energy-minimized with dispersion-corrected density functional theory (DFT-D, PBE-D3). For the purpose of determining the correct space-group symmetry, the required tolerance on the atomic coordinates of all non-H atoms is established to be 0.2 Å. For 98.5% of 200 molecular crystal structures published with missed symmetry, the correct space group is identified; there are no false positives. Very small, very symmetrical molecules can end up in artificially high space groups upon energy minimization, although this is easily detected through visual inspection. If the space group of a crystal structure determined from powder diffraction data is ambiguous, energy minimization with DFT-D provides a fast and reliable method to select the correct space group.
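The 0.2 Å tolerance criterion can be illustrated with a toy check (an assumption for illustration, not the paper's actual algorithm): a candidate symmetry operation is accepted only if it maps every non-H atom to within the tolerance of an equivalent atom.

```python
import numpy as np

TOL = 0.2  # angstrom, the coordinate tolerance established above

def symmetry_holds(coords, rot, trans, tol=TOL):
    """coords: (N, 3) Cartesian positions in angstrom (toy setting; a real
    check would work in fractional coordinates with periodic images)."""
    mapped = coords @ rot.T + trans
    # every mapped atom must land within tol of some original atom
    dists = np.linalg.norm(mapped[:, None, :] - coords[None, :, :], axis=-1)
    return bool(np.all(dists.min(axis=1) < tol))

# inversion through the origin holds for a centrosymmetric pair of atoms
pair = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
inversion = -np.eye(3)
print(symmetry_holds(pair, inversion, np.zeros(3)))        # True
# ...but fails once one atom is displaced by more than the tolerance
shifted = pair + np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0]])
print(symmetry_holds(shifted, inversion, np.zeros(3)))     # False
```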
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
In this paper, neurodynamic programming-based output feedback boundary control of distributed parameter systems governed by uncertain coupled semilinear parabolic partial differential equations (PDEs) under Neumann or Dirichlet boundary control conditions is introduced. First, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the original PDE domain and the optimal control policy is derived using the value functional as the solution of the HJB equation. Subsequently, a novel observer is developed to estimate the system states given the uncertain nonlinearity in the PDE dynamics and the measured outputs. Consequently, the suboptimal boundary control policy is obtained by forward-in-time estimation of the value functional using a neural network (NN)-based online approximator and the estimated state vector obtained from the NN observer. Novel adaptive tuning laws in continuous time are proposed for learning the value functional online to satisfy the HJB equation along system trajectories while ensuring closed-loop stability. Local uniform ultimate boundedness of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is verified via simulation on an unstable coupled diffusion-reaction process.
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
Probabilistic accounting of uncertainty in forecasts of species distributions under climate change
Seth J. Wenger; Nicholas A. Som; Daniel C. Dauwalter; Daniel J. Isaak; Helen M. Neville; Charles H. Luce; Jason B. Dunham; Michael K. Young; Kurt D. Fausch; Bruce E. Rieman
2013-01-01
Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing...
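A minimal sketch of the Monte Carlo idea, with made-up numbers: draw regression coefficients from their estimated sampling distribution and propagate each draw through the model, so the forecast is a distribution over occurrence probability rather than a point estimate. The coefficients, covariance, and covariate value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed fit of a logistic occurrence model: intercept and one climate covariate
beta_hat = np.array([0.5, -1.2])               # fitted coefficients
cov_beta = np.array([[0.04, -0.01],            # estimated coefficient covariance
                     [-0.01, 0.09]])
x_future = 1.8                                 # projected covariate under climate change

# parameter uncertainty: one coefficient draw per Monte Carlo replicate
draws = rng.multivariate_normal(beta_hat, cov_beta, size=5_000)
logits = draws[:, 0] + draws[:, 1] * x_future
p = 1.0 / (1.0 + np.exp(-logits))              # occurrence probability per draw

lo, hi = np.percentile(p, [2.5, 97.5])         # probabilistic forecast interval
```

The paper's approach layers further sources on top of this (residual error, uncertainty among competing models), each adding its own sampling step to the same loop.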
Incentive Control Strategies for Decision Problems with Parametric Uncertainties
NASA Astrophysics Data System (ADS)
Cansever, Derya H.
The central theme of this thesis is the design of incentive control policies in large scale systems with hierarchical decision structures, under the stipulation that the objective functionals of the agents at the lower level of the hierarchy are uncertain to the top-level controller (the leader). These uncertainties are modeled as a finite-dimensional parameter vector whose exact value constitutes private information to the relevant agent at the lower level. The approach we have adopted is to design incentive policies for the leader such that the dependence of the decision of the agents on the uncertain parameter is minimized. We have identified several classes of problems for which this approach is feasible. In particular, we have constructed policies whose performance is arbitrarily close to the solution of a version of the same problem that does not involve uncertainties. We have also shown that for a certain class of problems wherein the leader observes a linear combination of the agents' decisions, the leader can achieve the performance he would obtain if he had observed each decision separately.
Quantitative local analysis of nonlinear systems
NASA Astrophysics Data System (ADS)
Topcu, Ufuk
This thesis investigates quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We propose measures to quantify systems' robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations are used to translate Lyapunov-type characterizations to sum-of-squares optimization problems. These problems are typically bilinear/nonconvex (due to local rather than global analysis) and their size grows rapidly with state/uncertainty space dimension. Our approach is based on exploiting system theoretic interpretations of these optimization problems to reduce their complexity. We propose a methodology incorporating simulation data in formal proof construction, enabling more reliable and efficient search for robustness and performance certificates compared to the direct use of general purpose solvers. This technique is adapted both to region-of-attraction and reachability analysis. We extend the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent rather than parameter-dependent certificates. The conservatism is simply reduced by a branch-and-bound type refinement procedure. The main thrust of these methods is their suitability for parallel computing, achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrate proposed methods on several small/medium size examples in each chapter and apply each method to a benchmark example with an uncertain short period pitch axis model of an aircraft. Additional practical issues leading to a more rigorous basis for the proposed methodology as well as promising further research topics are also addressed.
We show that stability of linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalize an upper bound refinement procedure in local reachability/gain analysis which effectively generates non-polynomial certificates from polynomial ones. Finally, broader applicability of optimization-based tools stringently depends on the availability of scalable/hierarchical algorithms. As an initial step in this direction, we propose a local small-gain theorem and apply it to stability region analysis in the presence of unmodeled dynamics.
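The region-of-attraction idea can be sketched with a crude sampling stand-in (the thesis uses S-procedure/sum-of-squares certification, which requires an SOS solver): for the time-reversed Van der Pol oscillator, estimate the largest sublevel set of a quadratic Lyapunov function on which the Lyapunov derivative is negative. The Lyapunov matrix P below solves A'P + PA = -I for the linearization; everything else is an illustrative choice.

```python
import numpy as np

# Lyapunov matrix for the linearization A = [[0, -1], [1, -1]]: A'P + P A = -I
P = np.array([[1.5, -0.5], [-0.5, 1.0]])

rng = np.random.default_rng(5)
pts = rng.uniform(-3.0, 3.0, size=(200_000, 2))
x1, x2 = pts[:, 0], pts[:, 1]

# time-reversed Van der Pol vector field (origin is locally asymptotically stable)
F = np.column_stack([-x2, x1 + (x1 ** 2 - 1.0) * x2])

V = np.einsum("ni,ij,nj->n", pts, P, pts)        # V(x) = x' P x
Vdot = 2.0 * np.sum((pts @ P) * F, axis=1)       # derivative of V along trajectories

# the smallest V-level at which Vdot >= 0 occurs bounds the certifiable sublevel set
bad = V[(Vdot >= 0.0) & (V > 1e-6)]
c_est = bad.min() if bad.size else np.inf
```

An SOS formulation replaces the sampling step with a semidefinite program that certifies Vdot < 0 on the whole sublevel set, which is what makes the result a proof rather than an estimate.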
Valuing natural gas power generation assets in the new competitive marketplace
NASA Astrophysics Data System (ADS)
Hsu, Michael Chun-Wei
1999-10-01
The profitability of natural gas fired power plants depends critically on the spread between electricity and natural gas prices. The price levels of these two energy commodities are the key uncertain variables in determining the operating margin and therefore the value of a power plant. The owner of a generation unit has the decision of dispatching the plant only when profit margins are positive. This operating flexibility is a real option with real value. In this dissertation I introduce the spark spread call options and illustrate how such paper contracts replicate the uncertain payoff space facing power asset owners and, therefore, how the financial options framework can be applied in estimating the value of natural gas generation plants. The intrinsic value of gas power plants is approximated as the sum of a series of spark spread call options with succeeding maturity dates. The Black-Scholes spread option pricing model, with volatility and correlation term structure adjustments, is utilized to price the spark spread options. Sensitivity analysis is also performed on the BS spread option formulation to compare different asset types. In addition I explore the potential of using compound and compound-exchange option concepts to evaluate, respectively, the benefits of delaying investment in new generation and in repowering existing antiquated units. The compound option designates an option on top of another option. In this case the series of spark spread call options is the 'underlying' option while the option to delay new investments is the 'overlying.' The compound-exchange option characterizes the opportunity to 'exchange' the old power plant, with its series of spark spread call options, for a set of new spark spread call options that comes with the new generation unit. 
The strike price of the compound-exchange option is the repowering capital investment and typically includes the purchase of new steam generators and combustion turbines, as well as other facility upgrades. The pricing results using the proposed repowering option approach are compared to the sale prices from recent power plant auctions. Sensitivity of the repowering option model is also examined and the critical parameters identified.
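The single-period building block can be sketched with Kirk's approximation (used here as a stand-in for the dissertation's adjusted Black-Scholes treatment, with purely illustrative numbers): a spark spread call pays max(F_e - F_g - K, 0) on electricity and heat-rate-adjusted gas futures.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def kirk_spread_call(F_e, F_g, K, vol_e, vol_g, rho, T, r):
    """Kirk's approximation for a European call on the spread F_e - F_g - K
    of two lognormal futures prices (illustrative, not the author's exact model)."""
    N = NormalDist().cdf
    w = F_g / (F_g + K)                  # weight of the gas leg in the effective vol
    vol = sqrt(vol_e ** 2 - 2.0 * rho * vol_e * vol_g * w + (vol_g * w) ** 2)
    d1 = (log(F_e / (F_g + K)) + 0.5 * vol ** 2 * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return exp(-r * T) * (F_e * N(d1) - (F_g + K) * N(d2))

# electricity at $40/MWh vs. heat-rate-adjusted gas at $30/MWh, $5 other variable costs
value = kirk_spread_call(40.0, 30.0, 5.0, vol_e=0.5, vol_g=0.4, rho=0.6, T=0.5, r=0.05)
```

The plant's intrinsic value is then approximated, as described above, by summing such calls over a strip of successive delivery periods.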
A Place from where to Speak: The University and Academic Freedom
ERIC Educational Resources Information Center
Badley, Graham
2009-01-01
The university is promoted as "a place from where to speak". Academic freedom is examined as a crucial value in an increasingly uncertain age which resonates with Barnett's concern to encourage students to overcome their "fear of freedom". My concern is that the putative university space of freedom and autonomy may well become constricted by those…
ERIC Educational Resources Information Center
Reid, Hazel; West, Linden
2016-01-01
This paper explores the constraints to innovative, creative and reflexive careers counselling in an uncertain neo-liberal world. We draw on previously reported research into practitioners' use of a narrative model for career counselling interviews in England and a Europe-wide auto/biographical narrative study of non-traditional learners in…
Intelligent Control of Flexible-Joint Robotic Manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Gallegos, G.
1997-01-01
This paper considers the trajectory tracking problem for uncertain rigid-link, flexible-joint manipulators, and presents a new intelligent controller as a solution to this problem. The proposed control strategy is simple and computationally efficient, requires little information concerning either the manipulator or actuator/transmission models, and ensures uniform boundedness of all signals and arbitrarily accurate task-space trajectory tracking.
NASA Technical Reports Server (NTRS)
Geering, H. P.; Athans, M.
1973-01-01
A complete theory of necessary and sufficient conditions is discussed for a control to be superior with respect to a nonscalar-valued performance criterion. The latter maps into a finite dimensional, integrally closed, directed, partially ordered linear space. The applicability of the theory to the analysis of dynamic vector estimation problems and to a class of uncertain optimal control problems is demonstrated.
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
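A simplified Monte Carlo sketch of the effect described (using an exponential-power statistic for a fully scattered signal and ad hoc lognormal parameter uncertainties, not the paper's model): drawing the mean powers afresh for each trial fattens the tails of both hypothesis distributions and degrades the low-false-alarm end of the ROC curve.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# uncertain mean powers: each trial gets its own draw (ad hoc lognormal spread)
noise_pow = rng.lognormal(mean=0.0, sigma=0.5, size=n)
signal_pow = rng.lognormal(mean=0.7, sigma=0.5, size=n)

# exponentially distributed power is the classic model for a fully scattered
# (Rayleigh-fading) signal; H0 is noise only, H1 is noise plus signal
h0 = rng.exponential(noise_pow)
h1 = rng.exponential(noise_pow + signal_pow)

targets = np.array([1e-3, 1e-2, 1e-1])            # design false-alarm rates
thresholds = np.quantile(h0, 1.0 - targets)
pfa = np.array([(h0 > t).mean() for t in thresholds])
pd = np.array([(h1 > t).mean() for t in thresholds])
```

Repeating the experiment with only a small sample of (noise_pow, signal_pow) environmental states, instead of fresh draws per trial, reproduces the misleading-assessment effect the paper warns about.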
NASA Astrophysics Data System (ADS)
Chen, Liang-Ming; Lv, Yue-Yong; Li, Chuan-Jiang; Ma, Guang-Fu
2016-12-01
In this paper, we investigate cooperatively surrounding control (CSC) of multi-agent systems modeled by Euler-Lagrange (EL) equations under a directed graph. With the consideration of the uncertain dynamics in an EL system, a backstepping CSC algorithm combined with neural-networks is proposed first such that the agents can move cooperatively to surround the stationary target. Then, a command filtered backstepping CSC algorithm is further proposed to deal with the constraints on control input and the absence of neighbors’ velocity information. Numerical examples of eight satellites surrounding one space target illustrate the effectiveness of the theoretical results. Project supported by the National Basic Research Program of China (Grant No. 2012CB720000) and the National Natural Science Foundation of China (Grant Nos. 61304005 and 61403103).
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
NASA Astrophysics Data System (ADS)
Davis, A. D.
2015-12-01
The marine ice sheet instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. 
We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
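The basic Bayesian workflow described above can be sketched in a few lines (scalar toy models in place of ice-sheet physics, with all numbers illustrative): infer an uncertain parameter from noisy observations by random-walk Metropolis-Hastings, then push the posterior samples through a predictive model to get a distribution over the QoI.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "observations" of an uncertain scalar parameter
theta_true, sigma = 2.0, 0.3
data = theta_true + rng.normal(0.0, sigma, size=20)

def log_post(theta):
    # Gaussian likelihood with a flat prior on theta
    return -0.5 * np.sum((data - theta) ** 2) / sigma ** 2

# random-walk Metropolis-Hastings over the parameter
samples, theta = [], 0.0
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    samples.append(theta)

# push post-burn-in posterior samples through a (toy) predictive model -> QoI
qoi = np.square(samples[5_000:])
```

The paper's contribution sits exactly where this sketch is weakest: each likelihood and QoI evaluation there requires an expensive forward model, motivating the online approximation of the QoI target density.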
Siade, Adam J.; Nishikawa, Tracy; Martin, Peter
2015-01-01
Groundwater has provided 50–90 % of the total water supply in Antelope Valley, California (USA). The associated groundwater-level declines have led the Los Angeles County Superior Court of California to recently rule that the Antelope Valley groundwater basin is in overdraft, i.e., annual pumpage exceeds annual recharge. Natural recharge consists primarily of mountain-front recharge and is an important component of the total groundwater budget in Antelope Valley. Therefore, natural recharge plays a major role in the Court’s decision. The exact quantity and distribution of natural recharge is uncertain, with total estimates from previous studies ranging from 37 to 200 gigaliters per year (GL/year). In order to better understand the uncertainty associated with natural recharge and to provide a tool for groundwater management, a numerical model of groundwater flow and land subsidence was developed. The transient model was calibrated using PEST with water-level and subsidence data; prior information was incorporated through the use of Tikhonov regularization. The calibrated estimate of natural recharge was 36 GL/year, which is appreciably less than the value used by the court (74 GL/year). The effect of parameter uncertainty on the estimation of natural recharge was addressed using the Null-Space Monte Carlo method. A Pareto trade-off method was also used to portray the reasonableness of larger natural recharge rates. The reasonableness of the 74 GL/year value and the effect of uncertain pumpage rates were also evaluated. The uncertainty analyses indicate that the total natural recharge likely ranges between 34.5 and 54.3 GL/year.
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states were reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Israr, E-mail: iak-2000plus@yahoo.com; Saaban, Azizan Bin, E-mail: azizan.s@uum.edu.my; Ibrahim, Adyda Binti, E-mail: adyda@uum.edu.my
This paper addresses a comparative computational study on the synchronization quality, cost and converging speed for two pairs of identical chaotic and hyperchaotic systems with unknown time-varying parameters. It is assumed that the unknown time-varying parameters are bounded. Based on the Lyapunov stability theory and using the adaptive control method, a single proportional controller is proposed to achieve the goal of complete synchronization. Accordingly, appropriate adaptive laws are designed to identify the unknown time-varying parameters. The designed control strategy is easy to implement in practice. Numerical simulation results are provided to verify the effectiveness of the proposed synchronization scheme.
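The flavor of such a scheme can be sketched for two Lorenz systems in which the drive's sigma parameter is unknown to the response; the gains, the adaptive law, and the use of full-state proportional feedback below are illustrative choices, not the paper's design.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # drive-system parameters
K, GAMMA = 30.0, 2.0                        # ad hoc feedback and adaptation gains

def rhs(t, s):
    x, y, z, xs, ys, zs, sig_hat = s
    # drive (master) Lorenz system; SIGMA is unknown to the response
    dx, dy, dz = SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z
    e1, e2, e3 = xs - x, ys - y, zs - z
    # response (slave) system with proportional control u = -K * e
    dxs = sig_hat * (ys - xs) - K * e1
    dys = xs * (RHO - zs) - ys - K * e2
    dzs = xs * ys - BETA * zs - K * e3
    dsig = -GAMMA * e1 * (ys - xs)          # adaptive law for the sigma estimate
    return [dx, dy, dz, dxs, dys, dzs, dsig]

sol = solve_ivp(rhs, (0.0, 50.0), [1, 1, 1, -5, 4, 9, 0.0], rtol=1e-8, atol=1e-8)
final_err = np.abs(sol.y[3:6, -1] - sol.y[0:3, -1])   # synchronization error
```

The adaptive law is chosen so that the cross term it contributes to a Lyapunov function of the error and parameter estimate cancels exactly, which is the standard trick behind such update-law designs.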
NASA Astrophysics Data System (ADS)
Catinari, Federico; Pierdicca, Alessio; Clementi, Francesco; Lenci, Stefano
2017-11-01
The results of an ambient-vibration-based investigation conducted on the "Palazzo del Podesta" in Montelupone (Italy) are presented. The case study was damaged during the 2016 Italian earthquakes that struck central Italy. The assessment procedure includes full-scale ambient vibration testing, modal identification from ambient vibration responses, finite element modeling, and dynamic-based identification of the uncertain structural parameters of the model. A very good match between theoretical and experimental modal parameters was reached, and the model updating was performed by identifying some structural parameters.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
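A toy version of the nested procedure (entirely illustrative; the paper's application is a finite-element bimetallic beam): an inner optimization computes the optimum for a given design variable d and parameter p, and the outer optimization picks d to trade off the finite-difference sensitivity of that optimum against a nominal-performance target.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

P0, H = 1.0, 1e-4                       # nominal parameter value and FD step

def inner_opt(d, p):
    """Inner problem: optimal objective for design d and parameter p.
    Analytically, min_x (x - p)^2 + d*x^2 = d*p^2/(1 + d)."""
    return minimize_scalar(lambda x: (x - p) ** 2 + d * x ** 2).fun

def outer_obj(dv):
    d = dv[0]
    # sensitivity of the optimum to the uncertain parameter, by central differences
    sens = (inner_opt(d, P0 + H) - inner_opt(d, P0 - H)) / (2.0 * H)
    # trade sensitivity against a nominal-performance target of 0.4
    return abs(sens) + 5.0 * (inner_opt(d, P0) - 0.4) ** 2

res = minimize(outer_obj, x0=[1.0], bounds=[(0.0, 10.0)])
```

For this toy the trade-off has a closed-form optimum at d = 0.25; the paper instead obtains the optimum sensitivity derivatives from the Lagrange multipliers of a sequential quadratic programming inner solve, avoiding the finite differences used here.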
Application of control theory to dynamic systems simulation
NASA Technical Reports Server (NTRS)
Auslander, D. M.; Spear, R. C.; Young, G. E.
1982-01-01
Control theory is applied to dynamic systems simulation. Theory and methodology applicable to controlled ecological life support systems are considered. Spatial effects on system stability, design of control systems with uncertain parameters, and an interactive computing language (PARASOL-II) designed for dynamic system simulation, report quality graphics, data acquisition, and simple real time control are discussed.
Turbulence Characteristics of Swirling Flowfields. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Jackson, T. W.
1983-01-01
Combustor design phenomena; recirculating flows research; single-wire, six-orientation, eddy dissipation rate, and turbulence modeling measurement; directional sensitivity (DS); calibration equipment, confined jet facility, and hot-wire instrumentation; effects of swirl, strong contraction nozzle, and expansion ratio; turbulence parameter uncertainties; and DS in laminar jets, turbulent nonswirling jets, and turbulent swirling jets are discussed.
NASA Technical Reports Server (NTRS)
Hsia, Wei Shen
1989-01-01
A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher order control plant was addressed. The computer program was improved for the maximum entropy principle adopted in Hyland's MEOP method. The program was tested against the testing problem. It resulted in a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed in order to obtain the required projection matrix, the program was successful up to the finding of the Linear Quadratic Gaussian solution but not beyond. Numerical results along with computer programs which employed ORACLS are presented.
Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...
2013-02-23
We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in the predicted ignition time, outlining the role of correlations and considering both accuracy and computational efficiency.
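The effect of such parameter correlations can be sketched with a toy Monte Carlo propagation. The means, variances, and correlation below are invented for illustration and are not taken from the paper's hydrocarbon mechanism:

```python
import numpy as np

# Sketch: propagate correlated uncertainty in Arrhenius parameters
# (ln A, E) to the rate constant k(T) = A * exp(-E / (R * T)).
# All numbers here are illustrative assumptions.
R = 8.314  # J/(mol K)

def sample_rate_constants(T, n=20000, rho=0.9, seed=0):
    """Monte Carlo samples of k(T) with correlated (ln A, E)."""
    rng = np.random.default_rng(seed)
    mean = np.array([np.log(1.0e10), 1.0e5])   # ln A, E (illustrative)
    sd = np.array([0.5, 5.0e3])
    cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    lnA, E = rng.multivariate_normal(mean, cov, size=n).T
    return np.exp(lnA - E / (R * T))

# Strong positive (ln A, E) correlation -- the usual "compensation
# effect" produced by fitting -- narrows the spread of k near the
# fitting temperature compared with independent draws.
k_corr = sample_rate_constants(1000.0, rho=0.9)
k_ind = sample_rate_constants(1000.0, rho=0.0)
print(np.std(np.log(k_corr)) < np.std(np.log(k_ind)))
```

The same mechanism explains why ignoring fitted-parameter correlations can badly overstate the uncertainty of predicted quantities such as ignition time.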
Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks.
Arunkumar, A; Sakthivel, R; Mathiyalagan, K; Park, Ju H
2014-07-01
This paper focuses on the issue of robust stochastic stability for a class of uncertain fuzzy Markovian jumping discrete-time neural networks (FMJDNNs) with various activation functions and mixed time delay. By employing the Lyapunov technique and the linear matrix inequality (LMI) approach, a new set of delay-dependent sufficient conditions is established for the robust stochastic stability of uncertain FMJDNNs. More precisely, the parameter uncertainties are assumed to be time varying, unknown and norm bounded. The obtained stability conditions are established in terms of LMIs, which can be easily checked using the efficient MATLAB LMI toolbox. Finally, numerical examples with simulation results are provided to illustrate the effectiveness and reduced conservativeness of the obtained results. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Wang, Leimin; Shen, Yi; Sheng, Yin
2016-04-01
This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Control of linear uncertain systems utilizing mismatched state observers
NASA Technical Reports Server (NTRS)
Goldstein, B.
1972-01-01
The control of linear continuous dynamical systems is investigated as a problem of limited state feedback control. The equations which describe the structure of an observer are developed, constrained to time-invariant systems. The optimal control problem is formulated, accounting for the uncertainty in the design parameters. Expressions for bounds on closed-loop stability are also developed. The results indicate that very little uncertainty may be tolerated before divergence occurs in the recursive computation algorithms, and the derived stability bound yields extremely conservative estimates of the regions of allowable parameter variations.
Energy management and attitude control for spacecraft
NASA Astrophysics Data System (ADS)
Costic, Bret Thomas
2001-07-01
This PhD dissertation describes the design and implementation of various control strategies centered around spacecraft applications: (i) an attitude control system for spacecraft, (ii) flywheels used for combined attitude and energy tracking, and (iii) an adaptive autobalancing control algorithm. The theory found in each of these sections is demonstrated through simulation or experimental results. An introduction to each of these three primary chapters can be found in chapter one. The main problem addressed in the second chapter is the quaternion-based, attitude tracking control of rigid spacecraft without angular velocity measurements and in the presence of an unknown inertia matrix. As a stepping-stone, an adaptive, full-state feedback controller that compensates for parametric uncertainty while ensuring asymptotic attitude tracking errors is designed. The adaptive, full-state feedback controller is then redesigned such that the need for angular velocity measurements is eliminated. The proposed adaptive, output feedback controller ensures asymptotic attitude tracking. This work uses a four-parameter representation of the spacecraft attitude that does not exhibit singular orientations as in the case of the previous three-parameter representation-based results. To the best of my knowledge, this represents the first solution to the adaptive, output feedback, attitude tracking control problem for the quaternion representation. Simulation results are included to illustrate the performance of the proposed output feedback control strategy. The third chapter is devoted to the use of multiple flywheels that integrate the energy storage and attitude control functions in space vehicles. This concept, which is referred to as an Integrated Energy Management and Attitude Control (IEMAC) system, reduces the space vehicle bus mass, volume, cost, and maintenance requirements while maintaining or improving the space vehicle performance. 
To this end, two nonlinear IEMAC strategies (model-based and adaptive) that simultaneously track a desired attitude trajectory and desired energy/power profile are presented. Both strategies ensure asymptotic tracking while the adaptive controller compensates for uncertain spacecraft inertia. In the final chapter, a control strategy is designed for a rotating, unbalanced disk. The control strategy, which is composed of a control torque and two control forces, regulates the disk displacement and ensures angular velocity tracking. The controller uses a desired compensation adaptation law and a gain adjusted forgetting factor to achieve exponential stability despite the lack of knowledge of the imbalance-related parameters, provided a mild persistency of excitation condition is satisfied.
Crew collaboration in space: a naturalistic decision-making perspective
NASA Technical Reports Server (NTRS)
Orasanu, Judith
2005-01-01
Successful long-duration space missions will depend on the ability of crewmembers to respond promptly and effectively to unanticipated problems that arise under highly stressful conditions. Naturalistic decision making (NDM) exploits the knowledge and experience of decision makers in meaningful work domains, especially complex sociotechnical systems, including aviation and space. Decision making in these ambiguous, dynamic, high-risk environments is a complex task that involves defining the nature of the problem and crafting a response to achieve one's goals. Goal conflicts, time pressures, and uncertain outcomes may further complicate the process. This paper reviews theory and research pertaining to the NDM model and traces some of the implications for space crews and other groups that perform meaningful work in extreme environments. It concludes with specific recommendations for preparing exploration crews to use NDM effectively.
Unpacking the Terms of Engagement with Local Food at the Farmers' Market: Insights from Ontario
ERIC Educational Resources Information Center
Smithers, John; Lamarche, Jeremy; Joseph, Alun E.
2008-01-01
Amidst much discussion of the values and venues of local food, the Farmers' Market (FM) has emerged as an important, but somewhat uncertain, site of engagement for producers, consumers and local food "champions". Despite the evident certainty of various operational rules, the FM should be seen as a complex and ambiguous space where…
NASA Astrophysics Data System (ADS)
Miyakita, Takeshi; Hatakenaka, Ryuta; Sugita, Hiroyuki; Saitoh, Masanori; Hirai, Tomoyuki
2014-11-01
For conventional Multi-Layer Insulation (MLI) blankets, it is difficult to control the layer density, and the thermal insulation performance degrades due to the increase in conductive heat leak through interlayer contacts. At low temperatures, the proportion of conductive heat transfer through MLI blankets is large compared to that of radiative heat transfer, hence the decline in thermal insulation performance is significant. A new type of MLI blanket using new spacers, the Non-Interlayer-Contact Spacer MLI (NICS MLI), has been developed. This new MLI blanket uses small discrete spacers and can exclude uncertain interlayer contact between films. It is made of polyetheretherketone (PEEK), making it suitable for space use. The cross-sectional area to length ratio of the spacer is 1.0 × 10⁻⁵ m, with a 10 mm diameter and 4 mm height. The insulation performance is measured with a boil-off calorimeter. Because the NICS MLI blanket can exclude uncertain interlayer contact, the test results showed good agreement with estimations. Furthermore, the NICS MLI blanket shows significantly good insulation performance (effective emissivity of 0.0046 at ordinary temperature), particularly at low temperatures, due to the high thermal resistance of this spacer.
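The radiation-only limit that a non-contact spacer design aims for can be sketched with the standard parallel-plate shield formula; the boundary temperatures, shield count, and film emissivity below are illustrative assumptions, not the paper's test conditions:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def mli_radiative_flux(t_hot, t_cold, n_shields, emissivity):
    """Ideal radiation-only heat flux through n floating shields of
    equal emissivity between parallel plates (no interlayer contact,
    i.e. the limit a non-contact spacer design approaches)."""
    resistance = (n_shields + 1) * (2.0 / emissivity - 1.0)
    return SIGMA * (t_hot**4 - t_cold**4) / resistance

def effective_emissivity(t_hot, t_cold, n_shields, emissivity):
    """Effective emissivity: actual flux over black-body flux."""
    q = mli_radiative_flux(t_hot, t_cold, n_shields, emissivity)
    return q / (SIGMA * (t_hot**4 - t_cold**4))

# Illustrative: 20 shields of emissivity 0.03 between 300 K and 100 K.
print(round(effective_emissivity(300.0, 100.0, 20, 0.03), 6))
```

Adding shields lowers the effective emissivity roughly as 1/(N+1); interlayer conduction is what breaks this ideal scaling in real blankets.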
Bifurcation Analysis Using Rigorous Branch and Bound Methods
NASA Technical Reports Server (NTRS)
Smith, Andrew P.; Crespo, Luis G.; Munoz, Cesar A.; Lowenberg, Mark H.
2014-01-01
For the study of nonlinear dynamic systems, it is important to locate the equilibria and bifurcations occurring within a specified computational domain. This paper proposes a new approach for solving these problems and compares it to the numerical continuation method. The new approach is based upon branch and bound and utilizes rigorous enclosure techniques to yield outer bounding sets of both the equilibrium and local bifurcation manifolds. These sets, which comprise the union of hyper-rectangles, can be made to be as tight as desired. Sufficient conditions for the existence of equilibrium and bifurcation points taking the form of algebraic inequality constraints in the state-parameter space are used to calculate their enclosures directly. The enclosures for the bifurcation sets can be computed independently of the equilibrium manifold, and are guaranteed to contain all solutions within the computational domain. A further advantage of this method is the ability to compute a near-maximally sized hyper-rectangle of high dimension centered at a fixed parameter-state point whose elements are guaranteed to exclude all bifurcation points. This hyper-rectangle, which requires a global description of the bifurcation manifold within the computational domain, cannot be obtained otherwise. A test case, based on the dynamics of a UAV subject to uncertain center of gravity location, is used to illustrate the efficacy of the method by comparing it with numerical continuation and to evaluate its computational complexity.
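The enclosure idea can be illustrated on a one-dimensional toy problem. This is a minimal sketch, not the paper's algorithm: naive interval arithmetic discards boxes that provably contain no equilibrium and bisects the rest, so the retained boxes form a guaranteed outer bound of the zero set.

```python
# Branch and bound with naive interval arithmetic to outer-bound the
# equilibria of dx/dt = f(x) = x**3 - x on a computational domain.

def f_range(lo, hi):
    """Crude interval enclosure of f(x) = x**3 - x on [lo, hi]."""
    cubes = sorted((lo**3, hi**3))   # x**3 is monotone
    lins = sorted((lo, hi))
    return cubes[0] - lins[1], cubes[1] - lins[0]

def enclose_zeros(lo, hi, tol=1e-3):
    """Return boxes of width <= tol whose union contains every zero."""
    boxes, stack = [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        fl, fh = f_range(a, b)
        if fl > 0 or fh < 0:
            continue                 # 0 not in enclosure: discard box
        if b - a <= tol:
            boxes.append((a, b))     # small enough: keep as enclosure
        else:
            m = 0.5 * (a + b)
            stack.extend([(a, m), (m, b)])
    return boxes

boxes = enclose_zeros(-2.0, 2.0)
# Every true equilibrium (-1, 0, 1) lies inside some retained box.
for root in (-1.0, 0.0, 1.0):
    print(any(a <= root <= b for a, b in boxes))
```

The same discard/bisect logic, applied to the algebraic inequality conditions for bifurcation points in state-parameter space, yields the hyper-rectangle enclosures described in the abstract.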
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.; Christiansen, Eric L.; Fleming, Michael L.
1990-01-01
A great deal of experimentation and analysis was performed to quantify penetration thresholds of components which will experience orbital debris impacts. Penetration was found to depend upon mission specific parameters such as orbital altitude, inclination, and orientation of the component; and upon component specific parameters such as material, density and the geometry particular to its shielding. Experimental results are highly dependent upon shield configuration and cannot be extrapolated with confidence to alternate shield configurations. Also, current experimental capabilities are limited to velocities which only approach the lower limit of predicted orbital debris velocities. Therefore, prediction of the penetrating particle size for a particular component having a complex geometry remains highly uncertain. An approach is described which was developed to assess on-orbit survivability of the solar dynamic radiator due to micrometeoroid and space debris impacts. Preliminary analyses are presented to quantify the solar dynamic radiator survivability, and include the type of particle and particle population expected to defeat the radiator bumpering (i.e., penetrate a fluid flow tube). Results of preliminary hypervelocity impact testing performed on radiator panel samples (in the 6 to 7 km/sec velocity range) are also presented. Plans for further analyses and testing are discussed. These efforts are expected to lead to a radiator design which will perform to requirements over the expected lifetime.
Approximation Set of the Interval Set in Pawlak's Space
Wang, Jin; Wang, Guoyin
2014-01-01
The interval set is a special set which describes an uncertain concept or set Z with two crisp boundaries, named the upper-bound set and the lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined first, and then the similarity degrees between an interval set and its two approximations (i.e., the upper approximation set R¯(Z) and the lower approximation set R_(Z)) are presented, respectively. The disadvantages of using the upper approximation set R¯(Z) or the lower approximation set R_(Z) as the approximation set of the uncertain set (uncertain concept) Z are analyzed, and a new method for finding a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R0.5(Z) is an optimal approximation set of the interval set Z is drawn and proved. The change rules of R0.5(Z) under different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
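A toy sketch of the ingredients may help. It uses one common "majority" reading of the 0.5-approximation (take each equivalence block that is more than half inside Z) and a Jaccard-style similarity degree; the paper's exact definitions may differ.

```python
# Pawlak approximations of a set Z under an equivalence partition,
# plus a "majority" 0.5-approximation and a Jaccard-style similarity.

def approximations(partition, z):
    """Lower/upper approximations and the 0.5-approximation of z."""
    lower, upper, half = set(), set(), set()
    for block in partition:
        inter = block & z
        if inter:
            upper |= block           # block meets z
        if block <= z:
            lower |= block           # block entirely inside z
        if len(inter) / len(block) > 0.5:
            half |= block            # majority of block inside z
    return lower, upper, half

def similarity(a, b):
    """Jaccard-style similarity degree |A∩B| / |A∪B| (1.0 if both empty)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

partition = [frozenset({1, 2}), frozenset({3, 4, 5}),
             frozenset({6, 7}), frozenset({8})]
z = {1, 2, 3, 4, 6}
lower, upper, half = approximations(partition, z)
print(sorted(lower), sorted(upper), sorted(half))
print(round(similarity(half, z), 3))
```

Here the lower approximation misses elements of Z while the upper approximation over-covers it; the 0.5-approximation sits between the two.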
NASA Astrophysics Data System (ADS)
Juesas, P.; Ramasso, E.
2016-12-01
Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is correctly estimating the parameters of those methods from time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMMs) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.
NASA Astrophysics Data System (ADS)
Yu, Wenwu; Cao, Jinde
2007-09-01
Parameter identification of dynamical systems from time series has received increasing interest due to its wide applications in secure communication, pattern recognition, neural networks, and so on. Given the driving system, parameters can be estimated from the time series by using an adaptive control algorithm. Recently, it has been reported that the parameters of some stable systems are difficult to identify [Li et al., Phys. Lett. A 333, 269-270 (2004); Remark 5 in Yu and Cao, Physica A 375, 467-482 (2007); and Li et al., Chaos 17, 038101 (2007)]. In this paper, the question of whether parameters can be identified from time series is investigated. Through detailed analyses, the reasons why the parameters of stable systems can hardly be estimated are discussed. Some interesting examples are presented to verify the proposed analysis.
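The adaptive identification idea can be sketched on a scalar example; this is a standard Lyapunov-based gradient estimator, not the specific schemes analyzed in the paper, and the plant, gains, and step size are invented for illustration. The persistently exciting sinusoidal drive is what makes the parameter identifiable here; for a system settling to a fixed point the regressor vanishes and the estimate stalls, which is the failure mode the paper discusses.

```python
import math

# Identify the parameter a of dx/dt = -a*x + sin(t) from the measured
# state, using a series-parallel observer plus an adaptive update law.
def identify(a_true=2.0, gamma=5.0, k=2.0, dt=0.002, steps=100000):
    x, x_hat, a_hat = 0.5, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        u = math.sin(t)
        e = x - x_hat                       # observation error
        x_dot = -a_true * x + u             # "measured" plant
        x_hat_dot = -a_hat * x + u + k * e  # observer, driven by e
        a_hat += dt * (-gamma * e * x)      # Lyapunov-based update law
        x += dt * x_dot                     # explicit Euler step
        x_hat += dt * x_hat_dot
    return a_hat

a_est = identify()
print(f"estimated a = {a_est:.3f} (true value 2.0)")
```

With the Lyapunov function V = e²/2 + (a - â)²/(2γ), the chosen update law gives dV/dt = -k e² ≤ 0, and persistent excitation then drives â to the true parameter.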
Assessing risk based on uncertain avalanche activity patterns
NASA Astrophysics Data System (ADS)
Zeidler, Antonia; Fromm, Reinhard
2015-04-01
Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructures, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) and the effect on avalanche activity we assume that there will be a change of the risk pattern in future. The decision makers need to understand what the future might bring to best formulate their mitigation strategies. Therefore, we explore a commercial risk software to calculate risk for the coming years that might help in decision processes. The software @risk, is known to many larger companies, and therefore we explore its capabilities to include avalanche risk simulations in order to guarantee a comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected costs for repairing the object and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables, it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or not available the software allows to select from over 30 different distribution types. 
The Monte Carlo simulation draws from the probability distributions of the uncertain variables, using all valid combinations of input values to simulate the range of possible outcomes. In our case the output is the expected risk (Euro/year) for each object (e.g., water intake) considered and for the entire hydropower generation system. The output is again a distribution, to be interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation, we show how we used uncertain information on future avalanche activity in a commercial risk software, thereby bringing the knowledge of natural hazard experts to decision makers.
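The structure of such a Monte Carlo risk run can be sketched in a few lines; every distribution and cost figure below is invented for illustration and does not reproduce the study's @RISK model.

```python
import random

# Expected annual loss for one object, combining an uncertain avalanche
# hit probability, vulnerability, repair cost, and interruption cost.
def simulate_annual_loss(n=50000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p_hit = rng.betavariate(2, 18)         # uncertain hit probability
        vulnerability = rng.uniform(0.2, 0.8)  # fraction of value damaged
        repair = rng.lognormvariate(12, 0.5)   # Euro, repair cost
        interruption = rng.uniform(1e4, 1e5)   # Euro, downtime cost
        hit = rng.random() < p_hit             # does an avalanche strike?
        total += hit * (vulnerability * repair + interruption)
    return total / n                           # Euro/year, expected risk

print(f"expected risk: {simulate_annual_loss():,.0f} Euro/year")
```

In practice one would keep the full distribution of simulated losses rather than only its mean, since decision makers may weigh tail outcomes differently under the likely, worst-case, and baseline climate scenarios.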
Reason, emotion and decision-making: risk and reward computation with feeling.
Quartz, Steven R
2009-05-01
Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown what specific parameters of uncertain decision the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.
Orbiting space debris: Dangers, measurement and mitigation
NASA Astrophysics Data System (ADS)
McNutt, Ross T.
1992-06-01
Space debris is a growing environmental problem. Accumulation of objects in earth orbit threatens space systems through the possibility of collisions and runaway debris multiplication. The amount of debris in orbit is uncertain due to the lack of information on the population of debris between 1 and 10 centimeters diameter. Collisions with debris even smaller than 1 cm can be catastrophic due to the high orbital velocities involved. Research efforts are under way at NASA, United States Space Command and the Air Force Phillips Laboratory to detect and catalog the debris population in near-earth space. Current international and national laws are inadequate to control the proliferation of space debris. Space debris is a serious problem with large economic, military, technical and diplomatic components. Actions need to be taken now to: determine the full extent of the orbital debris problem; accurately predict the future evolution of the debris population; decide the extent of the debris mitigation procedures required; implement these policies on a global basis via an international treaty. Action must be initiated now, before the loss of critical space systems such as the space shuttle or the space station.
Quantifying the value of information for uncertainty reduction in chemical EOR modeling
NASA Astrophysics Data System (ADS)
Leray, Sarah; Yeates, Christopher; Douarche, Frédéric; Roggero, Frédéric
2016-04-01
Reservoir modeling is a powerful tool to assess the technical and economic feasibility of chemical Enhanced Oil Recovery (EOR) methods such as the joint injection of surfactant and polymer. Laboratory recovery experiments are usually undertaken on cores to understand recovery mechanisms and to estimate properties that will later be used to build large-scale models. To capture the different processes involved in chemical EOR, models are described by a large number of parameters which are only partially constrained by recovery experiments and additional characterizations, mainly because of cost and time restrictions or limited representativeness. Among the most uncertain properties is surfactant adsorption, which cannot be straightforwardly derived from bulk or simplified dynamic measurements (e.g., single-phase dynamic adsorption experiments) yet is critical for the economics of the process. Identifying the most informative observations (e.g., saturation scans, pressure differential, surfactant production, oil recovery) is of primary interest to compensate for deficient characterizations and to improve model robustness and predictive capability. Building a consistent set of recovery experiments that captures the recovery mechanisms is critical as well. To address these inverse-methodology issues, we create a synthetic numerical model with a well-defined set of parameter values, considered to be our reference case. This choice of model is based on a similar real data set and a broad literature review. It consists of a water-wet sandstone subject to typical surfactant-polymer injections. We first study the effect of a salinity gradient injected after a surfactant-polymer slug, as it is known to significantly improve oil recovery. We show that reaching optimal conditions of salinity gradient is a fragile balance between surfactant desorption and interfacial tension increase.
This high dependence on surfactant adsorption properties indicates that two recovery tests, with and without a salinity gradient, are of great interest for model inversion and characterization of surfactant adsorption. Second, we analyze our capacity to recover the reference model using an assisted history matching method to reproduce a set of synthetic core-scale experiments. To do so, we use the reference model over five configurations of chemicals injection to provide baseline recovery data. Then, we consider some uncertainty on model parameters, regarding surfactant adsorption properties amongst others, leading to a total of twelve uncertain parameters. Finally, we extensively explore the parameter space to find several reasonable matches. We show that an additional sixth recovery experiment is necessary to fully constrain the model, and specifically to characterize surfactant adsorption. We also show that production data are not equally informative: the pressure differential, for instance, is the least informative, while a saturation scan at the end of the polymer post-flush can greatly help the inversion. The inverse methodology carried out here has also been successfully tested with a real set of coreflood experiments.
Yuan Fang; Ge Sun; Peter Caldwell; Steven G. McNulty; Asko Noormets; Jean-Christophe Domec; John King; Zhiqiang Zhang; Xudong Zhang; Guanghui Lin; Guangsheng Zhou; Jingfeng Xiao; Jiquan Chen
2015-01-01
Evapotranspiration (ET) is arguably the most uncertain ecohydrologic variable for quantifying watershed water budgets. Although numerous ET and hydrological models exist, accurately predicting the effects of global change on water use and availability remains challenging because of model deficiency and/or a lack of input parameters. The objective of this study was to...
Challenges of Developing Design Discharge Estimates with Uncertain Data and Information
NASA Astrophysics Data System (ADS)
Senarath, S. U. S.
2016-12-01
This study focuses on design discharge estimates obtained for gauged basins through flood flow frequency analysis. Bulletin 17B (B17B) guidelines are widely used in the USA for developing these design estimates, which are required for many water resources engineering design applications. A set of outlier and historical data, and distribution parameter selection options is included in these guidelines. These options are provided in the guidelines as a means of accounting for uncertain data and information, primarily in the flow record. The individual as well as the cumulative effects of each of these preferences on design discharge estimates are evaluated in this study by using data from several gauges that are part of the United States Geological Survey's Hydro-Climatic Data Network. The results of this study show that despite the availability of rigorous and detailed guidelines for flood frequency analysis, the design discharge estimates can still vary substantially, from user to user, based on data and model parameter selection options chosen by each user. Thus, the findings of this study have strong implications for water resources engineers and other professionals who use B17B-based design discharge estimates in their work.
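The core of the B17B computation the study varies can be sketched as a log-Pearson Type III fit to annual peaks; the real guidelines add the outlier tests, historical-data weighting, and regional-skew options that are exactly the user choices examined above. The peak-flow record below is invented, and the Wilson-Hilferty frequency-factor approximation is used in place of the guidelines' K tables.

```python
import math
from statistics import NormalDist

def lp3_design_discharge(peaks, return_period):
    """Design discharge from a log-Pearson Type III fit (sketch)."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    m = sum(logs) / n                                   # mean of logs
    s = math.sqrt(sum((x - m) ** 2 for x in logs) / (n - 1))
    g = (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in logs)
    z = NormalDist().inv_cdf(1.0 - 1.0 / return_period)  # standard normal
    if abs(g) < 1e-9:
        k = z                                           # zero-skew limit
    else:                                               # Wilson-Hilferty
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)

peaks = [310, 450, 520, 280, 760, 390, 610, 335, 905, 470,
         540, 620, 298, 415, 702, 380, 495, 860, 330, 580]  # m^3/s
q2 = lp3_design_discharge(peaks, 2)
q100 = lp3_design_discharge(peaks, 100)
print(f"Q2 = {q2:.0f} m^3/s, Q100 = {q100:.0f} m^3/s")
```

Because the skew coefficient g is estimated from a short record, small changes to outlier handling or skew weighting shift K and hence the design discharge, which is the user-to-user variability the study quantifies.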
Zuo, Shan; Song, Y D; Wang, Lei; Song, Qing-wang
2013-01-01
Offshore floating wind turbines (OFWTs) have gained increasing attention during the past decade because of high-quality offshore wind power and the complex load environment. The control system is a tradeoff between power tracking and fatigue load reduction in the above-rated wind speed region. To address the external disturbances and uncertain system parameters of OFWTs, due to the proximity to load centers and strong wave coupling, this paper proposes a computationally inexpensive robust adaptive control approach with memory-based compensation for blade pitch control. The method is tested and compared with a baseline controller and a conventional individual blade pitch controller, with the "NREL offshore 5 MW baseline wind turbine" mounted on a barge platform run in FAST and Matlab/Simulink, operating in the above-rated condition. It is shown that the advanced control approach is not only robust to complex wind and wave disturbances but also adaptive to varying and uncertain system parameters. The simulation results demonstrate that the proposed method performs better in reducing power fluctuations, fatigue loads and platform vibration as compared to conventional individual blade pitch control.
Uncertainty Quantification for Polynomial Systems via Bernstein Expansions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper presents a unifying framework to uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties as well as the possibility of under predicting the range of the statistic of interest that may result from searching for the best- and worstcase epistemic values via nonlinear optimization or sampling.
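The basic enclosure property the framework builds on can be shown in a few lines: converting a polynomial's power coefficients to Bernstein coefficients over [0, 1] yields guaranteed range bounds, since the minimum and maximum Bernstein coefficients bracket the polynomial on that box. The example polynomial is invented for illustration.

```python
from math import comb

def bernstein_coeffs(a):
    """Power-basis coefficients a[j] of p(x) = sum_j a[j] * x**j,
    converted to Bernstein coefficients of degree len(a) - 1 on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

# p(x) = 4x^2 - 4x + 1 = (2x - 1)^2, whose true range on [0, 1] is [0, 1].
b = bernstein_coeffs([1.0, -4.0, 4.0])
lo, hi = min(b), max(b)
print(lo, hi)  # guaranteed enclosure: lo <= p(x) <= hi on [0, 1]
```

The enclosure [lo, hi] here is [-1, 1], a valid but loose bound on the true range [0, 1]; subdivision of the box or degree elevation tightens it to any desired accuracy, which is the refinement the paper exploits.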
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition provides stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ren, Diandong; Karoly, David J.
2008-03-01
Observations from seven Central Asian glaciers (35-55°N, 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of the reference states of the forcing and of the glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate the glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ~4 km). Although some small glaciers will lose nearly one-third of their year-2000 length, the existence of the glaciers studied here is not threatened by 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
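A first-order linear glacier length model of the kind described above can be sketched as a single relaxation equation; the time scale, climate sensitivity, and forcing below are invented coefficients, not the calibrated values from the study.

```python
# Length anomaly L' relaxes toward an equilibrium set by the
# temperature anomaly T' with response time tau (first-order dynamics):
#   dL'/dt = (sensitivity * T' - L') / tau
def glacier_length(temps, length0, tau=30.0, sensitivity=-2000.0, dt=1.0):
    """Integrate the linear response model with yearly Euler steps."""
    length = length0
    history = []
    for t_anom in temps:
        equilibrium = sensitivity * t_anom  # m of length change per K
        length += dt * (equilibrium - length) / tau
        history.append(length)
    return history

# 60 years of a steady +1.5 K anomaly applied to a balanced glacier:
record = glacier_length([1.5] * 60, length0=0.0)
print(round(record[-1]))  # approaches the equilibrium sensitivity * 1.5
```

Gentle-slope glaciers correspond to large sensitivity and long response times in such a model, which is why they show the largest projected retreats.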
Zuo, Shan; Song, Y. D.; Wang, Lei; Song, Qing-wang
2013-01-01
Offshore floating wind turbines (OFWTs) have gained increasing attention during the past decade because of the high-quality offshore wind power and the complex load environment. The control system is a tradeoff between power tracking and fatigue load reduction in the above-rated wind speed region. To address the external disturbances and uncertain system parameters of OFWTs arising from the proximity to load centers and strong wave coupling, this paper proposes a computationally inexpensive robust adaptive control approach with memory-based compensation for blade pitch control. The method is tested and compared with a baseline controller and a conventional individual blade pitch controller, with the “NREL offshore 5 MW baseline wind turbine” mounted on a barge platform, run in FAST and Matlab/Simulink in the above-rated condition. It is shown that the advanced control approach is not only robust to complex wind and wave disturbances but also adaptive to varying and uncertain system parameters. The simulation results demonstrate that the proposed method performs better in reducing power fluctuations, fatigue loads and platform vibration than conventional individual blade pitch control. PMID:24453834
Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas
NASA Astrophysics Data System (ADS)
Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.
In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of simulated to observed flood extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum DEM resolution required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and the monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore this study helps to identify the weak points of the flood risk and damage risk assessment procedure.
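The Monte Carlo step described above can be sketched in a few lines: perturb the DEM with an assumed vertical error and count, per cell, how often the water surface exceeds the perturbed elevation. All values below (grid size, error magnitude, water level) are illustrative assumptions, not FLOODMAP's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 50 x 50 DEM of a small floodplain (elevations in m a.s.l.)
dem = rng.uniform(49.0, 52.0, size=(50, 50))
water_level = 50.5        # planar water-surface elevation (m), assumed
dem_sigma = 0.3           # assumed vertical DEM error (1-sigma, m)
n_realizations = 500

# Accumulate how often each cell is inundated across DEM realizations
flood_count = np.zeros(dem.shape)
for _ in range(n_realizations):
    realization = dem + rng.normal(0.0, dem_sigma, size=dem.shape)
    flood_count += (realization < water_level)

flood_probability = flood_count / n_realizations  # per-cell flooding probability
```

Cells whose elevation is close to the water level end up with intermediate probabilities, which is exactly where DEM quality dominates the flood-extent uncertainty.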
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the issue of adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and semi-global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.
Developing and Fielding Information Dominance
2002-01-01
Space and Naval Warfare Systems Command's IT-21 Blocks 1 and 2, 2002 Command and Control Research and... While future force levels were uncertain, the necessary role of information dominance in maintaining strategic superiority was not. Platform Centric Warfare, with its
Modal-space reference-model-tracking fuzzy control of earthquake excited structures
NASA Astrophysics Data System (ADS)
Park, Kwan-Soon; Ok, Seung-Yong
2015-01-01
This paper describes an adaptive modal-space reference-model-tracking fuzzy control technique for the vibration control of earthquake-excited structures. In the proposed approach, fuzzy logic is introduced to update the optimal control force so that the controlled structural response can track the desired response of a reference model. For easy and practical implementation, the reference model is constructed by assigning target damping ratios to the first few dominant modes in modal space. The numerical simulation results demonstrate that the proposed approach achieves not only adaptive fault tolerance against partial actuator failures but also robust performance against variations of the uncertain system properties, by redistributing the feedback control forces to the available actuators.
Almost output regulation of LFT systems via gain-scheduling control
NASA Astrophysics Data System (ADS)
Yuan, Chengzhi; Duan, Chang; Wu, Fen
2018-05-01
Output regulation of general uncertain systems is a meaningful yet challenging problem. In spite of the rich literature in the field, the problem has not yet been addressed adequately, due to the lack of an effective design mechanism. In this paper, we propose a new design framework for almost output regulation of uncertain systems described in the general form of linear fractional transformation (LFT) with time-varying parametric uncertainties and unknown external perturbations. A novel semi-LFT gain-scheduling output regulator structure is proposed, such that the associated control synthesis conditions guaranteeing both output regulation and disturbance attenuation performance are formulated as a set of linear matrix inequalities (LMIs) plus parameter-dependent linear matrix equations, which can be solved separately. A numerical example demonstrates the effectiveness of the proposed approach.
Stability of uncertain systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Blankenship, G. L.
1971-01-01
The asymptotic properties of feedback systems containing uncertain parameters and subjected to stochastic perturbations are discussed. The approach is functional analytic in flavor and thereby avoids the use of Markov techniques and auxiliary Lyapunov functionals characteristic of the existing work in this area. The results are given for the probability distributions of the accessible signals in the system and are proved using the Prohorov theory of the convergence of measures. For general nonlinear systems, a result similar to the small loop-gain theorem of deterministic stability theory is given. Boundedness is a property of the induced distributions of the signals, not the usual notion of boundedness in norm. For the special class of feedback systems formed by the cascade of a white noise, a sector nonlinearity, and a convolution operator, conditions are given to ensure the total boundedness of the overall feedback system.
Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang
2017-07-01
This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.
Ao, Wei; Song, Yongdong; Wen, Changyun
2017-05-01
In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures and errors caused by quantization, and a range of the parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that a uniformly ultimately bounded output tracking error is guaranteed by the controller, and the signals of the closed-loop system are ensured to be bounded, even in the presence of at most m − q actuators being stuck or out of service. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial value of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
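The final downhill-simplex step can be sketched with SciPy's Nelder-Mead implementation. The toy objective (an RMSE skill score for a two-parameter linear "model") and the starting point are assumptions for illustration, not the study's actual metric or parameter set.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations and a toy two-parameter "model": y = a * x + b
obs = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, 2.0, 3.0])

def objective(params):
    """RMSE skill score: lower is better (stand-in for the evaluation metric)."""
    a, b = params
    return np.sqrt(np.mean((a * x + b - obs) ** 2))

# Downhill simplex (Nelder-Mead) started from an assumed initial value;
# in the paper this initial value comes from the preceding sensitivity steps.
result = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
a_opt, b_opt = result.x   # should converge toward a = 1, b = 0
```

Because the simplex method is derivative-free, each iteration costs only a handful of model evaluations, which is what makes it viable for expensive GCM tuning.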
Effects of uncertain topographic input data on two-dimensional flow modeling in a gravel-bed river
Legleiter, C.J.; Kyriakidis, P.C.; McDonald, R.R.; Nelson, J.M.
2011-01-01
Many applications in river research and management rely upon two-dimensional (2D) numerical models to characterize flow fields, assess habitat conditions, and evaluate channel stability. Predictions from such models are potentially highly uncertain due to the uncertainty associated with the topographic data provided as input. This study used a spatial stochastic simulation strategy to examine the effects of topographic uncertainty on flow modeling. Many, equally likely bed elevation realizations for a simple meander bend were generated and propagated through a typical 2D model to produce distributions of water-surface elevation, depth, velocity, and boundary shear stress at each node of the model's computational grid. Ensemble summary statistics were used to characterize the uncertainty associated with these predictions and to examine the spatial structure of this uncertainty in relation to channel morphology. Simulations conditioned to different data configurations indicated that model predictions became increasingly uncertain as the spacing between surveyed cross sections increased. Model sensitivity to topographic uncertainty was greater for base flow conditions than for a higher, subbankfull flow (75% of bankfull discharge). The degree of sensitivity also varied spatially throughout the bend, with the greatest uncertainty occurring over the point bar where the flow field was influenced by topographic steering effects. Uncertain topography can therefore introduce significant uncertainty to analyses of habitat suitability and bed mobility based on flow model output. In the presence of such uncertainty, the results of these studies are most appropriately represented in probabilistic terms using distributions of model predictions derived from a series of topographic realizations. Copyright 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Young, D. A.; Grima, C.; Greenbaum, J. S.; Beem, L.; Cavitte, M. G.; Quartini, E.; Kempf, S. D.; Roberts, J. L.; Siegert, M. J.; Ritz, C.; Blankenship, D. D.
2017-12-01
Over the last twenty-five years, extensive ice penetrating radar (IPR) coverage of Antarctica has been obtained, at line spacings down to 1 km in some cases. However, many glacial processes occur at finer scales, so inferring likely landscape parameters is required for a useful interpolation between lines. Profile roughness is also important for understanding the uncertainties inherent in IPR observations, and subglacial roughness has been used to infer large-scale bedrock properties and history. Similar work has been conducted on a regional basis with compilations of data from the 1970's and more recent local studies. Here we present a compilation of IPR-derived profile roughness data covering three great basins of Antarctica: the Byrd Subglacial Basin in West Antarctica, and the Wilkes Subglacial Basin and Aurora Subglacial Basin in East Antarctica; and treat these data using the root mean squared deviation (RMSD). Coverage is provided by a range of IPR systems of varying vintages, with differing instrument and processing parameters; we present approaches to account for the differences between these systems. We use RMSD, a tool commonly used in planetary investigations, to investigate the self-affine behaviour of the bed at kilometer scales and extract fractal parameters from the data to predict roughness and uncertainties in ice thickness measurement. Lastly, we apply a sensor model to a range of bare-earth terrestrial digital elevation models to further understand the impact of the sensor model on the inference of subglacial topography and roughness, and to draw first-order analogies for the lithology of the substrate. This map of roughness, at scales between the pulse-limited radar footprint and typical line spacings, provides an understanding of the distribution of Paleogene subglacial sediments, insight into the distribution of uncertainties, and a potential basal properties mask for ice sheet models.
A particular goal of this map is to provide insight into required IPR coverage needs for site selection for old ice and subglacial samples for subglacial access systems like US-RAID and SUBGLACIOR.
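The RMSD-versus-lag analysis of a self-affine bed profile can be sketched as follows. The synthetic Brownian-walk profile and the lag set are assumptions for illustration; for a true self-affine profile, RMSD(L) ~ L^H, so the fractal (Hurst) exponent H is the slope of log(RMSD) against log(lag).

```python
import numpy as np

def rmsd_vs_lag(profile, lags):
    """RMS deviation of a bed-elevation profile as a function of lag (in samples)."""
    return np.array([np.sqrt(np.mean((profile[lag:] - profile[:-lag]) ** 2))
                     for lag in lags])

rng = np.random.default_rng(1)
# Synthetic stand-in profile: a Brownian walk, whose true Hurst exponent is 0.5
profile = np.cumsum(rng.normal(0.0, 1.0, 4000))
lags = np.array([1, 2, 4, 8, 16, 32])
nu = rmsd_vs_lag(profile, lags)

# Fractal parameter: slope of the log-log relation (should be near 0.5 here)
hurst, log_prefactor = np.polyfit(np.log(lags), np.log(nu), 1)
```

Extrapolating this power law below the line spacing is what lets profile roughness predict interpolation and ice-thickness uncertainties at unsampled scales.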
Colorado River basin sensitivity to disturbance impacts
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.
2017-12-01
The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates that consider the climate impacts of this warming are often provided using modeling tools that rely on uncertain inputs; to fully understand the impacts on streamflow, sensitivity analysis can help determine how models respond under changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. Newly identified parameter sensitivities include runoff/evapotranspiration sensitivity to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, 1 °C of warming was equivalent to a 30% loss of forest cover, while a 10% precipitation loss was equivalent to a 90% decline in forest cover.
Scenarios utilizing multiple disturbances led to unexpected results where changes could either magnify or diminish extremes, such as low and peak flows and streamflow timing, dependent on the strength and direction of the forcing. These results indicate the importance of understanding model sensitivities under disturbance impacts to manage these shifts; plan for future water resource changes and determine how the impacts will affect the sustainability and adaptability of food-energy-water systems.
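The space-filling Latin Hypercube step above can be sketched with SciPy's quasi-Monte Carlo module. The parameter names and ranges below are illustrative assumptions, not the study's actual VIC configuration; the point is that each parameter's range is stratified into n equal slices, with exactly one sample per slice.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical VIC-like uncertain parameters (names and ranges assumed)
param_names = ["albedo", "canopy_fraction", "lai"]
lower = np.array([0.1, 0.2, 0.5])
upper = np.array([0.9, 0.9, 6.0])

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_sample = sampler.random(n=100)            # 100 points in [0, 1)^3
sample = qmc.scale(unit_sample, lower, upper)  # rescale to parameter ranges
# Each row of `sample` is one parameter set to run through the emulator/model
```

Compared with plain Monte Carlo, the stratification guarantees even marginal coverage of every parameter, which is why far fewer runs are needed to train a statistical emulator.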
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often hamper model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow systematic calibration of model parameters are unfortunately often not applicable to state-of-the-art climate models, due to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations is needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model that has undergone expert tuning, the calibration yields similar optimal model configurations while achieving an additional reduction of the model error.
The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions
König, Sandra; Schauer, Stefan
2016-01-01
Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572
Fast computation of the multivariable stability margin for real interrelated uncertain parameters
NASA Technical Reports Server (NTRS)
Sideris, Athanasios; Sanchez Pena, Ricardo S.
1988-01-01
A novel algorithm is proposed for computing the multivariable stability margin used to check the robust stability of feedback systems with real parametric uncertainty. This method eliminates the need for the frequency search involved in an existing algorithm by reducing the problem to checking a finite number of conditions. These conditions have a special structure, which allows a significant improvement in the speed of computation.
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Labute, M.; Chowdhary, K.; Debusschere, B.; Cameron-Smith, P. J.
2014-12-01
Simulating the atmospheric cycles of ozone, methane, and other radiatively important trace gases in global climate models is computationally demanding and requires the use of hundreds of photochemical parameters with uncertain values. Quantitative analysis of the effects of these uncertainties on tracer distributions, radiative forcing, and other model responses is hindered by the "curse of dimensionality." We describe efforts to overcome this curse using ensemble simulations and advanced statistical methods. Uncertainties from 95 photochemical parameters in the trop-MOZART scheme were sampled using a Monte Carlo method and propagated through 10,000 simulations of the single-column version of the Community Atmosphere Model (CAM). The variance of the ensemble was represented as a network with nodes and edges, and the topology and connections in the network were analyzed using lasso regression, Bayesian compressive sensing, and centrality measures from the field of social network theory. Despite the limited sample size for this high-dimensional problem, our methods determined the key sources of variation and co-variation in the ensemble and identified important clusters in the network topology. Our results can be used to better understand the flow of photochemical uncertainty in simulations using CAM and other climate models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC).
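The lasso-regression step for attributing ensemble variance to parameters can be sketched with a minimal iterative soft-thresholding (ISTA) solver. This toy ensemble (200 runs, 20 parameters, response driven by two of them) is an assumption for illustration, not the CAM ensemble or the study's actual toolchain.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimal lasso (0.5*||y - X b||^2 + lam*||b||_1) solved by iterative
    soft-thresholding (ISTA); illustrative, not a production solver."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    return beta

rng = np.random.default_rng(7)
# Hypothetical ensemble: 200 runs over 20 uncertain parameters, with the
# response actually driven by parameters 0 and 4 (all values assumed)
X = rng.normal(size=(200, 20))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=200)
beta = lasso_ista(X, y, lam=3.0)
# Large |beta[j]| flags parameter j as a key source of ensemble variance
```

The L1 penalty drives the coefficients of irrelevant parameters to exactly zero, which is what makes lasso usable when the parameter count approaches the ensemble size.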
Successful Surgical Stabilization of Rib Fractures Despite Candida Colonization of the Mediastinum.
Ju, Tammy; Rivas, Lisbi; Sarani, Babak
2018-04-06
Pleural space or chest wall infection is a contraindication for surgical stabilization of rib fractures (SSRF) due to the risk of hardware infection. However, the exact degree of risk is uncertain. SSRF is associated with decreased need for mechanical ventilation and pneumonia. Here, we describe a poly-trauma patient with candida colonization of the mediastinum who successfully underwent SSRF. Copyright © 2018. Published by Elsevier Inc.
Robust Feedback Control of Reconfigurable Multi-Agent Systems in Uncertain Adversarial Environments
2015-07-09
R. G., Optimal Lunar Landing and Retargeting using a Hybrid Control Strategy. Proceedings of the 2013 AAS/AIAA Space Flight Mechanics Meeting (AAS...Furfaro, R. & Sanfelice, R. G., Switching System Model for Pinpoint Lunar Landing Guidance Using a Hybrid Control Strategy. Proceedings of the AIAA...methods in distributed settings and the design of numerical methods to properly compute their trajectories. We have generated results showing that
NASA Technical Reports Server (NTRS)
Vanderploeg, J. M.; Stewart, D. F.; Davis, J. R.
1986-01-01
Space motion sickness clinical characteristics, time course, prediction of susceptibility, and effectiveness of countermeasures were evaluated. Although there is wide individual variability, there appear to be typical patterns of symptom development. The duration of symptoms ranges from several hours to four days, with the majority of individuals being symptom-free by the end of the third day. The etiology of this malady remains uncertain, but evidence points to reinterpretation of otolith inputs as a key factor in the response of the neurovestibular system. Prediction of susceptibility and severity remains unsatisfactory. Countermeasures tried include medications, preflight adaptation, and autogenic feedback training. No countermeasure is entirely successful in eliminating or alleviating symptoms.
Robust stability for stochastic bidirectional associative memory neural networks with time delays
NASA Astrophysics Data System (ADS)
Shu, H. S.; Lv, Z. W.; Wei, G. L.
2008-02-01
In this paper, asymptotic stability is considered for a class of uncertain stochastic bidirectional associative memory neural networks with time delays and parameter uncertainties. The delays are time-invariant, and the uncertainties are norm-bounded and enter into all network parameters. The aim of this paper is to establish easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties. By employing a Lyapunov-Krasovskii functional and conducting stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive the stability criteria. The proposed criteria can be easily checked with the Matlab LMI toolbox. A numerical example is given to demonstrate the usefulness of the proposed criteria.
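As a much simpler cousin of the delayed stochastic LMI conditions above, the basic Lyapunov test for a deterministic, delay-free linear system x' = Ax can be sketched in a few lines: A is asymptotically stable iff the Lyapunov equation AᵀP + PA = -Q (with Q positive definite) has a symmetric positive definite solution P. The matrix A below is an arbitrary illustrative example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative system matrix (assumed); upper triangular with negative
# eigenvalues, so it is Hurwitz and the test should report stability.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing
# (A.T, -Q) solves A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# A is asymptotically stable iff P is symmetric positive definite
eigs = np.linalg.eigvalsh((P + P.T) / 2)
is_stable = bool(np.all(eigs > 0))
```

LMI formulations generalize exactly this feasibility question (find P ≻ 0 satisfying matrix inequalities), which is why semidefinite-programming solvers such as the Matlab LMI toolbox can check the paper's delayed, uncertain criteria numerically.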
Aerial robot intelligent control method based on back-stepping
NASA Astrophysics Data System (ADS)
Zhou, Jian; Xue, Qian
2018-05-01
The aerial robot is characterized by strong nonlinearity, high coupling and parameter uncertainty, so a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellum Model Articulation Controller (CMAC) neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize the parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in the case of uncertainties and large errors in the model parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system such as the windfield are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over- or under-estimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence are computed.
NASA Astrophysics Data System (ADS)
Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter
2017-02-01
It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation when determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which is used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
Peng, Zhouhua; Wang, Dan; Zhang, Hongwei; Sun, Gang
2014-08-01
This paper addresses the leader-follower synchronization problem of uncertain dynamical multiagent systems with nonlinear dynamics. Distributed adaptive synchronization controllers are proposed based on the state information of neighboring agents. The control design is developed for both undirected and directed communication topologies without requiring the accurate model of each agent. This result is further extended to the output feedback case, where a neighborhood observer is proposed based on the relative output information of neighboring agents. Then, distributed observer-based synchronization controllers are derived, and a parameter-dependent Riccati inequality is employed to prove stability. This design has a favorable decoupling property between the observer and controller designs for nonlinear multiagent systems. For both cases, the developed controllers guarantee that the state of each agent synchronizes to that of the leader with bounded residual errors. Two illustrative examples validate the efficacy of the proposed methods.
Applying Bayesian belief networks in rapid response situations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, William L; Deborah, Leishman, A.; Van Eeckhout, Edward
2008-01-01
The authors have developed an enhanced Bayesian analysis tool called the Integrated Knowledge Engine (IKE) for monitoring and surveillance. The enhancements are suited for rapid response situations where decisions must be made based on uncertain and incomplete evidence from many diverse and heterogeneous sources. The enhancements extend the probabilistic results of the traditional Bayesian analysis by (1) better quantifying uncertainty arising from model parameter uncertainty and uncertain evidence, (2) optimizing the collection of evidence to reach conclusions more quickly, and (3) allowing the analyst to determine the influence of the remaining evidence that cannot be obtained in the time allowed. These extended features give the analyst and decision maker a better comprehension of the adequacy of the acquired evidence and hence the quality of the hurried decisions. The authors also describe two example systems where the above features are highlighted.
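Enhancement (2), optimizing which evidence to collect next, can be illustrated with a toy binary-hypothesis example: read the sensor whose observation is expected to reduce posterior entropy the most. The sensor reliabilities below are invented for illustration.

```python
import math

# Hypothesis H is binary with a flat prior; each unread sensor reports True
# with a probability that depends on H.
SENSORS = {  # name -> (P(reading=True | H=1), P(reading=True | H=0))
    "weak":   (0.6, 0.5),
    "decent": (0.8, 0.3),
    "strong": (0.95, 0.05),
}

def entropy(p):
    """Binary entropy in bits of P(H=1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def posterior(prior, p1, p0, reading):
    """Bayes update of P(H=1) after one sensor reading."""
    l1 = p1 if reading else 1 - p1
    l0 = p0 if reading else 1 - p0
    return prior * l1 / (prior * l1 + (1 - prior) * l0)

def expected_gain(prior, p1, p0):
    """Expected entropy reduction from reading one sensor (mutual information)."""
    p_true = prior * p1 + (1 - prior) * p0
    h_after = (p_true * entropy(posterior(prior, p1, p0, True))
               + (1 - p_true) * entropy(posterior(prior, p1, p0, False)))
    return entropy(prior) - h_after

best = max(SENSORS, key=lambda s: expected_gain(0.5, *SENSORS[s]))
```

Repeating this greedy choice after each Bayes update yields an evidence-collection order; the same expected-gain quantity also measures the influence of evidence that cannot be obtained in time.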
Zahiripour, Seyed Ali; Jalali, Ali Akbar
2014-09-01
A novel switching function based on an optimization strategy for the sliding mode control (SMC) method is provided for uncertain stochastic systems subject to actuator degradation, such that the closed-loop system is globally asymptotically stable with probability one. Previous research has focused on sliding surfaces that are proportional or proportional-integral functions of the states. In this research, an additional degree of freedom, chosen by the designer, is built into the switching function: the designer can regulate this parameter to meet specified objectives. A sliding-mode controller is synthesized to ensure the reachability of the specified switching surface, despite actuator degradation and uncertainties. Finally, the simulation results demonstrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Hao, Li-Ying; Park, Ju H; Ye, Dan
2017-09-01
In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. The approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding mode stability analysis; the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. Using the H∞ control analytical technique, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.
Rendezvous with connectivity preservation for multi-robot systems with an unknown leader
NASA Astrophysics Data System (ADS)
Dong, Yi
2018-02-01
This paper studies the leader-following rendezvous problem with connectivity preservation for multi-agent systems composed of uncertain multi-robot systems subject to external disturbances and an unknown leader, both of which are generated by a so-called exosystem with parametric uncertainty. By combining internal model design, the potential function technique, and adaptive control, two distributed control strategies are proposed to maintain the connectivity of the communication network, to achieve asymptotic tracking of all the followers to the output of the unknown leader system, and to reject unknown external disturbances. It is also worth mentioning that the uncertain parameters in the multi-robot systems and the exosystem are further allowed to belong to unknown and unbounded sets when applying the second, fully distributed control law, which contains a dynamic gain inspired by high-gain adaptive control and self-tuning regulators.
Fuzzy Adaptive Control Design and Discretization for a Class of Nonlinear Uncertain Systems.
Zhao, Xudong; Shi, Peng; Zheng, Xiaolong
2016-06-01
In this paper, tracking control problems are investigated for a class of uncertain nonlinear systems in lower triangular form. First, a state-feedback controller is designed by using adaptive backstepping technique and the universal approximation ability of fuzzy logic systems. During the design procedure, a developed method with less computation is proposed by constructing one maximum adaptive parameter. Furthermore, adaptive controllers with nonsymmetric dead-zone are also designed for the systems. Then, a sampled-data control scheme is presented to discretize the obtained continuous-time controller by using the forward Euler method. It is shown that both proposed continuous and discrete controllers can ensure that the system output tracks the target signal with a small bounded error and the other closed-loop signals remain bounded. Two simulation examples are presented to verify the effectiveness and applicability of the proposed new design techniques.
Observer-based state tracking control of uncertain stochastic systems via repetitive controller
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Susana Ramya, L.; Selvaraj, P.
2017-08-01
This paper develops a repetitive control scheme for state tracking control of uncertain stochastic time-varying delay systems via the equivalent-input-disturbance approach. The main purpose of this work is to design a repetitive controller that guarantees the tracking performance under the effects of unknown disturbances with bounded frequency and parameter variations. Specifically, a new set of linear matrix inequality (LMI)-based conditions is derived from a suitable Lyapunov-Krasovskii functional for designing a repetitive controller that guarantees stability and the desired tracking performance. More precisely, an equivalent-input-disturbance estimator is incorporated into the control design to reduce the effect of external disturbances. Simulation results are provided to demonstrate the stability and tracking performance of the resulting control system. A practical stream water quality preserving system is also provided to show the effectiveness and advantage of the proposed approach.
NASA Technical Reports Server (NTRS)
Kharisov, Evgeny; Gregory, Irene M.; Cao, Chengyu; Hovakimyan, Naira
2008-01-01
This paper explores application of the L1 adaptive control architecture to a generic flexible Crew Launch Vehicle (CLV). Adaptive control has the potential to improve performance and enhance safety of space vehicles that often operate in very unforgiving and occasionally highly uncertain environments. NASA's development of the next generation of space launch vehicles presents an opportunity for adaptive control to contribute to improved performance of this statically unstable vehicle with low damping and low bending-frequency flexible dynamics. In this paper, we consider the L1 adaptive output feedback controller to control the low-frequency structural modes and propose steps to validate the adaptive controller performance utilizing one of the experimental test flights for the CLV Ares-I Program.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
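The three-step loop can be sketched on a synthetic skill score. The parameter names and the objective below are invented, and a simple shrinking coordinate search stands in for the downhill simplex step of the final refinement.

```python
# Synthetic objective: lower is better. Only two of the four parameters
# actually matter, mimicking insensitive parameters in a GCM.
def skill(params):
    return ((params["entrainment"] - 0.3) ** 2
            + (params["autoconversion"] - 1.2) ** 2
            + 0.0 * params["ice_fallspeed"]
            + 0.0 * params["rh_crit"])

DEFAULTS = {"entrainment": 0.1, "autoconversion": 2.0,
            "ice_fallspeed": 1.0, "rh_crit": 0.8}

# Step 1: one-at-a-time sensitivity screening.
def sensitive_params(threshold=1e-6):
    base = skill(DEFAULTS)
    out = []
    for name in DEFAULTS:
        perturbed = dict(DEFAULTS, **{name: DEFAULTS[name] * 1.1})
        if abs(skill(perturbed) - base) > threshold:
            out.append(name)
    return out

# Step 2: coarse grid scan to pick a good starting value per sensitive parameter.
def best_start(names, grid=(0.0, 0.5, 1.0, 1.5, 2.0)):
    start = dict(DEFAULTS)
    for name in names:
        start[name] = min(grid, key=lambda v: skill(dict(start, **{name: v})))
    return start

# Step 3: local refinement (coordinate search in place of the downhill simplex).
def refine(start, names, step=0.25, iters=40):
    params = dict(start)
    for _ in range(iters):
        for name in names:
            for cand in (params[name] - step, params[name] + step):
                if skill(dict(params, **{name: cand})) < skill(params):
                    params[name] = cand
        step *= 0.7  # shrink the search scale each sweep
    return params

names = sensitive_params()
tuned = refine(best_start(names), names)
```

Steps 1 and 2 cheaply cut the dimensionality and start the local search near a good basin, which is what accelerates the final convergence.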
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
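The probabilistic attainment test reduces to Monte Carlo sampling of a reduced-form response. The numbers below (baseline ozone, response factor, target) are invented for illustration and are not from the Atlanta application.

```python
import random

random.seed(1)

# Schematic reduced-form model: future ozone = baseline - phi * k * cut, with
# multiplicative parametric uncertainty phi in the modeled response and
# meteorological uncertainty in the future baseline.

def attainment_probability(cut_tons, target_ppb, n=100_000):
    """Fraction of parameter draws for which the control strategy attains."""
    hits = 0
    for _ in range(n):
        phi = random.lognormvariate(0.0, 0.3)      # uncertain response factor
        baseline = random.gauss(92.0, 2.0)         # future baseline ozone, ppb
        future = baseline - phi * 0.04 * cut_tons  # reduced-form response
        if future <= target_ppb:
            hits += 1
    return hits / n

p = attainment_probability(cut_tons=300.0, target_ppb=84.0)
```

Running the same loop for several candidate strategies gives the probabilistic ranking discussed above, which can differ from a deterministic ranking based only on the median response.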
Orbiting space debris: Dangers, measurement, and mitigation
NASA Astrophysics Data System (ADS)
McNutt, Ross T.
1992-01-01
Space debris is a growing environmental problem. Accumulation of objects in Earth orbit threatens space systems through the possibility of collisions and runaway debris multiplication. The amount of debris in orbit is uncertain due to the lack of information on the population of debris between 1 and 10 centimeters in diameter. Collisions with debris even smaller than 1 cm can be catastrophic due to the high orbital velocities involved. Research efforts are under way at NASA, United States Space Command, and the Air Force Phillips Laboratory to detect and catalog the debris population in near-Earth space. Current international and national laws are inadequate to control the proliferation of space debris. Space debris is a serious problem with large economic, military, technical, and diplomatic components. Action is needed now to determine the full extent of the orbital debris problem, accurately predict the future evolution of the debris population, decide the extent of the debris mitigation procedures required, and implement these policies on a global basis via an international treaty. Action must be initiated now, before the loss of critical space systems such as the Space Shuttle or the Space Station.
The Evolution and Physical Parameters of WN3/O3s: A New Type of Wolf-Rayet Star
NASA Astrophysics Data System (ADS)
Neugent, Kathryn F.; Massey, Philip; Hillier, D. John; Morrell, Nidia
2017-05-01
As part of a search for Wolf-Rayet (WR) stars in the Magellanic Clouds, we have discovered a new type of WR star in the Large Magellanic Cloud (LMC). These stars have both strong emission lines and He II and Balmer absorption lines, and spectroscopically resemble a WN3 and O3V binary pair. However, they are visually too faint to be WN3+O3V binary systems. We have found nine of these WN3/O3s, making up ˜6% of the population of LMC WRs. Using cmfgen, we have successfully modeled their spectra as single stars and have compared the physical parameters with those of more typical LMC WNs. Their temperatures are around 100,000 K, somewhat hotter than the majority of WN stars (by around 10,000 K), though a few hotter WNs are known. The abundances are consistent with CNO equilibrium. Most anomalous, however, are their mass-loss rates, which are more like those of an O-type star than a WN star. While their evolutionary status is uncertain, their low mass-loss rates and wind velocities suggest that they are not products of homogeneous evolution. It is possible instead that these stars represent an intermediate stage between O stars and WNs. Since WN3/O3 stars are unknown in the Milky Way, we suspect that their formation depends upon metallicity, and we are investigating this further with a deep survey in M33, which possesses a metallicity gradient. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. It is additionally based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations were associated with program GO-13780.
Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle
NASA Astrophysics Data System (ADS)
Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.
2017-12-01
Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models they make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations of several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions are different. The latter can be traced to its source components by decomposing model predictions to a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data as the matrix equation makes data assimilation computationally possible. 
We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
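The matrix form, and the way steady-state storage and residence time fall out of it, can be sketched with a hypothetical three-pool model. All pool names and rate values below are invented for illustration.

```python
import numpy as np

# Matrix form dX/dt = B*u + A*K*X, where u is carbon input, B allocates it to
# pools, K holds per-pool exit rates, and A (with -1 on the diagonal) routes
# the exiting carbon between pools.

u = 60.0                                # carbon input (e.g., GPP), Pg C / yr
B = np.array([0.7, 0.3, 0.0])           # allocation to leaf, root, soil
K = np.diag([1.0, 0.5, 0.02])           # exit rates per pool, 1/yr
A = np.array([[-1.0,  0.0,  0.0],       # transfer matrix: fractions of exiting
              [ 0.0, -1.0,  0.0],       # carbon routed to other pools
              [ 0.3,  0.4, -1.0]])

M = -A @ K                              # net loss operator, so dX/dt = B*u - M*X

# The matrix form makes diagnostics one linear solve away:
X_ss = np.linalg.solve(M, B * u)        # steady-state pool sizes
residence_time = X_ss.sum() / u         # ecosystem-level carbon residence time
```

Carbon input, residence time, and the resulting storage are exactly the axes of the 3-D parameter space described above, so any model written in this form can be placed in that common metric.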
NASA Astrophysics Data System (ADS)
Yorks, J. E.; McGill, M. J.; Nowottnick, E. P.; Palm, S. P.; Hlavka, D. L.; Selmer, P. A.; Rodier, S. D.; Vaughan, M.; Pauly, R.
2017-12-01
The Cloud-Aerosol Transport System (CATS) is an elastic backscatter lidar that has generated over 175 billion laser pulses on-orbit from the International Space Station (ISS) since February 2015. The CATS instrument was designed to demonstrate new in-space technologies for future Earth Science missions while also providing properties of clouds and aerosols such as: layer height/thickness, backscatter, optical depth, extinction, and feature type. Despite the "tech demo" nature of CATS and the lack of a funded science team, the research community is increasingly embracing CATS data. New CATS data products, the most accurate yet, were released in the summer of 2017. The major algorithm changes made in L1B Version 2-08 (V2-08) focused on the backscatter calibration and the inclusion of a new flag to notify users of granules with depolarization ratio values of poor quality. Several changes were made to the molecular folding correction factor and calibration algorithms that result in favorable comparisons between CATS, CALIPSO, and modeled Rayleigh 1064 nm backscatter profiles. Given that the 1064 nm attenuated total backscatter and depolarization ratio are used to retrieve nearly all L2O data products, the accuracy of the L2O products has also improved. Several changes were made in CATS L2O Version 2-00 data products to improve cloud and aerosol detection. The CATS L2O data now includes layer detection at both 5 and 60 km horizontal resolutions to increase daytime detection of thin cirrus and aerosol layers over land. Horizontal persistence tests prevent superficial "striping" that was visible in vertical feature mask images for horizontally homogeneous cloud and aerosol layers. Also, the absolute uncertainties for all the L2O parameters are now reported in the CATS data products. Given the uncertain status of continued CALIPSO operations, these updated CATS data products may be the only space-based lidar data record that continues into the 2018 timeframe.
Generalized Distributed Consensus-based Algorithms for Uncertain Systems and Networks
2010-01-01
…time linear systems with Markovian jumping parameters and additive disturbances. SIAM Journal on Control and Optimization, 44(4):1165–1191, 2005 … time Markovian jump linear systems, in the presence of delayed mode observations. Proceedings of the 2008 IEEE American Control Conference, pages … Markovian Jump Linear System state estimation … 147; 6 Conclusions 152; A Discrete-Time Coupled Matrix Equations 156; A.1 Properties of a special …
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping
2018-03-01
This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance, with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming at reducing the complexity of nonlinear approximations, only two least-square-support-vector-machine-based (LS-SVM-based) approximators with two design parameters are required through a norm-form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. To avoid the tedious analysis and repeated differentiations of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only the first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, the "explosion of terms" brought by the recursive virtual controller design, is overcome. Moreover, an auxiliary system is designed to compensate for the control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.
Optimal quantum cloning based on the maximin principle by using a priori information
NASA Astrophysics Data System (ADS)
Kang, Peng; Dai, Hong-Yi; Wei, Jia-Hua; Zhang, Ming
2016-10-01
We propose an optimal 1 → 2 quantum cloning method based on the maximin principle by making full use of a priori information of amplitude and phase about the general cloned qubit input set, which is a simply connected region enclosed by a "longitude-latitude grid" on the Bloch sphere. Theoretically, the fidelity of the optimal quantum cloning machine derived from this method is the largest in terms of the maximin principle compared with that of any other machine. Solving the problem is an optimization process that involves six unknown complex variables, six vectors in a complex vector space of uncertain dimension, and four equality constraints. Moreover, by restricting the structure of the quantum cloning machine, the optimization problem is simplified to a three-real-parameter suboptimization problem with only one equality constraint. We obtain the explicit formula for a suboptimal quantum cloning machine. Additionally, the fidelity of our suboptimal quantum cloning machine is higher than or at least equal to that of universal quantum cloning machines and phase-covariant quantum cloning machines. It is also underlined that the suboptimal cloning machine outperforms the "belt quantum cloning machine" for some cases.
Two-stage fuzzy-stochastic robust programming: a hybrid model for regional air quality management.
Li, Yongping; Huang, Guo H; Veawab, Amornvadee; Nie, Xianghui; Liu, Lei
2006-08-01
In this study, a hybrid two-stage fuzzy-stochastic robust programming (TFSRP) model is developed and applied to the planning of an air-quality management system. As an extension of existing fuzzy-robust programming and two-stage stochastic programming methods, the TFSRP can explicitly address complexities and uncertainties of the study system without unrealistic simplifications. Uncertain parameters can be expressed as probability density and/or fuzzy membership functions, such that robustness of the optimization efforts can be enhanced. Moreover, economic penalties as corrective measures against any infeasibilities arising from the uncertainties are taken into account. This method can, thus, provide a linkage to predefined policies determined by authorities that have to be respected when a modeling effort is undertaken. In its solution algorithm, the fuzzy decision space can be delimited through specification of the uncertainties using dimensional enlargement of the original fuzzy constraints. The developed model is applied to a case study of regional air quality management. The results indicate that reasonable solutions have been obtained. The solutions can be used for further generating pollution-mitigation alternatives with minimized system costs and for providing a more solid support for sound environmental decisions.
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
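The ensemble-filtering half of the coupling can be sketched as a stochastic ensemble Kalman filter update. The two-component state and all numbers below are illustrative stand-ins, not the actual storm-surge or tsunami state vector.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_var, H):
    """Stochastic EnKF analysis step.

    ensemble: (n_members, n_state) forecast ensemble
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    """
    n, _ = ensemble.shape
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (n - 1)            # sample forecast covariance
    S = H @ P @ H.T + obs_var * np.eye(H.shape[0])   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Forecast ensemble scattered around (1.0, 0.5); a gauge observes only the
# first component and reports a value near 2.0.
prior = rng.normal([1.0, 0.5], [0.3, 0.2], size=(50, 2))
H = np.array([[1.0, 0.0]])
posterior = enkf_update(prior, np.array([2.0]), obs_var=0.01, H=H)
```

In the coupled system described above, each ensemble member would be a full AMR forward run, and the analysis step would pull the simulated state (or uncertain physical parameters appended to it) toward the observations.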
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and its large number of simulations act to its disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.
Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models
NASA Astrophysics Data System (ADS)
Rigler, E. J.; Wiltberger, M. J.; Love, J. J.
2017-12-01
Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
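A toy sketch of the two ingredients: (1) importance sampling with a biasing density suggested by cheap low-fidelity exploration, and (2) inverse-variance fusion of the resulting unbiased estimators. The "high-fidelity model" here is just a shifted identity, so failure (output below zero) requires x < -3 under a nominal standard normal input, and the true failure probability is about 1.35e-3; none of these numbers come from the jet study.

```python
import math
import random

random.seed(3)

def high_fidelity(x):
    return 3.0 + x                 # stand-in for an expensive jet-width solver

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def is_estimate(n, bias_mu):
    """Importance sampling: draw from N(bias_mu, 1) centered on the failure
    region (a pure mean shift keeps the weights bounded there), then reweight
    each failing sample back to the nominal N(0, 1)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(bias_mu, 1.0)
        if high_fidelity(x) < 0.0:
            total += normal_pdf(x, 0.0, 1.0) / normal_pdf(x, bias_mu, 1.0)
    return total / n

def fuse(estimates, variances):
    """Minimum-variance (inverse-variance weighted) fusion of independent
    unbiased estimators."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

p1 = is_estimate(20000, bias_mu=-3.2)  # biasing suggested by surrogate A
p2 = is_estimate(20000, bias_mu=-3.5)  # biasing suggested by surrogate B
p_fused = fuse([p1, p2], [1.0e-9, 2.0e-9])  # variances estimated from the runs
```

Each biasing distribution would in practice be built from a different low-fidelity model, and the fusion step is what turns the competing estimators into a single minimal-variance one.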
Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control
NASA Technical Reports Server (NTRS)
Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
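For orientation only, here is the generic shape such a closed-form PDE takes in the simplest scalar case, with the power input modeled as an Ornstein-Uhlenbeck process; this is a textbook extended-state Fokker-Planck sketch, not the specific equation derived by the authors.

```latex
% State dynamics with time-correlated (OU) input:
%   dx/dt = f(x) + \xi(t),
%   d\xi  = -\frac{\xi}{\tau}\,dt + \sqrt{2\sigma^{2}/\tau}\,dW_t .
% The joint PDF p(x,\xi,t) then satisfies the deterministic PDE
\frac{\partial p}{\partial t}
  + \frac{\partial}{\partial x}\bigl[(f(x)+\xi)\,p\bigr]
  = \frac{1}{\tau}\,\frac{\partial}{\partial \xi}(\xi\,p)
  + \frac{\sigma^{2}}{\tau}\,\frac{\partial^{2} p}{\partial \xi^{2}}
```

Marginalizing the numerical solution over ξ recovers the PDF of the generator state, which is what the paper compares against Monte Carlo.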
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
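The conventional SA loop described above can be sketched in a few lines. The linear cooling schedule and the rule tying the search radius to the temperature below are illustrative choices, not the specific schedule of this innovation:

```python
import math
import random

def simulated_annealing(objective, lower, upper, n_iter=5000, seed=0):
    """Minimal conventional SA: temperature-controlled acceptance of
    worse configurations, with a search region that shrinks as the
    temperature is lowered."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
    fx = objective(x)
    best, fbest = list(x), fx
    for k in range(n_iter):
        temp = (1.0 - k / n_iter) + 1e-9          # linear annealing schedule
        radius = temp                             # selection region shrinks with temp
        cand = [min(hi, max(lo, xi + rng.uniform(-radius, radius) * (hi - lo)))
                for xi, lo, hi in zip(x, lower, upper)]
        fc = objective(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# e.g. minimizing a shifted sphere function on [-1, 1] x [-1, 1]
best, fbest = simulated_annealing(lambda v: sum((vi - 0.3) ** 2 for vi in v),
                                  [-1.0, -1.0], [1.0, 1.0])
```

RBSA replaces the single trajectory `x` here with a recursively branching set of trajectories, each with its own trust region.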
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
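The variance segregation at each node of the BMA tree follows the law of total variance. A minimal sketch for one uncertain model component with three candidate propositions is below; the probabilities, means, and variances are illustrative numbers, not the study's aquifer values:

```python
def bma_decompose(probs, means, variances):
    """Bayesian model averaging over candidate propositions: split the
    total predictive variance into within-model and between-model parts
    via the law of total variance."""
    mean = sum(p * m for p, m in zip(probs, means))
    within = sum(p * v for p, v in zip(probs, variances))               # E[Var]
    between = sum(p * (m - mean) ** 2 for p, m in zip(probs, means))    # Var[E]
    return mean, within, between, within + between

# three candidate propositions for a single uncertain model component
mean, within, between, total = bma_decompose(
    [0.5, 0.3, 0.2], [10.0, 12.0, 9.0], [1.0, 2.0, 1.5])
```

In HBMA this decomposition is applied recursively: the "within" term at one level of the tree is itself decomposed over the candidate propositions of the next uncertain component down.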
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gandhi, Poshak; Hönig, Sebastian F.; Kishimoto, Makoto
2015-10-20
The Fe Kα emission line is the most ubiquitous feature in the X-ray spectra of active galactic nuclei (AGNs), but the origin of its narrow core remains uncertain. Here, we investigate the connection between the sizes of the Fe Kα core emission regions and the measured sizes of the dusty tori in 13 local Type 1 AGNs. The observed Fe Kα emission radii (R_Fe) are determined from spectrally resolved line widths in X-ray grating spectra, and the dust sublimation radii (R_dust) are measured either from optical/near-infrared (NIR) reverberation time lags or from resolved NIR interferometric data. This direct comparison shows, on an object-by-object basis, that the dust sublimation radius forms an outer envelope to the bulk of the Fe Kα emission. R_Fe matches R_dust well in the AGNs with the best-constrained line widths currently available. In a significant fraction of objects without a clear narrow line core, R_Fe is similar to, or smaller than, the radius of the optical broad line region. These facts place important constraints on the torus geometries for our sample. Extended tori in which the solid angle of fluorescing gas peaks well beyond the dust sublimation radius can be ruled out. We also test for luminosity scalings of R_Fe, finding that the Eddington ratio is not a prime driver in determining the line location in our sample. We also discuss in detail potential caveats of data analysis and instrumental limitations, simplistic line modeling, uncertain black hole masses, and sample selection, showing that none of these is likely to bias our core result. The calorimeter on board Astro-H will soon vastly increase the parameter space over which line measurements can be made, overcoming many of these limitations.
NASA Astrophysics Data System (ADS)
Pyt'ev, Yu. P.
2018-01-01
A mathematical formalism for subjective modeling is presented, based on modeling of uncertainty that reflects both the unreliability of subjective information and the fuzziness common to its content. The model of subjective judgments on the values of an unknown parameter x ∈ X of the model M(x) of a research object is defined by the researcher-modeler as a space (X, P(X), Pl^x̃, Bel^x̃) with plausibility measure Pl^x̃ and believability measure Bel^x̃, where x̃ is an uncertain element taking values in X that models the researcher-modeler's uncertain propositions about the unknown x ∈ X. The measures Pl^x̃ and Bel^x̃ model the modalities of the researcher-modeler's subjective judgments on the validity of each x ∈ X: the value of Pl^x̃(x̃ = x) determines how relatively plausible, in his opinion, the equality x̃ = x is, while the value of Bel^x̃(x̃ = x) determines how much the equality x̃ = x should be relatively believed in. Versions of plausibility (Pl) and believability (Bel) measures, and pl- and bel-integrals, that inherit some traits of probabilities and psychophysics and take into account the interests of researcher-modeler groups are considered. It is shown that the mathematical formalism of subjective modeling, unlike "standard" mathematical modeling, enables a researcher-modeler to model both precise formalized knowledge and non-formalized unreliable knowledge, from complete ignorance to precise knowledge of the model of a research object, and to calculate the relative plausibilities and believabilities of any features of a research object specified by its subjective model M(x̃). If observation data on the research object are available, it further enables him to estimate the adequacy of the subjective model to the research objective, to correct it by combining subjective ideas with the observation data after testing their consistency, and, finally, to empirically recover the model of the research object.
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanic compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. 
We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters against literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type, and location of the data.
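The link between building block (ii) and the Sobol indices is purely algebraic: for a PCE in an orthonormal basis, the output variance is the sum of squared coefficients of the non-constant terms, and each Sobol index is a partial sum of those squares. A minimal sketch follows; the two-parameter multi-index dictionary at the bottom is an illustrative expansion, not the basin model:

```python
def sobol_from_pce(coeffs):
    """Compute first-order and total Sobol indices from PCE coefficients
    in an orthonormal basis. `coeffs` maps multi-indices (one polynomial
    degree per uncertain parameter) to coefficients."""
    var = sum(c * c for a, c in coeffs.items() if any(a))   # total variance
    dim = len(next(iter(coeffs)))
    first, total = [], []
    for i in range(dim):
        # first-order: terms involving parameter i alone
        fi = sum(c * c for a, c in coeffs.items()
                 if a[i] > 0 and all(aj == 0 for j, aj in enumerate(a) if j != i))
        # total: every term involving parameter i, including interactions
        ti = sum(c * c for a, c in coeffs.items() if a[i] > 0)
        first.append(fi / var)
        total.append(ti / var)
    return first, total

# y = 1 + 2*P1(x1) + 1*P1(x2) + 0.5*P1(x1)*P1(x2), orthonormal basis
first, total = sobol_from_pce({(0, 0): 1.0, (1, 0): 2.0,
                               (0, 1): 1.0, (1, 1): 0.5})
```

This is why the PCE surrogate gives the Sobol indices essentially for free once its coefficients have been fitted.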
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
Transport-phenomena-based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency and the effective thermal conductivity and viscosity in the weld pool are examples of such parameters, whose values are rarely known and are difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated against measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
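The forward model inside the particle filter of such a particle MCMC scheme is an exact stochastic simulation of the Markov jump process. A minimal Gillespie direct-method sketch for the Lotka-Volterra system is given below; the rate constants and initial populations are illustrative, not the paper's inferred values:

```python
import random

def gillespie_lv(x0, y0, c1, c2, c3, t_end, seed=0):
    """Gillespie's direct method for the stochastic Lotka-Volterra jump
    process with three reactions: prey birth (rate c1*x), predation
    (rate c2*x*y), and predator death (rate c3*y)."""
    rng = random.Random(seed)
    t, x, y = 0.0, x0, y0
    times, states = [t], [(x, y)]
    while t < t_end:
        h1, h2, h3 = c1 * x, c2 * x * y, c3 * y
        h0 = h1 + h2 + h3
        if h0 == 0.0:                 # both species extinct: no more reactions
            break
        t += rng.expovariate(h0)      # exponential waiting time to next reaction
        u = rng.random() * h0         # pick a reaction proportionally to its rate
        if u < h1:
            x += 1                    # prey birth
        elif u < h1 + h2:
            x -= 1; y += 1            # predation
        else:
            y -= 1                    # predator death
        times.append(t)
        states.append((x, y))
    return times, states
```

Particle MCMC wraps many runs of a simulator like this in a particle filter to obtain an unbiased likelihood estimate for each proposed parameter vector; the SDE approximations mentioned in the abstract replace this exact simulator with a cheaper diffusion.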
Design of a new high-performance pointing controller for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1993-01-01
A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent, randomly triggered, uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller are demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of the HST. The simulation results show a high degree of disturbance isolation and pointing stability.
Remote Sensing Assessment of Lunar Resources: We Know Where to Go to Find What We Need
NASA Technical Reports Server (NTRS)
Gillis, J. J.; Taylor, G. J.; Lucey, P. G.
2004-01-01
The utilization of space resources is not only necessary to foster the growth of human activities in space, but is also essential to the President's vision of a "sustained and affordable human and robotic program to explore the solar system and beyond." The distribution of resources will shape planning for permanent settlements by affecting decisions about where to locate a settlement. Mapping the location of such resources, however, is not the limiting factor in selecting a site for a lunar base. It is indecision about which resources to use that leaves the location uncertain. A wealth of remotely sensed data exists that can be used to identify targets for future detailed exploration. Thus, the future of space resource utilization rests predominantly upon developing a strategy for resource exploration and efficient methods of extraction.
Gauging Metallicity of Diffuse Gas under an Uncertain Ionizing Radiation Field
NASA Astrophysics Data System (ADS)
Chen, Hsiao-Wen; Johnson, Sean D.; Zahedy, Fakhri S.; Rauch, Michael; Mulchaey, John S.
2017-06-01
Gas metallicity is a key quantity used to determine the physical conditions of gaseous clouds in a wide range of astronomical environments, including interstellar and intergalactic space. In particular, considerable effort in circumgalactic medium (CGM) studies focuses on metallicity measurements because gas metallicity serves as a critical discriminator for whether the observed heavy ions in the CGM originate in chemically enriched outflows or in more chemically pristine gas accreted from the intergalactic medium. However, because the gas is ionized, a necessary first step in determining CGM metallicity is to constrain the ionization state of the gas which, in addition to gas density, depends on the ultraviolet background radiation field (UVB). While it is generally acknowledged that both the intensity and spectral slope of the UVB are uncertain, the impact of an uncertain spectral slope has not been properly addressed in the literature. This Letter shows that adopting a different spectral slope can result in an order of magnitude difference in the inferred CGM metallicity. Specifically, a harder UVB spectrum leads to a higher estimated gas metallicity for a given set of observed ionic column densities. Therefore, such systematic uncertainties must be folded into the error budget for metallicity estimates of ionized gas. An initial study shows that empirical diagnostics are available for discriminating between hard and soft ionizing spectra. Applying these diagnostics helps reduce the systematic uncertainties in CGM metallicity estimates.
Assimilating uncertain, dynamic and intermittent streamflow observations in hydrological models
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon-Hurtado, Juan; Solomatine, Dimitri
2015-09-01
Catastrophic floods cause significant socioeconomic losses. Non-structural measures, such as real-time flood forecasting, can potentially reduce flood risk. To this end, data assimilation methods have been used to improve flood forecasts by integrating static ground observations, and in some cases also remote sensing observations, within water models. Current hydrologic and hydraulic research considers the assimilation of observations coming from traditional, static sensors. At the same time, low-cost mobile sensors and mobile communication devices are also becoming increasingly available. The main goal and innovation of this study is to demonstrate the usefulness of assimilating uncertain streamflow observations that are dynamic in space and intermittent in time in the context of two different semi-distributed hydrological model structures. The developed method is applied to the Brue basin, where the dynamic observations are imitated by synthetic observations of discharge. The results of this study show how model structures and sensor locations affect the assimilation of streamflow observations in different ways. In addition, it shows how the assimilation of such uncertain observations from dynamic sensors can provide model improvements similar to those obtained from streamflow observations coming from a non-optimal network of static physical sensors. This is a potential application of recent efforts to build citizen observatories of water, which can make citizens an active part of information capturing, evaluation, and communication, while simultaneously helping to improve model-based flood forecasting.
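Whatever the model structure, the core of an assimilation step of this kind is a variance-weighted blend of the model forecast and the uncertain observation. A minimal scalar Kalman-style sketch is below; the numbers are illustrative, and the study's actual filter operates on the full hydrological model state rather than a single scalar:

```python
def kalman_update(forecast, f_var, obs, obs_var):
    """Scalar Kalman-style analysis step: blend a streamflow forecast
    with an uncertain observation, weighted by their error variances.
    Intermittent sensors simply mean this update is skipped at times
    with no observation."""
    gain = f_var / (f_var + obs_var)             # trust obs more when f_var is large
    analysis = forecast + gain * (obs - forecast)
    a_var = (1.0 - gain) * f_var                 # analysis uncertainty shrinks
    return analysis, a_var

# a noisier forecast is pulled strongly toward a more certain observation
analysis, a_var = kalman_update(forecast=120.0, f_var=16.0,
                                obs=100.0, obs_var=4.0)
```

The `obs_var` term is where the "uncertain" character of mobile, citizen-operated sensors enters: larger observation variance yields a smaller gain, so poor observations move the model less.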
Gauging Metallicity of Diffuse Gas under an Uncertain Ionizing Radiation Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsiao-Wen; Zahedy, Fakhri S.; Johnson, Sean D.
Gas metallicity is a key quantity used to determine the physical conditions of gaseous clouds in a wide range of astronomical environments, including interstellar and intergalactic space. In particular, considerable effort in circumgalactic medium (CGM) studies focuses on metallicity measurements because gas metallicity serves as a critical discriminator for whether the observed heavy ions in the CGM originate in chemically enriched outflows or in more chemically pristine gas accreted from the intergalactic medium. However, because the gas is ionized, a necessary first step in determining CGM metallicity is to constrain the ionization state of the gas which, in addition to gas density, depends on the ultraviolet background radiation field (UVB). While it is generally acknowledged that both the intensity and spectral slope of the UVB are uncertain, the impact of an uncertain spectral slope has not been properly addressed in the literature. This Letter shows that adopting a different spectral slope can result in an order of magnitude difference in the inferred CGM metallicity. Specifically, a harder UVB spectrum leads to a higher estimated gas metallicity for a given set of observed ionic column densities. Therefore, such systematic uncertainties must be folded into the error budget for metallicity estimates of ionized gas. An initial study shows that empirical diagnostics are available for discriminating between hard and soft ionizing spectra. Applying these diagnostics helps reduce the systematic uncertainties in CGM metallicity estimates.
Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.
Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A
2001-01-01
This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
Adjoint-Based Climate Model Tuning: Application to the Planet Simulator
NASA Astrophysics Data System (ADS)
Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef
2018-01-01
The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.
Variational Assimilation of Sparse and Uncertain Satellite Data For 1D Saint-Venant River Models
NASA Astrophysics Data System (ADS)
Garambois, P. A.; Brisset, P.; Monnier, J.; Roux, H.
2016-12-01
A profusion of satellites is providing increasingly accurate measurements of the continental water cycle and of water body variations, while in situ observability is declining. The future Surface Water and Ocean Topography (SWOT) mission will provide maps of river surface elevations, widths, and slopes with almost global coverage and temporal revisits. This will offer the possibility to address a larger variety of inverse problems in surface hydrology. Data assimilation techniques, which are broadly used in several scientific fields, aim to optimally combine models, system observations, and prior information. Variational assimilation consists of the iterative minimization of a discrepancy measure between model outputs and observations, here for retrieving the boundary conditions and parameters of a 1D Saint-Venant model. Nevertheless, inferring river discharge and hydraulic parameters from observations of the river surface is not straightforward. This is particularly true in the case of sparse and uncertain observations of flow state variables, since they are governed by nonlinear physical processes. This paper investigates the identifiability of hydraulic controls given sparse and uncertain satellite observations of a river. The identifiability of river discharge, alone and jointly with roughness, is tested for several spatio-temporal patterns of river observations, including SWOT-like observations. A new 1D shallow water model with variational data assimilation within the DassFlow chain is presented, along with postprocessing and an observation operator dedicated to the future SWOT data and the SWOT simulator. To reduce the dimensionality of the inverse problem, discharge is represented in a reduced basis. Moreover, we introduce an original, reduced parametrization of the flow resistance that can account for various flow regimes, along with a cross-section design dedicated to remote sensing.
We show which temporal frequencies of discharge can be identified relative to the observation frequencies, and at what accuracy. Finally, the important question of discharge identifiability between observation times, as a function of the spatio-temporal sampling and of the wavelengths of the hydrological signals, is addressed.
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed
2018-03-01
Chaos control and synchronization of chaotic systems is a challenging problem that has received considerable attention in recent years due to its numerous applications in science and industry. This paper concentrates on the control and synchronization problem of the three-dimensional (3D) Zhang chaotic system. At first, an adaptive control law and a parameter estimation law are derived for controlling the behavior of the Zhang chaotic system. Then, non-identical synchronization of the Zhang chaotic system is provided, with the Lü chaotic system considered as the follower system. The synchronization problem and parameter identification are achieved by introducing an adaptive control law and a parameter estimation law. Stability of the proposed method is proved by the Lyapunov stability theorem. In addition, the convergence of the estimated parameters to their true unknown values is evaluated. Finally, some numerical simulations are carried out to illustrate and validate the effectiveness of the suggested method.
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory derived Monod kinetic parameters can adequately describe field scale degradation processes, if all controlling factors are incorporated in the field scale modelling that are not necessarily observed at the lab scale. 
Thus, no separate scaling relationships linking the laboratory and field scales need to be found: provided the additional processes, phenomena, and characteristics of the larger scale are accurately incorporated, namely (a) advective and dispersive transport of one or more contaminants, (b) advective and dispersive transport and availability of electron acceptors, (c) mass-transfer limitations, and (d) spatial heterogeneities, applying well-defined lab-scale parameters should accurately describe field-scale processes.
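The key difference between a simple first-order rate and the Monod kinetics used here can be shown in a few lines: a dual-Monod rate law couples contaminant decay to electron-acceptor (oxygen) availability, so degradation stalls once oxygen is depleted. The parameter values below are illustrative, not BIO3D's calibrated Borden values:

```python
def monod_step(c, o, dt, vmax=1.0, ks=0.5, ko=0.1, yield_o=3.0):
    """One explicit-Euler step of dual-Monod biodegradation: the rate is
    limited by both the contaminant concentration c and the electron
    acceptor (oxygen) concentration o; yield_o units of oxygen are
    consumed per unit of contaminant degraded."""
    rate = vmax * (c / (ks + c)) * (o / (ko + o))
    c_new = max(0.0, c - rate * dt)
    o_new = max(0.0, o - yield_o * rate * dt)
    return c_new, o_new

def simulate(c0=10.0, o0=8.0, dt=0.01, n=2000):
    c, o = c0, o0
    for _ in range(n):
        c, o = monod_step(c, o, dt)
    return c, o

c_final, o_final = simulate()
```

Because the oxygen mass balance caps total degradation at o0/yield_o, a zero- or first-order rate fitted in an electron-acceptor-rich laboratory column would overpredict field-scale degradation, which is exactly the discrepancy the abstract describes.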
The Effects of the Previous Outcome on Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2014-01-01
This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
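The winning model in the simulations, a trial-indexed exponentially decaying value function, amounts to a per-trial delta-rule update in which older outcomes decay geometrically. A minimal sketch follows; the learning rate and pellet rewards are illustrative, not the study's fitted parameters:

```python
def update_values(outcomes, alpha=0.2, v0=0.0):
    """Trial-based exponentially decaying value function: after each
    trial, the stored value moves a fraction alpha toward the obtained
    reward, so the weight on an outcome k trials back is alpha*(1-alpha)**k."""
    v = v0
    history = []
    for r in outcomes:
        v += alpha * (r - v)    # delta-rule update per trial, not per unit time
        history.append(v)
    return history

# value of the uncertain lever after one win (9 pellets) and two losses (0)
hist = update_values([9, 0, 0])
```

Updating per trial rather than per unit of elapsed time is what distinguishes the supported model from its time-decaying alternative in the abstract.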
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Elshall, A. S.; Hanor, J. S.
2012-12-01
Subsurface modeling is challenging because of many possible competing propositions for each uncertain model component. How can we judge that we are selecting the correct proposition for an uncertain model component out of numerous competing propositions? How can we bridge the gap between synthetic mental principles such as mathematical expressions on one hand, and empirical observation such as observation data on the other hand when uncertainty exists on both sides? In this study, we introduce hierarchical Bayesian model averaging (HBMA) as a multi-model (multi-proposition) framework to represent our current state of knowledge and decision for hydrogeological structure modeling. The HBMA framework allows for segregating and prioritizing different sources of uncertainty, and for comparative evaluation of competing propositions for each source of uncertainty. We applied the HBMA to a study of hydrostratigraphy and uncertainty propagation of the Southern Hills aquifer system in the Baton Rouge area, Louisiana. We used geophysical data for hydrogeological structure construction through the indicator hydrostratigraphy method and used lithologic data from drillers' logs for model structure calibration. However, due to uncertainty in model data, structure and parameters, multiple possible hydrostratigraphic models were produced and calibrated. The study considered four sources of uncertainties. To evaluate mathematical structure uncertainty, the study considered three different variogram models and two geological stationarity assumptions. With respect to geological structure uncertainty, the study considered two geological structures with respect to the Denham Springs-Scotlandville fault. With respect to data uncertainty, the study considered two calibration data sets. These four sources of uncertainty with their corresponding competing modeling propositions resulted in 24 calibrated models.
The results showed that by segregating different sources of uncertainty, HBMA analysis provided insights on uncertainty priorities and propagation. In addition, it assisted in evaluating the relative importance of competing modeling propositions for each uncertain model component. By being able to dissect the uncertain model components and provide weighted representation of the competing propositions for each uncertain model component based on the background knowledge, the HBMA functions as an epistemic framework for advancing knowledge about the system under study.
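A minimal sketch of the HBMA weighting idea, assuming each calibrated model's evidence is summarized by a single log likelihood: leaf-model weights are normalized likelihoods, and the weight of each competing proposition at a hierarchy level is obtained by marginalizing over the leaves. The function names and inputs are hypothetical, not from the paper.

```python
from math import exp

def hbma_weights(log_likelihoods, levels):
    """Weight each leaf model, then marginalize weights up the hierarchy.
    log_likelihoods: dict mapping a proposition tuple (one entry per
    uncertainty source) to that calibrated model's log likelihood.
    levels: list of proposition lists, one list per uncertainty source."""
    m = max(log_likelihoods.values())
    w = {k: exp(v - m) for k, v in log_likelihoods.items()}   # stabilized
    z = sum(w.values())
    w = {k: v / z for k, v in w.items()}                      # leaf weights
    marginal = []
    for i, props in enumerate(levels):
        marginal.append({p: sum(wt for k, wt in w.items() if k[i] == p)
                         for p in props})
    return w, marginal
```

With, say, two variogram models and two fault interpretations, leaf keys are tuples like ("exponential", "fault"), and the marginal weights show which proposition dominates each source of uncertainty.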
Robust Economic Control Decision Method of Uncertain System on Urban Domestic Water Supply.
Li, Kebai; Ma, Tianyi; Wei, Guo
2018-03-31
As China rapidly urbanizes, urban domestic water demand generally exhibits both a rising trend and seasonal cyclic fluctuation. A robust economic control decision method for dynamic uncertain systems is proposed in this paper. It is developed based on the internal model principle and the pole allocation method, and it is applied to an urban domestic water supply system with a rising trend and seasonal cyclic fluctuation. To achieve this goal, a multiplicative model is first used to describe urban domestic water demand. Then, capital stock and labor stock are selected as the state vector, and investment and labor are designed as the control vector. Next, the compensator subsystem is devised in light of the internal model principle. Finally, by using the state feedback control strategy and the pole allocation method, the multivariable robust economic control decision method is implemented. The implementation of this model can accomplish the urban domestic water supply control goal, with robustness to parameter variations. The methodology presented in this study may be applied to water management systems in other parts of the world, provided the data used in this study are available. The robust control decision method in this paper is also applicable to tracking control problems as well as stabilization control problems of other general dynamic uncertain systems.
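The multiplicative demand model can be illustrated as a rising trend multiplied by a seasonal cycle. The growth rate, amplitude, and sinusoidal seasonal form below are assumptions for illustration; the abstract does not specify the functional forms.

```python
from math import sin, pi

def water_demand(t, base=100.0, growth=0.02, amplitude=0.15):
    """Multiplicative demand model: rising trend x seasonal cycle.
    t is time in months; all parameter values are illustrative."""
    trend = base * (1.0 + growth) ** (t / 12.0)          # annual growth
    season = 1.0 + amplitude * sin(2.0 * pi * t / 12.0)  # 12-month cycle
    return trend * season
```

The multiplicative structure means the seasonal swing grows in absolute size as the trend rises, which matches the described behaviour of urban demand.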
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: the Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
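The model-error versus sampling-error tradeoff that determines the preferred fidelity can be sketched as follows. The error decomposition (a fixed bias term plus a sigma/sqrt(N) Monte Carlo term under a compute budget) and all numbers in the usage are illustrative assumptions, not the paper's quantities.

```python
from math import sqrt

def total_error(model_error, sigma, cost_per_run, budget):
    """Total error = model (bias) error + Monte Carlo sampling error,
    with the number of realizations fixed by the compute budget."""
    n = max(1, int(budget / cost_per_run))
    return model_error + sigma / sqrt(n)

def pick_model(models, budget):
    """Choose the fidelity whose total error is smallest for this budget."""
    return min(models, key=lambda m: total_error(m["model_error"],
                                                 m["sigma"],
                                                 m["cost"], budget))
```

At a small budget the cheap model wins (many realizations shrink the sampling error below the bias penalty); at a large budget the high-fidelity model wins, mirroring the ratio argument in the abstract.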
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
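An exponentially autocorrelated (first-order Gauss-Markov) process of the kind used above to model angular acceleration, avoiding an uncertain dynamic model, can be simulated as below. The correlation time and variance are placeholders, not values from the paper.

```python
import random
from math import exp, sqrt

def gauss_markov(tau, sigma, dt, n, seed=0):
    """Discrete first-order Gauss-Markov process: exponentially
    autocorrelated noise with correlation time tau and stationary
    standard deviation sigma, sampled every dt."""
    rng = random.Random(seed)
    phi = exp(-dt / tau)               # correlation decay over one step
    q = sigma * sqrt(1.0 - phi * phi)  # keeps stationary variance sigma^2
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

The single parameter tau trades off between white noise (tau small) and a near-constant bias (tau large), which is why this model is a common stand-in when spacecraft dynamics are uncertain.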
Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E
2018-07-01
In this paper, the boundedness and robust stability of a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using the homomorphic mapping theorem, the Lyapunov method and inequality techniques, a sufficient condition is derived that guarantees the boundedness of the networks and the existence, uniqueness and global robust stability of the equilibrium point for the considered uncertain neural networks. The obtained robust stability criterion is expressed as a complex-valued LMI, which can be solved numerically using YALMIP with the SDPT3 solver in MATLAB. An example with simulations is supplied to show the applicability and advantages of the acquired result.
Weiss, Christian; Zoubir, Abdelhak M
2017-05-01
We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to these chaotic regions, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimension space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
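The ED transformation amounts to mapping each parameter through its cumulative distribution and then using ordinary Euclidean distance. This sketch uses a plain empirical CDF in place of the paper's kernel estimator, which is an assumed simplification.

```python
from bisect import bisect_right
from math import sqrt

def ecdf_transform(values, catalogue):
    """Map each parameter value to its equivalent dimension: the empirical
    cumulative probability estimated from a catalogue of past values."""
    s = sorted(catalogue)
    n = len(s)
    return [bisect_right(s, v) / n for v in values]

def ed_distance(a, b):
    """Euclidean distance between two events in the transformed space,
    where every coordinate lies on the common [0, 1] scale."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Because every transformed coordinate is a probability, magnitudes, times, and locations become directly comparable, which is the point of the equivalent-dimension construction.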
Government and Public Awareness of Space Weather
NASA Astrophysics Data System (ADS)
Lanzerotti, Louis J.
2011-07-01
Solar cycle 24 continues to provide confusion in its start and its unsteady rise toward an uncertain maximum. Nevertheless, many entities, including the popular press and influential government agencies, are becoming more aware of the effects of the Sun and the near-Earth space environment on essential modern-day technologies. Within the past 6 months, news articles in the printed and digital press have included such headlines as "Solar storm delivers glancing blow to Earth—and a warning" (Christian Science Monitor, 9 June 2011), "Magnetic north pole shifts, forces runway closures at Florida airport" (http://FoxNews.com, 6 January 2011), "Major solar flare erupts, may make auroras visible in northern U.S." (SPACE.com, 10 March 2011, but picked up by FoxNews.com and Yahoo News), and "As the sun awakens, the power grid stands vulnerable" (Washington Post, 20 June 2011). All such news stories for the general public are a welcome recognition that weather in space can have important implications for human activities, including the performance—and even survivability—of some technologies.
Tradeoff studies in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1983-01-01
A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.
NASA Astrophysics Data System (ADS)
Gassmann, Matthias; Olsson, Oliver; Höper, Heinrich; Hamscher, Gerd; Kümmerer, Klaus
2016-04-01
The simulation of reactive transport in the aquatic environment is hampered by the ambiguity of environmental fate process conceptualizations for a specific substance in the literature. Concepts are usually identified by experimental studies and inverse modelling under controlled lab conditions in order to reduce environmental uncertainties such as uncertain boundary conditions and input data. However, since environmental conditions affect substance behaviour, a re-evaluation might be necessary under environmental conditions which might, in turn, be affected by uncertainties. Using a combination of experimental data and simulations of the leaching behaviour of the veterinary antibiotic Sulfamethazine (SMZ; synonym: sulfadimidine) and the hydrological tracer Bromide (Br) in a field lysimeter, we re-evaluated the sorption concepts of both substances under uncertain field conditions. Sampling data of a field lysimeter experiment in which both substances were applied twice a year with manure and sampled at the bottom of two lysimeters during three subsequent years was used for model set-up and evaluation. The total amount of leached SMZ and Br were 22 μg and 129 mg, respectively. A reactive transport model was parameterized to the conditions of the two lysimeters filled with monoliths (depth 2 m, area 1 m²) of a sandy soil showing a low pH value under which Bromide is sorptive. We used different sorption concepts such as constant and organic-carbon dependent sorption coefficients and instantaneous and kinetic sorption equilibrium. Combining the sorption concepts resulted in four scenarios per substance with different equations for sorption equilibrium and sorption kinetics. The GLUE (Generalized Likelihood Uncertainty Estimation) method was applied to each scenario using parameter ranges found in experimental and modelling studies. The parameter spaces for each scenario were sampled using a Latin Hypercube method which was refined around local model efficiency maxima. 
Results of the cumulative SMZ leaching simulations suggest that the best conceptualization combination is instantaneous sorption to organic carbon, which is consistent with the literature. The best Nash-Sutcliffe efficiency (Neff) was 0.96, and the 5th and 95th percentiles of the uncertainty estimation were 18 and 27 μg. In contrast, both scenarios of kinetic Br sorption had similar results (Neff = 0.99, uncertainty bounds 110-176 mg and 112-176 mg) but were clearly better than the instantaneous sorption scenarios. Therefore, only the concept of sorption kinetics could be identified for Br modelling, whereas both tested sorption equilibrium coefficient concepts performed equally well. The reasons for this specific case of equifinality may be uncertainties of model input data under field conditions or an insensitivity of the sorption equilibrium method due to the relatively low adsorption of Br. Our results show that it may be possible to identify, or at least falsify, specific sorption concepts under uncertain field conditions using a long-term leaching experiment and modelling methods. Cases of environmental fate concept equifinality raise the possibility of future model structure uncertainty analysis using an ensemble of models with different environmental fate concepts.
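The GLUE procedure with Latin Hypercube sampling can be sketched as below. The Nash-Sutcliffe likelihood measure matches the study; the behavioural threshold and the toy one-parameter model in the test are assumptions for illustration.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """n stratified samples within parameter bounds: one stratum per
    sample along each dimension, independently permuted."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency (1 = perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def glue(obs, model, bounds, n=500, threshold=0.5, seed=0):
    """Keep 'behavioural' parameter sets whose efficiency exceeds the
    threshold; their efficiencies weight the predictive uncertainty bounds."""
    kept = []
    for theta in latin_hypercube(n, bounds, seed):
        eff = nash_sutcliffe(obs, model(theta))
        if eff > threshold:
            kept.append((eff, theta))
    return kept
```

Refining the sampling around local efficiency maxima, as done in the study, would amount to re-running `latin_hypercube` over tightened bounds around the best retained sets.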
Feedback system design with an uncertain plant
NASA Technical Reports Server (NTRS)
Milich, D.; Valavani, L.; Athans, M.
1986-01-01
A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
Robust Stability and Control of Multi-Body Ground Vehicles with Uncertain Dynamics and Failures
2010-01-01
NASA Astrophysics Data System (ADS)
Bates, Matthew E.; Keisler, Jeffrey M.; Zussblatt, Niels P.; Plourde, Kenton J.; Wender, Ben A.; Linkov, Igor
2016-02-01
Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis—methods commonly applied in financial and operations management—to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios—combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.
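Operating a hazard score probabilistically with uncertain inputs can be sketched with a Monte Carlo loop over input ranges. The banding thresholds and uniform input ranges below are illustrative assumptions, not the CB Nanotool's actual scoring rules.

```python
import random

def hazard_band(score):
    """Illustrative banding of a numeric hazard score (thresholds assumed)."""
    return "high" if score >= 70 else "medium" if score >= 40 else "low"

def classification_probs(factors, n=10000, seed=0):
    """Monte Carlo over uncertain input factors (each a (low, high) range)
    to estimate the probability of each hazard classification."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        score = sum(rng.uniform(lo, hi) for lo, hi in factors)
        band = hazard_band(score)
        counts[band] = counts.get(band, 0) + 1
    return {b: c / n for b, c in counts.items()}
```

A research experiment corresponds to narrowing one factor's range; comparing classification probabilities before and after, divided by the elicited cost, gives the value-of-information ranking behind the efficient portfolios.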
Nie, Xianghui; Huang, Guo H; Li, Yongping
2009-11-01
This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.
NASA Astrophysics Data System (ADS)
Liu, Ming; Zhao, Lindu
2012-08-01
Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources that have been allocated to the epidemic areas in the early rescue cycle will affect later demand. In this article, an integrated and dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. In what follows, the application of the optimisation model, as well as a short sensitivity analysis of the key parameters in the time-varying demand forecast model, is presented. The results show that both the model and the solution algorithm are useful in practice, and that both objectives of inventory level and emergency rescue cost can be controlled effectively. Thus, it can provide some guidelines for decision makers when coping with an emergency rescue problem with uncertain demand, and offers a useful reference when issues pertain to bioterrorism.
Choi, Yun Ho; Yoo, Sung Jin
2017-03-28
A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.
NASA Astrophysics Data System (ADS)
Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun
2018-03-01
This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using a particle swarm optimization (PSO) algorithm combined with an analysis of the system's hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time, and provides good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. Finally, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
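Particle swarm optimization of the kind used for the GMS friction parameter identification can be sketched generically. In the paper's setting the cost function would measure mismatch between simulated and measured hysteresis; here it is abstracted to an arbitrary callable, and all swarm hyperparameters are common defaults rather than the authors' values.

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        seed=0):
    """Minimal particle swarm optimizer: each particle is drawn toward its
    own best-seen position and the swarm's best-seen position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    gcost = min(pcost)
    gbest = pbest[pcost.index(gcost)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, pos[i][:]
                if c < gcost:
                    gcost, gbest = c, pos[i][:]
    return gbest
```

The gradient-free nature of PSO is what makes it suitable for friction identification, where the hysteresis loss surface is nonsmooth.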
Novel neural control for a class of uncertain pure-feedback systems.
Shen, Qikun; Shi, Peng; Zhang, Tianping; Lim, Cheng-Chew
2014-04-01
This paper is concerned with the problem of adaptive neural tracking control for a class of uncertain pure-feedback nonlinear systems. Using the implicit function theorem and the backstepping technique, a practical robust adaptive neural control scheme is proposed to guarantee that the tracking error converges to an adjustable neighborhood of the origin by choosing appropriate design parameters. In contrast to conventional Lyapunov-based design techniques, an alternative Lyapunov function is constructed for the development of the control law and learning algorithms. Differing from existing results in the literature, the control scheme does not need to compute the derivatives of virtual control signals at each step of the backstepping design procedure. Furthermore, the scheme requires the desired trajectory and its first derivative rather than its first n derivatives. In addition, a useful property of the basis functions of the radial basis function network, which is used in the control design, is explored. Simulation results illustrate the effectiveness of the proposed techniques.
Application of a predictive Bayesian model to environmental accounting.
Anex, R P; Englehardt, J D
2001-03-30
Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
Li, Yongming; Tong, Shaocheng
2017-06-28
In this paper, an adaptive neural networks (NNs)-based decentralized control scheme with the prescribed performance is proposed for uncertain switched nonstrict-feedback interconnected nonlinear systems. It is assumed that nonlinear interconnected terms and nonlinear functions of the concerned systems are unknown, and also the switching signals are unknown and arbitrary. A linear state estimator is constructed to solve the problem of unmeasured states. The NNs are employed to approximate unknown interconnected terms and nonlinear functions. A new output feedback decentralized control scheme is developed by using the adaptive backstepping design technique. The control design problem of nonlinear interconnected switched systems with unknown switching signals can be solved by the proposed scheme, and only a tuning parameter is needed for each subsystem. The proposed scheme can ensure that all variables of the control systems are semi-globally uniformly ultimately bounded and the tracking errors converge to a small residual set with the prescribed performance bound. The effectiveness of the proposed control approach is verified by some simulation results.
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
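Monte Carlo generation of multi-hazard scenarios with differing return periods can be sketched as independent Poisson arrivals per hazard. Independence and the example rates are simplifying assumptions; the paper's model additionally captures hazard interactions, which are omitted here.

```python
import random

def generate_scenarios(hazards, years=50, n_scenarios=1000, seed=0):
    """Sample event counts per hazard over a planning horizon.
    hazards: dict of hazard name -> return period in years (illustrative).
    Counts are Poisson, sampled by summing unit-rate exponential gaps."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        events = {}
        for name, return_period in hazards.items():
            rate = years / return_period   # expected events in the horizon
            t, count = rng.expovariate(1.0), 0
            while t < rate:
                count += 1
                t += rng.expovariate(1.0)
            events[name] = count
        scenarios.append(events)
    return scenarios
```

A mitigation intervention can then be modelled as a change to a hazard's effective return period, and the scenario ensemble re-sampled to compare risk before and after.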
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting how many parameters may perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear. The methodology is compared with a prior approach based on multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper yields better performance.
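The robust least-squares idea in the second method can be sketched in a few lines. The sketch below uses the classical worst-case identity for a spectral-norm-bounded perturbation of the data matrix (minimizing ||Ax - b|| + rho*||x||); the matrix, data, and rho are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def robust_lstsq(A, b, rho):
    """Minimize the worst-case residual max_{||dA|| <= rho} ||(A + dA)x - b||,
    which equals ||Ax - b|| + rho * ||x|| (a classical robust-LS identity)."""
    def worst_case(x):
        return np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # nominal solution as the start
    return minimize(worst_case, x0, method="Nelder-Mead").x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.05 * rng.standard_normal(20)

x_nom = np.linalg.lstsq(A, b, rcond=None)[0]
x_rob = robust_lstsq(A, b, rho=0.5)
# The robust solution trades a slightly larger nominal residual for a smaller
# norm, i.e. less sensitivity to perturbations of A.
print(np.linalg.norm(x_nom).round(3), np.linalg.norm(x_rob).round(3))
```

The same shrinkage behaviour is what links this formulation to the Tikhonov-style third method mentioned above.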
The Impact of Uncertain Physical Parameters on HVAC Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai
HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include floor size, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of DR output to the uncertainty in the distribution parameters. The significance ranking of the uncertainty sources is given as future guidance for the modeling of HVAC demand response.
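A minimal sketch of the quasi-Monte Carlo sampling plus linear-regression step, assuming hypothetical parameter ranges and a toy first-order thermal response (the real HVAC model, Prony analysis, and parameter distributions are not reproduced here):

```python
import numpy as np
from scipy.stats import qmc

# Uncertain house/HVAC parameters (hypothetical ranges): floor area [m^2],
# insulation R-value, and thermal mass per unit floor area.
names = ["floor_area", "insulation_R", "mass_per_area"]
lo = np.array([80.0, 2.0, 100.0])
hi = np.array([250.0, 6.0, 400.0])

sampler = qmc.Sobol(d=3, scramble=True, seed=1)
u = sampler.random_base2(m=8)             # 256 quasi-random points in [0,1)^3
x = qmc.scale(u, lo, hi)

# Toy response: thermal time constant tau ~ R * C, with C = area * mass density.
# A longer tau means slower temperature recovery after a DR setpoint change.
tau = x[:, 1] * (x[:, 0] * x[:, 2]) / 1e4

# Standardized linear-regression coefficients as a cheap sensitivity ranking.
xs = (x - x.mean(0)) / x.std(0)
coef, *_ = np.linalg.lstsq(xs, (tau - tau.mean()) / tau.std(), rcond=None)
print([names[i] for i in np.argsort(-np.abs(coef))])
```

Standardized coefficients capture only linear main effects; the paper's Prony analysis addresses the transient dynamics that this sketch omits.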
Value-informed space systems design and acquisition
NASA Astrophysics Data System (ADS)
Brathwaite, Joy
Investments in space systems are substantial, indivisible, and irreversible, characteristics that make them high-risk, especially when coupled with an uncertain demand environment. Traditional approaches to system design and acquisition, derived from a performance- or cost-centric mindset, incorporate little information about the spacecraft in relation to its environment and its value to its stakeholders. These traditional approaches, while appropriate in stable environments, are ill-suited for the current, distinctly uncertain, and rapidly changing technical and economic conditions; as such, they have to be revisited and adapted to the present context. This thesis proposes that in uncertain environments, decision-making with respect to space system design and acquisition should be value-based, or at a minimum value-informed. This research advances the value-centric paradigm by providing the theoretical basis, foundational frameworks, and supporting analytical tools for value assessment of priced and unpriced space systems. For priced systems, stochastic models of the market environment and financial models of stakeholder preferences are developed and integrated with a spacecraft-sizing tool to assess the system's net present value. The analytical framework is applied to a case study of a communications satellite, with market, financial, and technical data obtained from the satellite operator, Intelsat. The case study investigates the implications of the value-centric versus the cost-centric design and acquisition choices. Results identify the ways in which value-optimal spacecraft design choices are contingent on both technical and market conditions, and show that larger spacecraft, for example, which reap economies-of-scale benefits reflected in their decreasing cost-per-transponder, are not always the best (most valuable) choices.
Market conditions and technical constraints for which convergence occurs between design choices under a cost-centric and a value-centric approach are identified and discussed. In addition, an innovative approach for characterizing value uncertainty through partial moments, a technique used in finance, is adapted to an engineering context and applied to priced space systems. Partial moments disaggregate uncertainty into upside potential and downside risk, and as such, they provide the decision-maker with additional insights for value-uncertainty management in design and acquisition. For unpriced space systems, this research first posits that their value derives from, and can be assessed through, the value of information they provide. To this effect, a Bayesian framework is created to assess system value in which the system is viewed as an information provider and the stakeholder an information recipient. Information has value to stakeholders as it changes their rational beliefs enabling them to yield higher expected pay-offs. Based on this marginal increase in expected pay-offs, a new metric, Value-of-Design (VoD), is introduced to quantify the unpriced system’s value. The Bayesian framework is applied to the case of an Earth Science satellite that provides hurricane information to oil rig operators using nested Monte Carlo modeling and simulation. Probability models of stakeholders’ beliefs, and economic models of pay-offs are developed and integrated with a spacecraft payload generation tool. The case study investigates the information value generated by each payload, with results pointing to clusters of payload instruments that yielded higher information value, and minimum information thresholds below which it is difficult to justify the acquisition of the system. 
In addition, an analytical decision tool, probabilistic Pareto fronts, is developed in the Cost-VoD trade space to provide the decision-maker with additional insights into the coupling of a system's probable value generation and its associated cost risk.
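The partial-moment decomposition mentioned above is easy to state concretely. The sketch below computes second-order lower and upper partial moments of a simulated NPV distribution around a target; the distribution and target are illustrative only:

```python
import numpy as np

def lower_partial_moment(x, target, order=2):
    """Downside risk: E[max(target - X, 0)^order]."""
    return np.mean(np.maximum(target - x, 0.0) ** order)

def upper_partial_moment(x, target, order=2):
    """Upside potential: E[max(X - target, 0)^order]."""
    return np.mean(np.maximum(x - target, 0.0) ** order)

rng = np.random.default_rng(42)
npv = rng.normal(loc=100.0, scale=30.0, size=100_000)   # simulated NPV, $M

lpm = lower_partial_moment(npv, target=100.0)
upm = upper_partial_moment(npv, target=100.0)
# For order 2, LPM + UPM equals the mean squared deviation about the target,
# so a symmetric distribution centered on the target splits it roughly 50/50.
print(round(lpm, 1), round(upm, 1))
```

A skewed NPV distribution would split asymmetrically, which is exactly the extra information partial moments give the decision-maker over a single variance number.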
Spatial education: improving conservation delivery through space-structured decision making
Moore, Clinton T.; Shaffer, Terry L.; Gannon, Jill J.
2013-01-01
Adaptive management is a form of structured decision making designed to guide management of natural resource systems when their behaviors are uncertain. Where decision making can be replicated across units of a landscape, learning can be accelerated, and biological processes can be understood in a larger spatial context. Broad-based partnerships among land management agencies, exemplified by Landscape Conservation Cooperatives (conservation partnerships created through the U.S. Department of the Interior), are potentially ideal environments for implementing spatially structured adaptive management programs.
OPUS: Optimal Projection for Uncertain Systems
1988-10-01
[OCR fragments from the scanned report. Recoverable citation: D. C. Hyland, "An Experimental Testbed for Validation of Control Methodologies in Large Space Optical Structures," November 1986. Recoverable front matter: the work was supported by the Department of the Air Force and performed at Lincoln Laboratory; the authors are with the Control Analysis and Synthesis Group, Harris. Recoverable technical fragment (Remark 2.3): under the assumption that (A, B, Q) is controllable and observable, the matrix in question is nonnegative semisimple.]
Grasp planning under uncertainty
NASA Technical Reports Server (NTRS)
Erkmen, A. M.; Stephanou, H. E.
1989-01-01
The planning of dexterous grasps for multifingered robot hands operating in uncertain environments is covered. A sensor-based approach to the planning of a reach path prior to grasping is first described. An on-line, joint-space finger path planning algorithm for the enclose phase of grasping is then developed. The algorithm minimizes the impact momentum of the hand. It uses a Preshape Jacobian matrix to map task-level hand preshape requirements into kinematic constraints. A master-slave scheme avoids inter-finger collisions and reduces the dimensionality of the planning problem.
TROTER's (Tiny Robotic Operation Team Experiment): A new concept of space robots
NASA Technical Reports Server (NTRS)
Su, Renjeng
1990-01-01
In view of the future need for automation and robotics in space and the existing approaches to the problem, we propose a new concept of robots for space construction. The new concept is based on the idea of decentralization. Decentralization occurs, on the one hand, in the use of teams of many cooperating robots for construction tasks; redundancy and modular design are exploited to achieve high reliability in team robotic operations, so the reliability requirement on individual robots is greatly reduced. Decentralization is also manifested in the proposed control hierarchy, which eventually includes humans in the loop. The control strategy is constrained by various time delays and calls for different levels of abstraction of the task dynamics. Such technology is needed for remote control of robots in an uncertain environment, and it also relaxes concerns about human safety around robots. This presentation also introduces the required technologies behind the new robotic concept.
A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration
NASA Technical Reports Server (NTRS)
Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce
2008-01-01
Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
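A toy version of the probabilistic strategic analysis can be sketched as a Monte Carlo simulation with a simple contingency rule; all probabilities, launch cadences, and stand-down times below are hypothetical, not NASA figures:

```python
import random

def simulate_campaign(n_missions=10, p_launch_success=0.98,
                      p_vehicle_ok=0.95, slip_months=6, n_trials=20_000):
    """Each mission attempt may fail at launch or in flight; a failure triggers
    a contingency rule (stand-down plus re-flight), so campaign duration is
    stochastic rather than following the deterministic plan."""
    durations = []
    for _ in range(n_trials):
        months, flown = 0, 0
        while flown < n_missions:
            months += 4  # nominal cadence: one launch attempt every 4 months
            if random.random() < p_launch_success and random.random() < p_vehicle_ok:
                flown += 1
            else:
                months += slip_months  # contingency: stand-down, then re-fly
        durations.append(months)
    return durations

random.seed(7)
runs = simulate_campaign()
deterministic_plan = 10 * 4  # 40 months if nothing ever fails
mean_dur = sum(runs) / len(runs)
p95 = sorted(runs)[int(0.95 * len(runs))]
print(deterministic_plan, round(mean_dur, 1), p95)
```

Comparing the deterministic plan against the simulated mean and 95th percentile is the kind of robustness check the paper describes, just at toy scale.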
Space Adaptation Back Pain: A Retrospective Study
NASA Technical Reports Server (NTRS)
Kerstman, Eric
2009-01-01
Astronaut back pain is frequently reported in the early phase of space flight as crews adapt to microgravity. The epidemiology of space adaptation back pain (SABP) has not been well established. This presentation seeks to determine the incidence of SABP among astronauts, develop a case definition of SABP, delineate the nature and pattern of SABP, review available treatments and their effectiveness in relieving SABP, and identify any operational impact of SABP. A retrospective review of all available mission medical records of astronauts in the U.S. space program was performed. The review found that the incidence of SABP was 53% among astronauts in the U.S. space program; that most cases of SABP are mild, self-limited, or responsive to available treatment; that there are no currently accepted preventive measures for SABP; that it is difficult to predict who will develop SABP; that the precise mechanism and spinal structures responsible for SABP are uncertain; that there was no documented evidence of direct operational mission impact related to SABP; and that there was potential for mission impact related to uncontrolled pain, sleep disturbance, or the adverse side effects of anti-inflammatory medications.
Experimental gravitation in space - Is there a future?
NASA Astrophysics Data System (ADS)
Wharton, R. A.; McKay, C. P.; Mancinelli, R. L.; Simmons, G. M.
Experimental gravitation enters the 1990s with a past full of successes, but with a future full of uncertainties. Intellectually, the field is as vigorous as ever, with major thrusts in three main areas: the search for gravitational radiation, the study of post-Newtonian and post-post-Newtonian effects, and the detection of hypothetical feeble new interactions. It is the only branch of space research involved in fundamental physics. But politically and financially, the future is uncertain. Competition for funding and for flight opportunities will be stiff for the foreseeable future, both with other disciplines such as astrophysics, planetary science, and the military, and within experimental gravitation itself. Difficult choices lie ahead. This paper reviews the current state of the field and attempts to peer into the future.
White blood cell segmentation by color-space-based k-means clustering.
Zhang, Congcong; Xiao, Xiaoyan; Li, Xiaomei; Chen, Ying-Jie; Zhen, Wu; Chang, Jun; Zheng, Chengyun; Liu, Zhi
2014-09-01
White blood cell (WBC) segmentation, which is important for cytometry, is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain background of blood smear images. This paper proposes a novel method for the nucleus and cytoplasm segmentation of WBCs for cytometry. A color adjustment step was also introduced before segmentation. Color space decomposition and k-means clustering were combined for segmentation. A database of 300 microscopic blood smear images was used to evaluate the performance of our method. The proposed segmentation method achieves 95.7% and 91.3% overall accuracy for nucleus segmentation and cytoplasm segmentation, respectively. Experimental results demonstrate that the proposed method can segment WBCs effectively with high accuracy.
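The color-decomposition-plus-k-means pipeline can be sketched on a synthetic image. The stain colors, the minimal k-means (with farthest-point initialization), and the darkest-cluster nucleus heuristic below are illustrative stand-ins for the paper's actual method:

```python
import numpy as np

def kmeans(x, k=3, iters=10, seed=0):
    """Minimal k-means on feature vectors x of shape (n_pixels, n_features)."""
    rng = np.random.default_rng(seed)
    # Farthest-point initialization avoids collapsing onto one dominant color.
    centers = [x[rng.integers(len(x))].astype(float)]
    for _ in range(k - 1):
        d = ((x[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(x[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels, centers

# Synthetic "blood smear": dark purple nucleus, pale cytoplasm ring, light
# background, standing in for a real RGB microscope image.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
img = np.where(r[..., None] < 8, [90, 40, 120],          # nucleus
        np.where(r[..., None] < 16, [190, 160, 200],     # cytoplasm
                 [235, 230, 225])).astype(float)         # background

labels, centers = kmeans(img.reshape(-1, 3), k=3)
sizes = np.bincount(labels, minlength=3)
nucleus = np.argmin(centers.sum(1))   # darkest cluster as the nucleus
print(sizes)
```

A real pipeline would first decompose the RGB image into an explicit color space (e.g. HSV or CMYK components) to separate stain from background; the clean synthetic colors here make raw RGB sufficient.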
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
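The Morris screening step can be illustrated with a radial one-at-a-time variant of the elementary-effects method; the four-parameter toy model below is a stand-in for the stroke model:

```python
import numpy as np

def morris_screening(f, n_params, n_traj=50, delta=0.25, seed=0):
    """Radial one-at-a-time elementary effects (a simplified Morris variant).
    Returns mu* (mean |EE|, overall influence) and sigma (spread of the EEs,
    indicating nonlinearity or interactions)."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)   # random base point
        fx = f(x)
        for i in range(n_params):
            x2 = x.copy()
            x2[i] += delta
            ee[t, i] = (f(x2) - fx) / delta
    return np.abs(ee).mean(0), ee.std(0)

# Toy stand-in for the stroke model: one strong linear input, an interacting
# pair, and an inert fourth input that screening should flag as ignorable.
def model(x):
    return 5.0 * x[0] + 2.0 * x[1] * x[2]

mu_star, sigma = morris_screening(model, n_params=4)
print(mu_star.round(2))
```

Parameters with large mu* would go on to calibration, and those with mu* near zero would be fixed at best-guess values, mirroring the triage described in the abstract.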
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the earth system is evolving, in which several compartment models are coupled to achieve the best possible physically consistent representation. We used the model TerrSysMP, which fully couples subsurface, land surface, and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual-reality run was performed at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by varying soil and vegetation parameters and lateral atmospheric forcing among the different ensemble members in a systematic way. For some variables and time periods, the ensemble runs deviated largely from the virtual-reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions; this was the case, for example, for river discharge in the summer. We also analyzed the spread of model states as a function of time and found clear relations between the spread and the time of year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air-pressure gradients (in summer) than under the larger air-pressure gradients in winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
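The idea of treating input-error means and standard deviations as uncertain parameters can be sketched with a toy transport relation and random-walk Metropolis sampling (SRH-1D and the Wu equation are not reproduced; every number below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic truth: transport capacity y = a * q^b, with biased, noisy flow
# "measurements" q_obs standing in for uncertain upstream boundary inputs.
a_true, b_true = 0.5, 1.5
q_true = rng.uniform(5, 50, 40)
q_obs = q_true + rng.normal(1.0, 2.0, 40)
y_obs = a_true * q_true ** b_true + rng.normal(0, 2.0, 40)

def log_post(theta):
    a, b, mu_in, sd_in = theta
    if a <= 0 or b <= 0 or sd_in <= 0:
        return -np.inf
    q_hat = np.maximum(q_obs - mu_in, 1e-6)   # bias-corrected input
    y_hat = a * q_hat ** b
    # First-order propagation of the input-error std into predictive variance.
    var = 2.0 ** 2 + (a * b * q_hat ** (b - 1) * sd_in) ** 2
    return -0.5 * np.sum((y_obs - y_hat) ** 2 / var + np.log(var))

# Random-walk Metropolis over (a, b, input bias mean, input error std).
theta = np.array([1.0, 1.0, 0.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, [0.02, 0.02, 0.1, 0.1])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[10_000:])
print(post.mean(0).round(2))
```

The key move, mirroring the paper, is that the Gaussian input-error mean and standard deviation sit in the parameter vector and are sampled alongside the physical parameters.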
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
Adaptive Neural Networks for Automatic Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakas, D. P.; Vlachos, D. S.; Simos, T. E.
The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, in this work we apply an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out to test the new method.
Sagittal balance, a useful tool for neurosurgeons?
Villard, Jimmy; Ringel, Florian; Meyer, Bernhard
2014-01-01
New instrumentation techniques have made corrections of the spinal architecture possible. Sagittal balance was first described as an important parameter for assessing spinal deformity in the early 1970s, but over the last decade its importance has grown with published results on overall quality of life and fusion rate. Up until now, most studies have concentrated on spinal deformity surgery, and its use in daily neurosurgical practice remains uncertain and may warrant further studies.
New explanation of Raman peak redshift in nanoparticles
NASA Astrophysics Data System (ADS)
Meilakhs, A. P.; Koniakhin, S. V.
2017-10-01
In this letter, we propose a new model that explains the Raman peak downshift observed in nanoparticles with respect to bulk materials. The proposed model takes into account the discreteness of the vibrational spectra of nanoparticles. For crystals with a cubic lattice (diamond, silicon, germanium) we give a relation between the displacement of the Raman peak position and the size of the nanoparticles. The proposed model does not include any uncertain parameters, unlike the conventionally used phonon confinement model (PCM), and can be employed for unambiguous nanoparticle size estimation.
Mechanical Properties of EPON 826/DEA Epoxy
2008-07-26
[OCR fragments from the scanned report. Recoverable text: Equations (2) and (5), including a term of the form Eβ(ε̇ − ε̇pβ) from Eq. (5b), are solved simultaneously as a system of time-dependent differential equations to determine the stress; estimates of the underlying physical parameters are highly uncertain but have only a weak effect on the stress-strain relationships. Recoverable reference: Chou, S.C., Robertson, K.D., et al.: The effect of strain rate and heat developed during deformation on the stress-strain curve, (20), 4923-4928 (1998).]
A Survey of Probabilistic Methods for Dynamical Systems with Uncertain Parameters.
1986-05-01
[OCR fragments of the report's reference list. Recoverable entries: "An Approach to the Theoretical Background of Statistical Energy Analysis Applied to Structural Vibration," Journ. Acoust. Soc. Amer., Vol. 69; 80. Lyon, R.H., "Statistical Energy Analysis of Dynamical Systems," M.I.T. Press, 1975; late references added in proofreading: 81. Dowell, E.H., and Kubota, Y., "Asymptotic Modal Analysis and Statistical Energy Analysis of Dynamical Systems," Journ. Appl. Mech.]
Similitude relations for buffet and wing rock on delta wings
NASA Astrophysics Data System (ADS)
Mabey, D. G.
1997-08-01
Vortex flow phenomena at high angles of incidence are of great interest to the designers of advanced combat aircraft. The steady phenomena (such as steady lift and pitching moments) are understood fairly well, whereas the unsteady phenomena are still uncertain. This paper addresses two important unsteady phenomena on delta wings. With regard to the frequency parameter of the quasi-periodic excitation caused by vortex bursting, a new correlation is established covering a range of sweep back from 60 to 75°. With regard to the much lower frequency parameter of limit-cycle rigid-body wing-rock, a new experiment shows conclusively that although the motion is non-linear, the frequency parameter can be predicted by quasi-steady theory. As a consequence, for a given sweep angle, the frequency parameter is inversely proportional to the square root of the inertia in roll. This is an important observation when attempting to extrapolate from model tests in wind tunnels to predict the wing-rock characteristics of aircraft.
Developing and applying metamodels of high resolution ...
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to, (1) developing functional or statistical relationships between a model’s input and output variables for model analysis, interpretation, or information consumption by users’ clients; (2) quantifying a model’s sensitivity to alternative or uncertain forcing functions, initial conditions, or parameters; and (3) characterizing the model’s response or state space. Using five existing models developed by the US Environmental Protection Agency, we generate a metamodeling database of the expected environmental and biological concentrations of 644 organic chemicals released into nine US rivers from wastewater treatment works (WTWs) assuming multiple loading rates and sizes of populations serviced. The chemicals of interest have log n-octanol/water partition coefficients (log KOW) ranging from 3 to 14, and the rivers of concern have mean annual discharges ranging from 1.09 to 3240 m3/s. Log-linear regression models are derived to predict mean annual dissolved and total water concentrations and total sediment concentrations of chemicals of concern based on their log KOW, Henry’s Law Constant, and WTW loading rate and on the mean annual discharges of the receiving rivers. Metamodels are also derived to predict mean annual chemical
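A log-linear regression metamodel of the kind described can be fitted with ordinary least squares; the coefficients and data below are synthetic, chosen only to mirror the stated predictor set (log KOW, Henry's Law Constant, WTW loading rate, river discharge):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic predictors spanning the stated ranges: log KOW in [3, 14] and
# log10 river discharge in [0.04, 3.5] (i.e. 1.09 to 3240 m^3/s).
log_kow = rng.uniform(3, 14, n)
log_h = rng.uniform(-8, -2, n)          # hypothetical log Henry's constant
log_load = rng.uniform(0, 4, n)         # hypothetical log WTW loading rate
log_q = rng.uniform(0.04, 3.5, n)

# Hypothetical "true" coefficients used only to manufacture training data.
beta_true = np.array([-0.12, -0.05, 1.0, -1.0])
X = np.column_stack([log_kow, log_h, log_load, log_q])
log_c = 2.0 + X @ beta_true + rng.normal(0, 0.2, n)

# Fit the log-linear metamodel by ordinary least squares (intercept first).
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, log_c, rcond=None)
print(beta_hat.round(2))
```

The positive loading coefficient and negative discharge coefficient encode the expected dilution behaviour; the real metamodel coefficients come from regressing on the EPA model outputs.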
Implementation of a new fuzzy vector control of induction motor.
Rafa, Souad; Larabi, Abdelkader; Barazane, Linda; Manceur, Malik; Essounbouli, Najib; Hamzaoui, Abdelaziz
2014-05-01
The aim of this paper is to present a new approach to controlling an induction motor using type-1 fuzzy logic. The induction motor model is nonlinear, uncertain, and strongly coupled. The vector control technique, which is based on the inverse model of the induction motor, solves the coupling problem. Unfortunately, this condition is not satisfied in practice because of model uncertainties. The presence of these uncertainties led us to draw on human expertise in the form of fuzzy logic techniques. In order to maintain the decoupling and to overcome the sensitivity to parametric variations, the field-oriented control is replaced by a new control block. The simulation results show that both control schemes, in their basic configuration, provide comparable decoupling performance. However, the fuzzy vector control remains insensitive to parametric variations, unlike the classical scheme. The fuzzy vector control scheme was successfully implemented in real time using a digital signal processor board, dSPACE 1104. The efficiency of this technique was also verified experimentally under different dynamic operating conditions, such as sudden load changes, parameter variations, and speed changes. The fuzzy vector control is found to be well suited to induction motor applications. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy, and such a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy, a suitable exhaustive summary is required. This paper proposes two methods for deriving an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
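The underlying redundancy test (rank deficiency of the derivative matrix of an exhaustive summary with respect to the parameters) can be checked numerically as a stand-in for the symbolic method; the two toy summaries below are illustrative:

```python
import numpy as np

def deficiency(kappa, n_params, seed=0, eps=1e-6, tol=1e-4):
    """Parameter-redundancy estimate: n_params minus the numerical rank of the
    finite-difference Jacobian of the exhaustive summary at a generic point.
    Deficiency > 0 means some parameters cannot be estimated."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.2, 0.8, n_params)   # generic interior point
    k0 = np.asarray(kappa(theta), dtype=float)
    J = np.empty((k0.size, n_params))
    for i in range(n_params):
        t = theta.copy()
        t[i] += eps
        J[:, i] = (np.asarray(kappa(t), dtype=float) - k0) / eps
    return n_params - np.linalg.matrix_rank(J, tol=tol)

# Toy capture-recapture-style summaries with survival phi and detection p.
redundant = lambda th: [th[0] * th[1], (th[0] * th[1]) ** 2]  # only phi*p appears
identifiable = lambda th: [th[0] * th[1], th[0]]              # phi also seen alone

# Deficiency 1 for the redundant summary, 0 when phi enters on its own.
print(deficiency(redundant, 2), deficiency(identifiable, 2))
```

The second summary mimics the paper's integrated-model result: adding data that informs one parameter directly can make the combined model fully estimable.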
NASA Astrophysics Data System (ADS)
Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui
2016-07-01
In the initial alignment of a strapdown inertial navigation system (SINS), large misalignment angles introduce a nonlinear estimation problem, which can usually be handled with the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and the strong tracking scaled unscented Kalman filter (STSUKF) is proposed with fixed parameters to improve the convergence speed; these parameters, however, are artificially constructed and uncertain in real applications. To further improve alignment stability and reduce the burden of parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). An initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is then designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with schemes based on SUKF and STSUKF.
Numerical Simulation and Quantitative Uncertainty Assessment of Microchannel Flow
NASA Astrophysics Data System (ADS)
Debusschere, Bert; Najm, Habib; Knio, Omar; Matta, Alain; Ghanem, Roger; Le Maitre, Olivier
2002-11-01
This study investigates the effect of uncertainty in physical model parameters on computed electrokinetic flow of proteins in a microchannel with a potassium phosphate buffer. The coupled momentum, species transport, and electrostatic field equations give a detailed representation of electroosmotic and pressure-driven flow, including sample dispersion mechanisms. The chemistry model accounts for pH-dependent protein labeling reactions as well as detailed buffer electrochemistry in a mixed finite-rate/equilibrium formulation. To quantify uncertainty, the governing equations are reformulated using a pseudo-spectral stochastic methodology, which uses polynomial chaos expansions to describe uncertain/stochastic model parameters, boundary conditions, and flow quantities. Integration of the resulting equations for the spectral mode strengths gives the evolution of all stochastic modes for all variables. Results show the spatiotemporal evolution of uncertainties in predicted quantities and highlight the dominant parameters contributing to these uncertainties during various flow phases. This work is supported by DARPA.
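The polynomial chaos machinery can be illustrated in one dimension, where the moments follow directly from the spectral mode strengths; the example expansion of exp(ξ) is a textbook case, not the microchannel model:

```python
import numpy as np
from math import factorial

# 1-D Hermite polynomial chaos for u = exp(xi), xi ~ N(0,1). The exact PCE
# coefficients in probabilists' Hermite polynomials are c_k = exp(1/2)/k!,
# so the truncated moments can be checked against the lognormal moments.
order = 8
c = np.array([np.exp(0.5) / factorial(k) for k in range(order + 1)])

mean = c[0]                                              # <u> = c_0
var = sum(c[k]**2 * factorial(k) for k in range(1, order + 1))  # sum c_k^2 <He_k^2>

exact_mean = np.exp(0.5)                                 # E[exp(xi)]
exact_var = np.exp(2.0) - np.exp(1.0)                    # Var[exp(xi)]
```

The same bookkeeping generalizes to the coupled flow equations: each field carries a vector of mode strengths, and integrating their evolution yields the uncertainty in every predicted quantity.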
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are the most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
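A first-order Sobol index estimate can be sketched with a Saltelli-style pick-freeze Monte Carlo scheme; the toy additive model below stands in for the LaSAR transport model and has analytic indices of 0.2 and 0.8:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy additive model: Var = Var(x1) + 4 Var(x2), so S1 = 0.2, S2 = 0.8.
    return x[:, 0] + 2.0 * x[:, 1]

n, d = 200_000, 2
A = rng.uniform(0.0, 1.0, (n, d))       # two independent sample matrices
B = rng.uniform(0.0, 1.0, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))  # total output variance

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # pick-freeze: column i taken from B
    S.append(np.mean(fB * (model(ABi) - fA)) / var)  # Saltelli-type estimator
```

For the transport problem the same scheme is run with the seven uncertain parameters, and the resulting indices give the ranking of macro-dispersion, mass transfer rate, and the other inputs.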
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1992-01-01
The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and the sector-based approach is presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce it.
NASA Astrophysics Data System (ADS)
Lutoff, C.; Anquetin, S.; Ruin, I.; Chassande, M.
2009-09-01
Flash floods are complex phenomena. The atmospheric and hydrological mechanisms generating them are not completely understood, leading to highly uncertain forecasts of and warnings for these events. At the same time, warning and crisis response to such violent and fast events is not a straightforward process. In both the social and the physical aspects of the problem, the space and time scales involved in hydrometeorology, human behavior and the science of social organizations are of crucial importance. Forecasters, emergency managers, mayors, school superintendents, school transportation managers, first responders and road users all have different time and space frameworks that they use to take emergency decisions for themselves, their group or their community. Integrating the space and time scales of both the phenomenon and human activities is therefore necessary to better address questions such as forecasting lead time and warning efficiency. The aim of this oral presentation is to focus on the spatio-temporal aspects of flash floods to improve our understanding of the event dynamics relative to the different scales of the social response. The authors propose a framework of analysis to compare the temporality of: i) the forecasts (from Météo-France and from EFAS (Thielen et al., 2008)); ii) the meteorological and hydrological parameters; iii) the social response at different scales. The September 2005 event is particularly interesting for such an analysis. The rainfall episode lasted nearly a week, with two distinct phases separated by low-intensity precipitation. As a result, the Météo-France vigilance bulletins were somewhat disconnected from the local flood impacts. Our analysis focuses on the timings of different types of local response, including the delicate issue of school transportation, with regard to the forecasts and the actual dynamics of the event.
Borrelli, O; Mancini, V; Thapar, N; Ribolsi, M; Emerenziani, S; de'Angelis, G; Bizzarri, B; Lindley, K J; Cicala, M
2014-04-01
The diagnostic corroboration of the relationship between gastro-oesophageal reflux disease (GERD) and chronic cough remains challenging. To compare oesophageal mucosal intercellular space diameter (ISD) in children with GERD, children with gastro-oesophageal reflux (GER)-related cough (GrC) and a control group, and to explore the relationship between baseline impedance levels and dilated ISD in children with GER-related cough. Forty children with GERD, 15 children with GrC and 12 controls prospectively underwent oesophagogastroduodenoscopy (EGD) with oesophageal biopsies taken 2-3 cm above squamocolumnar junction. ISD were quantified using transmission electron microscopy. Impedance-pH monitoring with evaluation of baseline impedance in the most distal impedance channel was performed in both patient groups. A significant difference in mean ISD values was found between GrC patients (0.9 ± 0.2 μm) and controls (0.5 ± 0.2 μm, P < 0.001), whereas there was no difference between GrC and GERD group (1 ± 0.3 μm, NS). No difference was found in the mean ISD between GrC children with or without pathological oesophageal acid exposure time (1 ± 0.3 vs. 0.9 ± 0.2 μm), and there was no correlation between ISD and any reflux parameter. Finally, there was no correlation between ISD and distal baseline impedance values (r:-0.35; NS). In children with reflux-related cough, dilated intercellular space diameter appears to be an objective and useful marker of oesophageal mucosal injury regardless of acid exposure, and its evaluation should be considered for those patients where the diagnosis is uncertain. In children with reflux-related cough, baseline impedance levels have no role in identifying reflux-induced oesophageal mucosal ultrastructural changes. © 2014 John Wiley & Sons Ltd.
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
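A minimal sketch of Bayesian parameter updating by random-walk Metropolis; the linear "model", the synthetic measurements and all numbers below are illustrative stand-ins for the calibrated building-energy model, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
x_grid = np.arange(1.0, 11.0)
true_theta = 2.0
y = true_theta * x_grid + rng.normal(0.0, 0.5, x_grid.size)  # synthetic "measurements"

def log_post(theta):
    # Gaussian likelihood (sigma = 0.5) plus a N(1, 1) prior on theta.
    resid = y - theta * x_grid
    return -0.5 * np.sum(resid**2) / 0.5**2 - 0.5 * (theta - 1.0)**2

theta, lp, chain = 1.0, log_post(1.0), []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.1)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
post_mean = np.mean(chain[5_000:])            # posterior mean after burn-in
```

The posterior mean recovers the data-generating value, and the spread of the retained chain quantifies the remaining parameter uncertainty, which is the quantity the calibration study validates.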
A new delay-independent condition for global robust stability of neural networks with time delays.
Samli, Ruya
2015-06-01
This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
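The flavor of such easily testable matrix conditions can be sketched as follows; the specific condition, network and numbers are illustrative assumptions, not the paper's theorem:

```python
import itertools
import numpy as np

# For dx/dt = -C x + A f(x) with slope bound L on the activations, one
# classical sufficient condition for a unique, globally stable equilibrium
# is that Q = 2C/L - (A + A^T) is positive definite. Since Q is affine in A
# and its minimum eigenvalue is concave, checking the vertices of an
# interval-bounded (norm-bounded) uncertainty box on A suffices.
C = np.diag([3.0, 3.0])
L = 1.0
A_nom = np.array([[0.5, -0.4],
                  [0.3,  0.2]])

def stable(A):
    Q = 2.0 * C / L - (A + A.T)
    return bool(np.all(np.linalg.eigvalsh(Q) > 0.0))

# check every vertex of the box A_nom +/- 0.2 (entrywise)
ok = all(
    stable(A_nom + 0.2 * np.array(s, dtype=float).reshape(2, 2))
    for s in itertools.product([-1, 1], repeat=4)
)
```

Here `ok` confirms robust stability over the whole uncertainty box from a finite set of symmetric-matrix eigenvalue tests, which is the kind of check the paper's conditions reduce to.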
On the adaptive sliding mode controller for a hyperchaotic fractional-order financial system
NASA Astrophysics Data System (ADS)
Hajipour, Ahamad; Hajipour, Mojtaba; Baleanu, Dumitru
2018-05-01
This manuscript mainly focuses on the construction, dynamic analysis and control of a new fractional-order financial system. The basic dynamical behaviors of the proposed system are studied such as the equilibrium points and their stability, Lyapunov exponents, bifurcation diagrams, phase portraits of state variables and the intervals of system parameters. It is shown that the system exhibits hyperchaotic behavior for a number of system parameters and fractional-order values. To stabilize the proposed hyperchaotic fractional system with uncertain dynamics and disturbances, an efficient adaptive sliding mode controller technique is developed. Using the proposed technique, two hyperchaotic fractional-order financial systems are also synchronized. Numerical simulations are presented to verify the successful performance of the designed controllers.
Fuzzy Neural Networks for Decision Support in Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakas, D. P.; Vlachos, D. S.; Simos, T. E.
There are a large number of parameters which one can take into account when building a negotiation model. These parameters are in general uncertain, thus leading to models which represent them with fuzzy sets. On the other hand, the nature of these parameters makes them very difficult to model with precise values. During negotiation, these parameters play an important role by altering the outcomes or changing the state of the negotiators. One reasonable way to model this procedure is to accept fuzzy relations (from theory or experience). The action of these relations on fuzzy sets produces new fuzzy sets which describe the new state of the system or the modified parameters. But in the majority of these situations the relations are multidimensional, leading to complicated models and exponentially increasing computational time. In this paper a solution to this problem is presented. It is shown that fuzzy neural networks can substitute for fuzzy relations with comparable results. Finally, a simple simulation is carried out in order to test the new method.
Does intolerance of uncertainty predict anticipatory startle responses to uncertain threat?
Nelson, Brady D; Shankman, Stewart A
2011-08-01
Intolerance of uncertainty (IU) has been proposed to be an important maintaining factor in several anxiety disorders, including generalized anxiety disorder, obsessive-compulsive disorder, and social phobia. While IU has been shown to predict subjective ratings and decision-making during uncertain/ambiguous situations, few studies have examined whether IU also predicts emotional responding to uncertain threat. The present study examined whether IU predicted aversive responding (startle and subjective ratings) during the anticipation of temporally uncertain shocks. Sixty-nine participants completed three experimental conditions during which they received: no shocks, temporally certain/predictable shocks, and temporally uncertain shocks. Results indicated that IU was negatively associated with startle during the uncertain threat condition in that those with higher IU had a smaller startle response. IU was also only related to startle during the uncertain (and not the certain/predictable) threat condition, suggesting that it was not predictive of general aversive responding, but specific to responses to uncertain aversiveness. Perceived control over anxiety-related events mediated the relation between IU and startle to uncertain threat, such that high IU led to lowered perceived control, which in turn led to a smaller startle response. We discuss several potential explanations for these findings, including the inhibitory qualities of IU. Overall, our results suggest that IU is associated with attenuated aversive responding to uncertain threat. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Norton, A.; Rayner, P. J.; Scholze, M.; Koffi, E. N. D.
2016-12-01
The intercomparison study CMIP5, among other studies (e.g. Bodman et al., 2013), has shown that the land carbon flux contributes significantly to the uncertainty in projections of future CO2 concentration and climate (Friedlingstein et al., 2014). The main challenge lies in disaggregating the relatively well-known net land carbon flux into its component fluxes, gross primary production (GPP) and respiration. Model simulations of these processes disagree considerably, and accurate observations of photosynthetic activity have proved elusive. Here we build upon the Carbon Cycle Data Assimilation System (CCDAS) (Rayner et al., 2005) to constrain estimates of one of these uncertain fluxes, GPP, using satellite observations of Solar Induced Fluorescence (SIF). SIF has considerable benefits over other proxy observations as it tracks not just the presence of vegetation but actual photosynthetic activity (Walther et al., 2016; Yang et al., 2015). To combine these observations with process-based simulations of GPP we have coupled the model SCOPE with the CCDAS model BETHY. This provides a mechanistic relationship between SIF and GPP, and the means to constrain the processes relevant to SIF and GPP via model parameters in a data assimilation system. We ingest SIF observations from NASA's Orbiting Carbon Observatory 2 (OCO-2) for 2015 into the data assimilation system to constrain estimates of GPP in space and time, while allowing for explicit consideration of uncertainties in parameters and observations. Here we present first results of the assimilation with SIF. Preliminary results indicate a constraint on global annual GPP of at least 75% when using SIF observations, reducing the uncertainty to < 3 PgC yr-1. A large portion of the constraint is propagated via parameters that describe leaf phenology. These results help to bring together state-of-the-art observations and models to improve understanding and predictive capability of GPP.
Nodes on ropes: a comprehensive data and control flow for steering ensemble simulations.
Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Hirsch, Christian; Schindler, Benjamin; Blöschl, Günther; Gröller, M Eduard
2011-12-01
Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system. 
© 2011 IEEE
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2003-04-01
A new theory of space is suggested. It represents a new point of view which has arisen from a critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding the essence of space. The starting point of the theory is represented by the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of Nature: Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties and features, and these properties and features are inseparable characteristics of the material object and belong only to the material object. (b) The principle of the existence of a material object: an object exists as objective reality, and movement is a form of existence of the object. (c) The principle (definition) of the movement of an object: movement is change (i.e. the transition of some states into others) in general; movement determines a direction, and direction characterizes movement. (d) The principle of the existence of time: time exists as a parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general; space exists only as a form of existence of the properties and features of the object. This means that the space is a set of the measures of the object (a measure is the philosophical category meaning the unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is a set of the states of the object. (2) The states of the object are manifested only in a system of reference.
The main informational property of the unitary system "researched physical object + system of reference" is that the system of reference determines (measures, calculates) the parameters of the subsystem "researched physical object" (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is a measure of the object. The total number of mutually independent parameters of the object is called the dimension of the space of the object. (4) The set of numerical values (i.e. the range, the spectrum) of each parameter is a subspace of the object. (The coordinate space, the momentum space and the energy space are examples of subspaces of the object.) (5) The set of parameters of the object is divided into two non-intersecting (opposite) classes: the class of internal parameters and the class of non-internal (i.e. external) parameters. The class of external parameters is divided into two non-intersecting (opposite) subclasses: the subclass of absolute parameters (characterizing the form and sizes of the object) and the subclass of non-absolute (relative) parameters (characterizing the position and coordinates of the object). (6) The set of external parameters forms the external space of the object, called the geometrical space of the object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of a macroscopic object is equal to six. (8) In the general case, the external absolute space (i.e. the form, the sizes) and the external relative space (i.e. the position, the coordinates) of any object are mutually dependent because of the influence of a medium. The geometrical space of such an object is called a non-Euclidean space.
If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of such an object is a homogeneous and isotropic geometrical space, called the Euclidean space of the object. Consequences: (i) the question of the true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.
Cerulean Warbler Occurrence Atlas for Military Installations
2010-04-01
[Table fragment, occurrence status by agency and installation: … Army Ammunition Plant (Closed), IN, NO, 2009 (Army); Iowa Army Ammunition Plant, IA, UNCERTAIN, 2009 (Army); J. Percy Priest Lake, TN, UNCERTAIN, 2009 (USACE); …; Stonewall Jackson Lake, WV, UNCERTAIN, 2009 (USACE); Summersville Lake, WV, no POC (USACE); Sunflower Army Ammunition Plant, KS, no POC (Army); Sutton Lake, WV, UNCERTAIN (USACE).]
Giannella, Luca; Mfuta, Kabala; Tuzio, Antonella; Cerami, Lillo Bruno
2016-02-01
Retroperitoneal uterine leiomyoma is a very rare occurrence, and to discover it as a cause of female sexual dysfunction in a teen is unusual. An 18-year-old black woman reported deep dyspareunia, resulting in severe distress. Gynecological and instrumental examinations showed a pelvic mass of 7 cm in diameter. The preoperative diagnosis was uterine fibroid, but the exact location of the leiomyoma was uncertain. Laparoscopic examination showed a pedunculated retroperitoneal cervical leiomyoma in the left pararectal space. After surgical excision of the mass, normal sexual activity was restored. When a teen experiences pain with intercourse, pelvic masses should be part of the differential diagnosis of dyspareunia. Copyright © 2016 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Postma, Barry Dirk
2005-01-01
This thesis discusses the application of a robust constrained optimization approach to control design, used to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.
Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize and extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, is only a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a regression-on-an-appearance-manifold problem, and new regression-on-manifolds methods are used for its solution.
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang
1992-01-01
Both feedback and feedforward control approaches for uncertain dynamical systems (in particular, with uncertainty in structural mode frequency) are investigated. The control objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant uncertainty. Preshaping of an ideal, time-optimal control input using a tapped-delay filter is shown to provide a fast settling time with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. It is shown that a properly designed feedback controller performs well compared with a time-optimal open-loop controller with special preshaping for performance robustness. Also included are two separate papers by the same authors on this subject.
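The preshaping idea can be illustrated with the classical two-impulse zero-vibration (ZV) shaper, a simple instance of such a tapped-delay filter; the mode frequency and damping below are arbitrary illustration values:

```python
import numpy as np

# ZV shaper for a mode with natural frequency wn and damping ratio zeta:
# convolving any command with these two impulses cancels the residual
# vibration of the nominal mode.
wn, zeta = 2.0 * np.pi, 0.05
wd = wn * np.sqrt(1.0 - zeta**2)                    # damped frequency
K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))

amps = np.array([1.0, K]) / (1.0 + K)               # impulse amplitudes (sum to 1)
times = np.array([0.0, np.pi / wd])                 # second impulse at half period

def residual(w, z):
    """Relative residual vibration of the impulse sequence at a mode (w, z)."""
    wd_ = w * np.sqrt(1.0 - z**2)
    phasor = np.sum(amps * np.exp(z * w * times) * np.exp(1j * wd_ * times))
    return abs(phasor)

r_nominal = residual(wn, zeta)       # ~0 at the design frequency
r_detuned = residual(1.2 * wn, zeta) # nonzero when the true mode is 20% off
```

The nonzero residual under detuning is exactly the robustness issue the abstract addresses: longer tapped-delay filters (more impulses) flatten the sensitivity around the nominal frequency at the cost of added delay.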
NASA Astrophysics Data System (ADS)
Atanasov, Victor
2017-07-01
We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to a geometry-dependent coherence length and Ginzburg-Landau parameter, which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time does not affect the ideal diamagnetism of the superconductor but acts as a chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent; the superconductor's order-parameter dynamics therefore affects the curvature of space-time, and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time. Experimental consequences are discussed.
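A schematic form of such a curvature-coupled free energy density, written here purely as an illustration (the coupling constant η and the use of the Ricci scalar R are assumptions, not taken from the paper):

```latex
f = \alpha \lvert\psi\rvert^{2} + \frac{\beta}{2}\lvert\psi\rvert^{4}
  + \frac{1}{2m^{*}}\left\lvert\left(-i\hbar\nabla - \frac{e^{*}}{c}\mathbf{A}\right)\psi\right\rvert^{2}
  + \frac{\lvert\mathbf{B}\rvert^{2}}{8\pi}
  + \eta\,\hbar^{2} R\,\lvert\psi\rvert^{2}
```

Under this form the curvature term shifts α to α + ηℏ²R, so the coherence length ξ ∝ |α_eff|^(-1/2), and hence the Ginzburg-Landau parameter κ = λ/ξ, become geometry dependent, consistent with the abstract's claim that curvature acts like a chemical potential and can change the superconductor's type.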
García-Pérez, Miguel A.; Alcalá-Quintana, Rocío
2017-01-01
Psychophysical data from dual-presentation tasks are often collected with the two-alternative forced-choice (2AFC) response format, asking observers to guess when uncertain. For an analytical description of performance, psychometric functions are then fitted to data aggregated across the two orders/positions in which stimuli were presented. Yet, order effects make aggregated data uninterpretable, and the bias with which observers guess when uncertain precludes separating sensory from decisional components of performance. A ternary response format in which observers are also allowed to report indecision should fix these problems, but a comparative analysis with the 2AFC format has never been conducted. In addition, fitting ternary data separated by presentation order poses serious challenges. To address these issues, we extended the indecision model of psychophysical performance to accommodate the ternary, 2AFC, and same–different response formats in detection and discrimination tasks. Relevant issues for parameter estimation are also discussed along with simulation results that document the superiority of the ternary format. These advantages are demonstrated by fitting the indecision model to published detection and discrimination data collected with the ternary, 2AFC, or same–different formats, which had been analyzed differently in the sources. These examples also show that 2AFC data are unsuitable for testing certain types of hypotheses. matlab and R routines written for our purposes are available as Supplementary Material, which should help spread the use of the ternary format for dependable collection and interpretation of psychophysical data. PMID:28747893
NASA Astrophysics Data System (ADS)
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode for enhancing model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
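The correlation measure at the heart of such objective functions can be sketched with the classical modal assurance criterion (MAC); the coordinate strain variant in the paper localizes the same quantity to individual coordinates. The mode-shape vector below is an illustrative example, not the beam's actual mode:

```python
import numpy as np

def mac(phi_e, phi_a):
    """Modal assurance criterion between an experimental and an analytical
    mode shape; 1.0 means perfectly correlated (up to scaling)."""
    num = np.abs(phi_e @ phi_a) ** 2
    return num / ((phi_e @ phi_e) * (phi_a @ phi_a))

phi_exp = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])  # illustrative shape
mac_same = mac(phi_exp, 2.0 * phi_exp)   # scaling-invariant, equals 1.0
mac_diff = mac(phi_exp, phi_exp[::-1])   # mismatched shape, well below 1.0
```

In an updating loop, a weighted sum of the natural-frequency residuals and (1 - MAC)-type residuals is minimized over the uncertain FE parameters, e.g. by the hybrid genetic/pattern-search optimizer the paper adopts.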
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
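The Monte Carlo structure of such a risk model can be sketched as follows: sample uncertain impactor properties, map them to ground damage, and aggregate a distribution of outcomes. All distributions and the damage-radius relation below are illustrative assumptions, not the PAIR model's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 100_000
diameter = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)  # metres (assumed)
density = rng.uniform(1500.0, 3500.0, size=n)                   # kg/m^3 (assumed)
velocity = rng.normal(20_000.0, 3_000.0, size=n)                # m/s (assumed)

mass = density * (np.pi / 6.0) * diameter**3                    # spherical impactor
energy_mt = 0.5 * mass * velocity**2 / 4.184e15                 # megatons TNT

# illustrative cube-root damage-radius scaling and population exposure draw
damage_radius_km = 2.0 * energy_mt ** (1.0 / 3.0)
pop_density = rng.lognormal(np.log(50.0), 1.0, size=n)          # people/km^2 (assumed)
affected = np.pi * damage_radius_km**2 * pop_density            # affected population

p99 = np.quantile(affected, 0.99)   # tail statistic for risk-tolerance questions
```

Questions like "what minimum size constitutes a threat" are then answered by conditioning such outcome distributions on diameter and comparing tail statistics against a stated risk tolerance.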
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
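The parameter sweep behind PVI can be illustrated with a toy risk model; the risk function and its credible range below are hypothetical:

```python
import numpy as np

def risk(p):
    """Placeholder anomaly risk model: system risk as a function of one
    uncertain risk-model parameter p (illustrative only)."""
    return 1e-4 + 1e-2 * np.exp(-((p - 0.8) ** 2) / 0.005)

def pvi(risk_fn, lo, hi, n=1000):
    """Parameter Vulnerability Importance scan: sweep the parameter over
    its credible range [lo, hi] and report the maximum risk produced
    and the parameter value where it occurs."""
    grid = np.linspace(lo, hi, n)
    r = risk_fn(grid)
    i = int(np.argmax(r))
    return r[i], grid[i]

worst_risk, worst_p = pvi(risk, 0.0, 1.0)
```

A high `worst_risk` flags the range around `worst_p` as a potential anomaly vulnerability warranting further investigation, exactly the prioritization the abstract describes.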
Iqbal, Muhammad; Rehan, Muhammad; Khaliq, Abdul; Saeed-ur-Rehman; Hong, Keum-Shik
2014-01-01
This paper investigates the chaotic behavior and synchronization of two different coupled chaotic FitzHugh-Nagumo (FHN) neurons with unknown parameters under external electrical stimulation (EES). The coupled FHN neurons of different parameters admit unidirectional and bidirectional gap junctions in the medium between them. Dynamical properties, such as the increase in synchronization error as a consequence of the deviation of neuronal parameters for unlike neurons, the effect of difference in coupling strengths caused by the unidirectional gap junctions, and the impact of large time-delay due to separation of neurons, are studied in exploring the behavior of the coupled system. A novel integral-based nonlinear adaptive control scheme is derived for synchronization of two coupled delayed chaotic FHN neurons with different and unknown parameters under uncertain EES, coping with the infeasibility of the recovery variable. Further, to guarantee robust synchronization of different neurons against disturbances, the proposed control methodology is modified to achieve uniformly ultimately bounded synchronization. The parametric estimation errors can be reduced by selecting suitable control parameters. The effectiveness of the proposed control scheme is illustrated via numerical simulations.
Riley, Pete; Ben-Nun, Michal; Linker, Jon A.; Cost, Angelia A.; Sanchez, Jose L.; George, Dylan; Bacon, David P.; Riley, Steven
2015-01-01
The potential rapid availability of large-scale clinical episode data during the next influenza pandemic suggests an opportunity for increasing the speed with which novel respiratory pathogens can be characterized. Key intervention decisions will be determined by both the transmissibility of the novel strain (measured by the basic reproductive number R0) and its individual-level severity. The 2009 pandemic illustrated that estimating individual-level severity, as described by the proportion pC of infections that result in clinical cases, can remain uncertain for a prolonged period of time. Here, we use 50 distinct US military populations during 2009 as a retrospective cohort to test the hypothesis that real-time encounter data combined with disease dynamic models can be used to bridge this uncertainty gap. Effectively, we estimated the total number of infections in multiple early-affected communities using the model and divided that number by the known number of clinical cases. Joint estimates of severity and transmissibility clustered within a relatively small region of parameter space, with 40 of the 50 populations bounded by: pC, 0.0133–0.150 and R0, 1.09–2.16. These fits were obtained despite widely varying incidence profiles: some with spring waves, some with fall waves and some with both. To illustrate the benefit of specific pairing of rapidly available data and infectious disease models, we simulated a future moderate pandemic strain with pC approximately ten times that of 2009; the results demonstrate that even before the peak had passed in the first affected population, R0 and pC could be well estimated. This study provides a clear reference in this two-dimensional space against which future novel respiratory pathogens can be rapidly assessed and compared with previous pandemics. PMID:26402446
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method closely approximates the exceedance probabilities of the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
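The setting can be sketched with the Streeter-Phelps deficit equation: a Monte Carlo sample over uncertain rates and loads, against which a central-value (first-order linearization point) evaluation can be compared. The parameter distributions below are assumed for illustration:

```python
import numpy as np

def deficit(t, kd, ka, L0, D0=0.0):
    """Streeter-Phelps dissolved-oxygen deficit (mg/L) at travel time t (days):
    D(t) = kd*L0/(ka-kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)."""
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

rng = np.random.default_rng(1)
n = 20_000
kd = rng.normal(0.35, 0.05, n)   # deoxygenation rate, 1/d (assumed)
ka = rng.normal(0.70, 0.07, n)   # reaeration rate, 1/d (assumed)
L0 = rng.normal(15.0, 2.0, n)    # initial BOD, mg/L (assumed)

mc = deficit(2.0, kd, ka, L0)                 # Monte Carlo output sample
central = deficit(2.0, 0.35, 0.70, 15.0)      # evaluation at central values
```

The advanced first-order method of the paper would move the linearization point away from the central values to match each sought exceedance level; the sketch only shows the two ingredients being compared.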
Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that in spite of its model-based nature, GPC has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
Robust optimization of supersonic ORC nozzle guide vanes
NASA Astrophysics Data System (ADS)
Bufi, Elio A.; Cinnella, Paola
2017-03-01
An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense-gas effects are non-negligible for this application; they are taken into account by describing the thermodynamics with the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines a CFD solver; an Uncertainty Quantification (UQ) loop, based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output; and a multi-objective non-dominated sorting genetic algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
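The variance-based Sobol' analysis can be sketched with the standard pick-freeze estimator; here a cheap analytic function stands in for the Gaussian process emulator of Polyphemus/Polair3D, and the input roles are purely illustrative:

```python
import numpy as np

def model(x):
    """Cheap analytic stand-in for the emulated dispersion model:
    output driven strongly by input 0 (emitted amount) and weakly
    by input 1 (wind perturbation). Illustrative only."""
    return 4.0 * x[:, 0] + x[:, 1]

def sobol_first_order(f, d, n=100_000, seed=0):
    """First-order Sobol' indices via the Saltelli-style pick-freeze
    estimator, for d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # freeze all factors except factor i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

S = sobol_first_order(model, d=2)
```

For this linear toy model the analytic first-order indices are 16/17 and 1/17, so the estimator should rank the "emitted amount" input far above the "wind" input, mirroring the aggregated-output finding in the abstract.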
Transdimensional Seismic Tomography
NASA Astrophysics Data System (ADS)
Bodin, T.; Sambridge, M.
2009-12-01
In seismic imaging the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography where the model complexity is controlled automatically by the data. In particular we use a variable parametrization consisting of Voronoi cells with mobile geometry, shape and number, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework, which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach, as many potential solutions are generated with variable numbers of cells. Information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates, and experiments with synthetic data suggest they represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. Intriguingly, the Bayesian formulation can be extended to the case where data uncertainty is also uncertain. Experiments show that it is possible to recover the data noise estimate while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix-based inversion scheme. The method has also been applied to real data obtained from cross correlation of ambient noise, where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed for 5 s data together with uncertainty estimates.
NASA Astrophysics Data System (ADS)
Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto
2017-04-01
We perform a variance-based global sensitivity analysis to assess the impact of the uncertainty associated with (a) the spatial distribution of hydraulic parameters, e.g., hydraulic conductivity, and (b) the conceptual model adopted to describe the system on the characterization of a regional-scale aquifer. We do so in the context of inverse modeling of the groundwater flow system. The study aquifer lies within the provinces of Bergamo and Cremona (Italy) and covers a planar extent of approximately 785 km². Analysis of available sedimentological information allows identifying a set of main geo-materials (facies/phases) which constitute the geological makeup of the subsurface system. We parameterize the conductivity field following two different conceptual schemes. The first one is based on the representation of the aquifer as a Composite Medium. In this conceptualization the system is composed of distinct (five, in our case) lithological units. Hydraulic properties (such as conductivity) in each unit are assumed to be uniform. The second approach assumes that the system can be modeled as a collection of media coexisting in space to form an Overlapping Continuum. A key point in this model is that each point in the domain represents a finite volume within which each of the (five) identified lithofacies can be found with a certain volumetric percentage. Groundwater flow is simulated with the numerical code MODFLOW-2005 for each of the adopted conceptual models. We then quantify the relative contribution of the considered uncertain parameters, including boundary conditions, to the total variability of the piezometric level recorded in a set of 40 monitoring wells by relying on the variance-based Sobol indices. The latter are derived numerically for the investigated settings through the use of a model-order reduction technique based on the polynomial chaos expansion approach.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
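The variogram view of sensitivity can be illustrated at a single base point; the toy response surface below is an assumption, and full VARS averages such variograms over the whole factor space rather than evaluating them at one point:

```python
import numpy as np

def directional_variogram(f, x0, direction, h_values):
    """Sample variogram of a model response along one factor direction:
    gamma(h) = 0.5 * (f(x0 + h*u) - f(x0))^2, evaluated at a base point x0.
    Larger gamma at a given scale h means stronger sensitivity there."""
    u = direction / np.linalg.norm(direction)
    return np.array([0.5 * (f(x0 + h * u) - f(x0)) ** 2 for h in h_values])

# Toy response surface: much more sensitive to factor 0 than to factor 1
f = lambda x: 10.0 * x[0] ** 2 + 0.1 * x[1] ** 2
h = np.linspace(0.01, 0.5, 10)
g0 = directional_variogram(f, np.zeros(2), np.array([1.0, 0.0]), h)
g1 = directional_variogram(f, np.zeros(2), np.array([0.0, 1.0]), h)
```

How gamma grows with h is the "spectrum of information" VARS exploits: derivative-based (Morris-like) behavior shows up at small h and variance-based (Sobol-like) behavior at large h.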
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie
2017-03-10
Broadband photometry of galaxies measures an unresolved mix of complex stellar populations, gas, and dust. Interpreting these data is a challenge for models: many studies have shown that properties derived from modeling galaxy photometry are uncertain by a factor of two or more, and yet answering key questions in the field now requires higher accuracy than this. Here, we present a new model framework specifically designed for these complexities. Our model, Prospector-α, includes dust attenuation and re-radiation, a flexible attenuation curve, nebular emission, stellar metallicity, and a six-component nonparametric star formation history. The flexibility and range of the parameter space, coupled with Markov chain Monte Carlo sampling within the Prospector inference framework, is designed to provide unbiased parameters and realistic error bars. We assess the accuracy of the model with aperture-matched optical spectroscopy, which was excluded from the fits. We compare spectral features predicted solely from fits to the broadband photometry to the observed spectral features. Our model predicts Hα luminosities with a scatter of ∼0.18 dex and an offset of ∼0.1 dex across a wide range of morphological types and stellar masses. This agreement is remarkable, as the Hα luminosity is dependent on accurate star formation rates, dust attenuation, and stellar metallicities. The model also accurately predicts dust-sensitive Balmer decrements, spectroscopic stellar metallicities, polycyclic aromatic hydrocarbon mass fractions, and the age- and metallicity-sensitive features Dn4000 and Hδ. Although the model passes all these tests, we caution that we have not yet assessed its performance at higher redshift or the accuracy of recovered stellar masses.
14 CFR 1214.813 - Computation of sharing and pricing parameters.
Code of Federal Regulations, 2010 CFR
2010-01-01
14 Aeronautics and Space 5 (2010-01-01): Computation of sharing and pricing parameters. Section 1214.813, Aeronautics and Space, NATIONAL AERONAUTICS AND SPACE ADMINISTRATION, SPACE FLIGHT, Reimbursement for Spacelab Services, § 1214.813 Computation of sharing and pricing...
Nuclear space power safety and facility guidelines study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehlman, W.F.
1995-09-11
This report addresses safety guidelines for space nuclear reactor power missions and was prepared by The Johns Hopkins University Applied Physics Laboratory (JHU/APL) under a Department of Energy grant, DE-FG01-94NE32180, dated 27 September 1994. This grant was based on a proposal submitted by the JHU/APL in response to an "Invitation for Proposals Designed to Support Federal Agencies and Commercial Interests in Meeting Special Power and Propulsion Needs for Future Space Missions". The United States has not launched a nuclear reactor since SNAP 10A in April 1965, although many Radioisotope Thermoelectric Generators (RTGs) have been launched. An RTG-powered system is planned for launch as part of the Cassini mission to Saturn in 1997. Recently the Ballistic Missile Defense Office (BMDO) sponsored the Nuclear Electric Propulsion Space Test Program (NEPSTP), which was to demonstrate and evaluate the Russian-built TOPAZ II nuclear reactor as a power source in space. As of late 1993 the flight portion of this program was canceled, but work to investigate the attributes of the reactor continued at a reduced level. While the future of space nuclear power systems is uncertain, there are potential space missions which would require them. The differences between space nuclear power systems and RTG devices are sufficient that safety and facility requirements warrant a review in the context of the unique features of a space nuclear reactor power system.
Morton's metatarsalgia: sonographic findings and correlated histopathology.
Read, J W; Noakes, J B; Kerr, D; Crichton, K J; Slater, H K; Bonar, F
1999-03-01
The results of 79 high resolution ultrasound examinations of the forefoot that were performed for suspected Morton's metatarsalgia were retrospectively assessed. Scans were only obtained if the pain was poorly localized or if there were atypical features that made the clinical diagnosis uncertain. Ultrasound detected 92 hypoechoic intermetatarsal web space masses in 63 patients. Surgery was performed on 23 web spaces in 22 patients where the response to nonsurgical management had been poor. The surgical specimens were retrieved and reviewed by a pathologist in 21 cases. The histopathology in 20 of 21 operated cases was that of Morton's neuroma; however, prominent mucoid degeneration was also found to involve the adjacent loose fibroadipose tissues in 19 of 20 neuroma specimens. Ultrasound was sensitive in the detection of web space abnormality (sensitivity, 0.95), but could not clearly separate Morton's neuroma from associated mass-like mucoid degeneration in the adjacent loose connective tissues. The implications of these observations for both diagnosis and treatment are discussed.
A new look at the Y tetraquarks and Ω_c baryons in the diquark model
NASA Astrophysics Data System (ADS)
Ali, Ahmed; Maiani, Luciano; Borisov, Anatoly V.; Ahmed, Ishtiaq; Aslam, M. Jamil; Parkhomenko, Alexander Ya.; Polosa, Antonio D.; Rehman, Abdur
2018-01-01
We analyze the hidden charm P-wave tetraquarks in the diquark model, using an effective Hamiltonian incorporating the dominant spin-spin, spin-orbit and tensor interactions. We compare with other P-wave systems such as P-wave charmonia and the newly discovered Ω_c baryons, analysed recently in this framework. Given the uncertain experimental situation on the Y states, we allow for different spectra and discuss the related parameters in the diquark model. In addition to the presently observed ones, we expect many more states in the supermultiplet of L=1 diquarkonia, whose J^{PC} quantum numbers and masses are worked out, using the parameters from the currently preferred Y-states pattern. The existence of these new resonances would be a decisive footprint of the underlying diquark dynamics.
Systematic Uncertainties in High-Energy Hadronic Interaction Models
NASA Astrophysics Data System (ADS)
Zha, M.; Knapp, J.; Ostapchenko, S.
2003-07-01
Hadronic interaction models for cosmic ray energies are uncertain since our knowledge of hadronic interactions is extrapolated from accelerator experiments at much lower energies. At present most high-energy models are based on the Gribov-Regge theory of multi-Pomeron exchange, which provides a theoretical framework to evaluate cross sections and particle production. While experimental data constrain some of the model parameters, others are not well determined and are therefore a source of systematic uncertainties. In this paper we evaluate the variation of results obtained with the QGSJET model, when modifying parameters relating to three major sources of uncertainty: the form of the parton structure function, the role of diffractive interactions, and the string hadronisation. Results on inelastic cross sections, on secondary particle production and on the air shower development are discussed.
Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable estimation of the system's parametric uncertainty. The adaptation is based on both the tracking error and the predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time-invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time-delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.
An Adaptive Control Technology for Safety of a GTM-like Aircraft
NASA Technical Reports Server (NTRS)
Matsutani, Megumi; Crespo, Luis G.; Annaswamy, Anuradha; Jang, Jinho
2010-01-01
An adaptive control architecture for safe performance of a transport aircraft subject to various adverse conditions is proposed and verified in this report. This architecture combines a nominal controller, based on a Linear Quadratic Regulator with integral action, and an adaptive controller that accommodates actuator saturation and bounded disturbances. The effectiveness of the baseline controller and its adaptive augmentation are evaluated using a stand-alone control verification methodology. Case studies that pair individual parameter uncertainties with critical flight maneuvers are studied. The resilience of the controllers is determined by evaluating the degradation in closed-loop performance resulting from increasingly larger deviations in the uncertain parameters from their nominal values. Symmetric and asymmetric actuator failures, flight upsets, and center-of-gravity displacements are some of the uncertainties considered.
Formation Flying: The Future of Remote Sensing from Space
NASA Technical Reports Server (NTRS)
Leitner, Jesse
2004-01-01
Over the next two decades a revolution is likely to occur in how remote sensing of Earth, other planets or bodies, and a range of phenomena in the universe is performed from space. In particular, current launch vehicle fairing volume and mass constraints will continue to restrict the size of monolithic telescope apertures which can be launched to accommodate only slightly more performance capability than is achievable today, such as by the Hubble Space Telescope. Systems under formulation today, such as the James Webb Space Telescope, will be able to increase aperture size and, hence, imaging resolution, by deploying segmented optics. However, this approach is limited as well, by our ability to control such segments to optical tolerances over long distances with highly uncertain structural dynamics connecting them. Consequently, for orders of magnitude improved resolution as required for imaging black holes, imaging planets, or performing asteroseismology, the only viable approach will be to fly a collection of spacecraft in formation to synthesize a virtual segmented telescope or interferometer with very large baselines. This presentation describes some of the strategic science missions planned in the National Aeronautics and Space Administration, and identifies some of the critical technologies needed to enable some of the most challenging space missions ever conceived which have realistic hopes of flying.
Calibration Laboratory Capabilities Listing as of April 2009
NASA Technical Reports Server (NTRS)
Kennedy, Gary W.
2009-01-01
This document reviews the Calibration Laboratory capabilities of various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility, Kennedy Space Center, Marshall Space Flight Center, Stennis Space Center, and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamics parameters. Some centers reported other parameters.
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
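The Hoeffding bound on which VFDT-style learners rest can be sketched as follows; the split rule is the standard VFDT criterion, while the gain values in the usage are illustrative:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound used by VFDT: with probability 1 - delta, the true
    mean of a variable of range R differs from the mean of n independent
    observations by at most eps = sqrt(R^2 * ln(1/delta) / (2n))."""
    return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(g_best, g_second, value_range=1.0, delta=1e-7, n=0):
    """Split a leaf when the observed information-gain advantage of the
    best attribute over the runner-up exceeds the Hoeffding bound."""
    return n > 0 and (g_best - g_second) > hoeffding_bound(value_range, delta, n)
```

As more examples reach a leaf, the bound shrinks and marginal gain differences become decisive; uVFDTc applies the same idea to expected gains computed from uncertain examples.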
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Superposition model analysis of the magnetocrystalline anisotropy of Ba-ferrite
NASA Astrophysics Data System (ADS)
Novák, Pavel
1994-06-01
Theoretical analysis of the first magnetocrystalline anisotropy constant K1 of BaFe12O19 is performed. Two contributions to K1 are considered: single-ion anisotropy and dipolar anisotropy. The parameter D, which determines the magnitude of the single-ion contribution, is calculated on the basis of the superposition model. It is argued that the disagreement between calculated and observed values of K1 is most likely connected with the contribution of Fe3+ ions on bipyramidal sites, for which the value of D is uncertain.
Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation
NASA Astrophysics Data System (ADS)
Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua
2015-09-01
Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, such wearable devices usually contain uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
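The least-squares estimation on a linearized error model can be sketched as follows; the Jacobians, parameter vector, and noise level are synthetic stand-ins, not the exoskeleton's actual kinematics:

```python
import numpy as np

# Linearized error model (assumed form): each measured end-point error
# e = J(q) @ d_theta, where d_theta collects the uncertain kinematic
# parameter deviations and J(q) is the identification Jacobian at pose q.
rng = np.random.default_rng(2)
d_theta_true = np.array([0.02, -0.01, 0.005])   # synthetic true deviations

J = rng.normal(size=(30, 3))                     # stacked Jacobians, 30 poses
e = J @ d_theta_true + rng.normal(scale=1e-4, size=30)  # noisy measured errors

# Least-squares estimate of the uncertain kinematic parameters
d_theta_hat, *_ = np.linalg.lstsq(J, e, rcond=None)
```

With enough well-conditioned poses, the estimate converges to the true deviations; the estimated corrections would then be folded back into the kinematic model.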
A robust optimisation approach to the problem of supplier selection and allocation in outsourcing
NASA Astrophysics Data System (ADS)
Fu, Yelin; Keung Lai, Kin; Liang, Liang
2016-03-01
We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konno, Kohkichi, E-mail: kohkichi@tomakomai-ct.ac.jp; Nagasawa, Tomoaki, E-mail: nagasawa@tomakomai-ct.ac.jp; Takahashi, Rohta, E-mail: takahashi@tomakomai-ct.ac.jp
We consider the scattering of a quantum particle by two independent, successive parity-invariant point interactions in one dimension. The parameter space for the two point interactions is given by the direct product of two tori, which is described by four parameters. By investigating the effects of the two point interactions on the transmission probability of a plane wave, we obtain the conditions on the parameter space under which perfect resonant transmission occurs. The resonance conditions are found to be described by symmetric and anti-symmetric relations between the parameters.
Mapping an operator's perception of a parameter space
NASA Technical Reports Server (NTRS)
Pew, R. W.; Jagacinski, R. J.
1972-01-01
Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.
NASA Astrophysics Data System (ADS)
Troldborg, Mads; Nowak, Wolfgang; Lange, Ida V.; Santos, Marta C.; Binning, Philip J.; Bjerg, Poul L.
2012-09-01
Mass discharge estimates are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Such estimates are, however, rather uncertain as they integrate uncertain spatial distributions of both concentration and groundwater flow. Here a geostatistical simulation method for quantifying the uncertainty of the mass discharge across a multilevel control plane is presented. The method accounts for (1) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics, (2) measurement uncertainty, and (3) uncertain source zone and transport parameters. The method generates conditional realizations of the spatial flow and concentration distribution. An analytical macrodispersive transport solution is employed to simulate the mean concentration distribution, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. The method avoids the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion, and may therefore be more practical than existing methods, which are either too simple or computationally demanding. The method is demonstrated on a field site contaminated with chlorinated ethenes. For this site, we show that including a physically meaningful concentration trend and the cosimulation of hydraulic conductivity and hydraulic gradient across the transect help constrain the mass discharge uncertainty. The number of sampling points required for accurate mass discharge estimation and the relative influence of different data types on mass discharge uncertainty are discussed.
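The final combination step described above, multiplying paired flow and concentration realizations across the control plane and summing, can be sketched in a few lines. The grid size, the lognormal distributions, and all numerical values below are hypothetical stand-ins for the conditional geostatistical realizations of the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical control plane discretized into cells of fixed area.
n_cells, cell_area = 50, 0.25  # m^2 per cell

n_real = 5000
mass_discharge = np.empty(n_real)
for i in range(n_real):
    # Stand-ins for conditional realizations: lognormal Darcy flux q [m/d]
    # and a skewed (Box-Cox-like) concentration field c [g/m^3].
    q = rng.lognormal(mean=-3.0, sigma=0.5, size=n_cells)
    c = rng.lognormal(mean=0.0, sigma=1.0, size=n_cells)
    mass_discharge[i] = np.sum(q * c * cell_area)  # g/d

lo, med, hi = np.percentile(mass_discharge, [5, 50, 95])
print(f"mass discharge 5/50/95 percentiles: {lo:.2f} / {med:.2f} / {hi:.2f} g/d")
```

In the actual method each (q, c) pair would come from the Bayesian geostatistical conditional simulation rather than independent lognormal draws; the summation step is the same.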
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill-climbing algorithm, often terminate at a local maximum.
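The local-maximum problem mentioned above can be illustrated with a toy hill climber. The "performance" surface and all values below are invented for illustration: a single greedy run stalls on whichever local maximum is nearest, while random restarts (one simple escape strategy) sample more of the space:

```python
import math
import random

def performance(x):
    # Toy multimodal "segmentation quality" surface with several local maxima.
    return math.sin(3 * x) * math.exp(-0.1 * (x - 5) ** 2)

def hill_climb(x0, step=0.05, iters=1000):
    # Greedy local search: move to the better neighbor until none improves.
    x = x0
    for _ in range(iters):
        best = max((x - step, x, x + step), key=performance)
        if best == x:
            break
        x = best
    return x

random.seed(1)
single = hill_climb(9.5)  # one run stalls on the nearest local maximum
starts = [random.uniform(0, 10) for _ in range(20)]
restarts = max((hill_climb(s) for s in starts), key=performance)
print(f"single start: {performance(single):.3f}, "
      f"20 restarts: {performance(restarts):.3f}")
```

Genetic algorithms achieve a similar effect through population diversity and mutation rather than explicit restarts.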
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Introduction: Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill-climbing algorithm, often terminate at a local maximum. PMID:23766941
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
14 CFR 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...
NASA Astrophysics Data System (ADS)
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, which can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, which presented different processes of period-adding bifurcations with chaos as one parameter was changed while the other was fixed at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos were observed, by decreasing the extracellular calcium concentration, from some neural pacemakers at different levels of extracellular 4-aminopyridine concentration and from other pacemakers at different levels of extracellular caesium concentration. By using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also reveal relationships between different firing patterns in two-dimensional parameter spaces.
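As a minimal illustration of the model behind these simulations, the three-variable Hindmarsh-Rose equations can be integrated with a simple Euler scheme. The parameter values below are the commonly cited standard set (not those of any specific figure in the study); the injected current I and the slow time scale r are the kind of parameters that would be swept to map a two-dimensional parameter space:

```python
import numpy as np

def hindmarsh_rose(I=3.0, r=0.006, T=200.0, dt=0.01):
    # dx/dt = y - a*x^3 + b*x^2 - z + I   (membrane potential)
    # dy/dt = c - d*x^2 - y               (fast recovery variable)
    # dz/dt = r*(s*(x - x_r) - z)         (slow adaptation variable)
    a, b, c, d, s, x_r = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6
    n = int(T / dt)
    x, y, z = -1.6, -11.8, 0.0
    xs = np.empty(n)
    for i in range(n):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

xs = hindmarsh_rose()
# Count spikes as upward crossings of x = 1; counting spikes per burst
# while sweeping (I, r) is how period-adding sequences are detected.
spikes = int(np.sum((xs[1:] >= 1.0) & (xs[:-1] < 1.0)))
print("spikes in the trace:", spikes)
```

A production sweep would use a higher-order integrator and discard the transient before classifying the firing pattern.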
Parameter-space metric of semicoherent searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Pletsch, Holger J.
2010-08-01
Continuous gravitational-wave (CW) signals such as those emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To perform the incoherent combination optimally, an understanding of the underlying parameter-space structure is required. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), only the metric resolution in the frequency derivatives is found to increase significantly with the number of segments.
On-orbit calibration for star sensors without a priori information.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang
2017-07-24
The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed for focal length and principal point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, principal point and distortion. The proposed method benefits from self-initialization: no attitude information or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experiment results demonstrate that the calibration is easy to operate, with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-05-01
Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/
Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.
2014-07-01
Deep absorption lines with the extremely high velocity of ˜0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to resolve this is to constrain the physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in AGN outflows. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Broad Fe emission lines, which reflect the geometry and viewing angle, are also successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss the launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.
Formation Flying: The Future of Remote Sensing from Space
NASA Technical Reports Server (NTRS)
Leitner, Jesse
2004-01-01
Over the next two decades a revolution is likely to occur in how remote sensing of Earth, other planets or bodies, and a range of phenomena in the universe is performed from space. In particular, current launch vehicle fairing volume and mass constraints will continue to restrict monolithic telescope apertures to little or no greater size than that of the Hubble Space Telescope, the largest aperture currently flying in space. Systems under formulation today, such as the James Webb Space Telescope, will be able to increase aperture size and, hence, imaging resolution by deploying segmented optics. However, this approach is limited as well by our ability to control such segments to optical tolerances over long distances with highly uncertain structural dynamics connecting them. Consequently, for the orders-of-magnitude improvement in resolution required for imaging black holes, imaging planets, or performing asteroseismology, the only viable approach will be to fly a collection of spacecraft in formation to synthesize a virtual segmented telescope or interferometer with very large baselines. This paper provides some basic definitions in the area of formation flying, describes some of the strategic science missions planned in the National Aeronautics and Space Administration, and identifies some of the critical technologies needed to enable some of the most challenging space missions ever conceived that have realistic hopes of flying.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate prediction of various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing the variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
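The derivative-free optimization step can be sketched with SciPy's Nelder-Mead implementation. The three-parameter objective below is a smooth hypothetical stand-in for the variational spot-size expression, not the actual formulation; it only illustrates optimizing a core parameter U together with two shape parameters without gradient information:

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # Hypothetical smooth objective in (U, a1, a2); minimum at (2, 1, 1).
    u, a1, a2 = p
    return (u - 2.0) ** 2 + 5.0 * (a1 - u**2 / 4) ** 2 + (a2 - 1.0) ** 2

# Nelder-Mead: simplex-based direct search, no derivatives required.
res = minimize(objective, x0=[1.0, 0.5, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6})
print(res.x.round(4))
```

Because Nelder-Mead needs only function evaluations, it tolerates the noisy or discontinuous behavior of U mentioned in the abstract, at the cost of slower convergence than gradient-based methods on smooth problems.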
Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A
2009-10-01
Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. 
In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in the closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
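The TCS-estimation part of this work can be illustrated with a zero-dimensional energy balance model. The dissertation uses an Unscented Kalman Filter to obtain probability densities; the sketch below substitutes a brute-force least-squares fit as a much simpler stand-in, and every number (heat capacity, forcing ramp, feedback parameter, noise level) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def ebm(lam, forcing, C=8.0, dt=1.0):
    # Zero-dimensional energy balance model: C dT/dt = F(t) - lam * T,
    # where lam (W/m^2/K) sets the climate sensitivity.
    T, out = 0.0, []
    for F in forcing:
        T += dt / C * (F - lam * T)
        out.append(T)
    return np.array(out)

years = np.arange(150)
forcing = 0.04 * years**2 / 150        # idealized forcing ramp, W/m^2
true_lam = 1.2
obs = ebm(true_lam, forcing) + rng.normal(0, 0.08, years.size)  # "observed" T

# Brute-force least squares over lam (the UKF would instead propagate
# a state/parameter distribution and update it sequentially).
lams = np.linspace(0.5, 2.5, 201)
sse = [np.sum((ebm(l, forcing) - obs) ** 2) for l in lams]
lam_hat = lams[int(np.argmin(sse))]
print(f"estimated feedback parameter: {lam_hat:.2f} W/m^2/K")
```

The point of the UKF in the dissertation is precisely what this sketch lacks: a sequential, probabilistic estimate whose uncertainty shrinks as observations accumulate.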
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, often from a single published value, and are then "tuned" using somewhat arbitrary trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFT and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET and runoff, and thereby identify a highly important source of DGVM uncertainty.
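The global search strategy described above can be sketched with a generic simulated annealing loop. The two-parameter "calibration skill" surface below is invented (two Gaussian bumps mimicking a multi-modal solution space); in the actual study each evaluation would be a full BIOMAP spin-up and transient run:

```python
import math
import random

random.seed(7)

def skill(p):
    # Hypothetical multi-modal skill surface on [0,1]^2: global peak near
    # (0.3, 0.7), secondary local peak near (0.8, 0.2).
    x, y = p
    return (math.exp(-8 * ((x - 0.3) ** 2 + (y - 0.7) ** 2))
            + 0.6 * math.exp(-8 * ((x - 0.8) ** 2 + (y - 0.2) ** 2)))

def anneal(p0, t0=1.0, cooling=0.995, steps=4000, jump=0.1):
    cur, cur_s = p0, skill(p0)
    best, best_s = cur, cur_s
    t = t0
    for _ in range(steps):
        cand = tuple(min(1.0, max(0.0, v + random.gauss(0, jump))) for v in cur)
        s = skill(cand)
        # Accept worse candidates with temperature-dependent probability,
        # which lets the search escape local optima while t is high.
        if s > cur_s or random.random() < math.exp((s - cur_s) / t):
            cur, cur_s = cand, s
        if cur_s > best_s:
            best, best_s = cur, cur_s
        t *= cooling
    return best, best_s

best, best_s = anneal((0.9, 0.1))  # start near the secondary peak
print(f"best skill found: {best_s:.3f} at {best}")
```

Repeating the search from many starting points, as the study does implicitly through its sampling design, is how multiple acceptable parameter sets would be collected.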
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to the limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linear or weakly nonlinear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of data in parameter estimation. With these two methods, parameters can be estimated from both hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the MODFLOW groundwater model, implementing both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering method while also accounting for data uncertainty. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture, using virtual observation wells to sample the simulated groundwater system periodically. The results showed that, by considering the uncertainty of data, the Bayesian maximum entropy filter provides an ideal result for real-time parameter estimation.
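The Extended Kalman Filter baseline mentioned above reduces, for a single uncertain parameter, to a scalar recursion. The sketch below is a toy: the steady-state observation operator (drawdown h as a function of hydraulic conductivity K) is a hypothetical stand-in for a MODFLOW run, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

Q = 1.0  # hypothetical pumping rate

def h_of(K):
    # Toy nonlinear observation: drawdown at a virtual well.
    return Q / (2 * np.pi * K)

def dh_dK(K):
    return -Q / (2 * np.pi * K ** 2)

true_K = 0.05
obs = h_of(true_K) + rng.normal(0.0, 0.05, 60)  # noisy "monitoring well" data

# Scalar EKF treating K as a random-walk state.
K_est, P = 0.08, 1.0             # initial guess and its variance
Rv, Qv = 0.05 ** 2, 1e-6         # observation / process noise variances
for z in obs:
    P += Qv                      # predict step (random-walk parameter model)
    H = dh_dK(K_est)             # linearize the observation about the estimate
    G = P * H / (H * P * H + Rv)  # Kalman gain
    K_est += G * (z - h_of(K_est))
    P *= (1.0 - G * H)
print(f"true K = {true_K}, EKF estimate = {K_est:.3f}")
```

The linearization step is exactly where the EKF's restriction to weak nonlinearity enters; the Bayesian Maximum Entropy approach instead carries hard and soft data through without this linearized update.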
Characterizing traffic under uncertain disruptions : an experimental approach.
DOT National Transportation Integrated Search
2013-03-01
The objective of the research is to study long-term traffic patterns under uncertain disruptions using : data collected from human subjects who simultaneously make route choices in controlled PC-based : laboratory experiments. Uncertain disruptions t...
14 CFR § 1214.117 - Launch and orbit parameters for a standard launch.
Code of Federal Regulations, 2014 CFR
2014-01-01
... flights: (1) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Launch and orbit parameters for a standard launch. § 1214.117 Section § 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE...
Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing
NASA Astrophysics Data System (ADS)
Datta, D.
2010-10-01
Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact on the ATAQE of radionuclide releases from any nuclear facility or of hazardous chemicals from any chemical plant. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk, and uncertainty modeling is an integral part of the risk assessment. The paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters of the corresponding models. Soft computing in this domain basically uses fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
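A triangular membership function and its alpha-cuts, the basic machinery of this fuzzy approach, can be sketched directly. The parameter values below are hypothetical:

```python
def tri_mu(x, a, b, c):
    # Membership of x in the triangular fuzzy number (a, b, c):
    # support [a, c], full membership (1.0) at the mode b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alpha_cut(a, b, c, alpha):
    # Interval of values whose membership is at least alpha; propagating
    # these intervals through the dispersion model yields fuzzy outputs.
    return (a + alpha * (b - a), c - alpha * (c - b))

# Hypothetical fuzzy dispersion coefficient (a, b, c) = (0.2, 0.5, 0.9):
lo, hi = alpha_cut(0.2, 0.5, 0.9, 0.5)
print(f"{lo:.2f} {hi:.2f}")  # 0.35 0.70
```

Evaluating the transport model at the endpoints of each alpha-cut (for monotone models) is the standard way such epistemic uncertainty is propagated to the dose or risk estimate.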
NASA Astrophysics Data System (ADS)
Nadège Ilembe Badouna, Audrey; Veres, Cristina; Haddy, Nadia; Bidault, François; Lefkopoulos, Dimitri; Chavaudra, Jean; Bridier, André; de Vathaire, Florent; Diallo, Ibrahima
2012-01-01
The aim of this paper was to determine anthropometric parameters leading to the least uncertain estimate of heart size when connecting a computational phantom to an external beam radiation therapy (EBRT) patient. From computed tomography images, we segmented the heart and calculated its total volume (THV) in a population of 270 EBRT patients of both sexes, aged 0.7-83 years. Our data were fitted using logistic growth functions. The patient age, height, weight, body mass index and body surface area (BSA) were used as explanatory variables. For both genders, good fits were obtained with both weight (R2 = 0.89 for males and 0.83 for females) and BSA (R2 = 0.90 for males and 0.84 for females). These results demonstrate that, among anthropometric parameters, weight plays an important role in predicting THV. These findings should be taken into account when assigning a computational phantom to a patient.
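The logistic-growth fitting used here can be sketched with a standard nonlinear least-squares routine. The synthetic data below are hypothetical (asymptote, rate and midpoint are invented, not the paper's fitted values); the sketch only shows the fitting procedure and the R-squared computation:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(w, L, k, w0):
    # Logistic growth of total heart volume (THV) with body weight w.
    return L / (1.0 + np.exp(-k * (w - w0)))

rng = np.random.default_rng(0)
weights = np.linspace(5, 100, 80)          # kg, hypothetical cohort
true = (800.0, 0.08, 35.0)                 # invented L, k, w0
thv = logistic(weights, *true) + rng.normal(0, 20, weights.size)

popt, _ = curve_fit(logistic, weights, thv, p0=(700, 0.05, 30))
resid = thv - logistic(weights, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((thv - thv.mean())**2)
print(f"L={popt[0]:.0f}, k={popt[1]:.3f}, w0={popt[2]:.1f}, R^2={r2:.2f}")
```

The same fit with BSA as the explanatory variable would differ only in the input column.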
An inexact reverse logistics model for municipal solid waste management systems.
Zhang, Yi Mei; Huang, Guo He; He, Li
2011-03-01
This paper proposes an inexact reverse logistics model for municipal solid waste management systems (IRWM). Waste managers, suppliers, industries and distributors are involved in strategic planning and operational execution through reverse logistics management. All the parameters are assumed to be intervals to quantify the uncertainties in the optimization process and solutions of IRWM. To solve this model, a piecewise interval programming method was developed to deal with min-min functions in both the objectives and the constraints. The application of the model is illustrated through a classical municipal solid waste management case; two scenarios with different cost parameters for the landfill and the waste-to-energy (WTE) facility were analyzed. The IRWM can reflect the dynamic and uncertain characteristics of MSW management systems and can facilitate the generation of desired management plans. The model could be further advanced by incorporating stochastic or fuzzy parameters into its framework. A multi-waste, multi-echelon, multi-uncertainty reverse logistics model for the waste management network would also be a desirable extension.
Panaceas, uncertainty, and the robust control framework in sustainability science
Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan
2007-01-01
A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
Jespersen, Sune N.; Bjarkam, Carsten R.; Nyengaard, Jens R.; Chakravarty, M. Mallar; Hansen, Brian; Vosegaard, Thomas; Østergaard, Leif; Yablonskiy, Dmitriy; Nielsen, Niels Chr.; Vestergaard-Poulsen, Peter
2010-01-01
Due to its unique sensitivity to tissue microstructure, diffusion-weighted magnetic resonance imaging (MRI) has found many applications in clinical and fundamental science. With few exceptions, a more precise correspondence between physiological or biophysical properties and the obtained diffusion parameters remains uncertain due to lack of specificity. In this work, we address this problem by comparing diffusion parameters of a recently introduced model for water diffusion in brain matter to light microscopy and quantitative electron microscopy. Specifically, we compare diffusion model predictions of neurite density in rats to optical myelin staining intensity and stereological estimation of neurite volume fraction using electron microscopy. We find that the diffusion model describes data better and that its parameters show stronger correlation with optical and electron microscopy, and thus reflect myelinated neurite density better than the more frequently used diffusion tensor imaging (DTI) and cumulant expansion methods. Furthermore, the estimated neurite orientations capture dendritic architecture more faithfully than DTI diffusion ellipsoids. PMID:19732836
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
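The iteration over proposal mean and covariance can be sketched in one dimension: draw from the current Gaussian proposal, weight by target over proposal, and refit the proposal to the weighted sample. The Gaussian target and all tuning constants below are illustrative assumptions, not the implementation described in the report.

```python
import math, random

random.seed(0)

def log_target(x):
    # Unnormalized log-posterior: a Gaussian with mean 2, variance 1
    return -0.5 * (x - 2.0) ** 2

def iterate_proposal(mu, sigma, n=5000, iters=5):
    """Importance sampling that refits a Gaussian proposal to the
    weighted samples at each iteration (1-D toy of the 'iterate over
    mean and covariance' idea)."""
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        # log weight = log target - log proposal (constants cancel
        # after self-normalization below)
        logw = [log_target(x)
                + 0.5 * ((x - mu) / sigma) ** 2 + math.log(sigma)
                for x in xs]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        s = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / s
        var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, xs)) / s
        sigma = math.sqrt(var)
    return mu, sigma

mu, sigma = iterate_proposal(0.0, 4.0)
print(round(mu, 2), round(sigma, 2))  # should land near the true (2, 1)
```

Because the draws within each iteration are independent, the inner loop parallelizes trivially, which is the scaling advantage over MCMC that the abstract highlights.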
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Huang, Guo H.
2011-12-01
Groundwater pollution has attracted increasing attention in the past decades. Conducting an assessment of groundwater contamination risk is desired to provide sound bases for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.
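The alpha-cut mechanism commonly used to push fuzzy inputs through a model can be sketched as follows: a triangular fuzzy number is sliced into an interval at each membership level alpha, the interval endpoints are propagated through a monotone model, and the outputs are reassembled into a fuzzy result. The triangular membership for dispersivity and the toy concentration response are hypothetical stand-ins for the IIFMS machinery.

```python
# Hedged sketch of alpha-cut interval propagation; all values invented.

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (lo, mode, hi) at level alpha."""
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def concentration(dispersivity):
    # hypothetical monotone (decreasing) response to dispersivity
    return 100.0 / (1.0 + dispersivity)

fuzzy_disp = (5.0, 10.0, 20.0)  # triangular membership for dispersivity
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(fuzzy_disp, alpha)
    # the model is decreasing, so interval endpoints swap
    c_lo, c_hi = concentration(hi), concentration(lo)
    print(alpha, round(c_lo, 2), round(c_hi, 2))
```

At alpha = 1 the cut collapses to the most likely value; at alpha = 0 it spans the full support, so the stack of output intervals reconstructs the fuzzy membership of the predicted concentration.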
A probabilistic asteroid impact risk model: assessment of sub-300 m impacts
NASA Astrophysics Data System (ADS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2017-06-01
A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
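A stripped-down version of the Monte Carlo scenario loop might look like the following: sample uncertain impactor properties, convert them to impact energy, map energy to a damage footprint, and aggregate the ensemble into a distribution. Every distribution and scaling constant here is a placeholder, not a value from the PAIR model.

```python
import math, random

random.seed(1)

def sample_impact():
    """Draw one hypothetical impact scenario (all numbers illustrative)."""
    diameter = 20.0 * (1.0 - random.random()) ** -0.5  # m, heavy-tailed sizes
    density = random.uniform(2000.0, 4000.0)           # kg/m^3
    velocity = random.uniform(12e3, 25e3)              # m/s
    mass = density * math.pi / 6.0 * diameter ** 3     # spherical impactor
    energy_mt = 0.5 * mass * velocity ** 2 / 4.184e15  # megatons TNT
    # crude blast-damage radius scaling ~ E^(1/3)
    return 2.0 * energy_mt ** (1.0 / 3.0)              # km

radii = sorted(sample_impact() for _ in range(10000))
print("median damage radius %.1f km, 99th pct %.1f km"
      % (radii[5000], radii[9900]))
```

The real model replaces each placeholder with published frequency rates, an entry/ablation/fragmentation model, and gridded population data, but the aggregation of sampled scenarios into outcome distributions follows the same pattern.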
NASA Astrophysics Data System (ADS)
Bell, A.; Hioki, S.; Wang, Y.; Yang, P.; Di Girolamo, L.
2016-12-01
Previous studies found that including ice particle surface roughness in forward light scattering calculations significantly reduces the differences between observed and simulated polarimetric and radiometric observations. While it is suggested that some degree of roughness is desirable, the appropriate degree of surface roughness to be assumed in operational cloud property retrievals and the sensitivity of retrieval products to this assumption remain uncertain. In an effort to resolve this ambiguity, we will present a sensitivity analysis of space-borne multi-angle observations of reflectivity to varying degrees of surface roughness. This process is twofold. First, sampling information and statistics of Multi-angle Imaging SpectroRadiometer (MISR) sensor data aboard the Terra platform will be used to define the most common viewing geometries. Using these defined geometries, reflectivity will be simulated for multiple degrees of roughness using results from adding-doubling radiative transfer simulations. Sensitivity of simulated reflectivity to surface roughness can then be quantified, thus yielding a more robust retrieval system. Second, sensitivity of the inverse problem will be analyzed. Spherical albedo values will be computed by feeding blocks of MISR data comprising cloudy pixels over ocean into the retrieval system, with assumed values of surface roughness. The sensitivity of spherical albedo to the inclusion of surface roughness can then be quantified, and the accuracy of retrieved parameters can be determined.
Nuclear modification factor in an anisotropic quark-gluon plasma
NASA Astrophysics Data System (ADS)
Mandal, Mahatsab; Bhattacharya, Lusaka; Roy, Pradip
2011-10-01
We calculate the nuclear modification factor (RAA) of light hadrons by taking into account the initial state momentum anisotropy of the quark-gluon plasma (QGP) expected to be formed in relativistic heavy ion collisions. Such an anisotropy can result from the initial rapid longitudinal expansion of the matter. A phenomenological model for the space-time evolution of the anisotropic QGP is used to obtain the time dependence of the anisotropy parameter ξ and the hard momentum scale, phard. The result is then compared with the PHENIX experimental data to constrain the isotropization time scale, τiso for fixed initial conditions (FIC). It is shown that the extracted value of τiso lies in the range 0.5⩽τiso⩽1.5. However, using a fixed final multiplicity (FFM) condition does not lead to any firm conclusion about the extraction of the isotropization time. The present calculation is also extended to contrast with the recent measurement of nuclear modification factor by the ALICE collaboration at √sNN = 2.76 TeV. It is argued that in the present approach, the extraction of τiso at this energy is uncertain and, therefore, refinement of the model is necessary. The sensitivity of the results on the initial conditions has been discussed. We also present the nuclear modification factor at Large Hadron Collider (LHC) energies with √sNN = 5.5 TeV.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
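The core NSMC idea, perturbing a single calibrated parameter set only along directions the data do not constrain, can be shown with a two-parameter toy problem. The model and numbers are purely illustrative, not the Culebra test case.

```python
import math, random, statistics

random.seed(2)

# Toy model: a single observation d = p1 + p2, calibrated to d = 2, so
# p_cal = (1, 1). The data constrain only the direction (1, 1)/sqrt(2)
# (the solution space); (1, -1)/sqrt(2) spans the null space.
p_cal = (1.0, 1.0)
null_dir = (1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))

def nsmc_sample():
    """One calibration-constrained parameter set: the single calibrated
    set plus a random component confined to the null space."""
    t = random.gauss(0.0, 1.0)
    return (p_cal[0] + t * null_dir[0], p_cal[1] + t * null_dir[1])

samples = [nsmc_sample() for _ in range(2000)]
misfits = [abs((p1 + p2) - 2.0) for p1, p2 in samples]
predictions = [p1 for p1, _ in samples]  # a quantity the data do not fix
print(max(misfits) < 1e-9, round(statistics.pstdev(predictions), 2))
```

Every sample still fits the data exactly, yet predictions that depend on individual parameters spread widely. The bias the abstract reports arises because all such samples remain anchored to the one calibrated solution-space component, which the multiple-calibration (M-NSMC) strategy relaxes.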
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain
2015-04-01
River discharge is a crucial variable in hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we describe a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. We also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). 
An operational version of the BaRatin software and its graphical interface are made available free of charge on request to the authors. J. Le Coz, B. Renard, L. Bonnifait, F. Branger, R. Le Boursicaud (2014). Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.
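Step (2) of the approach can be sketched with a toy Metropolis sampler for a power-law rating curve Q = a(h - b)^c. The gauging values, priors, and error model below are invented for illustration, and BaRatin itself is considerably more sophisticated (informative hydraulic priors, heterogeneous gauging uncertainties, remnant error terms).

```python
import math, random

random.seed(3)

# Synthetic gaugings from a power-law rating curve Q = a*(h - b)^c
a_true, b, c_true = 10.0, 0.2, 1.8   # b (cease-to-flow stage) held fixed
gaugings = []
for h in [0.5, 0.8, 1.2, 1.7, 2.3, 3.0]:
    q = a_true * (h - b) ** c_true
    gaugings.append((h, q * (1 + random.gauss(0, 0.05))))  # 5% gauging error

def log_post(a, c):
    """Log-posterior with flat priors on a, c > 0 and 5% relative errors."""
    if a <= 0 or c <= 0:
        return -math.inf
    lp = 0.0
    for h, q in gaugings:
        pred = a * (h - b) ** c
        lp += -0.5 * ((q - pred) / (0.05 * q)) ** 2
    return lp

# Random-walk Metropolis over (a, c)
a, c = 5.0, 1.0
lp = log_post(a, c)
chain = []
for _ in range(20000):
    a2, c2 = a + random.gauss(0, 0.3), c + random.gauss(0, 0.03)
    lp2 = log_post(a2, c2)
    if math.log(random.random()) < lp2 - lp:
        a, c, lp = a2, c2, lp2
    chain.append((a, c))
post = chain[10000:]  # discard burn-in
a_mean = sum(x for x, _ in post) / len(post)
c_mean = sum(y for _, y in post) / len(post)
print(round(a_mean, 1), round(c_mean, 2))  # should sit near (10, 1.8)
```

The posterior spread of (a, c) is what gets propagated, together with stage uncertainty and remnant error, into the discharge time series in step (3).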
A probabilistic approach to emissions from transportation sector in the coming decades
NASA Astrophysics Data System (ADS)
Yan, F.; Winijkul, E.; Bond, T. C.; Streets, D. G.
2010-12-01
Future emission estimates are necessary for understanding climate change, designing national and international strategies for air quality control and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so. Most current emission projection models are deterministic; in other words, there is only a single answer for each scenario. As a result, uncertainties have not been included in the estimation of climate forcing or other environmental effects, but it is important to quantify the uncertainty inherent in emission projections. We explore uncertainties in emission projections from the transportation sector in the coming decades using sensitivity analysis and Monte Carlo simulations. These projections are based on a technology-driven model: the Speciated Pollutants Emission Wizard (SPEW)-Trend, which responds to socioeconomic conditions in different economic and mitigation scenarios. The model contains detail about technology stock, including consumption growth rates, retirement rates, timing of emission standards, deterioration rates and transition rates from normal vehicles to vehicles with extremely high emission factors (termed “superemitters”). However, understanding of these parameters, as well as of their relationships with socioeconomic conditions, is uncertain. We project emissions from the transportation sector under four different IPCC scenarios (A1B, A2, B1, and B2). Due to the later implementation of advanced emission standards, Africa has the highest annual growth rate (1.2-3.1%) from 2010 to 2050. Superemitters begin producing more than 50% of global emissions around the year 2020. We estimate uncertainties from the relationships between technological change and socioeconomic conditions and examine their impact on future emissions. Sensitivities to parameters governing retirement rates are highest, causing changes in global emissions from -26% to +55% on average from 2010 to 2050. 
We perform Monte Carlo simulations to examine how these uncertainties affect total emissions when every input parameter with inherent uncertainty is replaced by a probability distribution of values and all parameters vary simultaneously; the resulting 95% confidence interval of the global emission annual growth rate is -1.9% to +0.2% per year.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. 
Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
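The rank-order correlation sensitivity analysis described above can be reproduced in miniature: sample the uncertain inputs, run them through a cost model, and rank-correlate each input with the output. The toy cost model and distributions below are hypothetical, chosen so that one driver dominates, one matters moderately, and one barely matters.

```python
import random

random.seed(4)

def ranks(xs):
    """0-based ranks of a list of distinct values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank-order correlation (no ties expected here)."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2.0
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = sum((a - mean) ** 2 for a in rx)
    return num / den

# Hypothetical impact-cost model: cost ~ E-R coefficient, discounted
n = 2000
coef = [random.lognormvariate(0.0, 0.3) for _ in range(n)]  # E-R coefficient
disc = [random.uniform(0.01, 0.05) for _ in range(n)]       # discount rate
lag = [random.uniform(0.0, 1.0) for _ in range(n)]          # lag weight
cost = [c / (1 + d) ** 10 * (1 - 0.05 * l)
        for c, d, l in zip(coef, disc, lag)]
for name, xs in [("coefficient", coef), ("discount", disc), ("lag", lag)]:
    print(name, round(spearman(xs, cost), 2))
```

The ranking of inputs by |correlation| is the output of interest: it tells the assessor which uncertainties (here the coefficient, then discounting) deserve careful treatment and which (lag) can be simplified, mirroring the study's conclusion.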
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. 
When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. 
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes (Mw > 8).
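The misfit-covariance formulation can be illustrated with a one-parameter toy inversion in which uncertainty in the assumed dip is mapped, via finite differences, into an extra prediction covariance Cp added to the data covariance Cd. The Green's functions and all numbers are hypothetical.

```python
import math

# Toy: one slip parameter s, predictions g(dip)*s at two stations. The
# assumed dip carries uncertainty sigma_dip, whose effect on predictions
# is added to the (diagonal) data covariance as Cp.

def g(dip):
    """Hypothetical Green's functions at two stations."""
    return [math.sin(dip), 0.5 * math.cos(dip)]

dip0, sigma_dip, s_prior = 0.5, 0.1, 1.0
sigma_d = 0.01  # observational error

# Finite-difference sensitivity of predictions to dip
eps = 1e-5
dg = [(a - b) / (2 * eps) for a, b in zip(g(dip0 + eps), g(dip0 - eps))]

def posterior_sd(include_cp):
    """Posterior std of s from 1-parameter weighted least squares:
    Var(s) = 1 / sum(g_i^2 / v_i) with v_i = Cd_ii (+ Cp_ii)."""
    var = []
    for i in range(2):
        v = sigma_d ** 2
        if include_cp:
            v += (dg[i] * s_prior * sigma_dip) ** 2  # epistemic term
        var.append(v)
    return math.sqrt(1.0 / sum(gi ** 2 / vi
                               for gi, vi in zip(g(dip0), var)))

print(posterior_sd(False), posterior_sd(True))
```

With Cp ignored the posterior is very tight (overconfident in the data); including the dip-induced covariance widens it, which is the robustness effect the study demonstrates in the Bayesian setting.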
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost. Because demand is uncertain, it is difficult to make accurate decisions on procurement volume. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective is to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The model can effectively reduce the medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence reduces the computational complexity of the optimal management and decision model. Therefore, the new model can support accurate decisions on procurement volume under uncertain demand.
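A minimal sketch of combining fuzzy demand with particle swarm optimization follows, assuming a triangular fuzzy demand defuzzified by its centroid and a simple over/understock cost. All values, and the basic PSO variant itself, are illustrative; the paper's comprehensively improved algorithm is not reproduced here.

```python
import random

random.seed(5)

# Triangular fuzzy demand (pessimistic, most likely, optimistic),
# defuzzified by its centroid; numbers are invented.
demand = (80.0, 100.0, 130.0)
d_star = sum(demand) / 3.0

def cost(q):
    """Holding cost for overstock plus a larger shortage penalty."""
    over, short = max(q - d_star, 0.0), max(d_star - q, 0.0)
    return 1.0 * over + 4.0 * short

def pso(n=20, iters=100):
    """Minimal particle swarm: inertia + cognitive + social terms."""
    pos = [random.uniform(0.0, 200.0) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    gbest = min(pos, key=cost)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=cost)
    return gbest

q_opt = pso()
print(round(q_opt, 1))  # should converge to the defuzzified demand
```

With a convex one-dimensional cost the swarm collapses onto the defuzzified demand; the value of the approach lies in higher-dimensional procurement problems where the cost surface is not so benign.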
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is then performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
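The Gaussianizing effect of a Box-Cox transformation is easy to demonstrate on a skewed sample. Here a lognormal stand-in for a skewed posterior is transformed with lambda = 0 (the log limit of the Box-Cox family) and the sample skewness collapses; the cosmological application chooses lambda per parameter from an initial likelihood evaluation.

```python
import math, random

random.seed(6)

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 is the log-transform limit."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

# A skewed 'posterior' sample (lognormal), Gaussianized with lam = 0
sample = [random.lognormvariate(0.0, 0.8) for _ in range(5000)]
transformed = [boxcox(x, 0.0) for x in sample]
print(round(skewness(sample), 2), round(skewness(transformed), 2))
```

Once the transformed parameters are approximately Gaussian, the standard Fisher matrix applies to them, and posterior contours in the original parameters are recovered by inverting the transformation.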
Sensitivity of Dynamical Systems to Banach Space Parameters
2005-02-13
We consider general nonlinear dynamical systems in a Banach space with dependence on parameters in a second Banach space. An abstract theoretical ... framework for sensitivity equations is developed. An application to measure dependent delay differential systems arising in a class of HIV models is presented.
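The forward-sensitivity idea, augmenting the state equation with the equation its parameter derivative satisfies, can be sketched for a scalar ODE. The Banach-space and delay-system machinery of the paper is far more general; this is only the finite-dimensional special case.

```python
import math

# dx/dt = -p*x with x(0) = 1. The sensitivity s = dx/dp satisfies
# ds/dt = (df/dx)*s + df/dp = -p*s - x with s(0) = 0; analytically
# x(t) = exp(-p*t) and s(t) = -t*exp(-p*t).

def integrate(p, t_end=2.0, dt=1e-4):
    """Explicit Euler on the augmented (state, sensitivity) system."""
    x, s = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        x, s = x + dt * (-p * x), s + dt * (-p * s - x)
    return x, s

p = 0.5
x, s = integrate(p)
print(round(s, 3), round(-2.0 * math.exp(-p * 2.0), 3))  # numeric vs analytic
```

The sensitivity s quantifies how the trajectory responds to a parameter perturbation, which is exactly the information the abstract framework provides for the HIV delay models, where p would live in a second Banach space.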