Reexamination of optimal quantum state estimation of pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, A.; Hashimoto, T.; Horibe, M.
2005-09-15
A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition on the POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means that the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
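For context, the closed-form optimal mean fidelity referred to here, in the form usually quoted for N copies of an unknown d-dimensional pure state (stated for reference, not text from the abstract), is

$$\bar{F}_{\mathrm{opt}}(N, d) = \frac{N+1}{N+d},$$

which reduces to the familiar value 2/3 for a single copy of a qubit (N = 1, d = 2).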
Optimal estimation of two-qubit pure-state entanglement
NASA Astrophysics Data System (ADS)
Acín, Antonio; Tarrach, Rolf; Vidal, Guifré
2000-06-01
We present optimal measuring strategies for estimating the entanglement of unknown two-qubit pure states and the degree of mixing of unknown single-qubit mixed states, of which N identical copies are available. The most general measuring strategies are considered in both situations; in the first case we conclude that a local, although collective, measurement suffices to optimally estimate entanglement, a nonlocal property.
Orlowska-Kowalska, Teresa; Kaminski, Marcin
2014-01-01
The paper deals with the implementation of optimized neural networks (NNs) for state-variable estimation of a drive system with an elastic joint. The signals estimated by the NNs are used in a control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of the closed-loop system, and the precision of state-variable estimation depends on the generalization properties of the NNs. A short review of NN optimization methods is presented. Two techniques typical of regularization and pruning methods are described and tested in detail: Bayesian regularization and Optimal Brain Damage. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also for changed parameters of the drive system. The simulation results are verified in a laboratory setup.
Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.
2002-01-01
Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
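As an illustration of the two design steps described above (an optimal state-feedback gain followed by a steady-state Kalman estimator), the sketch below computes both gains for a hypothetical discrete-time plant; the matrices A, B, C, Q, R, W, V are illustrative placeholders, not the magnetic-bearing rig model.

```python
# Minimal LQG sketch: optimal state-feedback gain plus steady-state Kalman gain.
# All numerical values are illustrative placeholders, not the suspension rig model.
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K minimizing the quadratic regulation cost."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def kalman_gain(A, C, W, V):
    """Steady-state Kalman gain L for process noise W and measurement noise V."""
    S = solve_discrete_are(A.T, C.T, W, V)
    return S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

dt = 1e-3                                   # toy double integrator standing in for the plant
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])
K = lqr_gain(A, B, Q=np.diag([1e3, 1.0]), R=np.array([[1e-2]]))
L = kalman_gain(A, C, W=1e-6 * np.eye(2), V=np.array([[1e-4]]))
# In closed loop the estimator propagates x_hat with A, B, L and the control is
# u = -K @ x_hat, i.e. optimal feedback applied to the estimated state.
```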
Optimized parameter estimation in the presence of collective phase noise
NASA Astrophysics Data System (ADS)
Altenburg, Sanah; Wölk, Sabine; Tóth, Géza; Gühne, Otfried
2016-11-01
We investigate phase and frequency estimation with different measurement strategies under the effect of collective phase noise. First, we consider the standard linear estimation scheme and present an experimentally realizable optimization of the initial probe states by collective rotations. We identify the optimal rotation angle for different measurement times. Second, we show that sub-shot-noise sensitivity—up to the Heisenberg limit—can be reached in the presence of collective phase noise by using differential interferometry, where one part of the system is used to monitor the noise. For this, not only Greenberger-Horne-Zeilinger states but also symmetric Dicke states are suitable. We investigate the optimal splitting of a general symmetric Dicke state between the two inputs and discuss possible experimental realizations of differential interferometry.
Optimal estimation of entanglement in optical qubit systems
NASA Astrophysics Data System (ADS)
Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander P.
2011-05-01
We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve the ultimate bound to precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of a Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require full tomography of the states, we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for the large-scale power system state estimation. The proposed method partitions the power systems into several local areas with non-overlapping states. Unlike the centralized approach where all measurements are sent to a processing center, the proposed method distributes the state estimation task to the local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process such that the influence of the outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; Illuminati, Fabrizio
2011-01-15
We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that in high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only can the minimum variance be achieved at a fixed probe state, but the optimal state is also common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values and find that maximally entangled states within d-dimensional cutoff subspaces (d ≤ 6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
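The identification step named here (extended least squares with an exponential forgetting factor and covariance resetting) has a compact recursive form; the sketch below shows that recursion for a generic regression model y_k = phi_k^T theta + e_k and is only an illustration, not the RWM plasma model itself.

```python
# Sketch of recursive least squares with exponential forgetting and covariance
# resetting, for a generic regression y_k = phi_k^T theta + e_k (illustrative only).
import numpy as np

class ForgettingRLS:
    def __init__(self, n_params, lam=0.98, p0=1e3, reset_threshold=1e-6):
        self.theta = np.zeros(n_params)          # parameter estimate
        self.P = p0 * np.eye(n_params)           # estimation covariance
        self.lam = lam                           # forgetting factor, 0 < lam <= 1
        self.p0 = p0
        self.reset_threshold = reset_threshold

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        err = y - phi @ self.theta               # prediction error
        denom = self.lam + phi @ self.P @ phi
        gain = self.P @ phi / denom
        self.theta += gain * err
        self.P = (self.P - np.outer(gain, phi @ self.P)) / self.lam
        # Covariance resetting: re-inflate P once it has collapsed, so the
        # estimator stays responsive to time-varying dynamics.
        if np.trace(self.P) < self.reset_threshold:
            self.P = self.p0 * np.eye(len(self.theta))
        return self.theta
```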
Adaptive Optimal Stochastic State Feedback Control of Resistive Wall Modes in Tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2007-06-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least square method with exponential forgetting factor and covariance resetting is used to identify the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
Optimal estimation of parameters and states in stochastic time-varying systems with time delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-08-01
In this study estimation of parameters and states in stochastic linear and nonlinear delay differential systems with time-varying coefficients and constant delay is explored. The approach consists of first employing a continuous time approximation to approximate the stochastic delay differential equation with a set of stochastic ordinary differential equations. Then the problem of parameter estimation in the resulting stochastic differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the resulting system, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states.
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of the initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, and then estimate the accelerometer bias separately, which is difficult to distinguish under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm
NASA Astrophysics Data System (ADS)
Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi
2017-11-01
In this paper we design the following two-step scheme to estimate the model parameter ω0 of a quantum system: first, we utilize the Fisher information with respect to an intermediate variable v = cos(ω0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second, we explore how to estimate ω0 from v by choosing t when a priori knowledge of ω0 is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008.
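The second step can be read as ordinary error propagation through the intermediate variable (a reference relation consistent with the scheme described, not an equation quoted from the paper): since v = cos(ω0 t),

$$\hat{\omega}_0 = \frac{\arccos \hat{v}}{t}, \qquad \delta\omega_0 \approx \frac{\delta v}{t\,\lvert \sin(\omega_0 t) \rvert},$$

so prior knowledge of ω0 lets one choose t such that |sin(ω0 t)| is large, minimizing the deviation inherited from the estimate of v.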
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in considerably more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. ©2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Attention control learning in the decision space using state estimation
NASA Astrophysics Data System (ADS)
Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid
2016-05-01
The main goal of this paper is modelling attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled within the reinforcement learning framework. We propose an estimation method based on mixture-of-experts task and attention learning in the perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space in a learning task problem, examined in the Webots simulator, is performed. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank
2014-12-07
We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.
NASA Technical Reports Server (NTRS)
Canfield, Stephen
1999-01-01
This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
Airborne data measurement system errors reduction through state estimation and control optimization
NASA Astrophysics Data System (ADS)
Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.
2018-02-01
The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. The proposed approaches are based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of each criterion is explained, and the mathematical description and the sequence of steps for applying each criterion are shown. A formula is given for the posterior estimation of the airborne data measurement system state vector for systems with structure variations.
NASA Astrophysics Data System (ADS)
Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui
2018-07-01
During calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and the pseudo-inversion estimation method, the measured intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. An analytical expression for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, treating the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, this usually introduces a significant deformation of the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, which improves the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy because the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
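A minimal sketch of the constraint-tuning idea is given below: the normalized innovation is mapped to a confidence weight, and the state estimate is blended between the unconstrained Kalman estimate and its projection onto heuristic bounds. The confidence mapping and the box constraints are assumptions for illustration, not the turbofan health-estimation implementation.

```python
# Illustrative constraint tuning: trust the unconstrained Kalman estimate when
# the measurement residuals agree with their predicted statistics, otherwise
# pull the estimate toward heuristic bounds. Not the paper's actual algorithm.
import numpy as np

def confidence(residual, S):
    """Map the normalized innovation squared to a confidence in [0, 1]."""
    nis = float(residual @ np.linalg.solve(S, residual))
    dof = len(residual)
    return float(np.exp(-max(nis - dof, 0.0) / dof))  # ~1 when residuals look nominal

def tuned_estimate(x_unconstrained, lower, upper, residual, S):
    """Blend the unconstrained estimate with its box-constrained projection."""
    x_projected = np.clip(x_unconstrained, lower, upper)
    w = confidence(residual, S)
    return w * x_unconstrained + (1.0 - w) * x_projected
```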
NASA Astrophysics Data System (ADS)
Graham, Wendy D.; Neff, Christina R.
1994-05-01
The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in northeast Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.
Unsteady flow sensing and optimal sensor placement using machine learning
NASA Astrophysics Data System (ADS)
Semaan, Richard
2016-11-01
Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the random forests algorithm can be used beyond its typical usage in fluid mechanics (estimating the flow state) to also determine the optimal sensor placement. The results are compared against the current de facto standard of maximum modal amplitude location and against a brute-force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of the flow and determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
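A small sketch of the general idea (ranking candidate sensor locations by random-forest feature importance) is shown below on synthetic data; the array names and classifier settings are assumptions, not the uRANS setup of the study.

```python
# Rank candidate sensor locations by how much each probe contributes to
# classifying the flow state (synthetic stand-in data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_snapshots, n_candidate_sensors = 500, 60
snapshots = rng.normal(size=(n_snapshots, n_candidate_sensors))     # probe signals
flow_state_labels = (snapshots[:, 7] + 0.5 * snapshots[:, 23] > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(snapshots, flow_state_labels)

# Candidate locations sorted by importance: a proxy for optimal sensor placement.
ranking = np.argsort(forest.feature_importances_)[::-1]
print("best sensor locations:", ranking[:5])
```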
Quantum metrology of spatial deformation using arrays of classical and quantum light emitters
NASA Astrophysics Data System (ADS)
Sidhu, Jasminder S.; Kok, Pieter
2017-06-01
We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily consider how different deformations and light sources affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report on the optimal state that maximizes the QFI for d. Contrary to what may have been expected, classical states with higher average mode occupancies perform better in estimating d than single-photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and found to outperform all these states. We also find the existence of multiple optimal estimators for the measurement of d. Our results find applications in evaluating stresses and strains, fracture prevention in materials expressing great sensitivities to deformations, and selecting frequency-distinguished quantum sources from an array of reference sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, Jim; Flicker, Dawn; Ide, Kayo
2006-05-20
This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model, and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
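The state-augmentation modification described here can be summarized in one predict/update cycle, sketched below for a generic model; the functions f, h, their Jacobians, and the noise covariances are placeholders, not the shock-wave hydrodynamic code.

```python
# Sketch of an EKF with parameter augmentation: the uncertain parameters p are
# stacked with the physical state x, so a single filter estimates both from the
# same measurement stream. f, h and their Jacobians are generic placeholders.
import numpy as np

def ekf_augmented_step(xa, P, z, f, h, F_jac, H_jac, Q, R, nx):
    """One predict/update cycle for the augmented state xa = [x; p]."""
    x, p = xa[:nx], xa[nx:]

    # Predict: the physical state evolves through f; the parameters follow a
    # random walk (identity dynamics, only stochastic forcing through Q).
    xa_pred = np.concatenate([f(x, p), p])
    # F_jac(x, p) is the Jacobian of f w.r.t. the full augmented state, shape (nx, nx+np).
    F = np.vstack([F_jac(x, p),
                   np.hstack([np.zeros((len(p), nx)), np.eye(len(p))])])
    P_pred = F @ P @ F.T + Q

    # Update with the (single) measurement z, exactly as in a standard EKF.
    H = H_jac(xa_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    xa_new = xa_pred + K @ (z - h(xa_pred))
    P_new = (np.eye(len(xa)) - K @ H) @ P_pred
    return xa_new, P_new
```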
An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Zavala, Victor M.
We propose an optimization-based state estimation framework to track internal spacetime flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
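The step of solving steady-state equations for kinetic parameters, so that they enter linearly and a single steady state is retained, can be illustrated on a hypothetical two-species chain (not one of the models treated in the paper):

```python
# Solve the steady-state conditions of a toy reaction chain for kinetic
# parameters, expressing them in terms of measurable steady-state concentrations.
import sympy as sp

A, B, k_in, k1, k2 = sp.symbols('A B k_in k1 k2', positive=True)
rhs_A = k_in - k1 * A          # dA/dt
rhs_B = k1 * A - k2 * B        # dB/dt

sol = sp.solve([sp.Eq(rhs_A, 0), sp.Eq(rhs_B, 0)], [k1, k2], dict=True)[0]
print(sol)   # {k1: k_in/A, k2: k_in/B}, non-negative whenever the concentrations are positive
```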
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
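For reference, the q-method mentioned above reduces to an eigenvalue problem: the optimal quaternion is the eigenvector of Davenport's K matrix with the largest eigenvalue. The sketch below is a generic textbook-style implementation of that step, not the paper's extended parameter-estimation algorithm.

```python
# Generic q-method: optimal attitude quaternion from weighted vector observations.
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Return the quaternion (x, y, z, w) minimizing Wahba's loss function."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))               # Davenport's K matrix
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)
    return eigvecs[:, np.argmax(eigvals)]   # no a priori attitude estimate required
```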
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal. PMID:26042002
Estimation of power lithium-ion battery SOC based on fuzzy optimal decision
NASA Astrophysics Data System (ADS)
He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin
2018-06-01
In order to improve vehicle performance and safety, the state of charge (SOC) of the power lithium-ion battery needs to be estimated accurately. After analyzing common SOC estimation methods, a lithium battery SOC estimation method based on fuzzy optimal decision is established, combining the open-circuit-voltage characteristic and the Kalman filter algorithm with a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.
Ocean state estimation for climate studies
NASA Technical Reports Server (NTRS)
Lee, T.
2002-01-01
Climate variabilities, which are of interest to CLIVAR, involve a broad range of spatial and temporal scales. Ocean state estimation (often referred to as ocean data assimilation), by optimally combining observations and models, becomes an important element of CLIVAR.
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
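The "optimally weighted average" of the two filters can be written in the usual information (inverse-covariance) form, stated here only as a reference relation rather than a quote from the paper:

$$P_s = \left(P_f^{-1} + P_b^{-1}\right)^{-1}, \qquad \hat{x}_s = P_s\left(P_f^{-1}\hat{x}_f + P_b^{-1}\hat{x}_b\right),$$

which makes explicit why the smoothed covariance is roughly half of either filter's covariance when the forward and backward uncertainties are comparable.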
A model of cerebellar computations for dynamical state estimation
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.; Assad, C.
2001-01-01
The cerebellum is a neural structure that is essential for agility in vertebrate movements. Its contribution to motor control appears to be due to a fundamental role in dynamical state estimation, which also underlies its role in various non-motor tasks. Single spikes in vestibular sensory neurons carry information about head state. We show how computations for optimal dynamical state estimation may be accomplished when signals are encoded in spikes. This provides a novel way to design dynamical state estimators, and a novel way to interpret the structure and function of the cerebellum.
Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries
NASA Astrophysics Data System (ADS)
Perez, Hector Eduardo
This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. 
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
On optimal control of linear systems in the presence of multiplicative noise
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1976-01-01
This correspondence considers the problem of optimal regulator design for discrete time linear systems subjected to white state-dependent and control-dependent noise in addition to additive white noise in the input and the observations. A pseudo-deterministic problem is first defined in which multiplicative and additive input disturbances are present, but noise-free measurements of the complete state vector are available. This problem is solved via discrete dynamic programming. Next is formulated the problem in which the number of measurements is less than that of the state variables and the measurements are contaminated with state-dependent noise. The inseparability of control and estimation is brought into focus, and an 'enforced separation' solution is obtained via heuristic reasoning in which the control gains are shown to be the same as those in the pseudo-deterministic problem. An optimal linear state estimator is given in order to implement the controller.
NASA Astrophysics Data System (ADS)
Wu, C.; Margulis, S. A.
2007-12-01
Wastewater re-use via crop irrigation has the potential to be an effective means of wastewater disposal. However, nitrate in wastewater may contaminate groundwater if it does not decay before reaching the groundwater table. In order to dispose of wastewater while preventing long-term groundwater pollution, irrigation rates need to be optimized based on the current and predicted states of the soil, such as soil moisture content and/or nitrate concentration. A real-time soil states estimation system using the Ensemble Kalman Filter (EnKF) has been developed for application to a test bed for wastewater re-use in Palmdale, CA. This test bed, covered with alfalfa, is a 30-acre irrigation plot with a 200-meter long rotating pivot arm that irrigates the area with reclaimed wastewater. A sensor network is deployed in the soil near the surface. The data assimilation system has shown the ability to characterize soil states and fluxes from sparse measurements. The real-time estimation system will then be used to explore the potential feedback for optimizing the sprinkler operation (i.e. maximizing the magnitude of wastewater release while minimizing the ultimate groundwater pollution). In optimization models, soil states and fluxes can be regarded as functions of irrigation rate. Through optimization, the irrigation rate in a finite horizon can be maximized while still satisfying all criteria in soil states and fluxes to ensure the safety of groundwater. Since the data assimilation system provides reliable estimation of soil states and fluxes, it is expected to define the optimal irrigation rate with higher confidence compared to using models or sensors only.
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute the error covariance of the difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of the state of a discrete linear system. This provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.
Progress in navigation filter estimate fusion and its application to spacecraft rendezvous
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1994-01-01
A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
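A fusion rule consistent with this description (accounting for the cross-covariance between the two filters) is the standard Bar-Shalom-Campo type track-fusion form below; it is given as a reference relation and is not necessarily the exact algorithm derived in the paper:

$$\hat{x}_F = \hat{x}_1 + (P_1 - P_{12})(P_1 + P_2 - P_{12} - P_{21})^{-1}(\hat{x}_2 - \hat{x}_1),$$
$$P_F = P_1 - (P_1 - P_{12})(P_1 + P_2 - P_{12} - P_{21})^{-1}(P_1 - P_{21}),$$

with $P_{21} = P_{12}^{T}$; when the cross-covariance $P_{12}$ vanishes this reduces to the simple covariance-weighted average of the two estimates.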
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
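A minimal single-qubit sketch of the LRE idea is shown below: outcome frequencies along a few measurement directions form a linear regression model for the Bloch vector, which is solved by least squares. The directions and data are synthetic placeholders, and no positivity correction is applied to the reconstructed state.

```python
# Single-qubit LRE sketch: the +1 outcome probability along direction n is
# (1 + n.r)/2, a linear regression model for the Bloch vector r.
import numpy as np

directions = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [1.0, 1.0, 1.0]])
directions[3] /= np.linalg.norm(directions[3])

def lre_qubit(plus_outcome_frequencies):
    """Least-squares Bloch-vector estimate from +1-outcome frequencies."""
    y = 2.0 * np.asarray(plus_outcome_frequencies) - 1.0   # empirical <n . sigma>
    r, *_ = np.linalg.lstsq(directions, y, rcond=None)
    X = np.array([[0, 1], [1, 0]], complex)
    Y = np.array([[0, -1j], [1j, 0]], complex)
    Z = np.array([[1, 0], [0, -1]], complex)
    return 0.5 * (np.eye(2, dtype=complex) + r[0] * X + r[1] * Y + r[2] * Z)

# Example: frequencies measured on a state close to |0>.
rho_hat = lre_qubit([0.51, 0.50, 0.98, 0.78])
```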
NASA Astrophysics Data System (ADS)
Wu, Yun-jie; Li, Guo-fei
2018-01-01
Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance-compensation finite control set optimal control (FCS-OC) strategy is proposed for a permanent magnet synchronous motor (PMSM) system driven by a voltage source inverter (VSI). To improve the robustness of the finite control set optimal control strategy, an SMESO is proposed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis indicates that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance-compensation FCS-OC possesses better dynamical response behavior in the presence of disturbance.
NASA Technical Reports Server (NTRS)
Garcia, F., Jr.
1974-01-01
The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem by using classical calculus of variations methods. Two perturbation methods were devised to reduce the sensitivity of the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solutions obtained from a number of different initial co-state estimates. The perturbation methods were compared and found to be an improvement over existing methods.
Flexible resources for quantum metrology
NASA Astrophysics Data System (ADS)
Friis, Nicolai; Orsucci, Davide; Skotiniotis, Michalis; Sekatski, Pavel; Dunjko, Vedran; Briegel, Hans J.; Dür, Wolfgang
2017-06-01
Quantum metrology offers a quadratic advantage over classical approaches to parameter estimation problems by utilising entanglement and nonclassicality. However, the hurdle of actually implementing the necessary quantum probe states and measurements, which vary drastically for different metrological scenarios, is usually not taken into account. We show that for a wide range of tasks in metrology, 2D cluster states (a particular family of states useful for measurement-based quantum computation) can serve as flexible resources that allow one to efficiently prepare any required state for sensing, and perform appropriate (entangled) measurements using only single qubit operations. Crucially, the overhead in the number of qubits is less than quadratic, thus preserving the quantum scaling advantage. This is ensured by using a compression to a logarithmically sized space that contains all relevant information for sensing. We specifically demonstrate how our method can be used to obtain optimal scaling for phase and frequency estimation in local estimation problems, as well as for the Bayesian equivalents with Gaussian priors of varying widths. Furthermore, we show that in the paradigmatic case of local phase estimation 1D cluster states are sufficient for optimal state preparation and measurement.
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
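One common way to realize a Mahalanobis-distance-based fading factor inside a (unscented) Kalman filter is sketched below; the exact construction used in the paper may differ, and the significance level is an assumption.

```python
import numpy as np
from scipy.stats import chi2

def fading_factor(innovation, S, alpha=0.05):
    """Inflate the predicted covariance when the innovation's squared Mahalanobis
    distance exceeds the chi-square bound -- one common construction of an
    adaptive fading factor (the paper's exact formula may differ)."""
    m = innovation.size
    d2 = float(innovation @ np.linalg.solve(S, innovation))
    bound = chi2.ppf(1 - alpha, df=m)
    return max(1.0, d2 / bound)        # lambda >= 1; 1 means no fading

# Usage inside the filter's measurement update, before computing the gain:
#   lam = fading_factor(z - z_pred, S_innovation)
#   P_pred = lam * P_pred
```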
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
NASA Astrophysics Data System (ADS)
Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam
2016-03-01
This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24 h. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
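To make the identifiability objective concrete, the sketch below evaluates the Fisher information about the entropy coefficient k = dOCV/dT for a candidate thermal cycle under a deliberately simplified linear voltage model with i.i.d. Gaussian noise; the model, noise level, and cycles shown are assumptions, not the paper's Fourier-parameterized optimization.

```python
import numpy as np

def fisher_information(T_profile, T_ref=25.0, sigma_v=1e-4):
    """Fisher information about k = dOCV/dT for a candidate thermal cycle,
    under the assumed model V_t = OCV + k*(T_t - T_ref) with i.i.d. Gaussian
    voltage noise of standard deviation sigma_v."""
    dV_dk = np.asarray(T_profile) - T_ref      # sensitivity of each sample to k
    return float(np.sum(dV_dk**2) / sigma_v**2)

# Larger, longer temperature excursions make the coefficient more identifiable:
t = np.linspace(0, 2*np.pi, 120)
print(fisher_information(25 + 5*np.sin(t)))
print(fisher_information(25 + 15*np.sin(t)))
```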
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
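For reference, a generic stochastic EnKF analysis step of the kind the SEOD method builds on is sketched below; the design of the sampling strategy itself is not shown, and the matrices, sizes, and values are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step: update an ensemble of parameter/state
    vectors (columns) with perturbed observations."""
    N = ensemble.shape[1]
    X_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - X_mean
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)
    Pxy = A @ HA.T / (N - 1)
    Pyy = HA @ HA.T / (N - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain from sample covariances
    obs_pert = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, size=N).T
    return ensemble + K @ (obs_pert - HX)

rng = np.random.default_rng(1)
ens = rng.normal(1.0, 0.5, size=(2, 100))       # 2 parameters, 100 members
H = np.array([[1.0, 0.0]]); R = np.array([[0.01]])
ens_new = enkf_analysis(ens, np.array([1.3]), H, R, rng)
print(ens_new.mean(axis=1))                     # updated parameter estimate
```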
NASA Astrophysics Data System (ADS)
Bania, Piotr; Baranowski, Jerzy
2018-02-01
Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and in consequence control. Popular approaches either do not address properly the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observation, that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived by a solution of Fokker-Planck equation. We then propose the approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results for the variant with control. Operation is illustrated with numerical experiments and compared with classical discrete-continuous Kalman filter. Results of comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high order of approximation, state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.
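As a brute-force counterpart to the Galerkin-projected filter described above, the sketch below runs one predict/update cycle of a grid-based Bayes filter for an Ornstein-Uhlenbeck state observed through a uniform quantizer; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# OU rate, diffusion, time step, quantizer step, pre-quantization sensor noise
a, sig, dt, delta, sv = 1.0, 1.0, 0.01, 0.5, 0.05
phi = np.exp(-a * dt)
q = sig**2 * (1 - phi**2) / (2 * a)          # exact discrete-time process noise variance

grid = np.linspace(-4, 4, 401)
prior = norm.pdf(grid, 0.0, 1.0); prior /= prior.sum()

def predict(p):
    """Propagate the grid density through the OU transition kernel."""
    K = norm.pdf(grid[:, None], phi * grid[None, :], np.sqrt(q))
    p_new = K @ p
    return p_new / p_new.sum()

def update(p, y):
    """Bayes update with the likelihood of the quantizer output y given the state."""
    like = norm.cdf((y + delta/2 - grid)/sv) - norm.cdf((y - delta/2 - grid)/sv)
    p_new = p * like
    return p_new / p_new.sum()

p = update(predict(prior), y=0.5)
print(grid @ p)     # posterior-mean state estimate after one quantized measurement
```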
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Digital flight control systems
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Vanlandingham, H. F.
1977-01-01
The design of stable feedback control laws for sampled-data systems with variable rate sampling was investigated. These types of sampled-data systems arise naturally in digital flight control systems which use digital actuators, where it is desirable to decrease the number of control computer output commands in order to save wear and tear of the associated equipment. The design of aircraft control systems which are optimally tolerant of sensor and actuator failures was also studied. The failed sensor or actuator must be detected and identified, and if the estimate of the state is used in the control law, it is also desirable to have an estimator which will give the optimal state estimate even under the failed conditions.
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
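The on-line/off-line distinction above corresponds to filtering versus smoothing. A stripped-down scalar Kalman filter with a Rauch-Tung-Striebel smoother and sparse (missing) observations is sketched below; the AR(1) signal model and parameter values are assumptions, not the paper's generalized-least-squares formulation with streamflow and seasonal terms.

```python
import numpy as np

def kalman_rts(y, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter (on-line) plus Rauch-Tung-Striebel smoother (off-line)
    for an AR(1) latent signal observed in noise; NaN marks unsampled days."""
    n = len(y)
    xf = np.zeros(n); pf = np.zeros(n)          # filtered
    xp = np.zeros(n); pp = np.zeros(n)          # predicted
    x, p = x0, p0
    for k in range(n):
        x, p = phi * x, phi**2 * p + q          # predict
        xp[k], pp[k] = x, p
        if not np.isnan(y[k]):                  # sparse sampling: skip missing days
            kgain = p / (p + r)
            x, p = x + kgain * (y[k] - x), (1 - kgain) * p
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):              # RTS backward pass
        c = pf[k] * phi / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xf, xs

y = np.full(60, np.nan); y[::6] = np.sin(np.arange(0, 60, 6) / 10) + 0.1
print(kalman_rts(y, phi=0.95, q=0.01, r=0.04)[1][:5])   # smoothed estimates
```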
NASA Astrophysics Data System (ADS)
Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige
2015-05-01
The state of health (SOH) estimation is critical for the battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and the grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were used in the consideration for SOH estimation: peak power at 30% state-of-charge (SOC); capacity; the voltage drop at 30% SOC with a C/3 pulse; the temperature rises at the end of discharge and at the end of charge at 1C, respectively; and the open circuit voltage at the end of charge after 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters for these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology to estimate the SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among different SOH estimations is discussed.
Coherence in quantum estimation
NASA Astrophysics Data System (ADS)
Giorda, Paolo; Allegra, Michele
2018-01-01
The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.
Optimal multi-dimensional poverty lines: The state of poverty in Iraq
NASA Astrophysics Data System (ADS)
Ameen, Jamal R. M.
2017-09-01
Poverty estimation based on calorie intake is unrealistic. The established concept of multidimensional poverty has methodological weaknesses in the treatment of different dimensions, and there is disagreement over methods of combining them into a single poverty line. This paper introduces a methodology to estimate optimal multidimensional poverty lines and uses the Iraqi household socio-economic survey data of 2012 to demonstrate the idea. The optimal poverty line for Iraq is found to be 170.5 Thousand Iraqi Dinars (TID).
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and the spatial distribution of disturbances is found that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider the collocated actuator/sensor pairs and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and minimize power consumption by keeping some sensor/actuators in sleep mode.
Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.
Sun, Kangkang; Sui, Shuai; Tong, Shaocheng
2018-04-01
This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form and with unknown nonlinear functions. The fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is changed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The whole optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller can ensure that all the variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-11-01
With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; visibility, a measure of the state's robustness to loss and noise; a resolution measure; and phase estimate error. The optimal resolution for the maximum visibility and minimum phase error are found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMS and 29.7 times smaller than that of N00N states.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
Optimal parameter estimation with a fixed rate of abstention
NASA Astrophysics Data System (ADS)
Gendra, B.; Ronco-Bonvehi, E.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.
2013-07-01
The problems of optimally estimating a phase, a direction, and the orientation of a Cartesian frame (or trihedron) with general pure states are addressed. Special emphasis is put on estimation schemes that allow for inconclusive answers or abstention. It is shown that such schemes enable drastic improvements, up to the extent of attaining the Heisenberg limit in some cases, and the required amount of abstention is quantified. A general mathematical framework to deal with the asymptotic limit of many qubits or large angular momentum is introduced and used to obtain analytical results for all the relevant cases under consideration. Parameter estimation with abstention is also formulated as a semidefinite programming problem, for which very efficient numerical optimization techniques exist.
Robust Controller for Turbulent and Convective Boundary Layers
2006-08-01
filter and an optimal regulator. The Kalman filter equation and the optimal regulator equation corresponding to the state-space equations, (2.20), are...separate steady-state algebraic Riccati equations. The Kalman filter is used here as a state observer rather than as an estimator since no noises are...2001) which will not be repeated here. For robustness, in the design, the Kalman filter input matrix G has been set equal to the control input
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
Last-position elimination-based learning automata.
Zhang, Junqi; Wang, Cheng; Zhou, MengChu
2014-12-01
An update scheme of the state probability vector of actions is critical for learning automata (LA). The most popular is the pursuit scheme that pursues the estimated optimal action and penalizes others. This paper proposes a reverse philosophy that leads to last-position elimination-based learning automata (LELA). The action graded last in terms of the estimated performance is penalized by decreasing its state probability and is eliminated when its state probability becomes zero. All active actions, that is, actions with nonzero state probability, equally share the penalized state probability from the last-position action at each iteration. The proposed LELA is characterized by the relaxed convergence condition for the optimal action, the accelerated step size of the state probability update scheme for the estimated optimal action, and the enriched sampling for the estimated nonoptimal actions. The proof of the ϵ-optimal property for the proposed algorithm is presented. Last-position elimination is a widespread philosophy in the real world and has proved to be also helpful for the update scheme of the learning automaton via the simulations of well-known benchmark environments. In the simulations, two versions of the LELA, using different selection strategies of the last action, are compared with the classical pursuit algorithms Discretized Pursuit Reward-Inaction (DP(RI)) and Discretized Generalized Pursuit Algorithm (DGPA). Simulation results show that the proposed schemes achieve significantly faster convergence and higher accuracy than the classical ones. Specifically, the proposed schemes reduce the interval to find the best parameter for a specific environment in the classical pursuit algorithms. Thus, they can have their parameter tuning easier to perform and can save much more time when applied to a practical case. Furthermore, the convergence curves and the corresponding variance coefficient curves of the contenders are illustrated to characterize their essential differences and verify the analysis results of the proposed algorithms.
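A schematic implementation of the last-position elimination update is sketched below; the reward estimator, step size, environment, and stopping rule are simplified assumptions rather than the paper's exact setup or the DP(RI)/DGPA baselines.

```python
import numpy as np

# Schematic last-position elimination learning automaton in a stationary
# two-outcome environment (all numerical values are illustrative assumptions).
rng = np.random.default_rng(0)
n, step = 5, 0.005
prob = np.full(n, 1.0 / n)                 # state probability vector over actions
est, cnt = np.zeros(n), np.zeros(n)        # sample-mean reward estimates
true_reward = np.array([0.2, 0.4, 0.6, 0.5, 0.8])

while np.count_nonzero(prob) > 1:
    a = rng.choice(n, p=prob)                            # sample an active action
    cnt[a] += 1
    est[a] += ((rng.random() < true_reward[a]) - est[a]) / cnt[a]
    active = np.flatnonzero(prob > 0)
    worst = active[np.argmin(est[active])]               # last-position active action
    penalty = min(step, prob[worst])
    prob[worst] -= penalty                               # penalize; eliminate at zero
    others = active[active != worst]
    prob[others] += penalty / len(others)                # share penalty equally

print(np.argmax(prob))   # index of the single surviving action
```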
Method and system for diagnostics of apparatus
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2012-01-01
Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.
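Under a linear-Gaussian simplification, the per-hypothesis convex program reduces to a least-squares fit of the parity residual to each fault signature, from which likelihoods can be ranked; the signatures, noise level, and hypothesis names below are made-up illustrations, not the method's actual models.

```python
import numpy as np

def score_hypotheses(parity, signatures, sigma=0.1):
    """For each fault hypothesis, fit the parity vector to the hypothesis
    signature by least squares and score it with a Gaussian log-likelihood
    (up to a constant); return hypotheses sorted by likelihood."""
    results = []
    for name, S in signatures.items():
        theta, *_ = np.linalg.lstsq(S, parity, rcond=None)   # fault parameter estimate
        r = parity - S @ theta
        loglik = -0.5 * float(r @ r) / sigma**2
        results.append((name, theta, loglik))
    return sorted(results, key=lambda t: -t[2])

parity = np.array([0.9, 1.8, 0.1])                           # assumed parity parameters
signatures = {"sensor_1_bias": np.array([[1.0], [2.0], [0.0]]),
              "actuator_stuck": np.array([[0.0], [1.0], [1.0]])}
for name, theta, ll in score_hypotheses(parity, signatures):
    print(name, theta, ll)
```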
On-Orbit Multi-Field Wavefront Control with a Kalman Filter
NASA Technical Reports Server (NTRS)
Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David
2008-01-01
A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to JWST primary mirror (PM) segments and secondary mirror (SM) simultaneously with a carefully selected ordering. Through computer simulations, the multi-field WFC procedure shows that it can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam Field of View, and the WFC procedure is also computationally stable as the Monte-Carlo simulations indicate. With the incorporation of a Kalman Filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, and the result is that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after Kalman state estimate and correction, the stability of fine-phasing control is much more certain. Kalman Filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics to generate a Maximum-Likelihood optimal state estimator. The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.
Finite element approximation of an optimal control problem for the von Karman equations
NASA Technical Reports Server (NTRS)
Hou, L. Steven; Turner, James C.
1994-01-01
This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of the estimation errors of model-matching phase-estimation methods such as rotor-flux state observers, back EMF state observers, and back EMF disturbance observers, for sensorless drive of permanent-magnet synchronous motors. Analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are broadly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using one of the model-matching phase-estimation methods.
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal Verification of Entangled States with Local Measurements
NASA Astrophysics Data System (ADS)
Pallister, Sam; Linden, Noah; Montanaro, Ashley
2018-04-01
Consider the task of verifying that a given quantum device, designed to produce a particular entangled state, does indeed produce that state. One natural approach would be to characterize the output state by quantum state tomography, or alternatively, to perform some kind of Bell test, tailored to the state of interest. We show here that neither approach is optimal among local verification strategies for 2-qubit states. We find the optimal strategy in this case and show that quadratically fewer total measurements are needed to verify to within a given fidelity than in published results for quantum state tomography, Bell test, or fidelity estimation protocols. We also give efficient verification protocols for any stabilizer state. Additionally, we show that requiring that the strategy be constructed from local, nonadaptive, and noncollective measurements only incurs a constant-factor penalty over a strategy without these restrictions.
Control-enhanced multiparameter quantum estimation
NASA Astrophysics Data System (ADS)
Liu, Jing; Yuan, Haidong
2017-10-01
Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly the gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit, but also have a higher stability against inaccuracy in the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a very accurate time point due to the response time of the measurement apparatus.
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Perez, Danny
2018-05-01
A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
The link between judgments of comparative risk and own risk: further evidence.
Gold, Ron S
2007-03-01
Individuals typically believe that they are less likely than the average person to experience negative events, a phenomenon termed "unrealistic optimism". The direct method of assessing unrealistic optimism employs a question of the form, "Compared with the average person, what is the chance that X will occur to you?". However, it has been proposed that responses to such a question (direct-estimates) are based essentially just on estimates that X will occur to the self (self-estimates). If this is so, any factors that affect one of these estimates should also affect the other. This prediction was tested in two experiments. In each, direct- and self-estimates for an unfamiliar health threat - homocysteine-related heart problems - were recorded. It was found that both types of estimate were affected in the same way by varying the stated probability of having unsafe levels of homocysteine (Study 1, N=149) and varying the stated probability that unsafe levels of homocysteine will lead to heart problems (Study 2, N=111). The results are consistent with the proposal that direct-estimates are constructed just from self-estimates.
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
Optimizing traffic counting procedures.
DOT National Transportation Integrated Search
1986-01-01
Estimates of annual average daily traffic volumes are important in the planning and operations of state highway departments. These estimates are used in the planning of new construction and improvement of existing facilities, and, in some cases, in t...
Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.
Adesso, Gerardo; Illuminati, Fabrizio
2005-10-07
We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.
An optimal state estimation model of sensory integration in human postural balance
NASA Astrophysics Data System (ADS)
Kuo, Arthur D.
2005-09-01
We propose a model for human postural balance, combining state feedback control with optimal state estimation. State estimation uses an internal model of body and sensor dynamics to process sensor information and determine body orientation. Three sensory modalities are modeled: joint proprioception, vestibular organs in the inner ear, and vision. These are mated with a two degree-of-freedom model of body dynamics in the sagittal plane. Linear quadratic optimal control is used to design state feedback and estimation gains. Nine free parameters define the control objective and the signal-to-noise ratios of the sensors. The model predicts statistical properties of human sway in terms of covariance of ankle and hip motion. These predictions are compared with normal human responses to alterations in sensory conditions. With a single parameter set, the model successfully reproduces the general nature of postural motion as a function of sensory environment. Parameter variations reveal that the model is highly robust under normal sensory conditions, but not when two or more sensors are inaccurate. This behavior is similar to that of normal human subjects. We propose that age-related sensory changes may be modeled with decreased signal-to-noise ratios, and compare the model's behavior with degraded sensors against experimental measurements from older adults. We also examine removal of the model's vestibular sense, which leads to instability similar to that observed in bilateral vestibular loss subjects. The model may be useful for predicting which sensors are most critical for balance, and how much they can deteriorate before posture becomes unstable.
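The combination of LQR state feedback with a Kalman state estimator used by this model can be reproduced generically with two algebraic Riccati equations. The plant, weighting, and noise matrices below are placeholders, not the paper's two degree-of-freedom sagittal-plane model or its nine tuned parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, 0.0]])     # unstable, inverted-pendulum-like plant (assumed)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])      # control objective weights
W, V = np.diag([0.01, 0.01]), np.array([[0.001]])   # process / sensor noise intensities

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                      # optimal state-feedback gain
S = solve_continuous_are(A.T, C.T, W, V)             # dual Riccati equation
L = S @ C.T @ np.linalg.inv(V)                       # Kalman estimator gain
print(K, L, sep="\n")
```

Varying the noise intensities plays the role of the signal-to-noise-ratio parameters described in the abstract: degrading a sensor corresponds to increasing the relevant entry of V.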
Information geometry of Gaussian channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; CNR-INFM Coherentia, Napoli; CNISM Unita di Salerno
2010-06-15
We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
NASA Astrophysics Data System (ADS)
Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai
2018-02-01
An accurate battery pack state of health (SOH) estimation is important to characterize the dynamic responses of the battery pack and to ensure that the battery works with safety and reliability. However, differences in discharge/charge characteristics and working conditions among the cells make battery pack SOH estimation difficult. In this paper, the battery pack SOH is defined as the change of the battery pack maximum energy storage. It contains all the cells' information, including battery capacity, the relationship between state of charge (SOC) and open circuit voltage (OCV), and battery inconsistency. To predict the battery pack SOH, a particle swarm optimization-genetic algorithm is applied to identify the battery pack model parameters. Based on the results, a particle filter is employed in battery SOC and OCV estimation to avoid the influence of noise occurring in battery terminal voltage measurement and current drift. Moreover, a recursive least squares method is used to update the cells' capacities. Finally, the proposed method is verified with the profiles of the New European Driving Cycle and dynamic test profiles. The experimental results indicate that the proposed method can estimate the battery states with high accuracy for actual operation. In addition, the factors affecting the change of SOH are analyzed.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout was presented to NASA technical personnel.
NASA Astrophysics Data System (ADS)
Lubey, D.; Scheeres, D.
Tracking objects in Earth orbit is fraught with complications. This is due to the large population of orbiting spacecraft and debris that continues to grow, passive (i.e. no direct communication) and data-sparse observations, and the presence of maneuvers and dynamics mismodeling. Accurate orbit determination in this environment requires an algorithm to capture both a system's state and its state dynamics in order to account for mismodelings. Previous studies by the authors yielded an algorithm called the Optimal Control Based Estimator (OCBE) - an algorithm that simultaneously estimates a system's state and optimal control policies that represent dynamic mismodeling in the system for an arbitrary orbit-observer setup. The stochastic properties of these estimated controls are then used to determine the presence of mismodelings (maneuver detection), as well as characterize and reconstruct the mismodelings. The purpose of this paper is to develop the OCBE into an accurate real-time orbit tracking and maneuver detection algorithm by automating the algorithm and removing its linear assumptions. This results in a nonlinear adaptive estimator. In its original form the OCBE had a parameter called the assumed dynamic uncertainty, which is selected by the user with each new measurement to reflect the level of dynamic mismodeling in the system. This human-in-the-loop approach precludes real-time application to orbit tracking problems due to their complexity. This paper focuses on the Adaptive OCBE, a version of the estimator where the assumed dynamic uncertainty is chosen automatically with each new measurement using maneuver detection results to ensure that state uncertainties are properly adjusted to account for all dynamic mismodelings. The paper also focuses on a nonlinear implementation of the estimator. Originally, the OCBE was derived from a nonlinear cost function then linearized about a nominal trajectory, which is assumed to be ballistic (i.e. the nominal optimal control policy is zero for all times). In this paper, we relax this assumption on the nominal trajectory in order to allow for controlled nominal trajectories. This allows the estimator to be iterated to obtain a more accurate nonlinear solution for both the state and control estimates. Beyond these developments to the estimator, this paper also introduces a modified distance metric for maneuver detection. The original metric used in the OCBE only accounted for the estimated control and its uncertainty. This new metric accounts for measurement deviation and a priori state deviations, such that it accounts for all three major forms of uncertainty in orbit determination. This allows the user to understand the contributions of each source of uncertainty toward the total system mismodeling so that the user can properly account for them. Together these developments create an accurate orbit determination algorithm that is automated, robust to mismodeling, and capable of detecting and reconstructing the presence of mismodeling. These qualities make this algorithm a good foundation from which to approach the problem of real-time maneuver detection and reconstruction for Space Situational Awareness applications. This is further strengthened by the algorithm's general formulation that allows it to be applied to problems with an arbitrary target and observer.
An empirical model for optimal highway durability in cold regions.
DOT National Transportation Integrated Search
2016-03-10
We develop an empirical tool to estimate optimal highway durability in cold regions. To test the model, we assemble a data set containing all highway construction and maintenance projects in Arizona and Washington State from 1990 to 2014. The data ...
NASA Technical Reports Server (NTRS)
Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.
2004-01-01
This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a third-order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is: autonomous; adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter (which generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation) with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
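A minimal sketch of the two ingredients named above, assuming a toy single-axis plant with dual-redundant position sensing: a fixed-gain Kalman filter supplies model-based predictions, and an innovation (residual) test flags a failed sensor and removes it from the update. The numbers and threshold are illustrative only, not the ACHM design.

```python
import numpy as np

# Toy plant with dual-redundant position sensing (hypothetical numbers).
A = np.array([[1.0, 0.1], [0.0, 1.0]])      # state transition (position, rate)
H = np.array([[1.0, 0.0], [1.0, 0.0]])      # two redundant position sensors
K = np.array([[0.4, 0.4], [0.5, 0.5]])      # fixed Kalman gain (precomputed offline)
THRESH = 3.0                                 # residual threshold, in sensor units

def step(x_est, z, sensor_ok):
    """One fixed-gain filter step that skips sensors flagged as failed."""
    x_pred = A @ x_est                       # model-based prediction
    resid = z - H @ x_pred                   # innovation for each sensor
    sensor_ok &= np.abs(resid) < THRESH      # flag sensors with large residuals
    K_eff = K * sensor_ok                    # zero gain columns of failed sensors
    return x_pred + K_eff @ (resid * sensor_ok), sensor_ok

x_est, ok = np.zeros(2), np.ones(2, dtype=bool)
z = np.array([0.1, 5.0])                     # second sensor hard-over
x_est, ok = step(x_est, z, ok)
print("sensor status:", ok)                  # -> [ True False]
```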
NASA Astrophysics Data System (ADS)
Adhikari, Satyabrata
2018-04-01
Structural physical approximation (SPA) has been exploited to approximate nonphysical operations such as the partial transpose. It has already been studied in the context of entanglement detection, where it was found that if the minimum eigenvalue of the SPA to the partial transpose is less than 2/9, then the two-qubit state is entangled. We find an application of the SPA to the partial transpose in the estimation of the optimal singlet fraction. We show that the optimal singlet fraction can be expressed in terms of the minimum eigenvalue of the SPA to the partial transpose. We also show that the optimal singlet fraction can be realized using Hong-Ou-Mandel interferometry with only two detectors. Further, we show that the generated hybrid entangled state between a qubit and a binary coherent state can be used as a resource state in quantum teleportation.
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the temporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?
NASA Astrophysics Data System (ADS)
Ferrie, Chris; Blume-Kohout, Robin
2011-03-01
Quantum state estimation (a.k.a. "tomography") plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g. for classical coins or dice - but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the "noisy coin." Observations on noisy coins are unreliable - much like soliciting sensitive information such as one's tax preparation habits. So, like a quantum system, it cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.
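The scaling claim can be reproduced with a short Monte Carlo sketch. The loss below is Bernoulli infidelity, chosen here as a classical stand-in for quantum infidelity; the 10% flip probability and the truncated maximum-likelihood estimator are assumptions of this toy, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def risk(p, N, eps, trials=20000):
    """Monte Carlo risk E[1 - F] for estimating a coin of bias p from N tosses,
    where each recorded toss is flipped with probability eps (eps=0: ideal coin).
    Loss is Bernoulli infidelity, chosen here to mirror quantum infidelity."""
    q = p * (1 - eps) + (1 - p) * eps                            # prob. of recording "heads"
    k = rng.binomial(N, q, size=trials)
    p_hat = np.clip((k / N - eps) / (1 - 2 * eps), 0.0, 1.0)     # inverted, truncated MLE
    fidelity = (np.sqrt(p * p_hat) + np.sqrt((1 - p) * (1 - p_hat))) ** 2
    return np.mean(1.0 - fidelity)

for N in (100, 400, 1600, 6400):
    print(f"N={N:5d}  ideal, p=0.5: {risk(0.5, N, 0.0):.1e}   "
          f"noisy, p=0.0: {risk(0.0, N, 0.1):.1e}")
# The ideal coin's risk falls roughly as 1/N; the noisy coin's risk at the
# "pure" state p=0 falls only as 1/sqrt(N) and dominates the worst case.
```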
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from reference values of the aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
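A compact sketch of the state-augmentation idea underlying such parameter states: an unknown but constant thrust-scale factor is appended to a toy one-dimensional dynamic state and estimated jointly by an extended Kalman filter. All numerical values are hypothetical and unrelated to the Shuttle model.

```python
import numpy as np

dt, a_nom = 0.1, 5.0                     # time step and nominal acceleration (made up)
Q = np.diag([1e-4, 1e-6])                # process noise for [velocity, thrust scale]
R = np.array([[0.04]])                   # measurement noise (velocity sensor)

def ekf_step(x, P, z):
    """Predict/update for the augmented state x = [velocity, thrust-scale factor]."""
    # predict: dynamics and Jacobian
    x_pred = np.array([x[0] + x[1] * a_nom * dt, x[1]])
    F = np.array([[1.0, a_nom * dt], [0.0, 1.0]])
    P_pred = F @ P @ F.T + Q
    # update with the scalar velocity measurement
    H = np.array([[1.0, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(2)
truth_scale, v_true = 1.07, 0.0          # true (unknown) thrust-scale factor
x, P = np.array([0.0, 1.0]), np.diag([1.0, 0.1])
for _ in range(300):
    v_true += truth_scale * a_nom * dt
    z = np.array([v_true + rng.normal(0, 0.2)])
    x, P = ekf_step(x, P, z)
print("estimated thrust scale:", round(x[1], 3))   # converges near 1.07
```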
An estimator-predictor approach to PLL loop filter design
NASA Technical Reports Server (NTRS)
Statman, J. I.; Hurd, W. J.
1986-01-01
An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
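A rough sketch of the estimator-predictor decomposition, assuming an alpha-beta (phase/frequency) estimator and a known transport lag of D samples; the gains and the 5 Hz offset are placeholders rather than a DSN design.

```python
import numpy as np

dt = 1e-3                  # loop update interval (s)
alpha, beta = 0.2, 0.02    # estimator gains (in practice chosen by filter design)
D = 3                      # transport lag of the loop, in samples (assumed known)

def dpll_step(err, nco_phase, state):
    """Estimator-predictor loop filter: an alpha-beta estimator tracks phase and
    frequency; the predictor extrapolates the NCO command across the lag."""
    meas = err + nco_phase                 # input phase seen by the detector
    phase, freq = state
    pred = phase + freq * dt               # one-step model prediction
    resid = meas - pred
    phase = pred + alpha * resid           # recursive phase estimate
    freq = freq + beta * resid / dt        # recursive frequency estimate
    state[:] = phase, freq
    return phase + freq * (D + 1) * dt     # predictor: compensate the transport lag

state = np.array([0.0, 0.0])
nco = np.zeros(D + 1)                      # delay line modelling the transport lag
for k in range(4000):
    true_phase = 2 * np.pi * 5.0 * k * dt  # 5 Hz frequency offset (hypothetical)
    err = true_phase - nco[0]
    nco = np.append(nco[1:], dpll_step(err, nco[0], state))
print(f"steady-state phase error: {err:.1e} rad")
```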
Probabilistic models in human sensorimotor control
Wolpert, Daniel M.
2009-01-01
Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
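The maximum-likelihood cue combination referred to above reduces, for independent Gaussian cues, to inverse-variance weighting, as in the following sketch (the numbers are made up).

```python
import numpy as np

def combine(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues: weights are
    inverse variances, and the fused variance is never larger than either cue's."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(estimates)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Example: a visual and a haptic estimate of object size (hypothetical values).
size, var = combine([5.2, 4.6], [0.10, 0.40])
print(f"fused estimate {size:.2f}, variance {var:.3f}")   # 5.08, 0.080
```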
Optimized quantum sensing with a single electron spin using real-time adaptive measurements
NASA Astrophysics Data System (ADS)
Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.
2016-03-01
Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz-1/2 over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.
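A grid-based sketch of the adaptive idea: after every shot the posterior over the unknown phase is updated and the next readout phase is chosen from the current estimate. The simple mean-plus-quarter-turn rule and the noiseless Ramsey outcome model are assumptions of this toy, not the optimized protocol of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
phi_true = 1.3                                # unknown phase to estimate (radians)
grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
posterior = np.ones_like(grid) / grid.size    # flat prior over the phase

def circ_mean(p):
    return np.angle(np.sum(p * np.exp(1j * grid)))

for shot in range(100):
    # Adaptive rule (a simple heuristic): read out in the basis where the
    # current best estimate is maximally sensitive.
    theta = circ_mean(posterior) + np.pi / 2
    p0 = 0.5 * (1 + np.cos(phi_true - theta))        # Ramsey outcome probability
    outcome = rng.random() < p0
    likelihood = 0.5 * (1 + np.cos(grid - theta))
    posterior *= likelihood if outcome else (1 - likelihood)
    posterior /= posterior.sum()

print(f"estimate {circ_mean(posterior) % (2*np.pi):.3f} rad (true {phi_true})")
```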
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, K. F.; Belvin, W. Keith
1991-01-01
A second-order form of discrete Kalman filtering equations is proposed as a candidate state estimator for efficient simulations of control-structure interactions in coupled physical coordinate configurations as opposed to decoupled modal coordinates. The resulting matrix equation of the present state estimator consists of the same symmetric, sparse N x N coupled matrices of the governing structural dynamics equations as opposed to unsymmetric 2N x 2N state space-based estimators. Thus, in addition to substantial computational efficiency improvement, the present estimator can be applied to control-structure design optimization for which the physical coordinates associated with the mass, damping and stiffness matrices of the structure are needed instead of modal coordinates.
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state-space dynamical model, and the problem unknowns are the state variables and the system parameters. The data are commodity spot prices; time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and the marginal likelihood function (with the state variables eliminated) are addressed.
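For a linear-Gaussian special case of such a state-space model (here a latent AR(1) spot-price deviation observed with noise, an assumption made only for illustration), the marginal likelihood with the states eliminated is available exactly from the Kalman filter's prediction-error decomposition and can be handed to a numerical optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal(params, y):
    """Exact marginal likelihood (states integrated out) of a scalar
    linear-Gaussian state-space model via the Kalman prediction-error
    decomposition: x_t = a*x_{t-1} + w_t,  y_t = x_t + v_t."""
    a, log_q, log_r = params
    q, r = np.exp(log_q), np.exp(log_r)
    x, P, ll = 0.0, 1.0, 0.0
    for yt in y:
        x, P = a * x, a * a * P + q              # predict
        S = P + r                                # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - x) ** 2 / S)
        K = P / S                                # update
        x, P = x + K * (yt - x), (1 - K) * P
    return -ll

# Fit to simulated spot-price deviations (made-up generating values a=0.9, q=0.02, r=0.1).
rng = np.random.default_rng(4)
x, y = 0.0, []
for _ in range(500):
    x = 0.9 * x + rng.normal(0, np.sqrt(0.02))
    y.append(x + rng.normal(0, np.sqrt(0.1)))
fit = minimize(neg_log_marginal, x0=[0.5, np.log(0.1), np.log(0.1)], args=(np.array(y),))
print("estimated a, q, r:", np.round([fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])], 3))
```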
Improving stability margins in discrete-time LQG controllers
NASA Technical Reports Server (NTRS)
Oranc, B. Tarik; Phillips, Charles L.
1987-01-01
Some of the problems encountered in the design of discrete-time stochastic controllers for systems that may adequately be described by the Linear Quadratic Gaussian (LQG) assumptions are discussed; namely, the problems of obtaining acceptable relative stability, robustness, and disturbance rejection properties. A dynamic compensator is proposed to replace the optimal full-state feedback regulator gains at steady state, provided that all states are measurable. The compensator increases the stability margins at the plant input, which may be inadequate in practical applications. Though the optimal regulator has desirable properties, the observer-based controller, as implemented with a Kalman filter in a noisy environment, has inadequate stability margins. The proposed compensator is designed to match the return difference matrix at the plant input to that of the optimal regulator while maintaining the optimality of the state estimates as dictated by the measurement noise characteristics.
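For reference, the baseline steady-state LQG gains discussed above can be computed from the two discrete algebraic Riccati equations; the loop-recovery compensator itself is beyond this sketch, and the plant below is a made-up double integrator.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double-integrator plant (hypothetical numbers).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])     # LQ state / control weights
W, V = 1e-3 * np.eye(2), np.array([[1e-2]])        # process / measurement noise

# Optimal regulator gain: u = -K x
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)

# Steady-state Kalman gain for the state estimator (by duality: use A^T, C^T).
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

print("LQ gain K:", np.round(K, 3))
print("Kalman gain L:", np.round(L.ravel(), 3))
```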
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters a pertinent unsolved problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold, reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
NASA Astrophysics Data System (ADS)
Swanson, Steven Roy
The objective of the dissertation is to improve state estimation performance, as compared to a Kalman filter, when non-constant, or changing, biases exist in the measurement data. The state estimation performance increase will come from the use of a fuzzy model to determine the position and velocity gains of a state estimator. A method is proposed for incorporating heuristic knowledge into a state estimator through the use of a fuzzy model. This method consists of using a fuzzy model to determine the gains of the state estimator, converting the heuristic knowledge into the fuzzy model, and then optimizing the fuzzy model with a genetic algorithm. This method is applied to the problem of state estimation of a cascaded global positioning system (GPS)/inertial reference unit (IRU) navigation system. The GPS position data contains two major sources for position bias. The first bias is due to satellite errors and the second is due to the time delay or lag from when the GPS position is calculated until it is used in the state estimator. When a change in the bias of the measurement data occurs, a state estimator will converge on the new measurement data solution. This will introduce errors into a Kalman filter's estimated state velocities, which in turn will cause a position overshoot as it converges. By using a fuzzy model to determine the gains of a state estimator, the velocity errors and their associated deficiencies can be reduced.
Optimally Distributed Kalman Filtering with Data-Driven Communication †
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang
2017-11-01
The identification of nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. In the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as model noise covariance and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve a reliable covariance measurement, making it impractical for nonlinear systems that are rapidly changing. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor with the measurements and state estimates received from its connected sensors rather than with a longer time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to determine the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting over a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using exhaustive search. A sensor selection method is added to the algorithm to decrease the computational load of the filter and increase the scalability of the sensor network. The existence, suboptimality, and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Stephen R., E-mail: stephen.thompson@sesiahs.health.nsw.gov.au; Department of Radiation Oncology, Prince of Wales Hospital, Sydney; University of New South Wales, Sydney
Purpose: We aimed to estimate the optimal proportion of all gynecological cancers that should be treated with brachytherapy (BT), the optimal brachytherapy utilization rate (BTU), to compare this with actual gynecological BTU, and to assess the effects of nonmedical factors on access to BT. Methods and Materials: The previously constructed inter/multinational guideline-based peer-reviewed models of optimal BTU for cancers of the uterine cervix, uterine corpus, and vagina were combined to estimate optimal BTU for all gynecological cancers. The robustness of the model was tested by univariate and multivariate sensitivity analyses. The resulting model was applied to New South Wales (NSW), the United States, and Western Europe. Actual BTU was determined for NSW by a retrospective patterns-of-care study of BT; for Western Europe from published reports; and for the United States from Surveillance, Epidemiology, and End Results data. Differences between optimal and actual BTU were assessed. The effect of nonmedical factors on access to BT in NSW was analyzed. Results: Gynecological BTU was as follows: NSW 28% optimal (95% confidence interval [CI] 26%-33%) compared with 14% actual; United States 30% optimal (95% CI 26%-34%) and 10% actual; and Western Europe 27% optimal (95% CI 25%-32%) and 16% actual. On multivariate analysis, NSW patients were more likely to undergo gynecological BT if residing in an Area Health Service equipped with BT (odds ratio 1.76, P=.008) and if residing in socioeconomically disadvantaged postcodes (odds ratio 1.12, P=.05), but remoteness of residence was not significant. Conclusions: Gynecological BT is underutilized in NSW, Western Europe, and the United States given evidence-based guidelines. Access to BT equipment in NSW was significantly associated with higher utilization rates. Causes of underutilization elsewhere were undetermined. Our model of optimal BTU can be used as a quality assurance tool, providing an evidence-based benchmark against which actual patterns of practice can be measured. It can also be used to assist in determining the adequacy of BT resource allocation.
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time for two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing Delta method and Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
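The effect of the time split can be checked numerically. The sketch below assumes a shot-noise (Poisson) model for the two intensity channels and simply sweeps the fraction of the total integration time given to the first measurement; the paper obtains the optimum in closed form instead, and the fluxes below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
T_total, flux0, flux90 = 1.0, 8000.0, 2000.0      # assumed photon fluxes (counts/s)
dolp_true = (flux0 - flux90) / (flux0 + flux90)   # = 0.6

def dolp_var(frac, trials=4000):
    """Monte Carlo variance of the DOLP estimate when a fraction `frac`
    of the total integration time is spent on the 0-degree channel."""
    t0, t90 = frac * T_total, (1 - frac) * T_total
    n0 = rng.poisson(flux0 * t0, trials)          # shot-noise-limited counts
    n90 = rng.poisson(flux90 * t90, trials)
    est = (n0 / t0 - n90 / t90) / (n0 / t0 + n90 / t90)
    return est.var()

fracs = np.linspace(0.2, 0.8, 25)
variances = [dolp_var(f) for f in fracs]
best = fracs[int(np.argmin(variances))]
print(f"true DOLP {dolp_true}, best time fraction for the 0-degree channel ~ {best:.2f}")
# Equal splitting (0.5) is generally not optimal when the two intensities differ.
```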
Optimal estimates of free energies from multistate nonequilibrium work data.
Maragakis, Paul; Spichty, Martin; Karplus, Martin
2006-03-17
We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.
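The two-state special case of such maximum-likelihood estimates is Bennett's acceptance ratio. The sketch below assumes equal numbers of forward and reverse work values, expressed in units of kT, drawn here from Gaussian distributions consistent with the Crooks fluctuation theorem; the paper's method generalizes this to an arbitrary number of states.

```python
import numpy as np
from scipy.optimize import brentq

def bar_free_energy(w_f, w_r):
    """Two-state maximum-likelihood (Bennett acceptance ratio) estimate of the
    free-energy difference from forward and reverse work values in units of kT.
    Convention assumed: w_f are works for 0->1 switches, w_r for 1->0 switches,
    with equal numbers of forward and reverse measurements."""
    fermi = lambda x: 1.0 / (1.0 + np.exp(x))
    # Self-consistency: sum_F f(w_f - dF) = sum_R f(w_r + dF)
    g = lambda dF: fermi(w_f - dF).sum() - fermi(w_r + dF).sum()
    return brentq(g, -50.0, 50.0)

# Synthetic Gaussian work distributions consistent with Crooks' theorem:
# forward mean dF + sigma^2/2, reverse mean -dF + sigma^2/2 (dF = 2 kT assumed).
rng = np.random.default_rng(6)
dF, sigma = 2.0, 3.0
w_f = rng.normal(dF + sigma**2 / 2, sigma, 2000)
w_r = rng.normal(-dF + sigma**2 / 2, sigma, 2000)
print("BAR estimate of dF (kT):", round(bar_free_energy(w_f, w_r), 2))
```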
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen
This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors and keep the whole distribution system numerically and topologically observable. Then, the weighted least squares (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN entails a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. The proposed approach is shown to be effective and accurate based on the testing results.
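The weighted-least-squares step at the core of such a state estimator, in its linear form and with weights given by inverse noise variances, looks as follows; the observation matrix and noise levels are made up, and a real distribution-system estimator would use the nonlinear network equations.

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted least squares: minimize (z - H x)' W (z - H x) with W = diag(1/sigma^2).
    Returns the estimate and its covariance (the inverse of the gain matrix)."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    G = H.T @ W @ H                      # gain matrix; invertible iff observable
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    return x_hat, np.linalg.inv(G)

# Toy 2-state system observed by 4 sensors of differing accuracy (made-up numbers).
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
x_true = np.array([1.02, 0.37])
sigma = np.array([0.01, 0.01, 0.05, 0.05])
rng = np.random.default_rng(7)
z = H @ x_true + rng.normal(0, sigma)
x_hat, cov = wls_state_estimate(H, z, sigma)
print("WLS estimate:", np.round(x_hat, 3))
```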
Taxation of United States general aviation
NASA Astrophysics Data System (ADS)
Sobieralski, Joseph Bernard
General aviation in the United States has been an important part of the economy and American life. General aviation is defined as all flying excluding military and scheduled airline operations, and is utilized in many areas of our society. The majority of aircraft operations and airports in the United States are categorized as general aviation, and general aviation contributes more than one percent to the United States gross domestic product each year. Despite the many benefits of general aviation, the lead emissions from aviation gasoline consumption are of great concern. General aviation emits over half the lead emissions in the United States, or over 630 tons in 2005. The other significant negative externality attributed to general aviation usage is aircraft accidents. General aviation accidents have caused over 8000 fatalities over the period 1994-2006. A recent Federal Aviation Administration proposal to increase the aviation gasoline tax from 19.4 to 70.1 cents per gallon has renewed interest in better understanding the implications of such a tax increase as well as the possible optimal rate of taxation. Few studies have examined aviation fuel elasticities, and all have failed to study general aviation fuel elasticities. Chapter one fills that gap and examines the elasticity of aviation gasoline consumption in United States general aviation. Utilizing aggregate time series and dynamic panel data, the price and income elasticities of demand are estimated. The price elasticity of demand for aviation gasoline is estimated to range from -0.093 to -0.185 in the short run and from -0.132 to -0.303 in the long run. These results prove to be similar in magnitude to automobile gasoline elasticities, and therefore tax policies could more closely mirror those of automobile tax policies. The second chapter examines the costs associated with general aviation accidents. Given the large number of general aviation operations as well as the large number of fatalities and injuries attributed to general aviation accidents in the United States, understanding the costs to society is of great importance. This chapter estimates the direct and indirect costs associated with general aviation accidents in the United States. The indirect costs are estimated via the human capital approach in addition to the willingness-to-pay approach. The average annual accident costs attributed to general aviation are found to be $2.32 billion and $3.81 billion (2006 US$) utilizing the human capital approach and willingness-to-pay approach, respectively. These values appear to be fairly robust when subjected to a sensitivity analysis. These costs highlight the large societal benefits from accident and fatality reduction. The final chapter derives a second-best optimal aviation gasoline tax developed from previous general equilibrium frameworks. This optimal tax reflects both the lead pollution and accident externalities, as well as the balance between excise taxes and labor taxes to finance government spending. The calculated optimal tax rate is $4.07 per gallon, which is over 20 times greater than the current tax rate and 5 times greater than the Federal Aviation Administration proposed tax rate. The calculated optimal tax rate is also over 3 times greater than automobile gasoline optimal tax rates calculated by previous studies. The Pigovian component is $1.36, and we observe that the accident externality is taxed more severely than the pollution externality. The largest component of the optimal tax rate is the Ramsey component.
At $2.70, the Ramsey component reflects the ability of the government to raise revenue from aviation gasoline, which is price inelastic. The calculated optimal tax is estimated to reduce lead emissions by over 10 percent and reduce accidents by 20 percent. Although unlikely to be adopted by policy makers, the benefits of the optimal tax are apparent, and it sheds light on the need to reduce these negative externalities via policy changes.
Continuous-variable phase estimation with unitary and random linear disturbance
NASA Astrophysics Data System (ADS)
Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.
2014-10-01
We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, nout. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.
Blom, Philip Stephen; Marcillo, Omar Eduardo
2016-12-05
A method is developed to apply acoustic tomography methods to a localized network of infrasound arrays with the intention of monitoring the atmosphere state in the region around the network using non-local sources, without requiring knowledge of the precise source location or non-local atmosphere state. Closely spaced arrays provide a means to estimate phase velocities of signals that can provide limiting bounds on certain characteristics of the atmosphere. Larger spacing between such clusters provides a means to estimate celerity from propagation times along multiple unique stratospherically or thermospherically ducted propagation paths and to compute more precise estimates of the atmosphere state. In order to avoid the commonly encountered complex, multimodal distributions for parametric atmosphere descriptions and to maximize the computational efficiency of the method, an optimal parametrization framework is constructed. This framework identifies the ideal combination of parameters for tomography studies in specific regions of the atmosphere, and statistical model selection analysis shows that high quality corrections to the middle atmosphere winds can be obtained using as few as three parameters. Lastly, comparison of the resulting estimates for synthetic data sets shows qualitative agreement between the middle atmosphere winds and those estimated from infrasonic traveltime observations.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Global optimization for quantum dynamics of few-fermion systems
NASA Astrophysics Data System (ADS)
Li, Xikun; Pecak, Daniel; Sowiński, Tomasz; Sherson, Jacob; Nielsen, Anne E. B.
2018-03-01
Quantum state preparation is vital to quantum computation and quantum information processing tasks. In adiabatic state preparation, the target state is theoretically obtained with nearly perfect fidelity if the control parameter is tuned slowly enough. As this, however, leads to slow dynamics, it is often desirable to be able to carry out processes more rapidly. In this work, we employ two global optimization methods to estimate the quantum speed limit for few-fermion systems confined in a one-dimensional harmonic trap. Such systems can be produced experimentally in a well-controlled manner. We determine the optimized control fields and achieve a reduction in the ramping time of more than a factor of four compared to linear ramping. We also investigate how robust the fidelity is to small variations of the control fields away from the optimized shapes.
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10-, and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
Near Hartree-Fock quality GTO basis sets for the second-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry
1987-01-01
Energy optimized, near Hartree-Fock quality Gaussian basis sets ranging in size from (17s12p) to (20s15p) are presented for the ground states of the second-row atoms for Na(2P), Na(+), Na(-), Mg(3P), P(-), S(-), and Cl(-). In addition, optimized supplementary functions are given for the ground state basis sets to describe the negative ions, and the excited Na(2P) and Mg(3P) atomic states. The ratios of successive orbital exponents describing the inner part of the 1s and 2p orbitals are found to be nearly independent of both nuclear charge and basis set size. This provides a method of obtaining good starting estimates for other basis set optimizations.
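The near-constant exponent ratios suggest even-tempered (geometric) starting guesses for subsequent optimization, as in the sketch below; the alpha and beta values are placeholders, not the paper's optimized exponents.

```python
import numpy as np

def even_tempered(alpha, beta, n):
    """Starting guesses for Gaussian exponents: zeta_k = alpha * beta**k,
    exploiting the near-constant ratio of successive optimized exponents."""
    return alpha * beta ** np.arange(n)

# Hypothetical starting set for the tight s exponents of a second-row atom.
print(np.round(even_tempered(alpha=0.05, beta=2.8, n=10), 3))
```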
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, an optimal averaging scheme has been derived to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, an approach has also been provided for computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
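For the scalar-weighted case, one standard solution is to take the principal eigenvector of the weighted sum of quaternion outer products, which is insensitive to the sign ambiguity between q and -q; a minimal sketch (with made-up quaternions) follows. The matrix-weighted generalization of the Note is not reproduced here.

```python
import numpy as np

def average_quaternion(quats, weights):
    """Scalar-weighted quaternion average: principal eigenvector of
    M = sum_i w_i q_i q_i^T. Insensitive to the sign ambiguity q ~ -q."""
    M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argmax(vals)]          # unit quaternion (up to overall sign)

# Two star-tracker quaternions about the same axis, one reported with flipped sign.
q1 = np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])
q2 = -np.array([0.0, 0.0, np.sin(0.06), np.cos(0.06)])
print(np.round(average_quaternion([q1, q2], [1.0, 1.0]), 4))
```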
Optimal stimulus scheduling for active estimation of evoked brain networks.
Kafashan, MohammadMehdi; Ching, ShiNung
2015-12-01
We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell; Nagarajan, Harsha; Yamangil, Emre
2016-06-24
MICOT is a tool for optimizing and controlling infrastructure systems. It includes modules for optimizing the operations of an infrastructure system (for example, optimal dispatch), designing infrastructure systems, restoring infrastructure systems, resiliency, preparing for natural disasters, interdicting networks, state estimation, sensor placement, and simulation of infrastructure systems. It implements algorithms developed at LANL that have been published in the academic community. This is a release of the resilient design module of MICOT.
Excise tax avoidance: the case of state cigarette taxes.
DeCicca, Philip; Kenkel, Donald; Liu, Feng
2013-12-01
We conduct an applied welfare economics analysis of cigarette tax avoidance. We develop an extension of the standard formula for the optimal Pigouvian corrective tax to incorporate the possibility that consumers avoid the tax by making purchases in nearby lower tax jurisdictions. To provide a key parameter for our formula, we estimate a structural endogenous switching regression model of border-crossing and cigarette prices. In illustrative calculations, we find that for many states, after taking into account tax avoidance the optimal tax is at least 20% smaller than the standard Pigouvian tax that simply internalizes external costs. Our empirical estimate that tax avoidance strongly responds to the price differential is the main reason for this result. We also use our results to examine the benefits of replacing avoidable state excise taxes with a harder-to-avoid federal excise tax on cigarettes. Copyright © 2013 Elsevier B.V. All rights reserved.
Timothy M. Young; James H. Perdue; Andy Hartsell; Robert C. Abt; Donald Hodges; Timothy G. Rials
2009-01-01
Optimal locations for biomass facilities that use mill residues are identified for 13 southern U.S. states. The Biomass Site Assessment Tool (BioSAT) model is used to identify the top 20 locations for 13 southern U.S. states. The trucking cost model of BioSAT is used with Timber Mart South 2009 price data to estimate the total cost, average cost, and marginal costs for...
Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States
2017-04-15
(Figure residue removed: singular-value (dB) versus frequency (rad/s) plot.) The non-output variables can be estimated by reliable linear... Contract # N00014-14-C-0004, progress report. ...recovery of a VTOL UAV. There is a clear need for additional levels of stability and control augmentation and, ultimately, fully autonomous landing.
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
Characterization of classical static noise via qubit as probe
NASA Astrophysics Data System (ADS)
Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif
2018-03-01
The dynamics of quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the time of coupling that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for QFI, the qubit is used as a probe to precisely estimate the disordered parameter of the environment. Relation for optimal interaction time with the environment is obtained, and condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values, in the mentioned range, of the noise parameter are estimable with equal precision. A comparison of our results with the previous studies in different classical environments is made.
Nenov, Valeriy; Bergsneider, Marvin; Glenn, Thomas C.; Vespa, Paul; Martin, Neil
2007-01-01
Impeded by the rigid skull, assessment of physiological variables of the intracranial system is difficult. A hidden state estimation approach is used in the present work to facilitate the estimation of unobserved variables from available clinical measurements including intracranial pressure (ICP) and cerebral blood flow velocity (CBFV). The estimation algorithm is based on a modified nonlinear intracranial mathematical model, whose parameters are first identified in an offline stage using a nonlinear optimization paradigm. Following the offline stage, an online filtering process is performed using a nonlinear Kalman filter (KF)-like state estimator that is equipped with a new way of deriving the Kalman gain satisfying the physiological constraints on the state variables. The proposed method is then validated by comparing different state estimation methods and input/output (I/O) configurations using simulated data. It is also applied to a set of CBFV, ICP and arterial blood pressure (ABP) signal segments from brain injury patients. The results indicated that the proposed constrained nonlinear KF achieved the best performance among the evaluated state estimators and that the state estimator combined with the I/O configuration that has ICP as the measured output can potentially be used to estimate CBFV continuously. Finally, the state estimator combined with the I/O configuration that has both ICP and CBFV as outputs can potentially estimate the lumped cerebral arterial radii, which are not measurable in a typical clinical environment. PMID:17281533
Black holes are almost optimal quantum cloners
NASA Astrophysics Data System (ADS)
Adami, Christoph; Ver Steeg, Greg
2015-06-01
If black holes were able to clone quantum states, a number of paradoxes in black hole physics would disappear. However, the linearity of quantum mechanics forbids exact cloning of quantum states. Here we show that black holes indeed clone incoming quantum states with a fidelity that depends on the black hole’s absorption coefficient, without violating the no-cloning theorem because the clones are only approximate. Perfectly reflecting black holes are optimal universal ‘quantum cloning machines’ and operate on the principle of stimulated emission, exactly as their quantum optical counterparts. In the limit of perfect absorption, the fidelity of clones is only equal to what can be obtained via quantum state estimation methods. But for any absorption probability less than one, the cloning fidelity is nearly optimal as long as ω/T ≥ 10, a common parameter for modest-sized black holes.
Optimal multi-type sensor placement for response and excitation reconstruction
NASA Astrophysics Data System (ADS)
Zhang, C. D.; Xu, Y. L.
2016-01-01
The need to perform dynamic response reconstruction always arises as the measurement of structural response is often limited to a few locations, especially for a large civil structure. Besides, it is usually very difficult, if not impossible, to measure external excitations under the operation condition of a structure. This study presents an algorithm for optimal placement of multi-type sensors, including strain gauges, displacement transducers and accelerometers, for the best reconstruction of responses of key structural components where there are no sensors installed and the best estimation of external excitations acting on the structure at the same time. The algorithm is developed in the framework of Kalman filter with unknown excitation, in which minimum-variance unbiased estimates of the generalized state of the structure and the external excitations are obtained by virtue of limited sensor measurements. The structural responses of key locations without sensors can then be reconstructed with the estimated generalized state and excitation. The asymptotic stability feature of the filter is utilized for optimal sensor placement. The number and spatial location of the multi-type sensors are determined by adding the optimal sensor which gains the maximal reduction of the estimation error of reconstructed responses. For the given mode number in response reconstruction and the given locations of external excitations, the optimal multi-sensor placement achieved by the proposed method is independent of the type and time evolution of external excitation. A simply-supported overhanging steel beam under multiple types of excitation is numerically studied to demonstrate the feasibility and superiority of the proposed method, and the experimental work is then carried out to testify the effectiveness of the proposed method.
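A simplified static analogue of the greedy loop described above: from a pool of candidate sensors, repeatedly add the one that most reduces the trace of the posterior covariance of the generalized state. The rank-one update below is a plain least-squares surrogate with an invented observation model; the paper's criterion is based on the Kalman filter with unknown excitation.

```python
import numpy as np

def greedy_placement(H_cand, R_cand, P0, n_pick):
    """Pick n_pick sensors from candidate rows H_cand (noise variances R_cand),
    each time adding the sensor that most reduces the trace of the posterior covariance."""
    chosen, P = [], P0.copy()
    for _ in range(n_pick):
        best, best_P = None, None
        for i in range(len(H_cand)):
            if i in chosen:
                continue
            h = H_cand[i:i + 1]
            S = h @ P @ h.T + R_cand[i]           # scalar innovation variance
            P_new = P - (P @ h.T @ h @ P) / S     # rank-one covariance update
            if best is None or np.trace(P_new) < np.trace(best_P):
                best, best_P = i, P_new
        chosen.append(best)
        P = best_P
    return chosen, P

# Six candidate sensors observing a 3-dimensional generalized state (made-up model).
rng = np.random.default_rng(8)
H_cand = rng.normal(size=(6, 3))
R_cand = np.full(6, 0.05)
chosen, P = greedy_placement(H_cand, R_cand, np.eye(3), n_pick=3)
print("selected sensors:", chosen, " final trace:", round(float(np.trace(P)), 3))
```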
Variational Quantum Tomography with Incomplete Information by Means of Semidefinite Programs
NASA Astrophysics Data System (ADS)
Maciel, Thiago O.; Cesário, André T.; Vianna, Reinaldo O.
We introduce a new method to reconstruct unknown quantum states out of incomplete and noisy information. The method is a linear convex optimization problem, therefore with a unique minimum, which can be efficiently solved with semidefinite programs. Numerical simulations indicate that the estimated state overestimates neither the purity nor the expectation value of optimal entanglement witnesses. The convergence properties of the method are similar to compressed sensing approaches, in the sense that, in order to reconstruct low-rank states, it needs just a fraction of the effort corresponding to an informationally complete measurement.
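A generic convex-programming sketch in the same spirit, written with cvxpy: fit incomplete, noisy expectation values in least squares subject to positivity and unit trace. The objective is not necessarily the authors' exact variational program, and the single-qubit data below are invented (the unmeasured Z observable leaves the state underdetermined, which is exactly the incomplete-information setting).

```python
import numpy as np
import cvxpy as cp

# Pauli operators and a set of (assumed) measured expectation values for one qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
observables = [X, Y]                      # incomplete information: Z never measured
data = [0.70, 0.68]                       # noisy expectation values (hypothetical)

rho = cp.Variable((2, 2), hermitian=True)
constraints = [rho >> 0, cp.trace(rho) == 1]
residuals = cp.hstack([cp.real(cp.trace(O @ rho)) - d for O, d in zip(observables, data)])
problem = cp.Problem(cp.Minimize(cp.sum_squares(residuals)), constraints)
problem.solve()

print("reconstructed state:\n", np.round(rho.value, 3))
print("purity:", np.round(np.real(np.trace(rho.value @ rho.value)), 3))
```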
Are Cloned Quantum States Macroscopic?
NASA Astrophysics Data System (ADS)
Fröwis, F.; Dür, W.
2012-10-01
We study quantum states produced by optimal phase covariant quantum cloners. We argue that cloned quantum superpositions are not macroscopic superpositions in the spirit of Schrödinger’s cat, despite their large particle number. This is indicated by calculating several measures for macroscopic superpositions from the literature, as well as by investigating the distinguishability of the two superposed cloned states. The latter rapidly diminishes when considering imperfect detectors or noisy states and does not increase with the system size. In contrast, we find that cloned quantum states themselves are macroscopic, in the sense of both proposed measures and their usefulness in quantum metrology with an optimal scaling in system size. We investigate the applicability of cloned states for parameter estimation in the presence of different kinds of noise.
Optimized tomography of continuous variable systems using excitation counting
NASA Astrophysics Data System (ADS)
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
Reliability optimization design of the gear modification coefficient based on the meshing stiffness
NASA Astrophysics Data System (ADS)
Wang, Qianqian; Wang, Hui
2018-04-01
Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness so as to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering random parameters, the reliability optimization design of the gear modification coefficient is investigated. The dimension-reduction and point-estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. The comparison of the dynamic amplitudes before and after optimization indicates that this research is useful for the reduction of vibration and noise and for the improvement of reliability.
Konikoff, Jacob; Brookmeyer, Ron; Longosz, Andrew F.; Cousins, Matthew M.; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Eshleman, Susan H.; Laeyendecker, Oliver
2013-01-01
Background A limiting antigen avidity enzyme immunoassay (HIV-1 LAg-Avidity assay) was recently developed for cross-sectional HIV incidence estimation. We evaluated the performance of the LAg-Avidity assay alone and in multi-assay algorithms (MAAs) that included other biomarkers. Methods and Findings Performance of testing algorithms was evaluated using 2,282 samples from individuals in the United States collected 1 month to >8 years after HIV seroconversion. The capacity of selected testing algorithms to accurately estimate incidence was evaluated in three longitudinal cohorts. When used in a single-assay format, the LAg-Avidity assay classified some individuals infected >5 years as assay positive and failed to provide reliable incidence estimates in cohorts that included individuals with long-term infections. We evaluated >500,000 testing algorithms that included the LAg-Avidity assay alone and MAAs with other biomarkers (BED capture immunoassay [BED-CEIA], BioRad-Avidity assay, HIV viral load, CD4 cell count), varying the assays and assay cutoffs. We identified an optimized 2-assay MAA that included the LAg-Avidity and BioRad-Avidity assays, and an optimized 4-assay MAA that included those assays as well as HIV viral load and CD4 cell count. The two optimized MAAs classified all 845 samples from individuals infected >5 years as MAA negative and estimated incidence within a year of sample collection. These two MAAs produced incidence estimates that were consistent with those from longitudinal follow-up of the cohorts. A comparison of the laboratory assay costs of the MAAs was also performed; the costs associated with the optimal 2-assay MAA were substantially lower than those of the 4-assay MAA. Conclusions The LAg-Avidity assay did not perform well in a single-assay format, regardless of the assay cutoff. MAAs that include the LAg-Avidity and BioRad-Avidity assays, with or without viral load and CD4 cell count, provide accurate incidence estimates. PMID:24386116
Adaptive hybrid optimal quantum control for imprecisely characterized systems.
Egger, D J; Wilhelm, F K
2014-06-20
Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
NASA Astrophysics Data System (ADS)
Uzunoglu, B.; Hussaini, Y.
2017-12-01
The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles toward the high-probability region by an implicit step. It optimizes a nonlinear cost function that can be inherited from legacy assimilation routines. Dynamic state estimation for near-real-time applications in power systems is becoming increasingly important with the integration of variable wind and solar power generation. New advanced state estimation tools intended to replace the older generation of state estimators should, in addition to providing a general framework for these complexities, be able to address and integrate legacy software within one mathematical framework, allowing the power industry the cautious, evolutionary change it requires rather than a complete revolutionary approach, while addressing nonlinearity and non-normal behaviour. This work implements the implicit particle filter as a tool for estimating the states of a power system and presents the first application study of the implicit particle filter for power system state estimation. The implicit particle filter is introduced into power systems and simulations are presented for a three-node benchmark power system. The performance of the filter on this problem is analyzed and the results are presented.
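For orientation, a minimal bootstrap particle filter with a scalar state; the implicit particle filter additionally solves an optimization per particle to steer samples toward the high-probability region, which this sketch omits, and all variable names here are illustrative.

```python
import numpy as np

def bootstrap_particle_filter(y_seq, f, h, Q, R, x0_particles, seed=0):
    # Minimal sequential importance resampling filter with a scalar state:
    # propagate through the dynamics f, weight by the measurement
    # likelihood under the observation map h, then resample.
    rng = np.random.default_rng(seed)
    particles = np.asarray(x0_particles, dtype=float)
    n = particles.size
    estimates = []
    for y in y_seq:
        particles = np.array([f(x) for x in particles]) + rng.normal(0.0, np.sqrt(Q), n)
        w = np.exp(-0.5 * (y - np.array([h(x) for x in particles]))**2 / R)
        w /= w.sum()
        estimates.append(np.dot(w, particles))
        particles = particles[rng.choice(n, size=n, p=w)]
    return np.array(estimates)
```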
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. The procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design for ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
NASA Astrophysics Data System (ADS)
Salmin, Vadim V.
2017-01-01
Flight mechanics with low thrust is a relatively new chapter of space flight mechanics, encompassing the whole set of problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks associated with incorporating additional factors into mathematical models of spacecraft motion therefore become increasingly important, as do additional restrictions on the possibilities of thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality on the basis of analytical solutions. These methods rely on the principle of extending the class of admissible states and controls and on sufficient conditions for the absolute minimum. Estimation procedures are developed that make it possible to determine how close a found solution is to the optimal one and to indicate ways of improving it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular the optimization of low-thrust flights between circular non-coplanar orbits, the optimization of the control angle and trajectory of the spacecraft during interorbital flights, and the optimization of low-thrust flights between arbitrary elliptical Earth-satellite orbits.
Wolowacz, Sorrel E; Briggs, Andrew; Belozeroff, Vasily; Clarke, Philip; Doward, Lynda; Goeree, Ron; Lloyd, Andrew; Norman, Richard
Cost-utility models are increasingly used in many countries to establish whether the cost of a new intervention can be justified in terms of health benefits. Health-state utility (HSU) estimates (the preference for a given state of health on a cardinal scale where 0 represents dead and 1 represents full health) are typically among the most important and uncertain data inputs in cost-utility models. Clinical trials represent an important opportunity for the collection of health-utility data. However, trials designed primarily to evaluate efficacy and safety often present challenges to the optimal collection of HSU estimates for economic models. Careful planning is needed to determine which of the HSU estimates may be measured in planned trials; to establish the optimal methodology; and to plan any additional studies needed. This report aimed to provide a framework for researchers to plan the collection of health-utility data in clinical studies to provide high-quality HSU estimates for economic modeling. Recommendations are made for early planning of health-utility data collection within a research and development program; design of health-utility data collection during protocol development for a planned clinical trial; design of prospective and cross-sectional observational studies and alternative study types; and statistical analyses and reporting. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason
2014-01-01
Optical ranging is a problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off of a target. Frequency-modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency-modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference-detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photon-counting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects caused by dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used; the demonstrated improvement is a factor of 1.414 over frequency-domain-based estimation. If the target-interrogating photons and the local reference-field photons are costed equally, the optimal allocation of photons between the two arms is to have them equally distributed. This differs from the state of the art, in which the local field is stronger than the target return. The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection nor Fourier-domain peak detection, which are the staples of state-of-the-art systems, is optimal when a weak local oscillator is employed.
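A toy sketch of the log-matched-filter delay search described above: the Poisson log-likelihood of the photon arrival times, up to a delay-independent term, is the sum of log intensities evaluated at the detected photon times, and the range estimate is the maximizing delay. The beat-intensity model, parameter values, and names below are hypothetical, not the system characterized in the paper.

```python
import numpy as np

def log_matched_range_estimate(photon_times, rate_model, candidate_delays):
    # Sum of log intensities at the photon arrival times ("log-matched
    # filter"), maximized over candidate round-trip delays.
    scores = [np.sum(np.log(rate_model(photon_times, tau)))
              for tau in candidate_delays]
    return candidate_delays[int(np.argmax(scores))]

# Hypothetical beat-note intensity for a linear chirp of slope kappa:
# the beat frequency is kappa * tau, so the photon rate oscillates at it.
kappa, mean_rate, visibility = 2.0e12, 5.0e6, 0.9   # assumed values
def rate(t, tau):
    return mean_rate * (1.0 + visibility * np.cos(2.0 * np.pi * kappa * tau * t))
```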
The Community Line Source (C-LINE) modeling system estimates emissions and dispersion of toxic air pollutants for roadways within the continental United States. It accesses publicly available traffic and meteorological datasets, and is optimized for use on community-sized areas (...
Particulate Meso-scale Mechanics Diagnostics: Magnetic Sensors for Dynamic State Orientation
2013-12-01
A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong
2001-01-01
This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast depend crucially on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly with the correlation between predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States precipitation field, with sea surface temperature as the predictor.
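As a simple illustration of the optimal-weighting step, the minimum-variance combination of unbiased, independent forecasts weights each member inversely to its mean-square error; the forecast values and error figures below are hypothetical.

```python
import numpy as np

def optimal_ensemble(forecasts, mse):
    # Minimum-variance combination of unbiased, independent forecasts:
    # weights are inversely proportional to each member's mean-square error.
    w = 1.0 / np.asarray(mse, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(forecasts, dtype=float), axes=1), w

# three hypothetical CCA forecasts of the same two-point anomaly field
f1, f2, f3 = [0.2, -0.1], [0.3, 0.0], [0.1, -0.2]
combined, weights = optimal_ensemble([f1, f2, f3], mse=[1.0, 0.5, 2.0])
```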
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in the identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which the fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Optimizing Synchronization Stability of the Kuramoto Model in Complex Networks and Power Grids
NASA Astrophysics Data System (ADS)
Li, Bo; Wong, K. Y. Michael
Maintaining the stability of synchronization state is crucial for the functioning of many natural and artificial systems. For the Kuramoto model on general weighted networks, the synchronization stability, measured by the dominant Lyapunov exponent at the steady state, is shown to have intricate and nonlinear dependence on the network topology and the dynamical parameters. Specifically, the dominant Lyapunov exponent corresponds to the algebraic connectivity of a meta-graph whose edge weight depends nonlinearly on the steady states. In this study, we utilize the cut-set space (DC) approximation to estimate the nonlinear steady state and simplify the calculation of the stability measure, based on which we further derive efficient algorithms to optimize the synchronization stability. The properties of the optimized networks and application in power grid stability are also discussed. This work is supported by a Grant from the Research Grant Council of Hong Kong (Grant Numbers 605813 and 16322616).
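A small sketch of the stability measure referred to above: the algebraic connectivity of a weighted graph is the second-smallest eigenvalue of its Laplacian. The weight matrix here is a toy stand-in, not the steady-state-dependent meta-graph of the paper.

```python
import numpy as np

def algebraic_connectivity(W):
    # Second-smallest eigenvalue of the graph Laplacian L = D - W
    # for a symmetric, non-negative weight matrix W.
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.eigvalsh(L)[1]

# toy 3-node weighted graph standing in for the meta-graph
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
lambda_2 = algebraic_connectivity(W)
```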
Full State Feedback Control for Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Tillay
This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPP). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling for all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solving the distributed OPF to provide the optimal control commands to the DERs of the VPP.
NASA Astrophysics Data System (ADS)
Tyson, Jon
2009-03-01
We prove a concise factor-of-2 estimate for the failure rate of optimally distinguishing an arbitrary ensemble of mixed quantum states, generalizing work of Holevo [Theor. Probab. Appl. 23, 411 (1978)] and Curlander [Ph.D. Thesis, MIT, 1979]. A modification to the minimal principle of Cocha and Poor [Proceedings of the 6th International Conference on Quantum Communication, Measurement, and Computing (Rinton, Princeton, NJ, 2003)] is used to derive a suboptimal measurement which has an error rate within a factor of 2 of the optimal by construction. This measurement is quadratically weighted and has appeared as the first iterate of a sequence of measurements proposed by Ježek et al. [Phys. Rev. A 65, 060301 (2002)]. Unlike the so-called pretty good measurement, it coincides with Holevo's asymptotically optimal measurement in the case of nonequiprobable pure states. A quadratically weighted version of the measurement bound by Barnum and Knill [J. Math. Phys. 43, 2097 (2002)] is proven. Bounds on the distinguishability of syndromes in the sense of Schumacher and Westmoreland [Phys. Rev. A 56, 131 (1997)] appear as a corollary. An appendix relates our bounds to the trace-Jensen inequality.
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels for obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district (CRD), and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state level and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and the CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data across the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.; Newsom, J. R.; Abel, I.
1980-01-01
A direct method of synthesizing a low-order optimal feedback control law for a high order system is presented. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean square steady state responses and control inputs. The controller is shown to be equivalent to a partial state estimator. The method is applied to the problem of active flutter suppression. Numerical results are presented for a 20th order system representing an aeroelastic wind-tunnel wing model. Low-order controllers (fourth and sixth order) are compared with a full order (20th order) optimal controller and found to provide near optimal performance with adequate stability margins.
Application of wavelet-based multi-model Kalman filters to real-time flood forecasting
NASA Astrophysics Data System (ADS)
Chou, Chien-Ming; Wang, Ru-Yih
2004-04-01
This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose the state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which each Kalman filter has been replaced by a WKF. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each of which is weighted by its model probability determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition provided by the wavelet transform, the adaptation of the time-varying Kalman filter, and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
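A hedged sketch of the multimodel combination step, assuming scalar innovations: each filter's estimate is weighted by a posterior model probability computed from the Gaussian likelihood of its latest innovation. This is a standard multimodel mixing rule; the wavelet decomposition itself is omitted and the argument names are illustrative.

```python
import numpy as np

def combine_multimodel_estimates(x_hats, innovations, innovation_vars, priors):
    # Weight each filter's estimate by its posterior model probability,
    # computed from the Gaussian likelihood of its latest (scalar) innovation.
    nu = np.asarray(innovations, dtype=float)
    s = np.asarray(innovation_vars, dtype=float)
    likelihood = np.exp(-0.5 * nu**2 / s) / np.sqrt(2.0 * np.pi * s)
    posterior = np.asarray(priors, dtype=float) * likelihood
    posterior /= posterior.sum()
    fused = np.tensordot(posterior, np.asarray(x_hats, dtype=float), axes=1)
    return fused, posterior
```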
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
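A first-order sketch of the finite-volume step described above, assuming a one-dimensional state with periodic boundaries: the continuity (Frobenius-Perron) equation is advanced with an upwind flux, and the CFL check keeps the discretized density non-negative. This is a simplified illustration, not the paper's pendulum implementation.

```python
import numpy as np

def evolve_density_upwind(rho, x, f, dt, steps):
    # First-order upwind finite-volume update of the continuity equation
    #   d(rho)/dt + d(f(x) rho)/dx = 0
    # on a uniform periodic grid; cell averages remain non-negative
    # as long as the CFL condition max|f| dt / dx <= 1 holds.
    dx = x[1] - x[0]
    v = f(x)
    assert np.max(np.abs(v)) * dt / dx <= 1.0, "CFL condition violated"
    for _ in range(steps):
        # flux through the right face of each cell, taken from the upwind side
        flux = np.where(v > 0.0, v * rho, v * np.roll(rho, -1))
        rho = rho - dt / dx * (flux - np.roll(flux, 1))
    return rho
```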
Implicit methods for efficient musculoskeletal simulation and optimal control
van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter
2011-01-01
The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow the solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
Optimal Level of Expenditure to Control the Southern Pine Beetle
Joseph E. de Steiguer; Roy L. Hedden; John M. Pye
1987-01-01
Optimal level of expenditure to control damage to commercial timber stands by the southern pine beetle was determined by models that simulated and analyzed beetle attacks during a typical season for 11 Southern States. At a real discount rate of 4 percent, maximized net benefits for the Southern region are estimated at about $50 million; at 10 percent, more than $30...
NASA Astrophysics Data System (ADS)
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method, in combination with a physical model parameter identification method, is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is specially designed for electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.
A Two-Stage Kalman Filter Approach for Robust and Real-Time Power System State Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jinghe; Welch, Greg; Bishop, Gary
2014-04-01
As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent development in phasor technology makes it possible with high-speed time-synchronized data provided by Phasor Measurement Units (PMUs). In this paper we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). In practice, however, the process and measurement noise models are usually difficult to obtain. Thus we have developed the Adaptive Kalman Filter with Inflatable Noise Variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; then in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. Simulations demonstrate its robustness to sudden changes of system dynamics and erroneous measurements.
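For reference, one predict/update cycle of a standard linear Kalman filter, of the kind used in stage one before the extended-Kalman dynamic stage; the adaptive inflation of noise variances (InNoVa) is not reproduced here, and all names are generic.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    # One predict/update cycle of a standard linear Kalman filter.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```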
Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.
Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha
2017-03-01
This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component has been estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component has been estimated using convex ℓ1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
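A generic low-rank-plus-sparse sketch using a truncated SVD and elementwise soft-thresholding; it illustrates the L + S split, but not the paper's non-convex optimal singular value shrinkage or its ℓ1 solver details, and the parameter names are illustrative.

```python
import numpy as np

def lowrank_plus_sparse(Y, rank, lam, n_iter=50):
    # Alternate between a truncated-SVD update of the low-rank part L
    # and soft-thresholding of the residual for the sparse part S.
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```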
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The knowledge that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
Single-shot quantum state estimation via a continuous measurement in the strong backaction regime
NASA Astrophysics Data System (ADS)
Cook, Robert L.; Riofrío, Carlos A.; Deutsch, Ivan H.
2014-09-01
We study quantum tomography based on a stochastic continuous-time measurement record obtained from a probe field collectively interacting with an ensemble of identically prepared systems. In comparison to previous studies, we consider here the case in which the measurement-induced backaction has a non-negligible effect on the dynamical evolution of the ensemble. We formulate a maximum likelihood estimate for the initial quantum state given only a single instance of the continuous diffusive measurement record. We apply our estimator to the simplest problem: state tomography of a single pure qubit, which, during the course of the measurement, is also subjected to dynamical control. We identify a regime where the many-body system is well approximated at all times by a separable pure spin coherent state, whose Bloch vector undergoes a conditional stochastic evolution. We simulate the results of our estimator and show that we can achieve close to the upper bound of fidelity set by the optimal generalized measurement. This estimate is compared to, and significantly outperforms, an equivalent estimator that ignores measurement backaction.
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
Šimůnek, Jirka; Nimmo, John R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time‐variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using equilibrium analysis and steady state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.
Three-dimensional modeling, estimation, and fault diagnosis of spacecraft air contaminants.
Narayan, A P; Ramirez, W F
1998-01-01
A description is given of the design and implementation of a method to track the presence of air contaminants aboard a spacecraft using an accurate physical model and of a procedure that would raise alarms when certain tolerance levels are exceeded. Because our objective is to monitor the contaminants in real time, we make use of a state estimation procedure that filters measurements from a sensor system and arrives at an optimal estimate of the state of the system. The model essentially consists of a convection-diffusion equation in three dimensions, solved implicitly using the principle of operator splitting, and uses a flowfield obtained by the solution of the Navier-Stokes equations for the cabin geometry, assuming steady-state conditions. A novel implicit Kalman filter has been used for fault detection, a procedure that is an efficient way to track the state of the system and that uses the sparse nature of the state transition matrices.
Learning to Estimate Dynamical State with Probabilistic Population Codes.
Makin, Joseph G; Dichter, Benjamin K; Sabes, Philip N
2015-11-01
Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of population of neurons, "probabilistic population codes." We show that a recurrent neural network-a modified form of an exponential family harmonium (EFH)-that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
Degradation trend estimation of slewing bearing based on LSSVM model
NASA Astrophysics Data System (ADS)
Lu, Chao; Chen, Jie; Hong, Rongjing; Feng, Yang; Li, Yuanyuan
2016-08-01
A novel prediction method based on the least squares support vector machine (LSSVM) is proposed to estimate the degradation trend of a slewing bearing from small-sample data. The method takes the vibration signal, which contains rich state information, as the object of study. Principal component analysis (PCA) was applied to fuse multiple feature vectors that reflect the health state of the slewing bearing, such as root mean square, kurtosis, wavelet energy entropy, and intrinsic mode function (IMF) energy. The degradation indicator fused by PCA reflects the degradation more comprehensively and effectively. The degradation trend of the slewing bearing was then predicted using the LSSVM model optimized by particle swarm optimization (PSO). The proposed method was demonstrated to be more accurate and effective in a whole-life experiment on a slewing bearing. Therefore, it can be applied in engineering practice.
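A minimal sketch of the PCA fusion step, assuming a matrix of per-sample condition features: the first principal-component score serves as the fused degradation indicator. The feature names are those listed above; everything else is illustrative.

```python
import numpy as np

def pca_degradation_indicator(features):
    # Standardize the per-sample condition features (RMS, kurtosis,
    # wavelet energy entropy, IMF energies, ...) and project onto the
    # first principal component to obtain a single fused indicator.
    X = np.asarray(features, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[0]          # one indicator value per sample
```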
NASA Astrophysics Data System (ADS)
Cioaca, Alexandru
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
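A small sketch of the convex relaxation, assuming a linearized circuit model y ≈ A x: ℓ1 regularization promotes a sparse fault/state vector. The function and variable names are hypothetical and the testbed model details are not reproduced.

```python
import numpy as np
import cvxpy as cp

def estimate_sparse_faults(A, y, lam=0.1):
    # l1-regularized least squares: y ~ A x with a sparse fault/state vector x.
    x = cp.Variable(A.shape[1])
    objective = cp.Minimize(cp.sum_squares(A @ x - y) + lam * cp.norm1(x))
    cp.Problem(objective).solve()
    return x.value
```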
Nonlinear Symplectic Attitude Estimation for Small Satellites
2006-08-01
The method is reported to demonstrate orders-of-magnitude improvement in state and constants-of-motion estimation when compared to extended and iterative Kalman methods, including the ubiquitous extended Kalman filter (EKF).
NASA Technical Reports Server (NTRS)
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
Co-state initialization for the minimum-time low-thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya
2017-05-01
This paper presents an approach for co-state initialization which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining the optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions from Earth to Mars and from Earth to asteroid Dionysus is compared against three other approaches which, respectively, exploit random initialization of co-states, adjoint-control transformation and a standard genetic algorithm. The results indicate that by using our proposed approach the percent of the converged cases is higher for trajectories with higher number of revolutions while the computation time is lower. These features are advantageous for broad trajectory search in the preliminary phase of mission designs.
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol-assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which a joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
Air Force Research Laboratory Resident Associateship Program Continuation
2014-12-04
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, which reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that, for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
HW Buck for Windows: the optimal hardwood log bucking decision simulator with expanded capabilities
James B. Pickens; Scott Noble; Blair Orr; Philip A. Araman; John E. Baumgras; Al Steele
2006-01-01
It has long been recognized that inappropriate placement of crosscuts when manufacturing hardwood logs from harvested stems (log bucking) reduces the value of the logs produced. Recent studies have estimated losses in the range of 28% to 38% in the Lake States region. These estimates were developed by evaluating the bucking cuts chosen by harvesting crews and comparing...
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
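The probability-propagation step can be sketched as a small Markov-chain recursion over a discretized state of charge. The depletion-on-shortage transition rule, the discretization, and the numbers below are simplifying assumptions for illustration, not the paper's dynamic program.

```python
import numpy as np

def soc_distribution(dispatch, lolp, capacity, levels=11):
    """Propagate a discrete probability distribution over state of charge.
    dispatch[t] > 0 discharges, < 0 charges (energy per period);
    lolp[t] is the system loss-of-load probability in period t."""
    grid = np.linspace(0.0, capacity, levels)
    p = np.zeros(levels); p[-1] = 1.0          # start full with probability 1
    for d, q in zip(dispatch, lolp):
        nxt = np.zeros(levels)
        for i, s in enumerate(grid):
            # normal operation: follow the scheduled dispatch
            s_n = np.clip(s - d, 0.0, capacity)
            nxt[np.argmin(np.abs(grid - s_n))] += (1.0 - q) * p[i]
            # shortage event: storage is called on and ends up depleted
            nxt[0] += q * p[i]
        p = nxt
    return grid, p

grid, p = soc_distribution(dispatch=[2, -2, 3, 1],
                           lolp=[0.01, 0.0, 0.05, 0.02], capacity=10.0)
```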
Composing problem solvers for simulation experimentation: a case study on steady state estimation.
Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M
2014-01-01
Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms into so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
Optimizing model: insemination, replacement, seasonal production, and cash flow.
DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A
1992-03-01
Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). Methodology optimized decisions based on net present value of an individual cow and all replacements over a 20-yr decision horizon. Length of decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of optimized solutions was by separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during transition to optimized structure.
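The core computation here is backward dynamic programming over a discounted Markov decision process. A tiny generic sketch follows; the two-action (keep/replace) structure, the transition matrices, the rewards, and the 240 monthly stages are placeholder assumptions, not the herd model itself.

```python
import numpy as np

def solve_mdp(P, r, beta=0.99, horizon=240):
    """Backward dynamic programming for a finite-horizon discounted MDP.
    P[a] is the transition matrix and r[a] the reward vector for action a."""
    n_states = r[0].shape[0]
    V = np.zeros(n_states)
    policy = np.zeros(n_states, dtype=int)
    for _ in range(horizon):                       # e.g. 240 monthly stages
        Q = np.stack([r[a] + beta * P[a] @ V for a in range(len(P))])
        policy = Q.argmax(axis=0)                  # best action per state
        V = Q.max(axis=0)
    return V, policy

# toy two-state example: action 0 = keep, action 1 = replace
P = [np.array([[0.9, 0.1], [0.3, 0.7]]), np.array([[1.0, 0.0], [1.0, 0.0]])]
r = [np.array([5.0, 1.0]), np.array([-2.0, -2.0])]
V, policy = solve_mdp(P, r)
```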
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, particle filtering, that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
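A bootstrap particle filter for a scalar state shows the propagate/weight/resample loop that such a model runs many times in parallel. The AR(1) dynamics and noise levels below are placeholders, not the vestibular velocity-storage model.

```python
import numpy as np

def particle_filter(y, n_particles=1000, a=0.95, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x[t] = a*x[t-1] + process noise,
    observed as y[t] = x[t] + measurement noise."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for obs in y:
        particles = a * particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((obs - particles) / r) ** 2)              # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)              # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)
```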
Quantum state tomography and fidelity estimation via Phaselift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Yiping; Liu, Huan; Zhao, Qing, E-mail: qzhaoyuping@bit.edu.cn
Experiments of multi-photon entanglement have been performed by several groups. Obviously, an increase in the photon number for fidelity estimation and quantum state tomography causes a dramatic increase in the elements of the positive operator valued measures (POVMs), which results in a great consumption of time in measurements. In practice, we wish to obtain a good estimation of fidelity and quantum states through as few measurements as possible for multi-photon entanglement. Phaselift provides such a chance to estimate fidelity for entangling states based on less data. In this paper, we would like to show how the Phaselift works for six qubits in comparison to the data given by Pan's group, i.e., we use a fraction of the data as input to estimate the rest of the data through the obtained density matrix, and thus go beyond the simple fidelity analysis. The fidelity bound is also provided for a general Schrödinger cat state. Based on the fidelity bound, we propose an optimal measurement approach which could both reduce the copies and keep the fidelity bound gap small. The results demonstrate that the Phaselift can help decrease the measured elements of POVMs for six qubits. Our conclusion is based on the prior knowledge that a pure state is the target state prepared by experiments.
Nonlinear model predictive control for chemical looping process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Abhinaya; Lei, Hao; Lou, Xinsheng
A control system for optimizing a chemical looping ("CL") plant includes a reduced order mathematical model ("ROM") that is designed by eliminating mathematical terms that have minimal effect on the outcome. A non-linear optimizer provides various inputs to the ROM and monitors the outputs to determine the optimum inputs that are then provided to the CL plant. An estimator estimates the values of various internal state variables of the CL plant. The system has one structure adapted to control a CL plant that only provides pressure measurements in the CL loops A and B, a second structure adapted to a CL plant that provides pressure measurements and solid levels in both loops A and B, and a third structure adapted to control a CL plant that provides full information on internal state variables. A final structure provides a neural network NMPC controller to control operation of loops A and B.
NASA Technical Reports Server (NTRS)
Andrews, J.
1977-01-01
An optimal decision model of crop production, trade, and storage was developed for use in estimating the economic consequences of improved forecasts and estimates of worldwide crop production. The model extends earlier distribution benefits models to include production effects as well. Application to improved information systems meeting the goals set in the large area crop inventory experiment (LACIE) indicates annual benefits to the United States of $200 to $250 million for wheat, $50 to $100 million for corn, and $6 to $11 million for soybeans, using conservative assumptions on expected LANDSAT system performance.
Quantifying How Observations Inform a Numerical Reanalysis of Hawaii
NASA Astrophysics Data System (ADS)
Powell, B. S.
2017-11-01
When assimilating observations into a model via state-estimation, it is possible to quantify how each observation changes the modeled estimate of a chosen oceanic metric. Using an existing 2 year reanalysis of Hawaii that includes more than 31 million observations from satellites, ships, SeaGliders, and autonomous floats, I assess which observations most improve the estimates of the transport and eddy kinetic energy. When the SeaGliders were in the water, they comprised less than 2.5% of the data, but accounted for 23% of the transport adjustment. Because the model physics constrains advanced state-estimation, the prescribed covariances are propagated in time to identify observation-model covariance. I find that observations that constrain the isopycnal tilt across the transport section provide the greatest impact in the analysis. In the case of eddy kinetic energy, observations that constrain the surface-driven upper ocean have more impact. This information can help to identify optimal sampling strategies to improve both state-estimates and forecasts.
NASA Technical Reports Server (NTRS)
Broussard, John R.
1987-01-01
Relationships between observers, Kalman filters, and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of a discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.
Sequential estimation and satellite data assimilation in meteorology and oceanography
NASA Technical Reports Server (NTRS)
Ghil, M.
1986-01-01
The central theme of this review article is the role that dynamics plays in estimating the state of the atmosphere and of the ocean from incomplete and noisy data. Objective analysis and inverse methods represent an attempt at relying mostly on the data and minimizing the role of dynamics in the estimation. Four-dimensional data assimilation tries to balance properly the roles of dynamical and observational information. Sequential estimation is presented as the proper framework for understanding this balance, and the Kalman filter as the ideal, optimal procedure for data assimilation. The optimal filter computes forecast error covariances of a given atmospheric or oceanic model exactly, and hence data assimilation should be closely connected with predictability studies. This connection is described, and consequences drawn for currently active areas of the atmospheric and oceanic sciences, namely, mesoscale meteorology, medium and long-range forecasting, and upper-ocean dynamics.
NASA Astrophysics Data System (ADS)
Graham, Wendy D.; Tankersley, Claude D.
1994-05-01
Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li
2018-05-01
This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. Also, the event-based communication, signal quantization, and random packet dropout problems are studied due to the limited communication resource. With the help of switched system theory and by resorting to some stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.
Optimal space-time attacks on system state estimation under a sparsity constraint
NASA Astrophysics Data System (ADS)
Lu, Jingyang; Niu, Ruixin; Han, Puxiao
2016-05-01
System state estimation in the presence of an adversary that injects false information into sensor readings has attracted much attention in wide application areas, such as target tracking with compromised sensors, secure monitoring of dynamic electric power systems, secure driverless cars, and radar tracking and detection in the presence of jammers. From a malicious adversary's perspective, the optimal strategy for attacking a multi-sensor dynamic system over sensors and over time is investigated. It is assumed that the system defender can perfectly detect the attacks and identify and remove sensor data once they are corrupted by false information injected by the adversary. With this in mind, the adversary's goal is to maximize the covariance matrix of the system state estimate by the end of attack period under a sparse attack constraint such that the adversary can only attack the system a few times over time and over sensors. The sparsity assumption is due to the adversary's limited resources and his/her intention to reduce the chance of being detected by the system defender. This becomes an integer programming problem and its optimal solution, the exhaustive search, is intractable with a prohibitive complexity, especially for a system with a large number of sensors and over a large number of time steps. Several suboptimal solutions, such as those based on greedy search and dynamic programming are proposed to find the attack strategies. Examples and numerical results are provided in order to illustrate the effectiveness and the reduced computational complexities of the proposed attack strategies.
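The greedy alternative to the intractable exhaustive search can be sketched as follows: from a budget of (time, sensor) slots, repeatedly pick the slot whose removal from the measurement set most increases the trace of the final state-estimation covariance. The Riccati recursion below treats an attacked sensor simply as a missing measurement, which is one way to model the defender's "identify and remove corrupted data" assumption, and all matrices are toy placeholders rather than the paper's formulation.

```python
import numpy as np
from itertools import product

def final_cov(A, Q, H, R, T, removed):
    """Kalman covariance recursion over T steps; measurements listed in
    `removed` (a set of (t, sensor) pairs) are treated as unavailable."""
    n = A.shape[0]
    P = np.eye(n)
    for t in range(T):
        P = A @ P @ A.T + Q                       # predict
        keep = [j for j in range(H.shape[0]) if (t, j) not in removed]
        if keep:
            Hk, Rk = H[keep, :], R[np.ix_(keep, keep)]
            K = P @ Hk.T @ np.linalg.inv(Hk @ P @ Hk.T + Rk)
            P = (np.eye(n) - K @ Hk) @ P          # update with surviving sensors
    return P

def greedy_attack(A, Q, H, R, T, budget):
    """Greedily choose (time, sensor) attack slots to maximize trace of the
    terminal estimation error covariance under a sparsity budget."""
    removed = set()
    for _ in range(budget):
        candidates = [c for c in product(range(T), range(H.shape[0]))
                      if c not in removed]
        best = max(candidates,
                   key=lambda c: np.trace(final_cov(A, Q, H, R, T, removed | {c})))
        removed.add(best)
    return removed

# toy usage
A = np.array([[1.0, 0.1], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.eye(2); R = 0.1 * np.eye(2)
print(greedy_attack(A, Q, H, R, T=5, budget=3))
```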
Improved Battery State Estimation Using Novel Sensing Techniques
NASA Astrophysics Data System (ADS)
Abdul Samad, Nassim
Lithium-ion batteries have been considered a great complement or substitute for gasoline engines due to their high energy and power density capabilities, among other advantages. However, these types of energy storage devices are still not widespread, mainly because of their relatively high cost and safety issues, especially at elevated temperatures. This thesis extends existing methods of estimating critical battery states using model-based techniques augmented by real-time measurements from novel temperature and force sensors. Typically, temperature sensors are located near the edge of the battery, and away from the hottest core cell regions, which leads to slower response times and increased errors in the prediction of core temperatures. New sensor technology allows for flexible sensor placement at the cell surface between cells in a pack. This raises questions about the optimal locations of these sensors for best observability and temperature estimation. Using a validated model, which is developed and verified using experiments in laboratory fixtures that replicate vehicle pack conditions, it is shown that optimal sensor placement can lead to better and faster temperature estimation. Another equally important state is the state of health, or the capacity fading of the cell. This thesis introduces a novel method of using force measurements for capacity fade estimation. Monitoring capacity is important for defining the range of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Current capacity estimation techniques require a full discharge to monitor capacity. The proposed method can complement or replace current methods because it only requires a shallow discharge, which is especially useful in EVs and PHEVs. Using the accurate state estimation accomplished earlier, a method for downsizing a battery pack is shown to effectively reduce the number of cells in a pack without compromising safety. The influence on battery performance (e.g., temperature, utilization, capacity fade, and cost) while downsizing and shifting the nominal operating SOC is demonstrated via simulations. The contributions in this thesis aim to make EVs, HEVs and PHEVs less costly while maintaining safety and reliability as more people are transitioning towards more environmentally friendly means of transportation.
NASA Astrophysics Data System (ADS)
Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai
2013-09-01
In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate system states. Then, based on the observed states, a neuro-controller is constructed via ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real-time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
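The heart of such an air-data solution is a nonlinear weighted least-squares fit of atmospheric states to port pressures. The generic Gauss-Newton iteration below illustrates that step with a placeholder pressure model and a finite-difference Jacobian; it is a sketch, not the flight algorithm, and the toy linear "pressure model" in the usage example is purely illustrative.

```python
import numpy as np

def gauss_newton(x0, pressures, model, W, n_iter=10):
    """Weighted least squares: minimize (p - model(x))' W (p - model(x)).
    `model` maps atmospheric states x (e.g. density, winds) to port pressures."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        r = pressures - model(x)
        # finite-difference Jacobian of the pressure model
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x); dx[j] = 1e-6 * max(1.0, abs(x[j]))
            J[:, j] = (model(x + dx) - model(x)) / dx[j]
        # weighted Gauss-Newton step from the normal equations
        x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

# toy usage: two "ports" whose pressures depend linearly on two states
model = lambda x: np.array([2.0 * x[0] + x[1], x[0] - x[1]])
x_hat = gauss_newton([0.0, 0.0], pressures=np.array([3.0, 1.0]),
                     model=model, W=np.eye(2))
```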
Estimation of temperature in micromaser-type systems
NASA Astrophysics Data System (ADS)
Farajollahi, B.; Jafarzadeh, M.; Rangani Jahromi, H.; Amniat-Talab, M.
2018-06-01
We address the estimation of the number of photons and temperature in a micromaser-type system with Fock state and thermal fields. We analyze the behavior of the quantum Fisher information (QFI) for both fields. In particular, we show that in the Fock state field model, the QFI for a non-entangled initial state of the atoms increases monotonically with time, while for an entangled initial state of the atoms, it shows oscillatory behavior, leading to non-Markovian dynamics. Moreover, it is observed that the QFI, entropy of entanglement, and fidelity exhibit collapse and revival behavior. Focusing on each period in which the collapses and revivals occur, we see that the optimal points of the QFI and entanglement coincide. In addition, when the fidelity of the evolved state of one of the subsystems becomes maximal, the QFI also achieves its maximum. We also address the evolved fidelity versus the initial state as a good witness of non-Markovianity. Moreover, we interestingly find that the entropy of the composite system can be used as a witness of non-Markovian evolution of the subsystems. For the thermal field model, we similarly investigate the relation among the QFI associated with the temperature, the von Neumann entropy, and the fidelity. In particular, it is found that at the instants when the maximum values of the QFI are achieved, the entanglement between the two-qubit system and the environment is maximized while the entanglement between the probe and its environment is minimized. Moreover, we show that the thermometry may lead to optimal estimation of practical temperatures. Besides, extending our computation to the two-qubit system, we find that using a two-qubit probe generally leads to more effective estimation than the one-qubit scenario. Finally, we show that initial state entanglement plays a key role in the advent of non-Markovianity and the determination of its strength in the composite system and its subsystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo
We analyze quantum algorithms for cloning of a quantum measurement. Our aim is to mimic two uses of a device performing an unknown von Neumann measurement with a single use of the device. When the unknown device has to be used before the bipartite state to be measured is available, we talk about 1→2 learning of the measurement; otherwise the task is called 1→2 cloning of a measurement. We perform the optimization for both learning and cloning for arbitrary dimension d of the Hilbert space. For 1→2 cloning we also propose a simple quantum network that achieves the optimal fidelity. The optimal fidelity for 1→2 learning just slightly outperforms the estimate-and-prepare strategy, in which one first estimates the unknown measurement and, depending on the result, suitably prepares the duplicate.
Near-optimal quantum circuit for Grover's unstructured search using a transverse field
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui
2017-06-01
Inspired by a class of algorithms proposed by Farhi et al. (arXiv:1411.4028), namely, the quantum approximate optimization algorithm (QAOA), we present a circuit-based quantum algorithm to search for a needle in a haystack, obtaining the same quadratic speedup achieved by Grover's original algorithm. In our algorithm, the problem Hamiltonian (oracle) and a transverse field are applied alternately to the system in a periodic manner. We introduce a technique, based on spin-coherent states, to analyze the composite unitary in a single period. This composite unitary drives a closed transition between two states that have high degrees of overlap with the initial state and the target state, respectively. The transition rate in our algorithm is of order Θ(1/√N), and the overlaps are of order Θ(1), yielding a nearly optimal query complexity of T ≃ (π/(2√2))√N. Our algorithm is a QAOA circuit that demonstrates a quantum advantage with a large number of iterations that is not derived from Trotterization of an adiabatic quantum optimization (AQO) algorithm. It also suggests that the analysis required to understand QAOA circuits involves a very different process from estimating the energy gap of a Hamiltonian in AQO.
Enhancing Data Assimilation by Evolutionary Particle Filter and Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Moradkhani, H.; Abbaszadeh, P.; Yan, H.
2016-12-01
Particle Filters (PFs) have received increasing attention from researchers in different disciplines in the hydro-geosciences as an effective method to improve model predictions in nonlinear and non-Gaussian dynamical systems. The implementation of dual state and parameter estimation by means of data assimilation in hydrology and geoscience has evolved since 2005 from the SIR-PF to the PF-MCMC, and now to the most effective and robust framework through an evolutionary PF approach based on a Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC), the so-called EPF-MCMC. In this framework, the posterior distribution undergoes an evolutionary process to update an ensemble of prior states so that they more closely resemble the realistic posterior probability distribution. The premise of this approach is that the particles move to optimal positions using GA optimization coupled with MCMC, increasing the number of effective particles; hence particle degeneracy is avoided while particle diversity is improved. The proposed algorithm is applied to a conceptual and highly nonlinear hydrologic model, and the effectiveness, robustness, and reliability of the method in jointly estimating the states and parameters and also reducing the uncertainty is demonstrated for a few river basins across the United States.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael
1997-01-01
This paper discusses the calculation of sensitivities, or derivatives, for optimization problems involving systems governed by differential equations and other state relations. The subject is examined from the point of view of nonlinear programming, beginning with the analytical structure of the first and second derivatives associated with such problems and the relation of these derivatives to implicit differentiation and equality constrained optimization. We also outline an error analysis of the analytical formulae and compare the results with similar results for finite-difference estimates of derivatives. We then investigate the nature of the adjoint method and the adjoint equations and their relation to directions of steepest descent. We illustrate the points discussed with an optimization problem in which the variables are the coefficients in a differential operator.
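For a linear state relation A(u) x = b and an objective J = cᵀx, the adjoint method recovers dJ/du with a single extra linear solve, which can then be checked against finite differences. The small sketch below uses a toy diagonal dependence of A on the design variables as an illustrative assumption; it is not the paper's example problem.

```python
import numpy as np

def make_A(u):
    """Toy state relation: A(u) = A0 + diag(u)."""
    A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
    return A0 + np.diag(u)

b = np.array([1.0, 2.0])
c = np.array([1.0, -1.0])
u = np.array([0.5, 0.2])

# state solve and adjoint solve
A = make_A(u)
x = np.linalg.solve(A, b)            # A x = b
lam = np.linalg.solve(A.T, c)        # A' lambda = c

# dJ/du_j = -lambda' (dA/du_j) x ; here dA/du_j = e_j e_j'
grad_adjoint = np.array([-lam[j] * x[j] for j in range(len(u))])

# finite-difference check of the same derivatives
grad_fd = np.empty_like(u)
for j in range(len(u)):
    du = np.zeros_like(u); du[j] = 1e-6
    x_p = np.linalg.solve(make_A(u + du), b)
    grad_fd[j] = (c @ x_p - c @ x) / 1e-6

print(grad_adjoint, grad_fd)   # the two estimates should agree closely
```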
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin glass with constrained bimodal three-spin couplings. This spin glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible.[1] We approach the study of the error threshold problem by exploring ground states of this spin glass with the Extremal Optimization algorithm (EO).[2] EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems.[3]
Effects of loss on the phase sensitivity with parity detection in an SU(1,1) interferometer
NASA Astrophysics Data System (ADS)
Li, Dong; Yuan, Chun-Hua; Yao, Yao; Jiang, Wei; Li, Mo; Zhang, Weiping
2018-05-01
We theoretically study the effects of loss on the phase sensitivity of an SU(1,1) interferometer with parity detection for various input states. We show that although the sensitivity of phase estimation decreases in the presence of loss, it can still beat the shot-noise limit for small loss. To examine the performance of parity detection, a comparison is performed among homodyne detection, intensity detection, and parity detection. Compared with homodyne detection and intensity detection, parity detection has a slightly better optimal phase sensitivity in the absence of loss, but a worse optimal phase sensitivity with a significant amount of loss for a coherent-state or coherent ⊗ squeezed-state input.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
This paper studies an attacker against a cyberphysical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor necessarily impacts the overall performance seeking control scheme and is exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it is undecided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune, and it reduces the computational burden of the onboard computer.
Cyber-Physical Attacks With Control Objectives
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-08-18
This study considers attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection-avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
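A present-day equivalent of what such a program computes: the steady-state regulator and Kalman gains follow from two discrete algebraic Riccati equations. The SciPy sketch below uses toy system matrices and is a generic LQG illustration, not the LSOCE program or its time-domain formulation.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# toy discrete-time plant x[k+1] = A x[k] + B u[k] + w,  y[k] = C x[k] + v
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])       # LQ state/control weights
W, V = 0.01 * np.eye(2), np.array([[0.1]])           # process / measurement noise

# optimal state feedback u = -K x_hat from the control Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# steady-state Kalman gain from the dual (estimation) Riccati equation
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
```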
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a procedure for identifying an electro-thermal model of lithium-ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state of charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states of charge, so the resulting model can be used for automotive applications such as on-board estimation of the state of charge and state of health. The model coefficients are identified using a multiple-step genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium-ion iron-phosphate battery.
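A minimal first-order equivalent-circuit simulation with parameters interpolated linearly between state-of-charge breakpoints illustrates the model structure being identified. The breakpoint tables and cell values below are placeholders, and the temperature and current-direction scheduling used in the paper is omitted.

```python
import numpy as np

# breakpoints for linear (spline-like) parameter scheduling over SOC
soc_pts = np.array([0.0, 0.5, 1.0])
ocv_pts = np.array([3.0, 3.6, 4.1])                  # open-circuit voltage [V]
r0_pts = np.array([0.012, 0.010, 0.009])             # ohmic resistance [ohm]
r1_pts, c1_pts = np.array([0.02, 0.015, 0.012]), np.array([1500.0, 2000.0, 2500.0])

def simulate(current, dt=1.0, capacity_ah=2.3, soc0=0.8):
    """First-order RC equivalent circuit: V = OCV(soc) - R0*i - v1."""
    soc, v1, volts = soc0, 0.0, []
    for i in current:                                # positive current = discharge
        soc -= i * dt / (capacity_ah * 3600.0)
        r0 = np.interp(soc, soc_pts, r0_pts)
        r1 = np.interp(soc, soc_pts, r1_pts)
        c1 = np.interp(soc, soc_pts, c1_pts)
        v1 += dt * (i / c1 - v1 / (r1 * c1))         # RC branch dynamics
        volts.append(np.interp(soc, soc_pts, ocv_pts) - r0 * i - v1)
    return np.array(volts)

v = simulate(np.full(600, 2.3))   # 1C discharge for 10 minutes
```

A genetic algorithm, as described in the abstract, would repeatedly call such a simulation and adjust the breakpoint values to minimize the error against measured terminal voltage.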
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
Van Dun, Bram; Wouters, Jan; Moonen, Marc
2009-07-01
Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, measurement duration of current single-channel techniques is still too long for clinical widespread use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, best response detection is obtained when noise-weighted averaging is applied on single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows to record near-optimal signal-to-noise ratios for 80% of subjects.
Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method
NASA Astrophysics Data System (ADS)
Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2009-10-01
Spin contamination errors of a broken-symmetry (BS) method in optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For the purpose, the optimized geometry by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and the AP methods is about 10-20° in the HCH angle. In order to examine the basis set dependency of the spin contamination error, calculated results by STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method are classified into two types. Calculated energy splitting values between the triplet and the singlet states (ST gap) indicate that the contamination of the stable triplet state makes the BS singlet solution stable and the ST gap becomes small. The energy order of the spin contamination error in the ST gap is estimated to be 10⁻¹ eV.
NASA Astrophysics Data System (ADS)
Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay
2015-01-01
In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which is used to control and terminate the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides much shorter charging time, narrower temperature variation, and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.
A brief review on key technologies in the battery management system of electric vehicles
NASA Astrophysics Data System (ADS)
Liu, Kailong; Li, Kang; Peng, Qiao; Zhang, Cheng
2018-04-01
Batteries have been widely applied in many high-power applications, such as electric vehicles (EVs) and hybrid electric vehicles, where a suitable battery management system (BMS) is vital in ensuring safe and reliable operation of batteries. This paper aims to give a brief review on several key technologies of BMS, including battery modelling, state estimation and battery charging. First, popular battery types used in EVs are surveyed, followed by the introduction of key technologies used in BMS. Various battery models, including the electric model, thermal model and coupled electro-thermal model are reviewed. Then, battery state estimations for the state of charge, state of health and internal temperature are comprehensively surveyed. Finally, several key and traditional battery charging approaches with associated optimization methods are discussed.
Experimental demonstration of real-time adaptive one-qubit quantum-state tomography
NASA Astrophysics Data System (ADS)
Yin, Qi; Li, Li; Xiang, Xiao; Xiang, Guo-Yong; Li, Chuang-Feng; Guo, Guang-Can
2017-01-01
Quantum-state tomography plays a pivotal role in quantum computation and information processing. To improve the accuracy in estimating an unknown state, carefully designed measurement schemes, such as adopting an adaptive strategy, are needed, and these have gained great interest recently. In this work, based on the proposal of Sugiyama et al. [Phys. Rev. A 85, 052107 (2012)], 10.1103/PhysRevA.85.052107, we experimentally realize an adaptive quantum-state tomography for one qubit in an optical system. Since this scheme gives an analytical solution to the optimal measurement basis problem, our experiment is updated in real time and the infidelity between the real state and the estimated state is tracked with the detected photons. We observe an almost 1/N scaling rule of the averaged infidelity against the overall number of photons, N, in our experiment, which outperforms the 1/√N scaling of nonadaptive schemes.
Stochastic Adaptive Estimation and Control.
1994-10-26
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Miranowicz, Adam
2012-02-01
We study state-dependent quantum cloning that can outperform universal cloning (UC). This is possible by using some a priori information on a given quantum state to be cloned. Specifically, we propose a generalization and optical implementation of quantum optimal mirror phase-covariant cloning, which refers to optimal cloning of sets of qubits of known modulus of the expectation value of Pauli's Z operator. Our results can be applied to cloning of an arbitrary mirror-symmetric distribution of qubits on the Bloch sphere including in special cases UC and phase-covariant cloning. We show that the cloning is optimal by adapting our former optimality proof for axisymmetric cloning (Bartkiewicz and Miranowicz 2010 Phys. Rev. A 82 042330). Moreover, we propose an optical realization of the optimal mirror phase-covariant 1→2 cloning of a qubit, for which the mean probability of successful cloning varies from 1/6 to 1/3 depending on prior information on the set of qubits to be cloned. The qubits are represented by polarization states of photons generated by the type-I spontaneous parametric down-conversion. The scheme is based on the interference of two photons on an unbalanced polarization-dependent beam splitter with different splitting ratios for vertical and horizontal polarization components and the additional application of feedforward by means of Pockels cells. The experimental feasibility of the proposed setup is carefully studied including various kinds of imperfections and losses. Moreover, we briefly describe two possible cryptographic applications of the optimal mirror phase-covariant cloning corresponding to state discrimination (or estimation) and secure quantum teleportation.
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore delays its vanishing. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. Besides, the one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, enhances considerably the precision of estimation in comparison with one-qubit estimation.
Local Estimators for Spacecraft Formation Flying
NASA Technical Reports Server (NTRS)
Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh
2011-01-01
A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurement available to each estimator.
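The agreement step among local estimators can be illustrated with a simple average-consensus iteration over a local communication graph. The graph, step size, and local estimates below are toy values, and this sketch stands in for, rather than reproduces, the communicating-Kalman-filter architecture described above.

```python
import numpy as np

def consensus(local_estimates, neighbors, n_iter=50, eps=0.2):
    """Average-consensus iteration: each node repeatedly moves its estimate
    toward the estimates of its neighbors in the local exchange network."""
    x = np.array(local_estimates, dtype=float)
    for _ in range(n_iter):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# ring of four spacecraft, each with a noisy local estimate of a common state
estimates = consensus([1.2, 0.8, 1.1, 0.9],
                      neighbors=[[1, 3], [0, 2], [1, 3], [0, 2]])
```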
A Minimum Fuel Based Estimator for Maneuver and Natural Dynamics Reconstruction
NASA Astrophysics Data System (ADS)
Lubey, D.; Scheeres, D.
2013-09-01
The vast and growing population of objects in Earth orbit (active and defunct spacecraft, orbital debris, etc.) offers many unique challenges when it comes to tracking these objects and associating the resulting observations. Complicating these challenges are the inaccurate natural dynamical models of these objects, the active maneuvers of spacecraft that deviate them from their ballistic trajectories, and the fact that spacecraft are tracked and operated by separate agencies. Maneuver detection and reconstruction algorithms can help with each of these issues by estimating mismodeled and unmodeled dynamics through indirect observation of spacecraft. It also helps to verify the associations made by an object correlation algorithm or aid in making those associations, which is essential when tracking objects in orbit. The algorithm developed in this study applies an Optimal Control Problem (OCP) Distance Metric approach to the problems of Maneuver Reconstruction and Dynamics Estimation. This was first developed by Holzinger, Scheeres, and Alfriend (2011), with a subsequent study by Singh, Horwood, and Poore (2012). This method estimates the minimum fuel control policy rather than the state as a typical Kalman Filter would. This difference ensures that the states are connected through a given dynamical model and allows for automatic covariance manipulation, which can help to prevent filter saturation. Using a string of measurements (either verified or hypothesized to correlate with one another), the algorithm outputs a corresponding string of adjoint and state estimates with associated noise. Post-processing techniques are implemented, which when applied to the adjoint estimates can remove noise and expose unmodeled maneuvers and mismodeled natural dynamics. Specifically, the estimated controls are used to determine spacecraft dependent accelerations (atmospheric drag and solar radiation pressure) using an adapted form of the Optimal Control based natural dynamics estimation scheme developed by Lubey and Scheeres (2012). In order to allow for direct comparison, the estimator developed here was modeled after a typical Kalman Filter. The estimator forces the terminal state to lie on a manifold that satisfies the least squares with a priori information cost function, thus establishing a link with a typical Kalman filter. Terms are collected into a pseudo-Kalman Gain, which creates an equivalent form in the state estimates and covariances between the two estimators. While the two estimators share common roots, the inclusion of control in the Minimum Fuel Estimator gives it special properties. For instance, the inclusion of adjoint noise can help to automatically prevent filter saturation in a manner similar to a State Noise Compensation Algorithm. This property is quite important when considering dynamics mismodeling as filter saturation will cause estimate divergence for mismodeled systems. Additional properties and alternative forms of the estimator are also explored in this study. Several implementations of this estimator are given in this paper. It is applied to LEO, GEO, and GTO orbits with drag and SRP mismodeling. The inclusion of unmodeled maneuvers is also considered. These numerical simulations verify the mathematical properties of this estimator, and demonstrate the advantages that this estimator has over typical Kalman Filters.
Tire-road friction estimation and traction control strategy for motorized electric vehicle.
Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang
2017-01-01
In this paper, an optimal longitudinal slip ratio system for real-time identification of an electric vehicle (EV) with motored wheels is proposed based on the adhesion between tire and road surface. First and foremost, the optimal longitudinal slip ratio for torque control can be identified in real time by calculating the derivative and slip rate of the adhesion coefficient. Secondly, a vehicle speed estimation method is also presented. Thirdly, an ideal vehicle simulation model is proposed to verify the algorithm in simulation, and we find that the slip ratio corresponds to the detection of the adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; meanwhile, it can improve the acceleration stability of an electric vehicle with a traction control system (TCS).
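The detection idea can be sketched as follows: compute the longitudinal slip ratio from wheel and vehicle speeds and watch the slope dμ/dλ of the estimated adhesion coefficient; when the slope approaches zero the adhesion peak is near and traction torque should be limited. The thresholds, the wheel radius, and the finite-difference slope estimate are illustrative assumptions, not the paper's identification law.

```python
def slip_ratio(wheel_speed, vehicle_speed, radius):
    """Longitudinal slip ratio during traction (driven wheel faster than vehicle)."""
    v_wheel = wheel_speed * radius
    return (v_wheel - vehicle_speed) / max(v_wheel, 1e-3)

def at_adhesion_limit(mu_history, lam_history, slope_threshold=0.05):
    """Finite-difference estimate of d(mu)/d(lambda); a small slope means the
    tire-road adhesion curve is near its peak and torque should be limited."""
    if len(mu_history) < 2 or abs(lam_history[-1] - lam_history[-2]) < 1e-6:
        return False
    slope = (mu_history[-1] - mu_history[-2]) / (lam_history[-1] - lam_history[-2])
    return slope < slope_threshold
```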
Conditioning of Model Identification Task in Immune Inspired Optimizer SILO
NASA Astrophysics Data System (ADS)
Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.
2009-10-01
Methods which provide good conditioning of the model identification task in the immune inspired, steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model based optimization algorithm. The first method uses a safe model to assure that the gains of the process's model can be estimated. The second method is responsible for the elimination of potential linear dependences between columns of the observation matrix. Moreover, new results from one SILO implementation in a Polish power plant are presented. They confirm the high efficiency of the presented solution in solving technical problems.
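A small sketch, under the assumption that linear dependence between observation-matrix columns is detected numerically, of one common way to do this with a rank-revealing QR factorization; the tolerance and the toy matrix are hypothetical:

    import numpy as np
    from scipy.linalg import qr

    def well_conditioned_columns(Phi, tol=1e-8):
        """Return indices of columns of the observation matrix Phi that are
        numerically linearly independent, dropping the rest.

        A rank-revealing QR with column pivoting is one simple way to detect
        near-linear dependence before least-squares model identification.
        """
        _, R, piv = qr(Phi, mode='economic', pivoting=True)
        diag = np.abs(np.diag(R))
        rank = int(np.sum(diag > tol * diag[0]))
        return np.sort(piv[:rank])

    Phi = np.column_stack([np.arange(10.0), 2.0 * np.arange(10.0), np.random.rand(10)])
    print(well_conditioned_columns(Phi))   # one of the two collinear columns is dropped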
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
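For a single finite dimensional approximation, the two Riccati equations mentioned above can be solved directly; the sketch below uses SciPy's continuous-time algebraic Riccati solver on an illustrative two-state (single-mode) model to produce the regulator and estimator gains of the approximating LQG compensator (the model matrices and weights are assumed, not from the paper):

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Finite-dimensional (e.g. modal) approximation: x' = A x + B u + w, y = C x + v
    A = np.array([[0.0, 1.0], [-4.0, -0.2]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Qx, Ru = np.diag([10.0, 1.0]), np.array([[1.0]])   # regulator weights
    W,  V  = np.diag([0.1, 0.1]),  np.array([[0.01]])  # process / measurement noise

    # Regulator Riccati equation -> state-feedback gain K
    P = solve_continuous_are(A, B, Qx, Ru)
    K = np.linalg.solve(Ru, B.T @ P)

    # Estimator (dual) Riccati equation -> Kalman gain L
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)

    print("LQR gain K =", K, "\nKalman gain L =", L)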
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
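A toy sketch of the association test described above: minimize the Mahalanobis distance between two parameterized admissible regions and gate the minimum against a chi-square threshold. The region parameterizations, covariances, and false-alarm rate below are made-up illustrative values:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    # Two hypothetical admissible regions, each parameterized by a scalar,
    # mapping into a common 2-D state space with Gaussian uncertainty.
    def region_a(a): return np.array([a, 1.0 + 0.5 * a])
    def region_b(b): return np.array([1.0 - b, 2.0 * b])
    Pa = np.diag([0.2, 0.3]); Pb = np.diag([0.25, 0.25])

    def mahalanobis_sq(p):
        a, b = p
        d = region_a(a) - region_b(b)
        return d @ np.linalg.inv(Pa + Pb) @ d

    res = minimize(mahalanobis_sq, x0=[0.0, 0.0], method='Nelder-Mead')
    gate = chi2.ppf(0.999, df=2)          # 0.1% false-alarm gate (assumed)
    print("associated" if res.fun < gate else "not associated", res.fun)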
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model is presented that accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
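A compact illustration of gradient-free optimization of restricted output-feedback gains: the LQ cost of a static output-feedback law is evaluated through a Lyapunov equation and minimized with SciPy's Powell method (a modern relative of the Zangwill/Powell procedure cited above). The plant, weights, and starting gain are illustrative assumptions:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.linalg import solve_continuous_lyapunov

    # Plant x' = A x + B u, measured output y = C x, restricted control u = -F y.
    A = np.array([[0.0, 1.0], [2.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R, X0 = np.eye(2), np.eye(1), np.eye(2)   # LQ weights and E[x0 x0^T]

    def lq_cost(f):
        F = f.reshape(1, 1)
        Acl = A - B @ F @ C
        if np.max(np.linalg.eigvals(Acl).real) >= 0:
            return 1e6                            # penalize unstable gains
        # J = trace(P X0) with Acl^T P + P Acl + Q + (FC)^T R (FC) = 0
        P = solve_continuous_lyapunov(Acl.T, -(Q + (F @ C).T @ R @ (F @ C)))
        return np.trace(P @ X0)

    res = minimize(lq_cost, x0=np.array([3.0]), method='Powell')  # stabilizing start
    print("output-feedback gain:", res.x, "cost:", res.fun)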
NASA Astrophysics Data System (ADS)
Akmaev, R. A.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, as, for example, the conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
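The OI analysis step described here (least squares with a priori information) reduces to a short linear-algebra computation; a minimal sketch with a two-point toy grid:

    import numpy as np

    def oi_analysis(xb, B, y, H, R):
        """Optimal interpolation / least squares with a priori information:
        x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^{-1}.
        Minimum-variance estimate given background (a priori) statistics B."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        xa = xb + K @ (y - H @ xb)
        A = (np.eye(len(xb)) - K @ H) @ B      # analysis-error covariance
        return xa, A

    # Two grid points, one observation of the first point (toy example).
    xb = np.array([1.0, 2.0])
    B  = np.array([[1.0, 0.6], [0.6, 1.0]])   # background correlation spreads the update
    y  = np.array([1.5]); H = np.array([[1.0, 0.0]]); R = np.array([[0.25]])
    print(oi_analysis(xb, B, y, H, R))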
Optimizing Distribution of Pandemic Influenza Antiviral Drugs
Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.
2015-01-01
We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
Large-angle slewing maneuvers for flexible spacecraft
NASA Technical Reports Server (NTRS)
Chun, Hon M.; Turner, James D.
1988-01-01
A new class of closed-form solutions for finite-time linear-quadratic optimal control problems is presented. The solutions involve Potter's solution for the differential matrix Riccati equation, which assumes the form of a steady-state plus transient term. Illustrative examples are presented which show that the new solutions are more computationally efficient than alternative solutions based on the state transition matrix. As an application of the closed-form solutions, the neighboring extremal path problem is presented for a spacecraft retargeting maneuver where a perturbed plant with off-nominal boundary conditions now follows a neighboring optimal trajectory. The perturbation feedback approach is further applied to three-dimensional slewing maneuvers of large flexible spacecraft. For this problem, the nominal solution is the optimal three-dimensional rigid body slew. The perturbation feedback then limits the deviations from this nominal solution due to the flexible body effects. The use of frequency shaping in both the nominal and perturbation feedback formulations reduces the excitation of high-frequency unmodeled modes. A modified Kalman filter is presented for estimating the plant states.
Quantum demultiplexer of quantum parameter-estimation information in quantum networks
NASA Astrophysics Data System (ADS)
Xie, Yanqing; Huang, Yumeng; Wu, Yinzhong; Hao, Xiang
2018-05-01
The quantum demultiplexer is constructed by a series of unitary operators and multipartite entangled states. It is used to realize information broadcasting from an input node to multiple output nodes in quantum networks. The scheme of quantum network communication with respect to phase estimation is put forward through the demultiplexer subjected to amplitude damping noises. Generalized partial measurements can be applied to protect the transfer efficiency from environmental noise in the protocol. It is found that there are some optimal coherent states which can be prepared to enhance the transmission of phase estimation. The dynamics of state fidelity and quantum Fisher information are investigated to evaluate the feasibility of the network communication. While the state fidelity deteriorates rapidly, the quantum Fisher information can be enhanced to a maximum value and then decreases slowly. The memory effect of the environment induces the oscillations of fidelity and quantum Fisher information. The adjustment of the strength of partial measurements is helpful to increase quantum Fisher information.
Minimax estimation of qubit states with Bures risk
NASA Astrophysics Data System (ADS)
Acharya, Anirudh; Guţă, Mădălin
2018-04-01
The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/\sqrt{n} for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques which allows us to derive upper and lower bounds to its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n^{-1}\log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the 'standard' rate n^{-1}.
Weak-value amplification as an optimal metrological protocol
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Escher, B. M.; de Matos Filho, R. L.; Zagury, N.; Davidovich, L.
2015-06-01
The implementation of weak-value amplification requires the pre- and postselection of states of a quantum system, followed by the observation of the response of the meter, which interacts weakly with the system. Data acquisition from the meter is conditioned to successful postselection events. Here we derive an optimal postselection procedure for estimating the coupling constant between system and meter and show that it leads both to weak-value amplification and to the saturation of the quantum Fisher information, under conditions fulfilled by all previously reported experiments on the amplification of weak signals. For most of the preselected states, full information on the coupling constant can be extracted from the meter data set alone, while for a small fraction of the space of preselected states, it must be obtained from the postselection statistics.
Variational optical flow estimation based on stick tensor voting.
Rashwan, Hatem A; Garcia, Miguel A; Puig, Domenec
2013-07-01
Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.
Some Considerations on the Problem of Non-Steady State Traffic Flow Optimization
DOT National Transportation Integrated Search
2007-01-01
Poor traffic signal timing accounts for an estimated 10 percent of all traffic delay, about 300 million vehicle-hours, on major roadways alone. Americans agree that this is a problem: one U.S. Department of Transportation (DOT) survey found tha...
The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
NASA Astrophysics Data System (ADS)
Iny, David
2007-09-01
This paper addresses the out-of-sequence measurement (OOSM) problem associated with multiple platform tracking systems. The problem arises due to different transmission delays in communication of detection reports across platforms. Much of the literature focuses on the improvement to the state estimate by incorporating the OOSM. As the time lag increases, there is diminishing improvement to the state estimate. However, this paper shows that optimal processing of OOSMs may still be beneficial by improving data association as part of a multi-target tracker. This paper derives exact multi-lag algorithms with the property that the standard log likelihood track scoring is independent of the order in which the measurements are processed. The orthogonality principle is applied to generalize the method of Bar-Shalom in deriving the exact A1 algorithm for 1-lag estimation. Theory is also developed for optimal filtering of time averaged measurements and measurements correlated through periodic updates of a target aim-point. An alternative derivation of the multi-lag algorithms is also achieved using an efficient variant of the augmented state Kalman filter (AS-KF). This results in practical and reasonably efficient multi-lag algorithms. Results are compared to a well-known ad hoc algorithm for incorporating OOSMs. Finally, the paper presents some simulated multi-target multi-static scenarios where there is a benefit to processing the data out of sequence in order to improve pruning efficiency.
Uninformative Prior Multiple Target Tracking Using Evidential Particle Filters
NASA Astrophysics Data System (ADS)
Worthy, J. L., III; Holzinger, M. J.
Space situational awareness requires the ability to initialize state estimation from short measurements and the reliable association of observations to support the characterization of the space environment. The electro-optical systems used to observe space objects cannot fully characterize the state of an object given a short, unobservable sequence of measurements. Further, it is difficult to associate these short-arc measurements if many such measurements are generated through the observation of a cluster of satellites, debris from a satellite break-up, or from spurious detections of an object. An optimization based, probabilistic short-arc observation association approach coupled with a Dempster-Shafer based evidential particle filter in a multiple target tracking framework is developed and proposed to address these problems. The optimization based approach is shown in literature to be computationally efficient and can produce probabilities of association, state estimates, and covariances while accounting for systemic errors. Rigorous application of Dempster-Shafer theory is shown to be effective at enabling ignorance to be properly accounted for in estimation by augmenting probability with belief and plausibility. The proposed multiple hypothesis framework will use a non-exclusive hypothesis formulation of Dempster-Shafer theory to assign belief mass to candidate association pairs and generate tracks based on the belief to plausibility ratio. The proposed algorithm is demonstrated using simulated observations of a GEO satellite breakup scenario.
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
NASA Astrophysics Data System (ADS)
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
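The "spikes as particles" idea is essentially a Monte Carlo (particle-filter) approximation of the Bayesian posterior; the sketch below is a generic bootstrap particle filter for a one-dimensional random-walk target, not the paper's spiking-neuron implementation, and all noise levels are assumed:

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps = 500, 50
    x_true = 0.0
    particles = rng.normal(0.0, 1.0, n_particles)   # prior samples ("spikes")
    weights = np.full(n_particles, 1.0 / n_particles)

    for _ in range(n_steps):
        x_true += rng.normal(0.0, 0.3)              # target random walk
        z = x_true + rng.normal(0.0, 0.5)           # noisy sensor measurement
        # Propagate particles through the dynamics, weight by the likelihood
        particles += rng.normal(0.0, 0.3, n_particles)
        weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
        weights /= weights.sum()
        # Resample: the particle positions approximate the Bayesian posterior
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

    print("estimate:", particles.mean(), "truth:", x_true)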
Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data
NASA Astrophysics Data System (ADS)
Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad
2018-01-01
The most productive scale size (MPSS) is a measure that states how resources should be organized and utilized to achieve optimal results. The most productive scale size (MPSS) can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate the most productive scale size (MPSS), each decision making unit (DMU) should pay attention to its level of input-output efficiency; with the data envelopment analysis (DEA) method, a decision making unit (DMU) can identify units used as references that help to find the causes of and solutions to inefficiencies and can optimize productivity, which is the main advantage in managerial applications. Therefore, data envelopment analysis (DEA) is chosen for estimating the most productive scale size (MPSS), focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating the most productive scale size (MPSS) with integer-valued input data in the data envelopment analysis (DEA) method.
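As a rough sketch of the building block involved, the input-oriented CCR envelopment model can be written as a small linear program per DMU; the data below are toy integer values and the sketch ignores the integer-valued refinements and the BCC/MPSS comparison developed in the paper:

    import numpy as np
    from scipy.optimize import linprog

    # Toy data: one input and one output per DMU (integer-valued).
    X = np.array([2, 4, 8, 6], dtype=float)     # inputs
    Y = np.array([1, 3, 5, 2], dtype=float)     # outputs

    def ccr_efficiency(j0):
        """Input-oriented CCR envelopment LP for DMU j0:
           min theta  s.t.  sum_j lam_j*X_j <= theta*X_j0,
                            sum_j lam_j*Y_j >= Y_j0,   lam >= 0."""
        n = len(X)
        c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1..lam_n]
        A_ub = np.vstack([np.r_[-X[j0], X],         #  sum lam X - theta X_j0 <= 0
                          np.r_[0.0, -Y]])          # -sum lam Y <= -Y_j0
        b_ub = np.array([0.0, -Y[j0]])
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.fun                              # efficiency score in (0, 1]

    print([round(ccr_efficiency(j), 3) for j in range(len(X))])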
Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.
2017-01-01
This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494
Shanechi, Maryam M.; Williams, Ziv M.; Wornell, Gregory W.; Hu, Rollin C.; Powers, Marissa; Brown, Emery N.
2013-01-01
Real-time brain-machine interfaces (BMI) have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system. PMID:23593130
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; ...
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
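A minimal illustration of the output-error idea mentioned above: under Gaussian measurement noise, minimizing the sum of squared output residuals of a simulated model yields the maximum-likelihood parameter estimates. The first-order model, inputs, and noise levels are illustrative assumptions:

    import numpy as np
    from scipy.optimize import least_squares

    # Output-error estimation for a first-order system  x' = -a x + b u,  y = x + noise.
    dt, n = 0.05, 200
    u = np.ones(n)

    def simulate(a, b):
        x, ys = 0.0, []
        for k in range(n):
            x += dt * (-a * x + b * u[k])   # Euler integration of the model
            ys.append(x)
        return np.array(ys)

    rng = np.random.default_rng(1)
    y_meas = simulate(1.5, 2.0) + rng.normal(0.0, 0.02, n)   # data with true a=1.5, b=2

    # Minimize the output residuals (the output-error / maximum-likelihood estimate
    # under Gaussian measurement noise).
    res = least_squares(lambda p: simulate(*p) - y_meas, x0=[1.0, 1.0])
    print("estimated a, b:", res.x)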
Channel Simulation in Quantum Metrology
NASA Astrophysics Data System (ADS)
Laurenza, Riccardo; Lupo, Cosmo; Spedalieri, Gaetana; Braunstein, Samuel L.; Pirandola, Stefano
2018-04-01
In this review we discuss how channel simulation can be used to simplify the most general protocols of quantum parameter estimation, where unlimited entanglement and adaptive joint operations may be employed. Whenever the unknown parameter encoded in a quantum channel is completely transferred in an environmental program state simulating the channel, the optimal adaptive estimation cannot beat the standard quantum limit. In this setting, we elucidate the crucial role of quantum teleportation as a primitive operation which allows one to completely reduce adaptive protocols over suitable teleportation-covariant channels and derive matching upper and lower bounds for parameter estimation. For these channels, we may express the quantum Cramér-Rao bound directly in terms of their Choi matrices. Our review considers both discrete- and continuous-variable systems, also presenting some new results for bosonic Gaussian channels using an alternative sub-optimal simulation. It is an open problem to design simulations for quantum channels that achieve the Heisenberg limit.
Continuous-variable quantum probes for structured environments
NASA Astrophysics Data System (ADS)
Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.
2018-01-01
We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitable, mild, conditions. Finally, upon exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, able to drive the probe towards the optimal working regime.
Baseline estimation in flame's spectra by using neural networks and robust statistics
NASA Astrophysics Data System (ADS)
Garces, Hugo; Arias, Luis; Rojas, Alejandro
2014-09-01
This work presents a baseline estimation method for flame spectra based on an artificial intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically discriminate measured wavelengths belonging to the continuous feature for model adaptation, thereby overcoming the restriction of having to measure the target baseline for training. The main contributions of this paper are: to analyze a flame spectra database, computing Jolliffe statistics from Principal Component Analysis to detect wavelengths that are not correlated with most of the measured data and therefore correspond to the baseline; to systematically determine the optimal number of neurons in hidden layers based on Akaike's Final Prediction Error; to estimate the baseline over the full wavelength range of the sampled measured spectra; and to train an artificial intelligence structure, a neural network, which allows generalizing the relation between measured and baseline spectra. The main application of our research is to compute total radiation with baseline information, allowing diagnosis of the combustion process state for optimization in early stages.
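A rough sketch of the PCA-based screening step: wavelengths whose contribution to the leading principal components is small are flagged as baseline-dominated. The synthetic spectra, the number of retained components, and the percentile threshold are assumptions standing in for the paper's Jolliffe statistics and neural-network stages:

    import numpy as np

    # Rows: measured flame spectra, columns: wavelengths (synthetic stand-in data).
    rng = np.random.default_rng(2)
    spectra = rng.normal(0.0, 0.05, (200, 300))
    spectra[:, 80:120] += np.outer(rng.random(200), np.hanning(40)) * 5.0  # emission band

    Xc = spectra - spectra.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 3                                    # retained principal components (assumed)
    # Contribution of each wavelength to the retained PCs (a Jolliffe-style score):
    score = (Vt[:k] ** 2 * (s[:k, None] ** 2)).sum(axis=0)
    baseline_idx = np.where(score < np.percentile(score, 50))[0]   # weakly correlated channels
    print(len(baseline_idx), "wavelengths flagged as baseline-dominated")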
Coal Thickness Gauging Using Elastic Waves
NASA Technical Reports Server (NTRS)
Nazarian, Soheil; Bar-Cohen, Yoseph
1999-01-01
The efforts of a mining crew can be optimized, if the thickness of the coal layers to be excavated is known before excavation. Wave propagation techniques can be used to estimate the thickness of the layer based on the contrast in the wave velocity between coal and rock beyond it. Another advantage of repeated wave measurement is that the state of the stress within the mine can be estimated. The state of the stress can be used in many safety-related decisions made during the operation of the mine. Given these two advantages, a study was carried out to determine the feasibility of the methodology. The results are presented herein.
A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Grunberg, D. B.
1986-01-01
A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.
Towards using musculoskeletal models for intelligent control of physically assistive robots.
Carmichael, Marc G; Liu, Dikai
2011-01-01
With the increasing number of robots being developed to physically assist humans in tasks such as rehabilitation and assistive living, more intelligent and personalized control systems are desired. In this paper we propose the use of a musculoskeletal model to estimate the strength of the user, information which can be utilized to improve control schemes in which robots physically assist humans. An optimization model is developed utilizing a musculoskeletal model to estimate human strength in a specified dynamic state. Results of this optimization as well as methods of using it to observe muscle-based weaknesses in task space are presented. Lastly, potential methods and problems in incorporating this model into a robot control system are discussed.
Study on UKF based federal integrated navigation for high dynamic aviation
NASA Astrophysics Data System (ADS)
Zhao, Gang; Shao, Wei; Chen, Kai; Yan, Jie
2011-08-01
High dynamic aircraft are very attractive new-generation vehicles that provide near-space aviation over a large flight envelope in both speed and altitude, for example hypersonic vehicles. The complex flight environments of high dynamic vehicles require a navigation scheme with high accuracy and stability. Since the conventional Strapdown Inertial Navigation System (SINS) and Global Position System (GPS) federated integrated scheme based on the EKF (Extended Kalman Filter) fails in GPS signal blackout situations caused by high-speed flight, a new high-precision and high-stability integrated navigation approach is presented in this paper, in which SINS, GPS, and a Celestial Navigation System (CNS) are combined in a federated information fusion configuration based on the nonlinear Unscented Kalman Filter (UKF) algorithm. Firstly, the new integrated system state error is modeled. According to this error model, the SINS is used as the mathematical platform for the navigation solution. The SINS combined with GPS constitutes one error estimation filter subsystem based on the UKF to obtain a local optimal estimate, and the SINS combined with CNS constitutes another error estimation subsystem. A non-reset federated configuration filter based on partial information is proposed to fuse the two local optimal estimates into a global optimal error estimate, and the global optimal estimate is used to correct the SINS navigation solution. The χ2 fault detection method is used to detect subsystem faults, and a faulty subsystem is isolated during the fault interval to protect the system from divergence. The integrated system takes advantage of SINS, GPS, and CNS, yielding an immense improvement for high-accuracy and reliable high dynamic navigation applications. Simulation results show that the federated fusion of GPS and CNS to revise the SINS solution is reasonable and effective, with good estimation performance that satisfies the demands of high dynamic flight navigation. The UKF-based integrated scheme is superior to the EKF-based one, with smaller estimation error and a faster convergence rate.
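The no-reset federated fusion of the two local (SINS/GPS and SINS/CNS) error estimates amounts to an information-weighted combination; a minimal sketch with toy numbers (the state dimension and covariances are illustrative, not from the paper):

    import numpy as np

    def federated_fuse(x1, P1, x2, P2):
        """Information-weighted fusion of two local (sub-filter) estimates,
        as in a no-reset federated filter:
        Pg^-1 = P1^-1 + P2^-1,  xg = Pg (P1^-1 x1 + P2^-1 x2)."""
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        Pg = np.linalg.inv(I1 + I2)
        xg = Pg @ (I1 @ x1 + I2 @ x2)
        return xg, Pg

    # SINS/GPS and SINS/CNS local estimates of a 2-D error state (toy values).
    x_gps, P_gps = np.array([0.10, -0.05]), np.diag([0.04, 0.09])
    x_cns, P_cns = np.array([0.07,  0.02]), np.diag([0.16, 0.01])
    print(federated_fuse(x_gps, P_gps, x_cns, P_cns))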
A translational platform for prototyping closed-loop neuromodulation systems
Afshar, Pedram; Khambhati, Ankit; Stanslaski, Scott; Carlson, David; Jensen, Randy; Linde, Dave; Dani, Siddharth; Lazarewicz, Maciej; Cong, Peng; Giftakis, Jon; Stypulkowski, Paul; Denison, Tim
2013-01-01
While modulating neural activity through stimulation is an effective treatment for neurological diseases such as Parkinson's disease and essential tremor, an opportunity for improving neuromodulation therapy remains in automatically adjusting therapy to continuously optimize patient outcomes. Practical issues associated with achieving this include the paucity of human data related to disease states, poorly validated estimators of patient state, and unknown dynamic mappings of optimal stimulation parameters based on estimated states. To overcome these challenges, we present an investigational platform including: an implanted sensing and stimulation device to collect data and run automated closed-loop algorithms; an external tool to prototype classifier and control-policy algorithms; and real-time telemetry to update the implanted device firmware and monitor its state. The prototyping system was demonstrated in a chronic large animal model studying hippocampal dynamics. We used the platform to find biomarkers of the observed states and transfer functions of different stimulation amplitudes. Data showed that moderate levels of stimulation suppress hippocampal beta activity, while high levels of stimulation produce seizure-like after-discharge activity. The biomarker and transfer function observations were mapped into classifier and control-policy algorithms, which were downloaded to the implanted device to continuously titrate stimulation amplitude for the desired network effect. The platform is designed to be a flexible prototyping tool and could be used to develop improved mechanistic models and automated closed-loop systems for a variety of neurological disorders. PMID:23346048
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors that are considered to optimize these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost. The cost estimation model developed allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation calculation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several solutions in order to obtain feasible optimal ones. The GA evaluates possible solutions based on cost, cycle time, reworkability, and rework benefit. Because this is a multi-objective optimization problem, it provides several possible solutions. The solutions are presented as chromosomes that clearly state the number and location of the rework stations. The user analyzes these solutions and selects one by deciding which of the four factors considered is most important, depending on the product being manufactured or the company's objective. The major contribution of this study is to provide the user with a methodology to identify an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time, and maximize reworkability and rework benefit.
Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark
2014-01-01
In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
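A minimal sketch of the GA-tuned Kalman filter idea: a scalar random-walk filter's RMS error serves as the fitness of candidate (Q, R) pairs, and a small real-coded genetic algorithm searches for the best pair. The synthetic data, population size, and operators are illustrative assumptions, not the paper's MATLAB implementation:

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic 1-D battery-like truth and noisy measurements.
    n = 300
    truth = np.cumsum(rng.normal(0.0, 0.05, n)) + 10.0
    meas = truth + rng.normal(0.0, 0.3, n)

    def kf_rms(q, r):
        """RMS estimation error of a scalar random-walk Kalman filter with
        process noise q and measurement noise r (the two tuned parameters)."""
        x, p, err = meas[0], 1.0, []
        for z, t in zip(meas, truth):
            p += q
            k = p / (p + r)
            x += k * (z - x)
            p *= (1.0 - k)
            err.append(x - t)
        return float(np.sqrt(np.mean(np.square(err))))

    # Minimal real-coded GA: tournament selection, arithmetic crossover, mutation, elitism.
    pop = rng.uniform([1e-4, 1e-3], [1.0, 1.0], (30, 2))
    for _ in range(40):
        fit = np.array([kf_rms(*ind) for ind in pop])
        elite = pop[fit.argmin()].copy()
        idx = [min(rng.integers(0, 30, 2), key=lambda i: fit[i]) for _ in range(30)]
        parents = pop[idx]
        alpha = rng.random((30, 1))
        children = alpha * parents + (1.0 - alpha) * parents[rng.permutation(30)]
        children *= np.exp(rng.normal(0.0, 0.1, (30, 2)))    # log-normal mutation
        pop = np.clip(children, 1e-6, 2.0)
        pop[0] = elite                                        # keep the best individual

    best = min(pop, key=lambda ind: kf_rms(*ind))
    print("tuned q, r:", best, "RMS:", kf_rms(*best))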
On sequential data assimilation for scalar macroscopic traffic flow models
NASA Astrophysics Data System (ADS)
Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel
2012-09-01
We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state related to the specific nonlinearity and non-differentiability inherent to macroscopic traffic flow models are investigated, derived analytically and analyzed. We show that nonlinear dynamics, by creating discontinuities in the traffic state, affect the performances of classical filters and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of traffic dynamics around stationary shock waves is also proved and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are explicitly studied for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available on-line at http://traffic.berkeley.edu for facilitating further efforts on this topic and fostering reproducible research.
Distortion outage minimization in Nakagami fading using limited feedback
NASA Astrophysics Data System (ADS)
Wang, Chih-Hong; Dey, Subhrakanti
2011-12-01
We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases
NASA Astrophysics Data System (ADS)
Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto
2017-09-01
A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
Reaching the Quantum Cramér-Rao Bound for Transmission Measurements
NASA Astrophysics Data System (ADS)
Woodworth, Timothy; Chan, Kam Wai Clifford; Marino, Alberto
2017-04-01
The quantum Cramér-Rao bound (QCRB) is commonly used to quantify the lower bound for the uncertainty in the estimation of a given parameter. Here, we calculate the QCRB for transmission measurements of an optical system probed by a beam of light. Estimating the transmission of an optical element is important as it is required for the calibration of optimal states for interferometers, characterization of high efficiency photodetectors, or as part of other measurements, such as those in plasmonic sensors or in ellipsometry. We use a beam splitter model for the losses introduced by the optical system to calculate the QCRB for different input states. We compare the bound for a coherent state, a two-mode squeezed-state (TMSS), a single-mode squeezed-state (SMSS), and a Fock state and show that it is possible to obtain an ultimate lower bound, regardless of the state used to probe the system. We prove that the Fock state gives the lowest possible uncertainty in estimating the transmission for any state and demonstrate that the TMSS and SMSS approach this ultimate bound for large levels of squeezing. Finally, we show that a simple measurement strategy for the TMSS, namely an intensity difference measurement, is able to saturate the QCRB. Work supported by the W.M. Keck Foundation.
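As a rough numerical illustration of why Fock-state probes set the ultimate bound, the sketch below compares two commonly quoted single-probe transmission-estimation bounds: Var(tau) >= tau/n_mean for a coherent state and Var(tau) >= tau(1-tau)/N for an N-photon Fock state. These closed forms are standard results quoted here from memory, not taken from the abstract, so treat them as an assumption.

import numpy as np

def qcrb_coherent(tau, n_mean):
    # commonly quoted single-probe bound for a coherent state of mean photon number n_mean
    return tau / n_mean

def qcrb_fock(tau, n):
    # commonly quoted single-probe bound for an n-photon Fock state (binomial loss statistics)
    return tau * (1.0 - tau) / n

tau = 0.9      # transmission to be estimated
n = 100        # photons per probe
print("coherent-state bound :", qcrb_coherent(tau, n))
print("Fock-state bound     :", qcrb_fock(tau, n))
print("ratio (Fock/coherent):", qcrb_fock(tau, n) / qcrb_coherent(tau, n))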
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since large numbers of lithium-ion cells are assembled into packs and batteries are complex electrochemical devices, their monitoring and safety are key issues for applications of battery technology. Accurate estimation of the remaining battery capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety over its service life. Remaining capacity estimation comprises estimation of the state-of-charge (SOC) and the state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimates of both SOC and SOE. For SOC estimation, an n-order RC equivalent circuit model combined with an electrochemical model is employed to improve the accuracy of voltage prediction. For SOE estimation, a sliding-window neural network model is proposed to capture the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic current profiles are performed on commercial 1665130-type lithium-ion batteries. The results show that the proposed method yields accurate and robust estimates.
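A common concrete realization of equivalent-circuit SOC estimation is an extended Kalman filter on a first-order RC model. The sketch below uses assumed cell parameters and an invented open-circuit-voltage curve (not the paper's n-order model) to show the basic predict/update cycle driven by current and terminal-voltage samples.

import numpy as np

# Illustrative cell parameters (assumed, not from the paper)
Q_cap = 2.0 * 3600            # capacity [A*s]
R0, R1, C1 = 0.01, 0.015, 2000.0
dt = 1.0

def ocv(soc):                 # assumed open-circuit-voltage curve
    return 3.0 + 1.2 * soc - 0.5 * soc**2 + 0.5 * soc**3

def docv(soc):                # its derivative, used for EKF linearization
    return 1.2 - 1.0 * soc + 1.5 * soc**2

a11 = np.exp(-dt / (R1 * C1))
A = np.array([[1.0, 0.0], [0.0, a11]])
Qn = np.diag([1e-7, 1e-6])    # process noise (assumed)
Rn = np.array([[1e-3]])       # voltage measurement noise (assumed)

def ekf_step(x, P, current, v_meas):
    # predict: coulomb counting for SOC, first-order RC polarization voltage
    x_pred = np.array([x[0] - current * dt / Q_cap,
                       a11 * x[1] + R1 * (1 - a11) * current])
    P_pred = A @ P @ A.T + Qn
    # update with the terminal-voltage measurement
    v_pred = ocv(x_pred[0]) - x_pred[1] - R0 * current
    H = np.array([[docv(x_pred[0]), -1.0]])
    S = H @ P_pred @ H.T + Rn
    K = (P_pred @ H.T) / S
    x_new = x_pred + (K * (v_meas - v_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Minimal usage with a constant discharge current and a synthetic voltage reading
x, P = np.array([0.9, 0.0]), np.diag([1e-2, 1e-3])
for k in range(10):
    x, P = ekf_step(x, P, current=1.0, v_meas=ocv(0.9) - 0.02)
print("estimated SOC:", x[0])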
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, P.; Bhattacharyya, D.; Turton, R.
2012-01-01
Future integrated gasification combined cycle (IGCC) power plants with CO{sub 2} capture will face stricter operational and environmental constraints. Accurate values of relevant states/outputs/disturbances are needed to satisfy these constraints and to maximize the operational efficiency. Unfortunately, a number of these process variables cannot be measured, while others can be measured but have low precision, reliability, or signal-to-noise ratio. In this work, a sensor placement (SP) algorithm is developed for optimal selection of sensor location, number, and type that can maximize the plant efficiency and result in a desired precision of the relevant measured/unmeasured states. The SP algorithm is developed for a selective, dual-stage Selexol-based acid gas removal (AGR) unit for an IGCC plant with pre-combustion CO{sub 2} capture. A comprehensive nonlinear dynamic model of the AGR unit is developed in Aspen Plus Dynamics® (APD) and used to generate a linear state-space model that is used in the SP algorithm. The SP algorithm is developed with the assumption that an optimal Kalman filter will be implemented in the plant for state and disturbance estimation, assuming steady-state Kalman filtering and steady-state operation of the plant. The control system is considered to operate based on the estimated states, thereby capturing the effects of the SP algorithm on the overall plant efficiency. The optimization problem is solved by a Genetic Algorithm (GA) considering both linear and nonlinear equality and inequality constraints. Due to the very large number of candidate sensor sets and the long time needed to solve the constrained optimization problem, which includes more than 1000 states, the solution of this problem is computationally expensive. To reduce the computation time, parallel computing is performed using the Distributed Computing Server (DCS®) and the Parallel Computing® toolbox from Mathworks®. In this presentation, we share our experience in setting up parallel computing with the GA in the MATLAB® environment and present the overall approach for achieving higher computational efficiency in this framework.
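The core fitness evaluation in such a sensor placement search can be illustrated with a toy linear plant: for each candidate sensor subset, solve the steady-state Kalman (dual Riccati) equation and score the subset by the trace of the posterior error covariance. The sketch below uses exhaustive enumeration over a tiny candidate set as a stand-in for the GA; all matrices and noise levels are invented for illustration and have nothing to do with the AGR model.

import numpy as np
from itertools import combinations
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)

# Small illustrative stable linear plant (assumed numbers)
n, m_all = 6, 6
A = 0.95 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
C_all = np.eye(m_all)                      # one candidate sensor per state
Q = 1e-3 * np.eye(n)
sensor_noise = np.array([1e-2, 5e-2, 1e-2, 1e-1, 2e-2, 5e-3])

def fitness(subset):
    """Trace of the steady-state Kalman posterior covariance for a sensor subset."""
    C = C_all[list(subset), :]
    R = np.diag(sensor_noise[list(subset)])
    P = solve_discrete_are(A.T, C.T, Q, R)          # a priori covariance
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return np.trace((np.eye(n) - K @ C) @ P)

# Exhaustive search over all 3-sensor subsets (a GA would replace this loop at scale)
best = min(combinations(range(m_all), 3), key=fitness)
print("best 3-sensor set:", best, "trace(P) =", fitness(best))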
A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu
We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004) which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.
A tight Cramér-Rao bound for joint parameter estimation with a pure two-mode squeezed probe
NASA Astrophysics Data System (ADS)
Bradshaw, Mark; Assad, Syed M.; Lam, Ping Koy
2017-08-01
We calculate the Holevo Cramér-Rao bound for estimation of the displacement experienced by one mode of a two-mode squeezed vacuum state with squeezing r and find that it is equal to 4 exp(-2r). This equals the sum of the mean squared error obtained from a dual homodyne measurement, indicating that the bound is tight and that the dual homodyne measurement is optimal.
2010-03-01
uses all available resources in some optimized manner. By further exploiting the design flexibility and computational efficiency of Orthogonal Frequency...in the following sections. 3.2.1 Estimation of PU Signal Statistics. The Estimate PU Signal Statistics function of Fig 3.4 is used to compute the...consecutive PU transmissions, and 4) the probability of transitioning from one transmission state to another. These statistics are then used to compute the
Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR
Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington
2014-01-01
This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
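The per-cell tracking step described above can be pictured as an ordinary constant-velocity Kalman filter applied to each occupied polar grid cell. The following sketch shows one predict/update cycle for a single cell; the noise levels and time step are assumed for illustration and are not the authors' parameters.

import numpy as np

dt = 0.1
# Constant-velocity model per grid cell: state [x, y, vx, vy], measurement [x, y]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 1e-2 * np.eye(4)      # process noise (assumed)
R = 5e-2 * np.eye(2)      # LiDAR position noise (assumed)

def track_grid_cell(x, P, z):
    """One predict/update cycle for the motion state of a single grid cell."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                       # innovation from the associated measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P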
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Bootstrapping on Undirected Binary Networks Via Statistical Mechanics
NASA Astrophysics Data System (ADS)
Fushing, Hsieh; Chen, Chen; Liu, Shan-Yu; Koehl, Patrice
2014-09-01
We propose a new method inspired from statistical mechanics for extracting geometric information from undirected binary networks and generating random networks that conform to this geometry. In this method an undirected binary network is perceived as a thermodynamic system with a collection of permuted adjacency matrices as its states. The task of extracting information from the network is then reformulated as a discrete combinatorial optimization problem of searching for its ground state. To solve this problem, we apply multiple ensembles of temperature regulated Markov chains to establish an ultrametric geometry on the network. This geometry is equipped with a tree hierarchy that captures the multiscale community structure of the network. We translate this geometry into a Parisi adjacency matrix, which has a relative low energy level and is in the vicinity of the ground state. The Parisi adjacency matrix is then further optimized by making block permutations subject to the ultrametric geometry. The optimal matrix corresponds to the macrostate of the original network. An ensemble of random networks is then generated such that each of these networks conforms to this macrostate; the corresponding algorithm also provides an estimate of the size of this ensemble. By repeating this procedure at different scales of the ultrametric geometry of the network, it is possible to compute its evolution entropy, i.e. to estimate the evolution of its complexity as we move from a coarse to a fine description of its geometric structure. We demonstrate the performance of this method on simulated as well as real data networks.
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
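A minimal way to reproduce the flavor of this hybrid strategy is to run a population-based global search and then refine the best candidate with a truncated-Newton local solver, with prior information entering as a penalty term. The Python sketch below does this on an invented two-parameter toy model; scipy's differential evolution stands in for the GA, and nothing here comes from the paper's groundwater model.

import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(2)

# Toy "forward model": heads depend log-linearly on two transmissivity-like parameters (assumed)
def heads(params, x):
    t1, t2 = params
    return 10.0 - np.log(t1) * x - np.log(t2) * x**2

x_obs = np.linspace(0.1, 1.0, 12)
true_params = np.array([5.0, 2.0])
h_obs = heads(true_params, x_obs) + 0.05 * rng.standard_normal(x_obs.size)

prior_t1, w_prior = 5.0, 10.0     # prior information on one parameter (assumed)

def objective(p):
    misfit = np.sum((heads(p, x_obs) - h_obs) ** 2)
    return misfit + w_prior * (p[0] - prior_t1) ** 2

bounds = [(0.5, 20.0), (0.5, 20.0)]
# global search (stand-in for the GA), then truncated-Newton refinement
p0 = differential_evolution(objective, bounds, seed=2, tol=1e-8).x
res = minimize(objective, p0, method="TNC", bounds=bounds)
print("estimated parameters:", res.x)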
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
DOT National Transportation Integrated Search
2016-12-01
A reoccurring challenge with increasing fuel prices is optimization of multi- and inter-modal freight transport to move products most efficiently. Projections for the future of agriculture in the United States (U.S.) combined with regional climate mo...
Economic impacts of climate change on water resources in the coterminous United States
A national-scale simulation-optimization model was created to generate estimates of economic impacts associated with changes in water supply and demand as influenced by climate change. Water balances were modeled for the 99 assessment sub-regions, and are presented for 18 water r...
Profitability and risk analysis of soybean planting date by maturity group
USDA-ARS?s Scientific Manuscript database
Limited knowledge exists on estimated soybean yield response to planting date to determine the profit-maximizing planting date for soybean production by maturity group (MG) in the southern United States. Furthermore, determining the optimal MG and crop insurance coverage level that is preferred by r...
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
Sequential quantum cloning under real-life conditions
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Mardoukhi, Yousof
2012-05-01
We consider a sequential implementation of the optimal quantum cloning machine of Gisin and Massar and propose optimization protocols for experimental realization of such a quantum cloner subject to the real-life restrictions. We demonstrate how exploiting the matrix-product state (MPS) formalism and the ensuing variational optimization techniques reveals the intriguing algebraic structure of the Gisin-Massar output of the cloning procedure and brings about significant improvements to the optimality of the sequential cloning prescription of Delgado [Phys. Rev. Lett. 98, 150502 (2007)]. Our numerical results show that the orthodox paradigm of optimal quantum cloning can in practice be realized in a much more economical manner by utilizing a considerably lesser amount of informational and numerical resources than hitherto estimated. Instead of the previously predicted linear scaling of the required ancilla dimension D with the number of qubits n, our recipe allows a realization of such a sequential cloning setup with an experimentally manageable ancilla of dimension at most D=3 up to n=15 qubits. We also address satisfactorily the possibility of providing an optimal range of sequential ancilla-qubit interactions for optimal cloning of arbitrary states under realistic experimental circumstances when only a restricted class of such bipartite interactions can be engineered in practice.
Treatment of systematic errors in land data assimilation systems
NASA Astrophysics Data System (ADS)
Crow, W. T.; Yilmaz, M.
2012-12-01
Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
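The two "sub-optimal" rescaling strategies mentioned can be written down in a few lines. The sketch below applies least-squares regression rescaling and variance matching to synthetic model/observation series with independent errors, purely to illustrate how the two transforms differ; the data and error magnitudes are invented.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic truth plus independent model and observation errors (illustrative)
truth = rng.standard_normal(5000)
model = 0.8 * truth + 0.3 * rng.standard_normal(truth.size) + 0.2
obs   = 1.5 * truth + 0.6 * rng.standard_normal(truth.size) - 0.1

def regression_rescale(obs, model):
    # least-squares regression of the model on the observations
    slope = np.cov(obs, model)[0, 1] / np.var(obs, ddof=1)
    return np.mean(model) + slope * (obs - np.mean(obs))

def variance_match(obs, model):
    # match the mean and variance of the observations to the model climatology
    return np.mean(model) + np.std(model) / np.std(obs) * (obs - np.mean(obs))

for name, rescaled in [("regression", regression_rescale(obs, model)),
                       ("variance matching", variance_match(obs, model))]:
    print(f"{name:18s} mean={rescaled.mean():+.3f} std={rescaled.std():.3f}")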
A global carbon assimilation system based on a dual optimization method
NASA Astrophysics Data System (ADS)
Zheng, H.; Li, Y.; Chen, J. M.; Wang, T.; Huang, Q.; Huang, W. X.; Wang, L. H.; Li, S. M.; Yuan, W. P.; Zheng, X.; Zhang, S. P.; Chen, Z. Q.; Jiang, F.
2015-02-01
Ecological models are effective tools for simulating the distribution of global carbon sources and sinks. However, these models often suffer from substantial biases due to inaccurate simulations of complex ecological processes. We introduce a set of scaling factors (parameters) to an ecological model on the basis of plant functional type (PFT) and latitudes. A global carbon assimilation system (GCAS-DOM) is developed by employing a dual optimization method (DOM) to invert the time-dependent ecological model parameter state and the net carbon flux state simultaneously. We use GCAS-DOM to estimate the global distribution of the CO2 flux on 1° × 1° grid cells for the period from 2001 to 2007. Results show that land and ocean absorb -3.63 ± 0.50 and -1.82 ± 0.16 Pg C yr-1, respectively. North America, Europe and China contribute -0.98 ± 0.15, -0.42 ± 0.08 and -0.20 ± 0.29 Pg C yr-1, respectively. The uncertainties in the flux after optimization by GCAS-DOM have been remarkably reduced by more than 60%. Through parameter optimization, GCAS-DOM can provide improved estimates of the carbon flux for each PFT. Coniferous forest (-0.97 ± 0.27 Pg C yr-1) is the largest contributor to the global carbon sink. Fluxes of once-dominant deciduous forest generated by the Boreal Ecosystems Productivity Simulator (BEPS) are reduced to -0.78 ± 0.23 Pg C yr-1, the third largest carbon sink.
On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos
2015-01-01
For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
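The triggering rule itself is simple to sketch: propagate the covariance open loop and request a measurement only when the DRMS computed from the position block of the covariance exceeds a threshold that grows with the distance to the reference location. The snippet below is an illustrative stand-in, not the authors' implementation; the threshold parameters, motion model and the post-update covariance are all assumed.

import numpy as np

def drms(P):
    """Distance root-mean-squared error from the position block of the covariance."""
    return float(np.sqrt(P[0, 0] + P[1, 1]))

def needs_measurement(P, dist_to_reference, base_threshold=0.5, gain=0.1):
    # Threshold grows with the distance to the reference location (assumed scheduling rule),
    # so far from the target the estimator tolerates larger uncertainty and saves sensor energy.
    return drms(P) > base_threshold + gain * dist_to_reference

# Minimal usage: propagate uncertainty and count how often a measurement is requested
F = np.array([[1, 0, 0.1, 0], [0, 1, 0, 0.1], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
Q = 1e-3 * np.eye(4)
P = np.diag([0.01, 0.01, 0.1, 0.1])
requests = 0
for step in range(200):
    P = F @ P @ F.T + Q                       # open-loop prediction
    if needs_measurement(P, dist_to_reference=5.0):
        requests += 1
        P = np.diag([0.01, 0.01, 0.1, 0.1])   # stand-in for the covariance after an update
print("measurements requested:", requests, "of 200 steps")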
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, W.; Green, J.
2001-01-01
The purpose of this research was to determine the optimal configuration of home power systems relevant to different regions in the United States. The hypothesis was that, regardless of region, the optimal system would be a hybrid incorporating wind technology, versus a photovoltaic hybrid system without the use of wind technology. The method used in this research was HOMER, the Hybrid Optimization Model for Electric Renewables. HOMER is a computer program that optimizes electrical configurations under user-defined circumstances. According to HOMER, the optimal system for the four regions studied (Kansas, Massachusetts, Oregon, and Arizona) was a hybrid incorporating wind technology. The cost differences between these regions, however, were dependent upon regional renewable resources. Future studies will be necessary, as it is difficult to estimate meteorological impacts for other regions.
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical (TC) cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, the FPGA-based unscented Kalman filter is applied to a conductance-based TC neuron model. Since the complexity of the TC neuron model restricts its parallel hardware implementation, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. Beyond estimating the hidden properties of the thalamus and exploring the mechanism of the Parkinsonian state, the proposed method can be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
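The basic mechanism, re-weighting samples drawn from a modified distribution by the likelihood ratio, can be shown on a textbook Gaussian tail probability; this generic example is not the dissertation's neutron-transport model.

import numpy as np

rng = np.random.default_rng(4)
a = 5.0                     # rare-event threshold; P(X > a) is about 2.9e-7 for X ~ N(0, 1)
n = 100_000

# Naive Monte Carlo almost never sees the event
naive = np.mean(rng.standard_normal(n) > a)

# Importance sampling: draw from N(a, 1) and weight by the likelihood ratio
y = rng.normal(loc=a, scale=1.0, size=n)
weights = np.exp(-a * y + 0.5 * a**2)       # N(0,1) density divided by N(a,1) density
is_est = np.mean(weights * (y > a))

print("naive estimate:", naive)
print("IS estimate   :", is_est)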
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
Optimality, stochasticity, and variability in motor behavior
Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel
2008-01-01
Recent theories of motor control have proposed that the nervous system acts as a stochastically optimal controller, i.e. it plans and executes motor behaviors taking into account the nature and statistics of noise. Detrimental effects of noise are converted into a principled way of controlling movements. Attractive aspects of such theories are their ability to explain not only characteristic features of single motor acts, but also statistical properties of repeated actions. Here, we present a critical analysis of stochastic optimality in motor control which reveals several difficulties with this hypothesis. We show that stochastic control may not be necessary to explain the stochastic nature of motor behavior, and we propose an alternative framework, based on the action of a deterministic controller coupled with an optimal state estimator, which relieves drawbacks of stochastic optimality and appropriately explains movement variability. PMID:18202922
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
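For reference, the weight-update and resampling cycle of the SIR particle filter that such a smoother extends can be sketched as follows; the scalar state transition and observation likelihood are toy assumptions, not the authors' soil hydraulic model.

import numpy as np

rng = np.random.default_rng(5)

def sir_step(particles, weights, y_obs, obs_std=0.2, proc_std=0.1):
    """One propagate/weight/resample cycle of a bootstrap (SIR) particle filter."""
    # propagate particles through the (toy) state transition
    particles = 0.9 * particles + proc_std * rng.standard_normal(particles.size)
    # weight by the observation likelihood
    weights = weights * np.exp(-0.5 * ((y_obs - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # systematic resampling when the effective sample size degenerates
    if 1.0 / np.sum(weights**2) < 0.5 * particles.size:
        positions = (rng.random() + np.arange(particles.size)) / particles.size
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), particles.size - 1)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Minimal usage on a single observation
particles = rng.standard_normal(500)
weights = np.full(500, 1.0 / 500)
particles, weights = sir_step(particles, weights, y_obs=0.3)
print("posterior mean estimate:", np.sum(weights * particles))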
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Azad, Nasser L.
2015-06-01
The main proposition of the current investigation is to develop a computational intelligence-based framework which can be used for the real-time estimation of optimum battery state-of-charge (SOC) trajectory in plug-in hybrid electric vehicles (PHEVs). The estimated SOC trajectory can be then employed for an intelligent power management to significantly improve the fuel economy of the vehicle. The devised intelligent SOC trajectory builder takes advantage of the upcoming route information preview to achieve the lowest possible total cost of electricity and fossil fuel. To reduce the complexity of real-time optimization, the authors propose an immune system-based clustering approach which allows categorizing the route information into a predefined number of segments. The intelligent real-time optimizer is also inspired on the basis of interactions in biological immune systems, and is called artificial immune algorithm (AIA). The objective function of the optimizer is derived from a computationally efficient artificial neural network (ANN) which is trained by a database obtained from a high-fidelity model of the vehicle built in the Autonomie software. The simulation results demonstrate that the integration of immune inspired clustering tool, AIA and ANN, will result in a powerful framework which can generate a near global optimum SOC trajectory for the baseline vehicle, that is, the Toyota Prius PHEV. The outcomes of the current investigation prove that by taking advantage of intelligent approaches, it is possible to design a computationally efficient and powerful SOC trajectory builder for the intelligent power management of PHEVs.
Suzuki, Taiji; Aihara, Kazuyuki
2013-09-01
These days prostate cancer is one of the most common types of malignant neoplasm in men. Androgen ablation therapy (hormone therapy) has been shown to be effective for advanced prostate cancer. However, continuous hormone therapy often causes recurrence. This results from the progression of androgen-dependent cancer cells to androgen-independent cancer cells during the continuous hormone therapy. One possible method to prevent the progression to the androgen-independent state is intermittent androgen suppression (IAS) therapy, which ceases dosing intermittently. In this paper, we propose two methods to estimate the dynamics of prostate cancer, and investigate the IAS therapy from the viewpoint of optimality. The two methods that we propose for dynamics estimation are a variational Bayesian method for a piecewise affine (PWA) system and a Gaussian process regression method. We apply the proposed methods to real clinical data and compare their predictive performances. Then, using the estimated dynamics of prostate cancer, we observe how prostate cancer behaves for various dosing schedules. It can be seen that the conventional IAS therapy is a way of imposing high cost for dosing while keeping the prostate cancer in a safe state. We would like to dedicate this paper to the memory of Professor Luigi M. Ricciardi. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Optimal Observations for Variational Data Assimilation
NASA Technical Reports Server (NTRS)
Koehl, Armin; Stammer, Detlef
2003-01-01
An important aspect of ocean state estimation is the design of an observing system that allows the efficient study of climate aspects in the ocean. A solution of the design problem is presented here in terms of optimal observations that emerge as nondimensionalized singular vectors of the modified data resolution matrix. The actual computation is feasible only for scalar quantities in the limit of large observational errors. In the framework of a low-resolution North Atlantic primitive equation model it is demonstrated that such optimal observations, when applied to determining the strength of the volume and heat transport across the Greenland-Scotland ridge, perform significantly better than traditional section data. On seasonal to inter-annual time-scales optimal observations are located primarily along the continental shelf, and information about heat transport, wind stress and stratification is being communicated via boundary waves and advective processes. On time-scales of about a month, sea surface height observations appear to be more efficient in reconstructing the cross-ridge heat transport than hydrographic observations. Optimal observations also provide a tool for understanding how the ocean state is affected by anomalies of integral quantities such as meridional heat transport.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile
2017-03-01
Assessment of the effective connectivity among different brain regions during seizure is a crucial problem in neuroscience today. To address it, a new model inversion framework for brain function imaging is introduced in this manuscript. The framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM) that cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. Once the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Qing, E-mail: qing.gao.chance@gmail.com; Dong, Daoyi, E-mail: daoyidong@gmail.com; Petersen, Ian R., E-mail: i.r.petersen@gmai.com
The purpose of this paper is to solve the fault tolerant filtering and fault detection problem for a class of open quantum systems driven by a continuous-mode bosonic input field in single photon states when the systems are subject to stochastic faults. Optimal estimates of both the system observables and the fault process are simultaneously calculated and characterized by a set of coupled recursive quantum stochastic differential equations.
Economic implications of current systems
NASA Technical Reports Server (NTRS)
Daniel, R. E.; Aster, R. W.
1983-01-01
The primary goals of this study are to estimate the value of R&D in reducing the cost of photovoltaic (PV) metallization systems, and to provide a method for selecting an optimal metallization method for any given PV system. The value-added cost and relative electrical performance of 25 state-of-the-art (SOA) and advanced metallization system techniques are compared.
Finite-Dimensional Representations for Controlled Diffusions with Delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federico, Salvatore, E-mail: salvatore.federico@unimi.it; Tankov, Peter, E-mail: tankov@math.univ-paris-diderot.fr
2015-02-15
We study stochastic delay differential equations (SDDE) where the coefficients depend on the moving averages of the state process. As a first contribution, we provide sufficient conditions under which the solution of the SDDE and a linear path functional of it admit a finite-dimensional Markovian representation. As a second contribution, we show how approximate finite-dimensional Markovian representations may be constructed when these conditions are not satisfied, and provide an estimate of the error corresponding to these approximations. These results are applied to optimal control and optimal stopping problems for stochastic systems with delay.
Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution.
García-Patrón, Raúl; Cerf, Nicolas J
2006-11-10
A fully general approach to the security analysis of continuous-variable quantum key distribution (CV-QKD) is presented. Provided that the quantum channel is estimated via the covariance matrix of the quadratures, Gaussian attacks are shown to be optimal against all collective eavesdropping strategies. The proof is made strikingly simple by combining a physical model of measurement, an entanglement-based description of CV-QKD, and a recent powerful result on the extremality of Gaussian states [M. M. Wolf, Phys. Rev. Lett. 96, 080502 (2006)10.1103/PhysRevLett.96.080502].
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Noise and disturbance of qubit measurements: An information-theoretic characterization
NASA Astrophysics Data System (ADS)
Abbott, Alastair A.; Branciard, Cyril
2016-12-01
Information-theoretic definitions for the noise associated with a quantum measurement and the corresponding disturbance to the state of the system have recently been introduced [F. Buscemi et al., Phys. Rev. Lett. 112, 050401 (2014), 10.1103/PhysRevLett.112.050401]. These definitions are invariant under relabeling of measurement outcomes, and lend themselves readily to the formulation of state-independent uncertainty relations both for the joint estimate of observables (noise-noise relations) and the noise-disturbance tradeoff. Here we derive such relations for incompatible qubit observables, which we prove to be tight in the case of joint estimates, and present progress towards fully characterizing the noise-disturbance tradeoff. In doing so, we show that the set of obtainable noise-noise values for such observables is convex, whereas the conjectured form for the set of obtainable noise-disturbance values is not. Furthermore, projective measurements are not optimal with respect to the joint-measurement noise or noise-disturbance tradeoffs. Interestingly, it seems that four-outcome measurements are needed in the former case, whereas three-outcome measurements are optimal in the latter.
Direct determination of uranium in seawater by laser fluorimetry.
Kumar, Sanjukta A; Shenoy, Niyoti S; Pandey, Shailaja; Sounderajan, Suvarna; Venkateswaran, G
2008-10-19
A method for estimation of uranium in seawater by using steady state laser fluorimetry is described. Uranium present in seawater, in a concentration of approximately 3 ng ml(-1), was estimated without prior separation of the matrix. The quenching effect of major ions (Cl(-), Na(+), SO(4)(-), Mg(+), Ca(+), K(+), HCO(3)(-), Br(-)) present in seawater on the fluorescence intensity of uranium was studied. The concentration of phosphoric acid required for maximum enhancement of the fluorescence intensity was optimized and was found to be 5%. Similarly, the volume of concentrated nitric acid required to completely eliminate the quenching effect of chloride and bromide from 5 ml of seawater was optimized and was found to be 3 ml. A simple equation was derived using the steady state fluorescence correction method and was used for calculation of the uranium concentration in seawater samples. The method has a precision of 1% (1s, n=3). The values obtained from laser fluorimetry were validated by analyzing the same samples by linear sweep adsorptive stripping voltammetry (LSASV) of the uranium-chloranilic acid (2,5-dichloro-3,6-dihydroxy-1,4-benzoquinone) complex. The two sets of values are in good agreement.
Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model
NASA Astrophysics Data System (ADS)
Wiezel, Oren; Or, Yizhar
2016-11-01
Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.
Modern Estimates of Global Water Cycle Fluxes
NASA Astrophysics Data System (ADS)
Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T. S.; Olson, W. S.
2014-12-01
The goal of the first phase of the NASA Energy and Water Cycle Study (NEWS) Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. Here we describe results of the water cycle assessment, including mean annual and monthly fluxes over continents and ocean basins during the first decade of the millennium. To the extent possible, the water flux estimates are based on (1) satellite measurements and (2) data-integrating models. A careful accounting of uncertainty in each flux was applied within a routine that enforced multiple water and energy budget constraints simultaneously in a variational framework, in order to produce objectively-determined, optimized estimates. Simultaneous closure of the water and energy budgets caused the ocean evaporation and precipitation terms to increase by about 10% and 5% relative to the original estimates, mainly because the energy budget required turbulent heat fluxes to be substantially larger in order to balance net radiation. In the majority of cases, the observed annual, surface and atmospheric water budgets over the continents and oceans close with much less than 10% residual. Observed residuals and optimized uncertainty estimates are considerably larger for monthly surface and atmospheric water budget closure, often nearing or exceeding 20% in North America, Eurasia, Australia and neighboring islands, and the Arctic and South Atlantic Oceans. The residuals in South America and Africa tend to be smaller, possibly because cold land processes are a non-issue. Fluxes are poorly observed over the Arctic Ocean, certain seas, Antarctica, and the Australasian and Indonesian Islands, leading to reliance on atmospheric analysis estimates. Other details of the study and future directions will be discussed.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
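The basic idea of replacing an expensive limit-state evaluation by a cheap surrogate can be illustrated compactly. The sketch below (not the authors' design-of-experiments algorithm) fits a scikit-learn Gaussian process to a toy limit-state function and then estimates the failure probability by Monte Carlo sampling of the surrogate; the limit-state function, failure threshold, and design points are all invented for illustration.

```python
# Minimal sketch: GP surrogate for failure probability estimation.
# The limit-state function g(x) and the failure threshold are illustrative,
# not taken from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):
    """Expensive computer model (toy stand-in); failure when g(x) > 0."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] - 0.8

# A small design of experiments in a 2-D state space.
X_train = rng.uniform(-1.0, 1.0, size=(30, 2))
y_train = g(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(X_train, y_train)

# Monte Carlo on the cheap surrogate instead of the expensive model.
X_mc = rng.uniform(-1.0, 1.0, size=(100_000, 2))
mean, std = gp.predict(X_mc, return_std=True)
p_fail = np.mean(mean > 0.0)    # plug-in estimate of the failure probability
# 'std' indicates where the surrogate is uncertain; an experimental-design
# step would use it to choose the next batch of expensive simulations.
print(f"estimated failure probability: {p_fail:.4f}")
```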
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
Vibration Control Using a State Observer that Considers Disturbances of a Golf Swing Robot
NASA Astrophysics Data System (ADS)
Hoshino, Yohei; Kobayashi, Yukinori; Yamada, Gen
In this paper, optimal control of a golf swing robot that is used to evaluate the performance of golf clubs is described. The robot has two joints, a rigid link and a flexible link that is a golf club. A mathematical model of the golf club is derived by Hamilton’s principle, taking into consideration bending and torsional stiffness as well as the eccentricity of the center of gravity of the club head from the shaft axis. A linear quadratic regulator (LQR) that considers the vibration of the club shaft is used to stop the robot during the follow-through. Since the robot moves fast and has strong non-linearity, an ordinary state observer for a linear system cannot accurately estimate the states of the system. A state observer that considers disturbances accurately estimates the state variables that cannot be measured. The results of numerical simulation are compared with experimental results obtained by using a swing robot.
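For readers unfamiliar with the LQR machinery referenced here, the following minimal sketch computes a discrete-time LQR feedback gain with SciPy. The double-integrator model and weighting matrices are illustrative placeholders, not the flexible golf-club model derived in the paper.

```python
# Minimal sketch of an LQR gain computation for a discrete-time linear model.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0]])      # state: [angle, angular rate]
B = np.array([[0.0],
              [dt]])            # input: joint torque (scaled)
Q = np.diag([100.0, 1.0])       # penalize angle error and rate
R = np.array([[0.1]])           # penalize control effort

P = solve_discrete_are(A, B, Q, R)                 # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain
# The control law u_k = -K x_k drives the state to the origin while
# trading off regulation performance against control effort.
print("LQR gain K =", K)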
NASA Astrophysics Data System (ADS)
Laverick, Kiarn T.; Wiseman, Howard M.; Dinani, Hossein T.; Berry, Dominic W.
2018-04-01
The problem of measuring a time-varying phase, even when the statistics of the variation is known, is considerably harder than that of measuring a constant phase. In particular, the usual bounds on accuracy, such as the 1/(4n̄) standard quantum limit with coherent states, do not apply. Here, by restricting to coherent states, we are able to analytically obtain the achievable accuracy, the equivalent of the standard quantum limit, for a wide class of phase variation. In particular, we consider the case where the phase has Gaussian statistics and a power-law spectrum equal to κ^(p-1)/|ω|^p for large ω, for some p > 1. For coherent states with mean photon flux N, we give the quantum Cramér-Rao bound on the mean-square phase error as [p sin(π/p)]^(-1) (4N/κ)^(-(p-1)/p). Next, we consider whether the bound can be achieved by an adaptive homodyne measurement in the limit N/κ ≫ 1, which allows the photocurrent to be linearized. Applying the optimal filtering for the resultant linear Gaussian system, we find the same scaling with N, but with a prefactor larger by a factor of p. By contrast, if we employ optimal smoothing we can exactly obtain the quantum Cramér-Rao bound. That is, contrary to previously considered (p = 2) cases of phase estimation, here the improvement offered by smoothing over filtering is not limited to a factor of 2 but rather can be unbounded by a factor of p. We also study numerically the performance of these estimators for an adaptive measurement in the limit where N/κ is not large and find a more complicated picture.
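Rendered explicitly, the quoted bound on the mean-square phase error reads as follows (the angle-bracket notation for the mean-square error is ours, not taken verbatim from the abstract):

```latex
% Quantum Cramér-Rao bound stated above, for coherent states of mean photon
% flux N and a phase spectrum kappa^(p-1)/|omega|^p with p > 1.
\[
  \bigl\langle (\hat{\phi} - \phi)^2 \bigr\rangle \;\ge\;
  \bigl[\, p \sin(\pi/p) \,\bigr]^{-1}
  \left(\frac{4N}{\kappa}\right)^{-(p-1)/p}.
\]
```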
NASA Astrophysics Data System (ADS)
Wu, Hao; Nüske, Feliks; Paul, Fabian; Klus, Stefan; Koltai, Péter; Noé, Frank
2017-04-01
Markov state models (MSMs) and master equation models are popular approaches to approximate molecular kinetics, equilibria, metastable states, and reaction coordinates in terms of a state space discretization usually obtained by clustering. Recently, a powerful generalization of MSMs has been introduced, the variational approach to conformation dynamics/molecular kinetics (VAC) and its special case the time-lagged independent component analysis (TICA), which allow us to approximate slow collective variables and molecular kinetics by linear combinations of smooth basis functions or order parameters. While it is known how to estimate MSMs from trajectories whose starting points are not sampled from an equilibrium ensemble, this has not yet been the case for TICA and the VAC. Previous estimates from short trajectories have been strongly biased and thus not variationally optimal. Here, we employ Koopman operator theory and ideas from dynamic mode decomposition to extend the VAC and TICA to non-equilibrium data. The main insight is that the VAC and TICA provide a coefficient matrix that we call the Koopman model, as it approximates the underlying dynamical (Koopman) operator in conjunction with the basis set used. This Koopman model can be used to compute a stationary vector to reweight the data to equilibrium. From such a Koopman-reweighted sample, equilibrium expectation values and variationally optimal reversible Koopman models can be constructed even with short simulations. The Koopman model can be used to propagate densities, and its eigenvalue decomposition provides estimates of relaxation time scales and slow collective variables for dimension reduction. Koopman models are generalizations of Markov state models, TICA, and the linear VAC and allow molecular kinetics to be described without a cluster discretization.
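As a concrete illustration of the basic construction (without the equilibrium-reweighting step that is the paper's contribution), the sketch below estimates a Koopman model from instantaneous and time-lagged covariance matrices of synthetic order-parameter data and converts its eigenvalues to implied timescales. The AR(1) test data and the lag time are placeholders.

```python
# Minimal sketch: Koopman model K = C00^{-1} C0t from time-lagged feature data.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
lag = 10

# Synthetic trajectory of three order parameters, each an AR(1) process.
noise = rng.standard_normal((5000, 3))
traj = lfilter([1.0], [1.0, -0.99], noise, axis=0)

X, Y = traj[:-lag], traj[lag:]           # instantaneous and time-lagged data
mean = X.mean(axis=0)
X, Y = X - mean, Y - mean                # remove the mean of the basis functions

C00 = X.T @ X / len(X)                   # instantaneous covariance
C0t = X.T @ Y / len(X)                   # time-lagged covariance
K = np.linalg.solve(C00, C0t)            # Koopman model (coefficient matrix)

eigvals = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
timescales = -lag / np.log(np.clip(eigvals, 1e-12, 1 - 1e-12))
print("Koopman eigenvalues:", eigvals)
print("implied timescales (frames):", timescales)
```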
Estimation Filter for Alignment of the Spitzer Space Telescope
NASA Technical Reports Server (NTRS)
Bayard, David
2007-01-01
A document presents a summary of an onboard estimation algorithm now being used to calibrate the alignment of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility). The algorithm, denoted the S2P calibration filter, recursively generates estimates of the alignment angles between a telescope reference frame and a star-tracker reference frame. At several discrete times during the day, the filter accepts, as input, attitude estimates from the star tracker and observations taken by the Pointing Control Reference Sensor (a sensor in the field of view of the telescope). The output of the filter is a calibrated quaternion that represents the best current mean-square estimate of the alignment angles between the telescope and the star tracker. The S2P calibration filter incorporates a Kalman filter that tracks six states - two for each of three orthogonal coordinate axes. Although, in principle, one state per axis is sufficient, the use of two states per axis makes it possible to model both short- and long-term behaviors. Specifically, the filter properly models transient learning, characteristic times and bounds of thermomechanical drift, and long-term steady-state statistics, whether calibration measurements are taken frequently or infrequently. These properties ensure that the S2P filter performance is optimal over a broad range of flight conditions, and can be confidently run autonomously over several years of in-flight operation without human intervention.
NASA Astrophysics Data System (ADS)
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2016-05-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
Reyes, Elisabeth; Nadot, Sophie; von Balthazar, Maria; Schönenberger, Jürg; Sauquet, Hervé
2018-06-21
Ancestral state reconstruction is an important tool to study morphological evolution and often involves estimating transition rates among character states. However, various factors, including taxonomic scale and sampling density, may impact transition rate estimation and, indirectly, the probability of the state at a given node. Here, we test the influence of rate heterogeneity using maximum likelihood methods on five binary perianth characters, optimized on a phylogenetic tree of angiosperms including 1230 species sampled from all families. We compare the states reconstructed by an equal-rate (Mk1) and a two-rate model (Mk2), fitted either with a single set of rates for the whole tree or as a partitioned model allowing for different rates on five partitions of the tree. We find a strong signal for rate heterogeneity among the five subdivisions for all five characters, but little overall impact of the choice of model on reconstructed ancestral states, which indicates that most of our inferred ancestral states are the same whether heterogeneity is accounted for or not.
Multiparameter Estimation in Networked Quantum Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Stress estimation in reservoirs using an integrated inverse method
NASA Astrophysics Data System (ADS)
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. These discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimates for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
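To make the inversion loop concrete, here is a minimal sketch of fitting boundary-condition parameters with CMA-ES via the third-party cma package. The linear forward model, synthetic wellbore data, and two-parameter parameterization are stand-ins for the finite-element stress computation and the leak-off/breakout observations used in the study.

```python
# Minimal sketch: calibrating a forward stress model against wellbore data
# with CMA-ES (third-party `cma` package, pip install cma).
import numpy as np
import cma

depths = np.linspace(1000.0, 3000.0, 8)                 # measurement depths (m)
observed = 0.022 * depths + 5.0 + np.random.default_rng(4).normal(0, 0.5, depths.size)

def forward_model(params, z):
    """Toy stress-vs-depth model: a gradient and a surface intercept."""
    gradient, intercept = params
    return gradient * z + intercept

def misfit(params):
    return float(np.sum((forward_model(params, depths) - observed) ** 2))

es = cma.CMAEvolutionStrategy(x0=[0.01, 0.0], sigma0=1.0)
while not es.stop():
    candidates = es.ask()                                # sample a population
    es.tell(candidates, [misfit(c) for c in candidates]) # update the search distribution
best = es.result.xbest
print("estimated [gradient, intercept]:", best)
```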
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
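The pivoted-QR selection step can be illustrated in a few lines. In this sketch the snapshot matrix is random noise standing in for flow data; in practice the basis Psi would hold POD or DMD modes computed from the flow field, and the sensor count and dimensions are arbitrary.

```python
# Minimal sketch: choosing point sensors by column-pivoted QR on a modal basis.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)

n_grid, n_snapshots, r = 500, 200, 10
snapshots = rng.standard_normal((n_grid, n_snapshots))   # stand-in for flow snapshots

# POD modes: left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
Psi = U[:, :r]                              # basis of r coherent structures

# Column-pivoted QR of Psi^T ranks grid points by how well they condition
# the reconstruction problem; the first r pivots are the sensor locations.
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:r]
print("selected sensor indices:", sensors)

# Full-state reconstruction from the r point measurements y = x[sensors].
x = snapshots[:, 0]
a = np.linalg.solve(Psi[sensors, :], x[sensors])   # modal coefficients
x_hat = Psi @ a
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```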
Optimal rotation sequences for active perception
NASA Astrophysics Data System (ADS)
Nakath, David; Rachuy, Carsten; Clemens, Joachim; Schill, Kerstin
2016-05-01
One major objective of autonomous systems navigating in dynamic environments is gathering information needed for self localization, decision making, and path planning. To account for this, such systems are usually equipped with multiple types of sensors. As these sensors often have a limited field of view and a fixed orientation, the task of active perception breaks down to the problem of calculating alignment sequences which maximize the information gain regarding expected measurements. Action sequences that rotate the system according to the calculated optimal patterns then have to be generated. In this paper we present an approach for calculating these sequences for an autonomous system equipped with multiple sensors. We use a particle filter for multi-sensor fusion and state estimation. The planning task is modeled as a Markov decision process (MDP), where the system decides in each step what actions to perform next. The optimal control policy, which provides the best action depending on the current estimated state, maximizes the expected cumulative reward. The latter is computed from the expected information gain of all sensors over time using value iteration. The algorithm is applied to a manifold representation of the joint space of rotation and time. We show the performance of the approach in a spacecraft navigation scenario where the information gain is changing over time, caused by the dynamic environment and the continuous movement of the spacecraft.
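A minimal value-iteration sketch over a small, randomly generated MDP illustrates the policy computation described above; the transition and reward structure are placeholders for the information-gain reward on the rotation-time manifold used in the paper.

```python
# Minimal sketch of value iteration for a small discretized MDP.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions, gamma = 12, 3, 0.95

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # expected reward R[s, a]

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V            # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)            # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)            # best action for each (estimated) state
print("optimal policy:", policy)
```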
A global carbon assimilation system based on a dual optimization method
NASA Astrophysics Data System (ADS)
Zheng, H.; Li, Y.; Chen, J. M.; Wang, T.; Huang, Q.; Huang, W. X.; Li, S. M.; Yuan, W. P.; Zheng, X.; Zhang, S. P.; Chen, Z. Q.; Jiang, F.
2014-10-01
Ecological models are effective tools to simulate the distribution of global carbon sources and sinks. However, these models often suffer from substantial biases due to inaccurate simulations of complex ecological processes. We introduce a set of scaling factors (parameters) to an ecological model on the basis of plant functional type (PFT) and latitude. A global carbon assimilation system (GCAS-DOM) is developed by employing a Dual Optimization Method (DOM) to invert the time-dependent ecological model parameter state and the net carbon flux state simultaneously. We use GCAS-DOM to estimate the global distribution of the CO2 flux on 1° × 1° grid cells for the period from 2000 to 2007. Results show that land and ocean absorb -3.69 ± 0.49 Pg C year-1 and -1.91 ± 0.16 Pg C year-1, respectively. North America, Europe and China contribute -0.96 ± 0.15 Pg C year-1, -0.42 ± 0.08 Pg C year-1 and -0.21 ± 0.28 Pg C year-1, respectively. The uncertainties in the flux after optimization by GCAS-DOM have been remarkably reduced, by more than 60%. Through parameter optimization, GCAS-DOM can provide improved estimates of the carbon flux for each PFT. Coniferous forest (-0.97 ± 0.27 Pg C year-1) is the largest contributor to the global carbon sink. The flux of the once-dominant deciduous forest generated by BEPS is reduced to -0.79 ± 0.22 Pg C year-1, making it the third largest carbon sink.
NASA Astrophysics Data System (ADS)
Ubelmann, C.; Gerald, D.
2016-12-01
The SWOT data validation will be a first challenge after launch, as the nature of the measurement, in particular its two-dimensionality at short spatial scales, is new in altimetry. While comparison with independent observations may be possible locally, validation of the full signal and error spectrum will be challenging. However, recent analyses in simulations have shown the possibility of separating the geophysical signals from the spatially coherent instrumental errors in spectral space, through cross-spectral analysis. These results suggest that rapidly after launch, the instrument error can be spectrally separated, providing some validation and insight into the ocean energy spectrum, as well as optimal calibrations. Beyond CalVal, such spectral computations will also be essential for producing high-level ocean estimates (two- and three-dimensional ocean state reconstructions).
A review on prognostics and health monitoring of Li-ion battery
NASA Astrophysics Data System (ADS)
Zhang, Jingliang; Lee, Jay
2011-08-01
The functionality and reliability of Li-ion batteries as major energy storage devices have received more and more attention from a wide spectrum of stakeholders, including federal/state policymakers, business leaders, technical researchers, environmental groups and the general public. Failures of Li-ion battery not only result in serious inconvenience and enormous replacement/repair costs, but also risk catastrophic consequences such as explosion due to overheating and short circuiting. In order to prevent severe failures from occurring, and to optimize Li-ion battery maintenance schedules, breakthroughs in prognostics and health monitoring of Li-ion batteries, with an emphasis on fault detection, correction and remaining-useful-life prediction, must be achieved. This paper reviews various aspects of recent research and developments in Li-ion battery prognostics and health monitoring, and summarizes the techniques, algorithms and models used for state-of-charge (SOC) estimation, current/voltage estimation, capacity estimation and remaining-useful-life (RUL) prediction.
2012-07-01
Table 1 gives the measured ebb-delta volume of St. Augustine Inlet, Florida at the 30 ft contour (Legault et al. 2012 ... along Anastasia State Park). The permeability of the terminal groins (between 0% and 100%) was estimated based on visual inspection of sand ... Percent volume change for the three reaches and the ebb-tidal delta is shown in Figure 15 for all modeled alternatives.
Remote Sensing of Precipitation from Airborne and Spaceborne Radar. Chapter 13
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph
2017-01-01
Weather radar measurements from airborne or satellite platforms can be an effective remote sensing tool for examining the three-dimensional structures of clouds and precipitation. This chapter describes some fundamental properties of radar measurements and their dependence on the particle size distribution (PSD) and radar frequency. The inverse problem of solving for the vertical profile of PSD from a profile of measured reflectivity is stated as an optimal estimation problem for single- and multi-frequency measurements. Phenomena that can change the measured reflectivity Z_m from its intrinsic value Z_e, namely attenuation, non-uniform beam filling, and multiple scattering, are described and mitigation of these effects in the context of the optimal estimation framework is discussed. Finally, some techniques involving the use of passive microwave measurements to further constrain the retrieval of the PSD are presented.
Systems identification and the adaptive management of waterfowl in the United States
Williams, B.K.; Nichols, J.D.
2001-01-01
Waterfowl management in the United States is one of the more visible conservation success stories in the United States. It is authorized and supported by appropriate legislative authorities, based on large-scale monitoring programs, and widely accepted by the public. The process is one of only a limited number of large-scale examples of effective collaboration between research and management, integrating scientific information with management in a coherent framework for regulatory decision-making. However, harvest management continues to face some serious technical problems, many of which focus on sequential identification of the resource system in a context of optimal decision-making. The objective of this paper is to provide a theoretical foundation of adaptive harvest management, the approach currently in use in the United States for regulatory decision-making. We lay out the legal and institutional framework for adaptive harvest management and provide a formal description of regulatory decision-making in terms of adaptive optimization. We discuss some technical and institutional challenges in applying adaptive harvest management and focus specifically on methods of estimating resource states for linear resource systems.
Quantum chi-squared and goodness of fit testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temme, Kristan; Verstraete, Frank
2015-01-15
A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.
Model-Based PAT for Quality Management in Pharmaceuticals Freeze-Drying: State of the Art
Fissore, Davide
2017-01-01
Model-based process analytical technologies can be used for the in-line control and optimization of a pharmaceuticals freeze-drying process, as well as for the off-line design of the process, i.e., the identification of the optimal operating conditions. This paper aims at presenting the state of the art in this field, focusing, particularly, on three groups of systems, namely, those based on the temperature measurement (i.e., the soft sensor), on the chamber pressure measurement (i.e., the systems based on the test of pressure rise and of pressure decrease), and on the sublimation flux estimate (i.e., the tunable diode laser absorption spectroscopy and the valveless monitoring system). The application of these systems for in-line process optimization (e.g., using a model predictive control algorithm) and to get a true quality by design (e.g., through the off-line calculation of the design space of the process) is presented and discussed. PMID:28224123
Sulis, Andrea; Buscarinu, Paola; Soru, Oriana; Sechi, Giovanni M
2014-04-22
The definition of a synthetic index for classifying the quality of water bodies is a key aspect in integrated planning and management of water resource systems. In previous works [1,2], a water system optimization modeling approach that requires a single quality index for stored water in reservoirs has been applied to a complex multi-reservoir system. Considering the same modeling field, this paper presents an improved quality index estimated both on the basis of the overall trophic state of the water body and on the basis of the density values of the most potentially toxic Cyanobacteria. The implementation of the index into the optimization model makes it possible to reproduce the conditions limiting water use due to excessive nutrient enrichment in the water body and to the health hazard linked to toxic blooms. The analysis of an extended limnological database (1996-2012) in four reservoirs of the Flumendosa-Campidano system (Sardinia, Italy) provides useful insights into the strengths and limitations of the proposed synthetic index.
The manufacture of steel by electric arc furnaces (EAF) continues to increase in the United States, with current production estimated to be over 63 million tons per year. The reduction of emissions from steel producers has been slow for two main reasons: the nee...
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
Human Leg Model Predicts Muscle Forces, States, and Energetics during Walking.
Markowitz, Jared; Herr, Hugh
2016-05-01
Humans employ a high degree of redundancy in joint actuation, with different combinations of muscle and tendon action providing the same net joint torque. Both the resolution of these redundancies and the energetics of such systems depend on the dynamic properties of muscles and tendons, particularly their force-length relations. Current walking models that use stock parameters when simulating muscle-tendon dynamics tend to significantly overestimate metabolic consumption, perhaps because they do not adequately consider the role of elasticity. As an alternative, we posit that the muscle-tendon morphology of the human leg has evolved to maximize the metabolic efficiency of walking at self-selected speed. We use a data-driven approach to evaluate this hypothesis, utilizing kinematic, kinetic, electromyographic (EMG), and metabolic data taken from five participants walking at self-selected speed. The kinematic and kinetic data are used to estimate muscle-tendon lengths, muscle moment arms, and joint moments while the EMG data are used to estimate muscle activations. For each subject we perform an optimization using prescribed skeletal kinematics, varying the parameters that govern the force-length curve of each tendon as well as the strength and optimal fiber length of each muscle while seeking to simultaneously minimize metabolic cost and maximize agreement with the estimated joint moments. We find that the metabolic cost of transport (MCOT) values of our participants may be correctly matched (on average 0.36±0.02 predicted, 0.35±0.02 measured) with acceptable joint torque fidelity through application of a single constraint to the muscle metabolic budget. The associated optimal muscle-tendon parameter sets allow us to estimate the forces and states of individual muscles, resolving redundancies in joint actuation and lending insight into the potential roles and control objectives of the muscles of the leg throughout the gait cycle.
Optimal Bandwidth for Multitaper Spectrum Estimation
Haley, Charlotte L.; Anitescu, Mihai
2017-07-04
A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
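For reference, a basic Thomson multitaper estimate with DPSS tapers looks as follows. The MSE-minimizing bandwidth selection that is the subject of the paper is not implemented here; the time-bandwidth product NW is simply fixed to an illustrative value, and the test signal is synthetic.

```python
# Minimal sketch of a Thomson multitaper spectrum estimate with DPSS tapers.
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(5)
fs, n = 100.0, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 12.5 * t) + rng.standard_normal(n)   # tone in noise

NW = 4.0                      # time-bandwidth product (the tuning parameter)
K = int(2 * NW) - 1           # number of tapers commonly used
tapers = dpss(n, NW, Kmax=K)  # shape (K, n)

# Average the K eigenspectra (direct multitaper estimate).
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
psd = spectra.mean(axis=0) / fs
freqs = np.fft.rfftfreq(n, d=1 / fs)
print("spectral peak near (Hz):", freqs[np.argmax(psd)])
```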
Dynamic Filtering Improves Attentional State Prediction with fNIRS
NASA Technical Reports Server (NTRS)
Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.
2016-01-01
Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise - thereby increasing such state prediction accuracy - remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% +/- 6% versus 72% +/- 15%).
Optimal structure of metaplasticity for adaptive learning
2017-01-01
Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using the mean-field and Monte Carlo simulations we identified ‘superior’ metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models can achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can change their efficacy. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally for a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that ubiquitous unreliability of synaptic changes evinces metaplasticity that can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus adaptive learning. PMID:28658247
NASA Astrophysics Data System (ADS)
Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li
2017-01-01
In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for the system with pre-specified trajectories of the output and the control input and additionally with both the input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional plus integral state-feedback LQDT design for non-square non-minimum phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of the given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Sellappan, R.
1977-01-01
The problem of optimal control with a minimum time criterion as applied to a single boom system for achieving two axis control is discussed. The special case where the initial conditions are such that the system can be driven to the equilibrium state with only a single switching maneuver in the bang-bang optimal sequence is analyzed. The system responses are presented. Application of the linear regulator problem for the optimal control of the telescoping system is extended to consider the effects of measurement and plant noises. The noise uncertainties are included with an application of the estimator - Kalman filter problem. Different schemes for measuring the components of the angular velocity are considered. Analytical results are obtained for special cases, and numerical results are presented for the general case.
Coyle, Doug; Ko, Yoo-Joung; Coyle, Kathryn; Saluja, Ronak; Shah, Keya; Lien, Kelly; Lam, Henry; Chan, Kelvin K W
2017-04-01
We assessed the cost-effectiveness of gemcitabine (G), G + 5-fluorouracil, G + capecitabine, G + cisplatin, G + oxaliplatin, G + erlotinib, G + nab-paclitaxel (GnP), and FOLFIRINOX in the treatment of advanced pancreatic cancer from a Canadian public health payer's perspective, using data from a recently published Bayesian network meta-analysis. The analysis was conducted through a three-state Markov model and used data on the progression of disease with treatment from the gemcitabine arms of randomized controlled trials, combined with estimates from the network meta-analysis for the newer regimens. Estimates of health care costs were obtained from local providers, and utilities were derived from the literature. The model estimates the effect of treatment regimens on costs and quality-adjusted life-years (QALYs) discounted at 5% per annum. At a willingness-to-pay (WTP) threshold greater than $30,666 per QALY, FOLFIRINOX would be the most optimal regimen. For a WTP threshold of $50,000 per QALY, the probability that FOLFIRINOX would be optimal was 57.8%. There was no price reduction for nab-paclitaxel at which GnP became optimal. From a Canadian public health payer's perspective at the present time and drug prices, FOLFIRINOX is the optimal regimen on the basis of the cost-effectiveness criterion. GnP is not cost-effective regardless of the WTP threshold.
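The three-state Markov structure mentioned above can be sketched compactly. All transition probabilities, costs, and utilities below are hypothetical placeholders, not values from the published analysis; the sketch only shows how discounted costs and QALYs accumulate over a cohort trace.

```python
# Minimal sketch of a three-state Markov cohort model (stable, progressed, dead).
import numpy as np

# Monthly transition matrix over states [stable, progressed, dead] (hypothetical).
P = np.array([[0.90, 0.07, 0.03],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

cost_per_month = np.array([4000.0, 2500.0, 0.0])    # treatment + care costs (hypothetical)
utility = np.array([0.75, 0.55, 0.0])               # health-state utilities (hypothetical)
annual_discount = 0.05
monthly_discount = (1 + annual_discount) ** (1 / 12) - 1

state = np.array([1.0, 0.0, 0.0])                   # cohort starts in the stable state
total_cost, total_qaly = 0.0, 0.0
for month in range(120):                            # 10-year horizon
    d = 1.0 / (1 + monthly_discount) ** month       # discount factor for this cycle
    total_cost += d * state @ cost_per_month
    total_qaly += d * state @ utility / 12.0        # utilities accrue per year of life
    state = state @ P                                # advance the cohort one cycle

print(f"discounted cost: ${total_cost:,.0f}, QALYs: {total_qaly:.2f}")
```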
Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.
Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn
2015-05-15
The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitate this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves.
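A minimal sketch of the stochastic-approximation idea follows: a linear receptive field is estimated by mini-batch SGD on random subsets of synthetic stimulus-response data. This is not the GLM or classification-based estimator evaluated in the paper; the data, regularization, and learning rate are placeholders.

```python
# Minimal sketch: linear (STRF-like) receptive field fit by mini-batch SGD.
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_features = 20000, 300        # e.g. flattened spectro-temporal patches

X = rng.standard_normal((n_samples, n_features))
w_true = rng.standard_normal(n_features) * (rng.random(n_features) < 0.1)
y = X @ w_true + 0.5 * rng.standard_normal(n_samples)   # noisy linear response

w = np.zeros(n_features)
lr, lam, batch = 5e-3, 1e-2, 256
for step in range(3000):
    idx = rng.integers(0, n_samples, size=batch)        # random subset, not the full data
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch + lam * w   # ridge-regularized gradient
    w -= lr * grad

corr = np.corrcoef(w, w_true)[0, 1]
print(f"correlation with true RF: {corr:.2f}")
```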
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.
Milando, Chad W.; Martenies, Sheena E.; Batterman, Stuart A.
2017-01-01
In air quality management, reducing emissions from pollutant sources often forms the primary response to attaining air quality standards and guidelines. Despite the broad success of air quality management in the US, challenges remain. As examples: allocating emissions reductions among multiple sources is complex and can require many rounds of negotiation; health impacts associated with emissions, the ultimate driver for the standards, are not explicitly assessed; and long dispersion model run-times, which result from the increasing size and complexity of model inputs, limit the number of scenarios that can be evaluated, thus increasing the likelihood of missing an optimal strategy. A new modeling framework, called the "Framework for Rapid Emissions Scenario and Health impact ESTimation" (FRESH-EST), is presented to respond to these challenges. FRESH-EST estimates concentrations and health impacts of alternative emissions scenarios at the urban scale, providing efficient computations from emissions to health impacts at the Census block or other desired spatial scale. In addition, FRESH-EST can optimize emission reductions to meet specified environmental and health constraints, and a convenient user interface and graphical displays are provided to facilitate scenario evaluation. The new framework is demonstrated in an SO2 non-attainment area in southeast Michigan with two optimization strategies: the first minimizes emission reductions needed to achieve a target concentration; the second minimizes concentrations while holding constant the cumulative emissions across local sources (e.g., an emissions floor). The optimized strategies match outcomes in the proposed SO2 State Implementation Plan without the proposed stack parameter modifications or shutdowns. In addition, the lower health impacts estimated for these strategies suggest the potential for FRESH-EST to identify pollution control alternatives for air quality management planning. PMID:27318620
Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ku, R. T.
1972-01-01
The problem of optimal control of linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via Monte Carlo method show that, in terms of the performance measure, for stable systems the open loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking
NASA Astrophysics Data System (ADS)
Raihan A. V, Dilshad; Chakravorty, Suman
2018-03-01
Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three-dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates, outperforming standard UKF and particle filter (PF) implementations.
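The sigma-point propagation at the heart of a UKF prediction step can be sketched as follows. The toy planar dynamics, covariance, and scaling parameters are illustrative placeholders, not the full orbital propagation with J2 and drag used in the paper.

```python
# Minimal sketch: the unscented transform used in a UKF prediction step.
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights for the unscented transform."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)           # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def propagate(x):
    """Toy nonlinear planar-orbit step (stand-in for full orbital dynamics)."""
    r = np.linalg.norm(x[:2])
    return np.array([x[0] + 0.1 * x[2],
                     x[1] + 0.1 * x[3],
                     x[2] - 0.01 * x[0] / r**3,
                     x[3] - 0.01 * x[1] / r**3])

mean = np.array([1.0, 0.0, 0.0, 1.0])                 # [x, y, vx, vy]
cov = np.diag([1e-4, 1e-4, 1e-6, 1e-6])

pts, wm, wc = sigma_points(mean, cov)
prop = np.array([propagate(p) for p in pts])          # push each point through the dynamics
new_mean = wm @ prop                                  # propagated state estimate
diff = prop - new_mean
new_cov = (wc[:, None] * diff).T @ diff               # propagated covariance
print("propagated mean:", new_mean)
```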
Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M
2017-03-01
This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher
Accurate information of urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban area and its dynamics economically and timely. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed threshold methods with over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and overall nightlight magnitude in the cluster, and they vary with clusters. Two large countries, the United States and China, with different urbanization patterns were selected to map urban extents using the proposed method. The result indicates that the urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban area in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More important, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.
Spectral Rate Theory for Two-State Kinetics
NASA Astrophysics Data System (ADS)
Prinz, Jan-Hendrik; Chodera, John D.; Noé, Frank
2014-02-01
Classical rate theories often fail in cases where the observable(s) or order parameter(s) used is a poor reaction coordinate or the observed signal is deteriorated by noise, such that no clear separation between reactants and products is possible. Here, we present a general spectral two-state rate theory for ergodic dynamical systems in thermal equilibrium that explicitly takes into account how the system is observed. The theory allows the systematic estimation errors made by standard rate theories to be understood and quantified. We also elucidate the connection of spectral rate theory with the popular Markov state modeling approach for molecular simulation studies. An optimal rate estimator is formulated that gives robust and unbiased results even for poor reaction coordinates and can be applied to both computer simulations and single-molecule experiments. No definition of a dividing surface is required. Another result of the theory is a model-free definition of the reaction coordinate quality. The reaction coordinate quality can be bounded from below by the directly computable observation quality, thus providing a measure allowing the reaction coordinate quality to be optimized by tuning the experimental setup. Additionally, the respective partial probability distributions can be obtained for the reactant and product states along the observed order parameter, even when these strongly overlap. The effects of both filtering (averaging) and uncorrelated noise are also examined. The approach is demonstrated on numerical examples and experimental single-molecule force-probe data of the p5ab RNA hairpin and the apo-myoglobin protein at low pH, focusing here on the case of two-state kinetics.
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807
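As an illustration of the calibration step described above, the sketch below fits two unknown transition rates of a toy state-based model to synthetic data with simulated annealing (scipy's dual_annealing). The two-compartment model, the rate names, and the "measurements" are hypothetical placeholders, not the authors' SBM or whole-blood data.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Hypothetical two-route model: extracellular fungal cells are either
# killed by antimicrobial peptides (rate k_amp) or phagocytosed (rate k_phag).
def simulate(rates, t):
    k_amp, k_phag = rates
    p_alive = np.exp(-(k_amp + k_phag) * t)            # still extracellular and alive
    p_killed = k_amp / (k_amp + k_phag) * (1 - p_alive)
    p_phag = k_phag / (k_amp + k_phag) * (1 - p_alive)
    return np.column_stack([p_alive, p_killed, p_phag])

t_obs = np.array([0.5, 1.0, 2.0, 4.0])                 # hours (hypothetical)
data = simulate((0.8, 1.4), t_obs)                     # synthetic "measurements"

def sse(rates):
    # Least-squares mismatch between model prediction and data
    return np.sum((simulate(rates, t_obs) - data) ** 2)

result = dual_annealing(sse, bounds=[(1e-3, 5.0), (1e-3, 5.0)], seed=1)
print("estimated rates:", result.x)
```

In the bottom-up strategy of the abstract, rates estimated at this level would then be fixed in the more expensive agent-based model, shrinking the remaining parameter space.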
Statistical estimation via convex optimization for trending and performance monitoring
NASA Astrophysics Data System (ADS)
Samar, Sikandar
This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
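The moving-horizon idea described in the thesis can be sketched with a small convex program solved repeatedly over a fixed-size window. The sketch below uses cvxpy and a quadratic smoothness prior purely as an illustrative stand-in for the trend models and constraints discussed in the abstract; the data, window length, and penalty weight are made up.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
true_trend = np.cumsum(0.05 * rng.standard_normal(120))   # slowly varying trend
y = true_trend + 0.3 * rng.standard_normal(120)           # noisy measurements

horizon, lam = 40, 20.0
estimates = np.zeros_like(y)
for k in range(horizon, len(y) + 1):
    window = y[k - horizon:k]
    x = cp.Variable(horizon)
    # Gaussian log-likelihood (least squares) plus a convex smoothness prior;
    # each horizon solves a fixed-size convex problem.
    objective = cp.Minimize(cp.sum_squares(x - window) + lam * cp.sum_squares(cp.diff(x, 2)))
    cp.Problem(objective).solve()
    estimates[k - 1] = x.value[-1]                         # keep the newest point
```

The key property mirrored here is that the problem size stays constant as measurements accumulate, which is what makes the recursive estimation tractable.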
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured point cloud, acquired, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously degraded by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
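A minimal sketch of the core reduction step, under the assumption that the sensitivity of the outputs to the health parameters is available as a matrix, is shown below: the leading left singular vectors of that matrix define a low-dimensional tuning vector that represents the effect of the full health-parameter set in a least-squares sense. The matrix and dimensions are random placeholders, not an engine model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_outputs, n_health, n_tuners = 8, 12, 3     # placeholders, not engine dimensions

# Hypothetical sensitivity of engine outputs to the full set of health parameters.
G = rng.standard_normal((n_outputs, n_health))

# SVD-based reduction: the leading left singular vectors span the output-space
# directions that best capture, in a least-squares sense, the effect of all
# health parameters; they define the low-order tuning vector.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
V_star = U[:, :n_tuners]

# Any health-parameter excursion dh maps to an equivalent low-order tuning vector q.
dh = rng.standard_normal(n_health)
q = V_star.T @ (G @ dh)                      # least-squares representation
print("approximation error:", np.linalg.norm(G @ dh - V_star @ q))
```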
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2005-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
Spatio-temporal Granger causality: a new framework
Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng
2015-01-01
That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
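For readers unfamiliar with the underlying quantity, the sketch below computes the classical (time-invariant, bivariate) Granger causality index by comparing the residual variances of restricted and full autoregressive models; the spatio-temporal framework in the abstract generalizes this building block, and the simulated series here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 2                                   # samples, AR order
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(p, n):                            # y is driven by the past of x
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

def ar_residual_var(target, regressors, p):
    # Least-squares AR fit of `target` on the past p lags of each regressor
    rows = [np.concatenate([r[t - p:t] for r in regressors]) for t in range(p, len(target))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    beta, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
    return np.var(target[p:] - X @ beta)

var_restricted = ar_residual_var(y, [y], p)          # y's own past only
var_full = ar_residual_var(y, [y, x], p)             # past of y and x
gc_x_to_y = np.log(var_restricted / var_full)        # Granger causality index
print("GC x->y:", gc_x_to_y)
```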
MO-FG-BRA-08: Swarm Intelligence-Based Personalized Respiratory Gating in Lung SAbR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modiri, A; Sabouri, P; Sawant, A
Purpose: Respiratory gating is widely deployed as a clinical motion-management strategy in lung radiotherapy. In conventional gating, the beam is turned on during a pre-determined phase window; typically, around end-exhalation. In this work, we challenge the notion that end-exhalation is always the optimal gating phase. Specifically, we use a swarm-intelligence-based, inverse planning approach to determine the optimal respiratory phase and MU for each beam with respect to (i) the state of the anatomy at each phase and (ii) the time spent in that state, estimated from long-term monitoring of the patient’s breathing motion. Methods: In a retrospective study of five lung cancer patients, we compared the dosimetric performance of our proposed personalized gating (PG) with that of conventional end-of-exhale gating (CEG) and a previously-developed, fully 4D-optimized plan (combined with MLC tracking delivery). For each patient, respiratory phase probabilities (indicative of the time duration of the phase) were estimated over 2 minutes from lung tumor motion traces recorded previously using the Synchrony system (Accuray Inc.). Based on this information, inverse planning optimization was performed to calculate the optimal respiratory gating phase and MU for each beam. To ensure practical deliverability, each PG beam was constrained to deliver the assigned MU over a time duration comparable to that of CEG delivery. Results: Maximum OAR sparing for the five patients achieved by the PG and the 4D plans compared to CEG plans was: Esophagus Dmax [PG:57%, 4D:37%], Heart Dmax [PG:71%, 4D:87%], Spinal cord Dmax [PG:18%, 4D:68%] and Lung V13 [PG:16%, 4D:31%]. While patients spent the most time in exhalation, the PG-optimization chose end-exhale only for 28% of beams. Conclusion: Our novel gating strategy achieved significant dosimetric improvements over conventional gating, and approached the upper limit represented by fully 4D optimized planning while being significantly simpler and more clinically translatable. This work was partially supported through research funding from National Institutes of Health (R01CA169102) and Varian Medical Systems, Palo Alto, CA, USA.
McKee, Paul W.; Clark, Brian R.; Czarnecki, John B.
2004-01-01
Conjunctive-use optimization modeling was done to assist water managers and planners by estimating the maximum amount of ground water that hypothetically could be withdrawn from wells within the Sparta aquifer indefinitely without violating hydraulic-head or stream-discharge constraints. The Sparta aquifer is largely a confined aquifer of regional importance that comprises a sequence of unconsolidated sand units that are contained within the Sparta Sand. In 2000, more than 35.4 million cubic feet per day (Mft3/d) of water were withdrawn from the aquifer by more than 900 wells, primarily for industry, municipal supply, and crop irrigation in Arkansas. Continued, heavy withdrawals from the aquifer have caused several large cones of depression, lowering hydraulic heads below the top of the Sparta Sand in parts of Union and Columbia Counties and several areas in north-central Louisiana. Problems related to overdraft in the Sparta aquifer can result in increased drilling and pumping costs, reduced well yields, and degraded water quality in areas of large drawdown. A finite-difference ground-water flow model was developed for the Sparta aquifer using MODFLOW, primarily in eastern and southeastern Arkansas and north-central Louisiana. Observed aquifer conditions in 1997 supported by numerical simulations of ground-water flow show that continued pumping at withdrawal rates representative of 1990 - 1997 rates cannot be sustained indefinitely without causing hydraulic heads to drop substantially below the top of the Sparta Sand in southern Arkansas and north-central Louisiana. Areas of ground-water levels below the top of the Sparta Sand have been designated as Critical Ground-Water Areas by the State of Arkansas. A steady-state conjunctive-use optimization model was developed to simulate optimized surface-water and ground-water withdrawals while maintaining hydraulic-head and streamflow constraints, thus determining the 'sustainable yield' for the aquifer. Initial attempts to estimate sustainable yield using simulated 1997 hydraulic heads as initial heads in Scenario 1 and 100 percent of the baseline 1990-1997 withdrawal rate as the lower specified limit in Scenario 2 led to infeasible results. Sustainable yield was estimated successfully for scenario 3 with three variations on the upper limit of withdrawal rates. Additionally, ground-water withdrawals in Union County were fixed at 35.6 percent of the baseline 1990-1997 withdrawal rate in Scenario 3. These fixed withdrawals are recognized by the Arkansas Soil and Water Conservation Commission to be sustainable as determined in a previous study. The optimized solutions maintained hydraulic heads at or above the top of the Sparta Sand (except in the outcrop areas where unconfined conditions occur) and streamflow within the outcrop areas was maintained at or above minimum levels. Scenario 3 used limits of 100, 150, and 200 percent of baseline 1990-1997 withdrawal rates for the upper specified limit on 1,119 withdrawal decision variables (managed wells) resulting in estimated sustainable yields ranging from 11.6 to 13.2 Mft3/d in Arkansas and 0.3 to 0.5 Mft3/d in Louisiana. 
Assuming the total water demand is equal to the baseline 1990-1997 withdrawal rates, the sustainable yields estimated from the three scenarios provide only 52 to 59 percent of the total ground-water demand for Arkansas; the remainder is defined as unmet demand that could be obtained from large, sustainable surface-water withdrawals.
State estimation of spatio-temporal phenomena
NASA Astrophysics Data System (ADS)
Yu, Dan
This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using Kalman Filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem to place multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model, while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is illustrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown to outperform the augmented two-stage Kalman Filter (ASKF) and the unbiased minimum-variance (UMV) algorithm in several examples. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and the optimization problem is non-convex in general. In this dissertation, first, we construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations into a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications have been provided to demonstrate the performance of the proposed approach.
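A rough sketch of randomized, projection-based model reduction in the spirit described above is given below: random inputs excite the full model, the resulting snapshots are orthonormalized into a basis, and a Galerkin projection yields the reduced-order model. The system matrices and dimensions are arbitrary placeholders, and the sketch is not the RPOD/RPOD* algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 500, 10                                   # full and reduced state dimensions

# Hypothetical stable discrete-time system x_{k+1} = A x_k + B u_k, y_k = C x_k.
A = 0.95 * np.diag(rng.uniform(0.5, 1.0, n))
B = rng.standard_normal((n, 3))
C = rng.standard_normal((2, n))

# Randomized range finding: excite the system with random inputs, collect
# snapshots, and orthonormalize them to obtain a projection basis.
snapshots = []
x = np.zeros(n)
for k in range(200):
    x = A @ x + B @ rng.standard_normal(3)
    snapshots.append(x)
Q, _ = np.linalg.qr(np.array(snapshots).T)       # orthonormal snapshot basis
Phi = Q[:, :r]

# Galerkin projection gives the reduced-order model matrices.
Ar, Br, Cr = Phi.T @ A @ Phi, Phi.T @ B, C @ Phi
print("ROM dimensions:", Ar.shape, Br.shape, Cr.shape)
```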
A Markov decision process for managing habitat for Florida scrub-jays
Johnson, Fred A.; Breininger, David R.; Duncan, Brean W.; Nichols, James D.; Runge, Michael C.; Williams, B. Ken
2011-01-01
Florida scrub-jays Aphelocoma coerulescens are listed as threatened under the Endangered Species Act due to loss and degradation of scrub habitat. This study concerned the development of an optimal strategy for the restoration and management of scrub habitat at Merritt Island National Wildlife Refuge, which contains one of the few remaining large populations of scrub-jays in Florida. There are documented differences in the reproductive and survival rates of scrub-jays among discrete classes of scrub height (<120 cm or "short"; 120-170 cm or "optimal"; >170 cm or "tall"; and a combination of tall and optimal or "mixed"), and our objective was to calculate a state-dependent management strategy that would maximize the long-term growth rate of the resident scrub-jay population. We used aerial imagery with multistate Markov models to estimate annual transition probabilities among the four scrub-height classes under three possible management actions: scrub restoration (mechanical cutting followed by burning), a prescribed burn, or no intervention. A strategy prescribing the optimal management action for management units exhibiting different proportions of scrub-height classes was derived using dynamic programming. Scrub restoration was the optimal management action only in units dominated by mixed and tall scrub, and burning tended to be the optimal action for intermediate levels of short scrub. The optimal action was to do nothing when the amount of short scrub was greater than 30%, because short scrub mostly transitions to optimal height scrub (i.e., that state with the highest demographic success of scrub-jays) in the absence of intervention. Monte Carlo simulation of the optimal policy suggested that some form of management would be required every year. We note, however, that estimates of scrub-height transition probabilities were subject to several sources of uncertainty, and so we explored the management implications of alternative sets of transition probabilities. Generally, our analysis demonstrated the difficulty of managing for a species that requires midsuccessional habitat, and suggests that innovative management tools may be needed to help ensure the persistence of scrub-jays at Merritt Island National Wildlife Refuge. The development of a tailored monitoring program as a component of adaptive management could help reduce uncertainty about controlled and uncontrolled variation in transition probabilities of scrub-height and thus lead to improved decision making.
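The dynamic-programming machinery behind such a state-dependent strategy can be sketched with standard value iteration over discrete habitat states and management actions, as below. The transition matrices and reward values are invented placeholders, not the Merritt Island estimates.

```python
import numpy as np

states = ["short", "optimal", "tall", "mixed"]
actions = ["restore", "burn", "nothing"]

# Hypothetical annual transition matrices P[a][s, s'] and per-state rewards
# (scrub-jay growth-rate proxies); real values come from multistate Markov models.
P = {
    "restore": np.array([[0.7, 0.3, 0.0, 0.0],
                         [0.1, 0.8, 0.1, 0.0],
                         [0.6, 0.3, 0.1, 0.0],
                         [0.5, 0.4, 0.1, 0.0]]),
    "burn":    np.array([[0.6, 0.4, 0.0, 0.0],
                         [0.2, 0.7, 0.1, 0.0],
                         [0.3, 0.4, 0.2, 0.1],
                         [0.3, 0.4, 0.1, 0.2]]),
    "nothing": np.array([[0.3, 0.6, 0.1, 0.0],
                         [0.0, 0.6, 0.3, 0.1],
                         [0.0, 0.1, 0.8, 0.1],
                         [0.0, 0.2, 0.3, 0.5]]),
}
r = np.array([0.2, 1.0, 0.1, 0.4])
gamma = 0.95

# Value iteration, then extract the state-dependent optimal action.
V = np.zeros(len(states))
for _ in range(500):
    V = np.max([r + gamma * P[a] @ V for a in actions], axis=0)
policy = [actions[int(np.argmax([(r + gamma * P[a] @ V)[s] for a in actions]))]
          for s in range(len(states))]
print(dict(zip(states, policy)))
```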
The NEWS Water Cycle Climatology
NASA Astrophysics Data System (ADS)
Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T.; Olson, W. S.
2012-12-01
NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.
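The closure-based optimization described above can be illustrated with a minimum-variance adjustment that nudges first-guess fluxes, weighted by their error variances, until the water budget balances exactly. The flux values, variances, and single-constraint form below are illustrative simplifications of the multi-scale NEWS optimization.

```python
import numpy as np

# A monthly continental water budget should close: P - ET - R - dS = 0.
# x = [P, ET, R, dS] first-guess fluxes (mm/month) and their error variances.
x = np.array([80.0, 55.0, 20.0, 9.0])
var = np.array([8.0, 10.0, 3.0, 4.0]) ** 2
A = np.array([[1.0, -1.0, -1.0, -1.0]])        # closure constraint A @ x = 0

# Minimum-variance adjustment: minimize (x' - x)^T C^-1 (x' - x) subject to A x' = 0.
C = np.diag(var)
gain = C @ A.T @ np.linalg.inv(A @ C @ A.T)
x_closed = x - gain @ (A @ x)
print("adjusted fluxes:", x_closed, "residual:", float(A @ x_closed))
```

Fluxes with larger assumed errors absorb more of the imbalance, which is the sense in which the input error assessments shape the optimized estimates.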
Consensus-based distributed cooperative learning from closed-loop neural control systems.
Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang
2015-02-01
In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme, called the distributed cooperative learning (DCL) control scheme, by establishing a communication topology among the adaptive laws of NN weights to share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of NNs can converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary it is shown that the conclusion on deterministic learning still holds in the decentralized adaptive neural control scheme where, however, the estimated weights of NNs just converge to small neighborhoods of the optimal values along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the control schemes proposed in this paper.
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Using Combustion Tracers to Estimate Surface Black Carbon Distributions in WRF-Chem
NASA Astrophysics Data System (ADS)
Raman, A.; Arellano, A. F.
2015-12-01
Black Carbon (BC) emissions significantly affect the global and regional climate, air quality, and human health. However, BC observations are currently limited in space and time; leading to considerable uncertainties in the estimates of BC distribution from regional and global models. Here, we investigate the usefulness of carbon monoxide (CO) in quantifying BC across continental United States (CONUS). We use high resolution EPA AQS observations of CO and IMPROVE BC to estimate BC/CO ratios. We model the BC and CO distribution using the community Weather Research and Forecasting model with Chemistry (WRF-Chem). We configured WRF-Chem using MOZART chemistry, NEI 2005, MEGAN, and FINNv1.5 for anthropogenic, biogenic and fire emissions, respectively. In this work, we address the following three key questions: 1) What are the discrepancies in the estimates of BC and CO distributions across CONUS during summer and winter periods?, 2) How do BC/CO ratios change for different spatial and temporal regimes?, 3) Can we get better estimates of BC from WRF-Chem if we use BC/CO ratios along with optimizing CO concentrations? We compare ratios derived from the model and observations and develop characteristic ratios for several geographical and temporal regimes. We use an independent set of measurements of BC and CO to evaluate these ratios. Finally, we use a Bayesian synthesis inversion to optimize CO from WRF-Chem using regionally tagged CO tracers. We multiply the characteristic ratios we derived with the optimized CO to obtain BC distributions. Our initial results suggest that the maximum ratios of BC versus CO occur in the western US during the summer (average: 4 ng/m3/ppbv) and in the southeast during the winter (average: 5 ng/m3/ppbv). However, we find that these relationships vary in space and time and are highly dependent on fuel usage and meteorology. We find that optimizing CO using EPA-AQS provides improvements in BC but only over areas where BC/CO ratios are close to observed values.
Homodyning and heterodyning the quantum phase
NASA Technical Reports Server (NTRS)
Dariano, Giacomo M.; Macchiavello, C.; Paris, M. G. A.
1994-01-01
The double-homodyne and the heterodyne detection schemes for phase shifts between two synchronous modes of the electromagnetic field are analyzed in the framework of quantum estimation theory. The probability operator-valued measures (POM's) of the detectors are evaluated and compared with the ideal one in the limit of a strong local reference oscillator. The present operational approach leads to a reasonable definition of phase measurement, whose sensitivity is actually related to the output r.m.s. noise of the photodetector. We emphasize that the simple-homodyne scheme does not correspond to a proper phase-shift measurement, as it is just a zero-point detector. The sensitivities of all detection schemes are optimized at fixed energy with respect to the input state of radiation. It is shown that the optimal sensitivity can be actually achieved using suitably squeezed states.
Single-particle trajectories reveal two-state diffusion-kinetics of hOGG1 proteins on DNA.
Vestergaard, Christian L; Blainey, Paul C; Flyvbjerg, Henrik
2018-03-16
We reanalyze trajectories of hOGG1 repair proteins diffusing on DNA. A previous analysis of these trajectories with the popular mean-squared-displacement approach revealed only simple diffusion. Here, a new optimal estimator of diffusion coefficients reveals two-state kinetics of the protein. A simple, solvable model, in which the protein randomly switches between a loosely bound, highly mobile state and a tightly bound, less mobile state is the simplest possible dynamic model consistent with the data. It yields accurate estimates of hOGG1's (i) diffusivity in each state, uncorrupted by experimental errors arising from shot noise, motion blur and thermal fluctuations of the DNA; (ii) rates of switching between states and (iii) rate of detachment from the DNA. The protein spends roughly equal time in each state. It detaches only from the loosely bound state, with a rate that depends on pH and the salt concentration in solution, while its rates for switching between states are insensitive to both. The diffusivity in the loosely bound state depends primarily on pH and is three to ten times higher than in the tightly bound state. We propose and discuss some new experiments that take full advantage of the new tools of analysis presented here.
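The optimal estimator referred to above is, to our understanding, in the family of covariance-based estimators that use the lag-1 covariance of displacements to correct for localization noise; the sketch below shows the simplest single-state version on synthetic data (pure diffusion, no motion blur) and is not the full two-state analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
D_true, dt, sigma_loc, n = 0.05, 0.01, 0.02, 20000   # um^2/s, s, um, steps

# Synthetic 1D trajectory: pure diffusion plus Gaussian localization error.
x_true = np.cumsum(np.sqrt(2 * D_true * dt) * rng.standard_normal(n))
x_obs = x_true + sigma_loc * rng.standard_normal(n)

dx = np.diff(x_obs)
# Covariance-based estimate: the lag-1 covariance of displacements is negative
# for noisy tracks and exactly cancels the localization-noise bias of the
# naive mean-squared-displacement estimate.
D_hat = np.mean(dx ** 2) / (2 * dt) + np.mean(dx[:-1] * dx[1:]) / dt
print("true D:", D_true, "estimated D:", D_hat)
```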
Decay estimates of solutions to the bipolar non-isentropic compressible Euler-Maxwell system
NASA Astrophysics Data System (ADS)
Tan, Zhong; Wang, Yong; Tong, Leilei
2017-10-01
We consider the global existence and large time behavior of solutions near a constant equilibrium state to the bipolar non-isentropic compressible Euler-Maxwell system in R^3, where the background magnetic field could be non-zero. The global existence is established under the assumption that the H^3 norm of the initial data is small, but its higher order derivatives could be large. Combining the negative Sobolev (or Besov) estimates with the interpolation estimates, we prove the optimal time decay rates of the solution and its higher order spatial derivatives. In this sense, our results improve the similar ones in Wang et al (2012 SIAM J. Math. Anal. 44 3429-57).
NASA Technical Reports Server (NTRS)
Alag, Gurbux S.; Gilyard, Glenn B.
1990-01-01
To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimate unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm on actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
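The bias-accommodating estimation described above can be sketched with a Kalman filter on a state vector augmented with a slowly varying bias state; the scalar plant, noise levels, and bias model below are generic placeholders rather than the engine model.

```python
import numpy as np

rng = np.random.default_rng(8)
n_steps = 300

# Hypothetical nominal model x_{k+1} = a*x_k + u_k, y_k = x_k + b, where b is an
# unknown measurement bias representing off-nominal behavior.
a, true_bias = 0.98, 0.7
x = 0.0

# Augmented state z = [x, b]; the bias is modeled as a slow random walk.
F = np.array([[a, 0.0], [0.0, 1.0]])
H = np.array([[1.0, 1.0]])
Q = np.diag([1e-3, 1e-5])
R = np.array([[0.05]])
z_hat, P = np.zeros(2), np.eye(2)

for k in range(n_steps):
    u = np.sin(0.05 * k)                          # known input
    x = a * x + u + 0.03 * rng.standard_normal()  # true plant
    y = x + true_bias + 0.2 * rng.standard_normal()
    # Kalman predict/update on the augmented model
    z_hat = F @ z_hat + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z_hat = z_hat + K @ (np.array([y]) - H @ z_hat)
    P = (np.eye(2) - K @ H) @ P

print("final bias estimate:", z_hat[1], "(true:", true_bias, ")")
```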
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2018-01-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available. PMID:29707470
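An optimal estimation retrieval of this kind iterates a Gauss-Newton update that balances the fit to the observations against the a priori state. The sketch below shows that update for a two-parameter toy problem; the forward model, Jacobian, covariances, and channel values are invented stand-ins for the fast RT model and the MODIS IR bands.

```python
import numpy as np

# Toy forward model y = F(x): two "brightness temperatures" depending on
# log optical thickness and effective radius (placeholders for the real RT model).
def forward(x):
    tau, reff = np.exp(x[0]), x[1]
    return np.array([260.0 - 15.0 * tau / (1 + tau) + 0.1 * reff,
                     255.0 - 20.0 * tau / (2 + tau) - 0.05 * reff])

def jacobian(x, eps=1e-5):
    K = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = eps
        K[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
    return K

x_a = np.array([0.0, 20.0])                       # prior mean (log tau, reff in um)
S_a = np.diag([1.0, 25.0])                        # prior covariance
S_y = np.diag([0.25, 0.25])                       # measurement-error covariance
y_obs = forward(np.array([0.7, 28.0])) + np.array([0.1, -0.2])

x = x_a.copy()
for _ in range(10):                               # Gauss-Newton OE iteration
    K = jacobian(x)
    A = K.T @ np.linalg.inv(S_y) @ K + np.linalg.inv(S_a)
    b = K.T @ np.linalg.inv(S_y) @ (y_obs - forward(x) + K @ (x - x_a))
    x = x_a + np.linalg.solve(A, b)
print("retrieved state (log tau, reff):", x)
```

In the same framework, the error sources listed in the abstract enter through the measurement and prior covariances, and the information content of each band combination can be judged from the posterior covariance A⁻¹.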
NASA Technical Reports Server (NTRS)
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2016-01-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
NASA Astrophysics Data System (ADS)
Aerts, Sven
2014-03-01
One of the problems facing any attempt to understand quantum theory is that the theory does not seem to offer an explanation of the way the probabilities arise. Moreover, it is a commonly held view that no such explanation is compatible with the mathematical structure of quantum theory, i.e. that the theory is inherently indeterministic, simply because nature is like that. We propose an abstract formalisation of the observation of a system in which the interaction between the system and the observer deterministically produces one of n possible outcomes. If the observer consistently manages to realize the outcome which maximizes the likelihood ratio that the outcome was inferred from the state of the system under study (and not from his own state), he will be called optimal. The probability for a repeated measurement on an ensemble of identical system states is then derived as a measure over observer states. If the state of the system is a statistical mixture, the optimal observer produces an unbiased estimate of the components of the mixture. In case the state space is a complex Hilbert space, the resulting probability is equal to the one given by the Born rule. The proposal offers a concise interpretation for the meaning of the occurrence of a specific outcome as the unique outcome that, relative to the state of the system, is least dependent on the state of the observer. We note that a similar paradigm is used in the literature on perception to explain optical illusions in human visual perception. We argue that the result strengthens Helmholtz's view that all observation is in fact a form of inference.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
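The trellis search referred to above is the standard Viterbi recursion; the sketch below runs it over a sequence of missing-pixel positions with a per-position cost for each candidate interpolation function and a penalty for switching functions. The costs and penalty are random placeholders, not the paper's probabilistic model.

```python
import numpy as np

rng = np.random.default_rng(11)
n_pixels, n_funcs = 12, 3                 # missing-pixel positions, candidate interpolators

# Hypothetical negative log-likelihood of each interpolation function at each
# pixel, plus a transition penalty discouraging abrupt switches between functions.
cost = rng.uniform(0.0, 1.0, size=(n_pixels, n_funcs))
switch_penalty = 0.3 * (1 - np.eye(n_funcs))

# Viterbi: dynamic programming over the trellis of (pixel, function) nodes.
delta = cost[0].copy()
backptr = np.zeros((n_pixels, n_funcs), dtype=int)
for t in range(1, n_pixels):
    trans = delta[:, None] + switch_penalty        # cost of arriving from each state
    backptr[t] = np.argmin(trans, axis=0)
    delta = trans[backptr[t], np.arange(n_funcs)] + cost[t]

path = [int(np.argmin(delta))]
for t in range(n_pixels - 1, 0, -1):               # backtrack the optimal sequence
    path.append(int(backptr[t, path[-1]]))
path.reverse()
print("optimal interpolation-function sequence:", path)
```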
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.
Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris
2018-03-01
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike train and using this to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy, yet significantly low energy footprint, leading to an extended battery-life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices.
Study of dynamics of X-14B VTOL aircraft
NASA Technical Reports Server (NTRS)
Loscutoff, W. V.; Mitchiner, J. L.; Roesener, R. A.; Seevers, J. A.
1973-01-01
Research was initiated to investigate certain facets of modern control theory and their integration with a digital computer to provide a tractable flight control system for a VTOL aircraft. Since the hover mode is the most demanding phase in the operation of a VTOL aircraft, the research efforts were concentrated in this mode of aircraft operation. Research work on three different aspects of the operation of the X-14B VTOL aircraft is discussed. A general theory for optimal, prespecified, closed-loop control is developed. The ultimate goal was optimal decoupling of the modes of the VTOL aircraft to simplify the pilot's task of handling the aircraft. Modern control theory is used to design deterministic state estimators which provide state variables not measured directly, but which are needed for state variable feedback control. The effect of atmospheric turbulence on the X-14B is investigated. A maximum magnitude gust envelope within which the aircraft could operate stably with the available control power is determined.
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
Charge state distribution of 86Kr in hydrogen and helium gas charge strippers at 2.7 MeV/nucleon
NASA Astrophysics Data System (ADS)
Kuboki, H.; Okuno, H.; Hasebe, H.; Fukunishi, N.; Ikezawa, E.; Imao, H.; Kamigaito, O.; Kase, M.
2014-12-01
The charge state distributions of krypton (86Kr) with an energy of 2.7 MeV/nucleon were measured using hydrogen (H2 ) and helium (He) gas charge strippers. A differential pumping system was constructed to confine H2 and He gases to a thickness sufficient for the charge state distributions to attain equilibrium. The mean charge states of 86Kr in H2 and He gases attained equilibrium at 25.1 and 23.2, respectively, whereas the mean charge state in N2 gas at equilibrium was estimated to be less than 20. The charge distributions are successfully reproduced by the cross sections of ionization and electron capture processes optimized by a fitting procedure.
Extended analysis of the Trojan-horse attack in quantum key distribution
NASA Astrophysics Data System (ADS)
Vinay, Scott E.; Kok, Pieter
2018-04-01
The discrete-variable quantum key distribution protocols based on the 1984 protocol of Bennett and Brassard (BB84) are known to be secure against an eavesdropper, Eve, intercepting the flying qubits and performing any quantum operation on them. However, these protocols may still be vulnerable to side-channel attacks. We investigate the Trojan-horse side-channel attack where Eve sends her own state into Alice's apparatus and measures the reflected state to estimate the key. We prove that the separable coherent state is optimal for Eve among the class of multimode Gaussian attack states, even in the presence of thermal noise. We then provide a bound on the secret key rate in the case where Eve may use any separable state.
Two-mode bosonic quantum metrology with number fluctuations
NASA Astrophysics Data System (ADS)
De Pasquale, Antonella; Facchi, Paolo; Florio, Giuseppe; Giovannetti, Vittorio; Matsuoka, Koji; Yuasa, Kazuya
2015-10-01
We search for the optimal quantum pure states of identical bosonic particles for applications in quantum metrology, in particular, in the estimation of a single parameter for the generic two-mode interferometric setup. We consider the general case in which the total number of particles is fluctuating around an average N with variance ΔN². By recasting the problem in the framework of classical probability, we clarify the maximal accuracy attainable and show that it is always larger than the one reachable with a fixed number of particles (i.e., ΔN = 0). In particular, for larger fluctuations, the error in the estimation diminishes proportionally to 1/ΔN, below the Heisenberg-like scaling 1/N. We also clarify the best input state, which is a quasi-NOON state for a generic setup and, for some special cases, a two-mode Schrödinger-cat state with a vacuum component. In addition, we search for the best state within the class of pure Gaussian states with a given average N, which is revealed to be a product state (with no entanglement) with a squeezed vacuum in one mode and the vacuum in the other.
Adaptive Management of Bull Trout Populations in the Lemhi Basin
Peterson, James T.; Tyre, Andrew J.; Converse, Sarah J.; Bogich, Tiffany L.; Miller, Damien; Post van der Burg, Max; Thomas, Carmen; Thompson, Ralph J.; Wood, Jeri; Brewer, Donna; Runge, Michael C.
2011-01-01
The bull trout Salvelinus confluentus, a stream-living salmonid distributed in drainages of the northwestern United States, is listed as threatened under the Endangered Species Act because of rangewide declines. One proposed recovery action is the reconnection of tributaries in the Lemhi Basin. Past water use policies in this core area disconnected headwater spawning sites from downstream habitat and have led to the loss of migratory life history forms. We developed an adaptive management framework to analyze which types of streams should be prioritized for reconnection under a proposed Habitat Conservation Plan. We developed a Stochastic Dynamic Program that identified optimal policies over time under four different assumptions about the nature of the migratory behavior and the effects of brook trout Salvelinus fontinalis on subpopulations of bull trout. In general, given the current state of the system and the uncertainties about the dynamics, the optimal policy would be to connect streams that are currently occupied by bull trout. We also estimated the value of information as the difference between absolute certainty about which of our four assumptions was correct and a model-averaged optimization assuming no knowledge. Overall, there is little to be gained by learning about the dynamics of the system in its current state, although in other parts of the state space reducing uncertainties about the system would be very valuable. We also conducted a sensitivity analysis; the optimal decision at the current state does not change even when parameter values are changed by up to 75% of their baseline values. Overall, the exercise demonstrates that it is possible to apply adaptive management principles to threatened and endangered species, but logistical and data availability constraints make detailed analyses difficult.
Dynamic Histogram Analysis To Determine Free Energies and Rates from Biased Simulations.
Stelzl, Lukas S; Kells, Adam; Rosta, Edina; Hummer, Gerhard
2017-12-12
We present an algorithm to calculate free energies and rates from molecular simulations on biased potential energy surfaces. As input, it uses the accumulated times spent in each state or bin of a histogram and counts of transitions between them. Optimal unbiased equilibrium free energies for each of the states/bins are then obtained by maximizing the likelihood of a master equation (i.e., first-order kinetic rate model). The resulting free energies also determine the optimal rate coefficients for transitions between the states or bins on the biased potentials. Unbiased rates can be estimated, e.g., by imposing a linear free energy condition in the likelihood maximization. The resulting "dynamic histogram analysis method extended to detailed balance" (DHAMed) builds on the DHAM method. It is also closely related to the transition-based reweighting analysis method (TRAM) and the discrete TRAM (dTRAM). However, in the continuous-time formulation of DHAMed, the detailed balance constraints are more easily accounted for, resulting in compact expressions amenable to efficient numerical treatment. DHAMed produces accurate free energies in cases where the common weighted-histogram analysis method (WHAM) for umbrella sampling fails because of slow dynamics within the windows. Even in the limit of completely uncorrelated data, where WHAM is optimal in the maximum-likelihood sense, DHAMed results are nearly indistinguishable. We illustrate DHAMed with applications to ion channel conduction, RNA duplex formation, α-helix folding, and rate calculations from accelerated molecular dynamics. DHAMed can also be used to construct Markov state models from biased or replica-exchange molecular dynamics simulations. By using binless WHAM formulated as a numerical minimization problem, the bias factors for the individual states can be determined efficiently in a preprocessing step and, if needed, optimized globally afterward.
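The input described above (accumulated residence times per bin plus transition counts) has a simple maximum-likelihood solution in the unbiased, single-surface limiting case: the rate coefficient for each transition is the count divided by the residence time in the origin state, and free energies follow from the stationary distribution of the resulting rate matrix. The Python sketch below shows only that limiting case, with made-up counts; it is not DHAMed itself, which additionally combines many biased simulations under detailed-balance constraints.

```python
import numpy as np

# Unbiased, single-surface limiting case: ML rate coefficients of a
# continuous-time master equation from transition counts and residence times.

counts = np.array([[0., 12., 1.],
                   [10., 0., 7.],
                   [2.,  8., 0.]])          # counts[i, j]: observed jumps i -> j
residence = np.array([50.0, 30.0, 20.0])    # total time spent in each state

K = counts.T / residence                    # K[j, i] = k_{i->j} = N_{i->j} / T_i
np.fill_diagonal(K, 0.0)
np.fill_diagonal(K, -K.sum(axis=0))         # columns sum to zero (rate matrix)

# Stationary distribution = eigenvector of K with eigenvalue ~0;
# free energies (up to an additive constant) follow from it.
w, v = np.linalg.eig(K)
p = np.real(v[:, np.argmin(np.abs(w))])
p = np.abs(p) / np.abs(p).sum()
free_energy = -np.log(p)                    # in units of kT
print("stationary populations:", np.round(p, 3))
print("free energies / kT:", np.round(free_energy - free_energy.min(), 3))
```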
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
Reentry trajectory optimization based on a multistage pseudospectral method.
Zhao, Jiang; Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.
Fuzzy control of battery chargers
NASA Astrophysics Data System (ADS)
Aldridge, Jack
1996-03-01
The increasing reliance on battery power for portable terrestrial purposes, such as portable tools, portable computers, and telecommunications, provides motivation to optimize the battery charging process with respect to speed of charging and charging cycle lifetime of the battery. Fuzzy control, implemented on a small microcomputer, optimizes charging in the presence of nonlinear effects and large uncertainty in the voltage vs. charge state characteristics for the battery. Use of a small microcontroller makes possible a small, capable, and affordable package for the charger. Microcontroller-based chargers provide improved performance by adjusting both charging voltage and charging current during the entire charging process depending on a current estimate of the state of charge of the battery. The estimate is derived from the zero-current voltage of the battery and the temperature and their rates of change. All of these quantities are uncertain due to the variation in condition between the individual cells in a battery, the rapid and nonlinear dependence of the fundamental electrochemistry on the internal temperature, and the placement of a single temperature sensor within the battery package. While monitoring the individual cell voltages and temperatures would be desirable, cost and complexity considerations preclude the practice. NASA has developed considerable technology in batteries for supplying significant amounts of power for spacecraft and in fuzzy control techniques for the space applications. In this paper, we describe how we are using both technologies to build an optimal charger prototype as a precursor to a commercial version.
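As a rough illustration of the rule-based control idea described above, the following Python sketch evaluates a tiny Mamdani-style fuzzy rule base that maps an estimated state of charge and a temperature-rise rate to a charging-current command. The membership breakpoints, rules, and current levels are invented for illustration and are not the charger developed in the paper.

```python
# Toy fuzzy rule evaluation for a charging-current command.

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def charge_current(soc_estimate, dT_dt):
    # Fuzzify the inputs (state of charge in [0, 1], heating in deg C / min).
    soc_low  = tri(soc_estimate, -0.2, 0.0, 0.5)
    soc_high = tri(soc_estimate,  0.5, 1.0, 1.2)
    heating  = tri(dT_dt, 0.2, 1.0, 3.0)

    # Rule base: low SoC -> high current; high SoC or rapid heating -> low current.
    rules = [(soc_low, 2.0),     # consequent currents in amps (illustrative)
             (soc_high, 0.2),
             (heating, 0.1)]

    # Weighted-average (centroid-like) defuzzification.
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den

print(charge_current(soc_estimate=0.2, dT_dt=0.1))   # near full current
print(charge_current(soc_estimate=0.9, dT_dt=1.5))   # tapered current
```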
Control of linear uncertain systems utilizing mismatched state observers
NASA Technical Reports Server (NTRS)
Goldstein, B.
1972-01-01
The control of linear continuous dynamical systems is investigated as a problem of limited state feedback control. The equations which describe the structure of an observer are developed, constrained to time-invariant systems. The optimal control problem is formulated, accounting for the uncertainty in the design parameters. Expressions for bounds on closed-loop stability are also developed. The results indicate that very little uncertainty may be tolerated before divergence occurs in the recursive computation algorithms, and the derived stability bound yields extremely conservative estimates of regions of allowable parameter variations.
Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.
Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A
2018-02-01
We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
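A minimal version of the OED/PE idea is to pick experimental conditions that maximize a scalar measure of the Fisher information (e.g., its determinant, D-optimality) built from the model's parameter sensitivities. The Python sketch below selects two measurement temperatures for a two-parameter square-root-type growth model used here as a stand-in; the model, nominal parameter values, and candidate grid are assumptions for illustration, not the four-parameter CTMI design of the study.

```python
import numpy as np
from itertools import combinations

# D-optimal choice of two measurement temperatures for a two-parameter
# Ratkowsky-type model sqrt(mu) = b * (T - Tmin), as a stand-in for the CTMI.

b_nom, Tmin_nom = 0.03, 5.0                        # nominal parameter estimates

def sensitivities(T, b, Tmin):
    """Row of the Jacobian of sqrt(mu) w.r.t. (b, Tmin)."""
    return np.array([T - Tmin, -b])

candidates = np.linspace(10.0, 40.0, 31)           # feasible temperatures
best_det, best_pair = -np.inf, None
for pair in combinations(candidates, 2):
    J = np.array([sensitivities(T, b_nom, Tmin_nom) for T in pair])
    det = np.linalg.det(J.T @ J)                   # D-optimality criterion
    if det > best_det:
        best_det, best_pair = det, pair

print("D-optimal temperatures:", best_pair)
```

With a deliberately simple two-parameter model the criterion just spreads the measurements apart; richer models (and the sequential or global strategies discussed above) require re-evaluating the sensitivities as the parameter estimates are updated.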
Dynamic filtering improves attentional state prediction with fNIRS
Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.
2016-01-01
Brain activity can predict a person’s level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise – thereby increasing such state prediction accuracy – remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%). PMID:27231602
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
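One simple, fully data-driven way to obtain a transition density p(x_t | x_{t-1}) from a training period is to fit kernel density estimates to the joint and marginal samples and take their ratio. The Python sketch below does exactly that with scipy's Gaussian KDE on synthetic data; the "true" dynamics, the biased forecast, and the error distribution are invented, and this is only a stand-in for the conditional-error framework described in the abstract.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Nonparametric estimate of a transition density p(x_t | x_{t-1}) from a
# training period of paired samples.

rng = np.random.default_rng(0)
n = 2000
x_prev = rng.uniform(0.0, 10.0, n)
# Training pairs: next state differs from a simple forecast by skewed errors.
x_next = 0.9 * x_prev + 1.0 + rng.gamma(shape=2.0, scale=0.3, size=n)

joint = gaussian_kde(np.vstack([x_prev, x_next]))   # p(x_{t-1}, x_t)
marginal = gaussian_kde(x_prev)                     # p(x_{t-1})

def transition_density(x_t, x_tm1):
    """p(x_t | x_{t-1}) = p(x_{t-1}, x_t) / p(x_{t-1})."""
    return joint([x_tm1, x_t]) / marginal([x_tm1])

grid = np.linspace(0.0, 12.0, 5)
print([float(transition_density(x, 5.0)) for x in grid])
```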
A Self-Tuning Kalman Filter for Autonomous Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Truong, Son H.
1998-01-01
Most navigation systems currently operated by NASA are ground-based, and require extensive support to produce accurate results. Recently developed systems that use Kalman Filter and Global Positioning System (GPS) data for orbit determination greatly reduce dependency on ground support, and have potential to provide significant economies for NASA spacecraft navigation. Current techniques of Kalman filtering, however, still rely on manual tuning from analysts, and cannot help in optimizing autonomy without compromising accuracy and performance. This paper presents an approach to produce a high accuracy autonomous navigation system fully integrated with the flight system. The resulting system performs real-time state estimation by using an Extended Kalman Filter (EKF) implemented with high-fidelity state dynamics model, as does the GPS Enhanced Orbit Determination Experiment (GEODE) system developed by the NASA Goddard Space Flight Center. Augmented to the EKF is a sophisticated neural-fuzzy system, which combines the explicit knowledge representation of fuzzy logic with the learning power of neural networks. The fuzzy-neural system performs most of the self-tuning capability and helps the navigation system recover from estimation errors. The core requirement is a method of state estimation that handles uncertainties robustly, capable of identifying estimation problems, flexible enough to make decisions and adjustments to recover from these problems, and compact enough to run on flight hardware. The resulting system can be extended to support geosynchronous spacecraft and high-eccentricity orbits. Mathematical methodology, systems and operations concepts, and implementation of a system prototype are presented in this paper. Results from the use of the prototype to evaluate optimal control algorithms implemented are discussed. Test data and major control issues (e.g., how to define specific roles for fuzzy logic to support the self-learning capability) are also discussed. In addition, architecture of a complete end-to-end candidate flight system that provides navigation with highly autonomous control using data from GPS is presented.
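The core predict/update cycle that such a system tunes is the extended Kalman filter itself. The following one-state Python sketch shows that cycle with a nonlinear measurement; the dynamics, noise variances, and measurement function are placeholders chosen for brevity, not the GEODE orbit-determination models or the neural-fuzzy tuning layer described above.

```python
import numpy as np

# Minimal extended Kalman filter (one state, nonlinear measurement).
rng = np.random.default_rng(1)

def f(x):  return 0.95 * x + 1.0                      # state propagation
def h(x):  return np.sqrt(np.abs(x))                  # nonlinear measurement
def H(x):  return 0.5 / np.sqrt(np.abs(x) + 1e-9)     # measurement Jacobian

Q, R = 0.05, 0.1                    # process / measurement noise variances
x_true, x_est, P = 10.0, 5.0, 4.0   # truth, initial estimate, initial variance

for _ in range(30):
    # Simulate truth and a noisy measurement.
    x_true = f(x_true) + rng.normal(0.0, np.sqrt(Q))
    z = h(x_true) + rng.normal(0.0, np.sqrt(R))

    # Predict.
    x_pred = f(x_est)
    P_pred = 0.95**2 * P + Q
    # Update.
    Hk = H(x_pred)
    K = P_pred * Hk / (Hk * P_pred * Hk + R)
    x_est = x_pred + K * (z - h(x_pred))
    P = (1.0 - K * Hk) * P_pred

print(f"truth={x_true:.2f}  estimate={x_est:.2f}  variance={P:.3f}")
```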
Reconstructing Mammalian Sleep Dynamics with Data Assimilation
Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.
2012-01-01
Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396
NASA Astrophysics Data System (ADS)
Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine
2015-12-01
A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of the Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data such as, satellite-derived Sea Surface Temperature, along-track Sea Level Anomalies, ice concentrations and chlorophyll-a concentrations with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the north part of the domain, where the ecosystem model is the most reliable, most of them can be associated with Longhurst provinces and new provinces emerge in the Arctic Ocean. However, the clusters do not coincide anymore with the Longhurst provinces in the Tropics due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros
NASA Technical Reports Server (NTRS)
Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.
1973-01-01
Many future NASA programs require highly accurate pointing stability. These pointing requirements are well beyond anything attempted to date. This paper suggests a control system which has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computation technique using successive approximations is used to develop the required solution. The method of calculus of variations is applied to estimate the changes in the index of performance as well as the inequality constraints on state variables and terminal conditions. Thus, an algorithm is obtained by the steepest descent method and/or conjugate gradient method. Numerical examples are given to show the optimal controls.
Challenges in Building Disease-Based National Health Accounts
Rosen, Allison B.; Cutler, David M.
2012-01-01
Background: Measuring spending on diseases is critical to assessing the value of medical care. Objective: To review the current state of cost of illness (COI) estimation methods, identifying their strengths, limitations and uses. We briefly describe the current National Health Expenditure Accounts (NHEA), and then go on to discuss the addition of COI estimation to the NHEA. Conclusion: Recommendations are made for future research aimed at identifying the best methods for developing and using disease-based national health accounts to optimize the information available to policymakers as they struggle with difficult resource allocation decisions. PMID:19536017
Local neighborhood transition probability estimation and its use in contextual classification
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between the neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the a posteriori probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies, in terms of Theta. Techniques for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration are developed.
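One plausible reading of the single-parameter, uniform-context update is that a pixel's spectral-only posterior is re-weighted by how consistent each candidate class is with the labels of its neighbors, with Theta controlling the strength of the spatial dependence. The Python sketch below implements that reading; the class set, likelihood values, neighbor labels, and Theta are illustrative assumptions, not the expressions derived in the paper.

```python
import numpy as np

# Contextual re-weighting of a pixel's posterior by its neighbours' labels.
theta = 0.7                                   # spatial dependence parameter
classes = ["water", "crop", "forest"]
K = len(classes)

posterior = np.array([0.5, 0.3, 0.2])         # spectral-only posterior
neighbour_labels = ["crop", "crop", "forest", "crop"]

# Context factor: probability a neighbour carries its observed label given the
# centre pixel's class (theta if equal, (1 - theta) / (K - 1) otherwise).
context = np.ones(K)
for lbl in neighbour_labels:
    j = classes.index(lbl)
    for i in range(K):
        context[i] *= theta if i == j else (1.0 - theta) / (K - 1)

updated = posterior * context
updated /= updated.sum()
print(dict(zip(classes, np.round(updated, 3))))   # context pulls mass toward "crop"
```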
Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results
NASA Astrophysics Data System (ADS)
Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef
2017-04-01
The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and in particular GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself, but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results, with a focus on the geodetic part.
Understanding the recent changes in the Southern Ocean carbon cycle: A multidisciplinary approach
NASA Astrophysics Data System (ADS)
Manizza, M.; Kahru, M.; Menemenlis, D.; Nevison, C. D.; Mitchell, B. G.; Keeling, R. F.
2016-12-01
The Southern Ocean represents a key area of the global ocean for the uptake of the CO2 originating from fossil fuel emissions. In these waters, cold temperatures combined with high rates of biological production drive the carbon uptake that accounts for about one-third of the global ocean uptake. Recent studies showed that changes in the Southern Annular Mode (SAM) index, mainly a proxy of the intensity of westerly winds, had a significant impact on the temporal variability of the CO2 uptake in the Southern Ocean. In order to shed light on this problem, we propose to use both satellite-derived estimates of ocean productivity and carbon export in combination with ocean physical and biogeochemical state estimates, focusing on the 2006-2013 period. While the estimates of carbon fixation and export based on remote sensing will provide key information on the spatial and temporal variations of the biological carbon pump, the ocean state estimates will provide additional information on physical and carbon cycle processes, including the air-sea CO2 fluxes of the Southern Ocean in the 2006-2013 period, where model solutions have been optimized. These physical estimates will be used to force an ocean biogeochemical model (ECCO2-Darwin) that will compute the CO2 uptake for each year. The physical model, forced with optimized atmospheric forcing, aims to realistically simulate interannual ocean climate variability that drives changes in both physical and biogeochemical processes ultimately impacting the carbon uptake of the Southern Ocean, and potentially responding to the SAM index variations. Although in this study great emphasis is given to the role of physical climate variations in driving the CO2 uptake of these polar waters, we will integrate model results with estimates from remote sensing techniques to better understand the role of the biological carbon pump and its variability potentially responding to the SAM index changes.
NASA Astrophysics Data System (ADS)
Natraj, V.; Thompson, D. R.; Mathur, A. K.; Babu, K. N.; Kindel, B. C.; Massie, S. T.; Green, R. O.; Bhattacharya, B. K.
2017-12-01
Remote Visible / ShortWave InfraRed (VSWIR) spectroscopy, typified by the Next-Generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG), is a powerful tool to map the composition, health, and biodiversity of Earth's terrestrial and aquatic ecosystems. These studies must first estimate surface reflectance, removing the atmospheric effects of absorption and scattering by water vapor and aerosols. Since atmospheric state varies spatiotemporally, and is insufficiently constrained by climatological models, it is important to estimate it directly from the VSWIR data. However, water vapor and aerosol estimation is a significant ongoing challenge for existing atmospheric correction models. Conventional VSWIR atmospheric correction methods evolved from multi-band approaches and do not fully utilize the rich spectroscopic data available. We use spectrally resolved (line-by-line) radiative transfer calculations, coupled with optimal estimation theory, to demonstrate improved accuracy of surface retrievals. These spectroscopic techniques are already pervasive in atmospheric remote sounding disciplines but have not yet been applied to imaging spectroscopy. Our analysis employs a variety of scenes from the recent AVIRIS-NG India campaign, which spans various climes, elevation changes, a wide range of biomes and diverse aerosol scenarios. A key aspect of our approach is joint estimation of surface and aerosol parameters, which allows assessment of aerosol distortion effects using spectral shapes across the entire measured interval from 380-2500 nm. We expect that this method would outperform band ratio approaches, and enable evaluation of subtle aerosol parameters where in situ reference data is not available, or for extreme aerosol loadings, as is observed in the India scenarios. The results are validated using existing in-situ reference spectra, reflectance measurements from assigned partners in India, and objective spectral quality metrics for scenes without any ground reference data. We also quantify the true information content of VSWIR spectroscopy for improving retrieval efficiency. We anticipate that our work will significantly improve the state of the art for VSWIR atmospheric correction, reducing regional biases in global ecosystem studies.
Einstein-Podolsky-Rosen steering: Its geometric quantification and witness
NASA Astrophysics Data System (ADS)
Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco
2018-02-01
We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames' joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
Economic optimization of natural hazard protection - conceptual study of existing approaches
NASA Astrophysics Data System (ADS)
Spackova, Olga; Straub, Daniel
2013-04-01
Risk-based planning of protection measures against natural hazards has become a common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst different countries and different hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the applications of the indicators in practice. This is likely to lead to a suboptimal selection of the protection strategies. This study provides a general formulation for optimization of the natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made on the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
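To show how the indicators discussed above behave in a screening exercise, the short Python sketch below computes the benefit-cost ratio and net present value of a few hypothetical protection measures and then funds them greedily by BCR under a fixed state budget (one common heuristic, not the hierarchical optimization proposed in the study). All costs, risk reductions, the discount rate, and the budget are invented.

```python
# Toy screening of protection options by benefit-cost ratio (BCR) and NPV.
measures = [  # (measure, cost in M EUR, annual expected damage avoided in M EUR)
    ("region A, dike +0.5 m", 10.0, 1.2),
    ("region A, dike +1.0 m", 18.0, 1.8),
    ("region B, retention basin", 8.0, 0.7),
    ("region C, early warning", 2.0, 0.3),
]
rate, horizon, budget = 0.02, 50, 25.0

def present_value(annual, r=rate, T=horizon):
    """Present value of a constant annual benefit over T years."""
    return annual * (1.0 - (1.0 + r) ** -T) / r

scored = []
for name, cost, avoided in measures:
    benefit = present_value(avoided)
    scored.append((name, cost, benefit / cost, benefit - cost))  # BCR, NPV

# Greedy selection by BCR under the budget constraint.
spent = 0.0
for name, cost, bcr, npv in sorted(scored, key=lambda s: -s[2]):
    if spent + cost <= budget and npv > 0:
        spent += cost
        print(f"fund: {name:28s} BCR={bcr:4.1f}  NPV={npv:6.1f} M EUR")
```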
Operational load estimation of a smart wind turbine rotor blade
NASA Astrophysics Data System (ADS)
White, Jonathan R.; Adams, Douglas E.; Rumsey, Mark A.
2009-03-01
Rising energy prices and carbon emission standards are driving a fundamental shift from fossil fuels to alternative sources of energy such as biofuel, solar, wind, clean coal and nuclear. In 2008, the U.S. installed 8,358 MW of new wind capacity increasing the total installed wind power by 50% to 25,170 MW. A key technology to improve the efficiency of wind turbines is smart rotor blades that can monitor the physical loads being applied by the wind and then adapt the airfoil for increased energy capture. For extreme wind and gust events, the airfoil could be changed to reduce the loads to prevent excessive fatigue or catastrophic failure. Knowledge of the actual loading to the turbine is also useful for maintenance planning and design improvements. In this work, an array of uniaxial and triaxial accelerometers was integrally manufactured into a 9m smart rotor blade. DC type accelerometers were utilized in order to estimate the loading and deflection from both quasi-steady-state and dynamic events. A method is presented that designs an estimator of the rotor blade static deflection and loading and then optimizes the placement of the sensor(s). Example results show that the method can identify the optimal location for the sensor for both simple example cases and realistic complex loading. The optimal location of a single sensor shifts towards the tip as the curvature of the blade deflection increases with increasingly complex wind loading. The framework developed is practical for the expansion of sensor optimization in more complex blade models and for higher numbers of sensors.
Optimal experimental design for parameter estimation of a cell signaling model.
Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias
2009-11-01
Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
NASA Astrophysics Data System (ADS)
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-08-01
Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored; these include battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms based on a weighted recursive least quadratic squares parameter estimator that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. The first paper investigates the general requirements on BMS for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses in terms of the identified requirements for on-line applications. Special emphasis will be placed on real-time capability and memory optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. Therefore, a battery model is presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.
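The recursive-least-squares idea at the core of such on-line estimators can be shown in a few lines. The Python sketch below identifies the ohmic resistance and open-circuit voltage of a deliberately simple battery model v = OCV − R·i from streaming current/voltage samples with an exponential forgetting factor; the simulated data, model structure, and forgetting factor are illustrative only, whereas the paper's estimator additionally tracks charge-transfer and diffusion dynamics.

```python
import numpy as np

# Exponentially weighted recursive least squares for v = OCV - R * i.
rng = np.random.default_rng(2)
R_true, ocv_true = 0.05, 3.7
lam = 0.99                                   # forgetting factor

theta = np.zeros(2)                          # parameter estimate [R, OCV]
P = np.eye(2) * 1e3                          # covariance of the estimate

for k in range(500):
    i_k = rng.uniform(-5.0, 5.0)                          # load current (A)
    v_k = ocv_true - R_true * i_k + rng.normal(0, 5e-3)   # measured voltage (V)
    phi = np.array([-i_k, 1.0])                           # regressor

    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (v_k - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam

print(f"R ~ {theta[0] * 1000:.1f} mOhm, OCV ~ {theta[1]:.3f} V")
```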
Ambient noise levels in the continental United States
McNamara, D.E.; Buland, R.P.
2004-01-01
The results of our noise analysis are useful for characterizing the performance of existing broadband stations and for detecting operational problems and should be relevant to the future siting of ANSS backbone stations. The noise maps at body-wave frequencies should be useful for estimating the magnitude threshold for the ANSS backbone and regional networks or conversely for optimizing the distribution of regional network stations.
High-Level Connectionist Models
1993-10-01
ERIC Educational Resources Information Center
Carrell, Scott E.; Sacerdote, Bruce I.; West, James E.
2011-01-01
We take cohorts of entering freshmen at the United States Air Force Academy and assign half to peer groups with the goal of maximizing the academic performance of the lowest ability students. Our assignment algorithm uses peer effects estimates from the observational data. We find a negative and significant treatment effect for the students we…
Amanda H. Lang; Shawn A. Baker; W. Dale Greene; Glen E. Murphy
2010-01-01
We compared value recovery of a modified treelength (MTL) logging system that measures product diameter and length using a Waratah 626 harvester head to that of a treelength (TL) system that estimates dimensions. A field test compared the actual value cut to the maximum potential value suggested by the log bucking optimization program Assessment of Value by Individual...
ODIN system technology module library, 1972 - 1973
NASA Technical Reports Server (NTRS)
Hague, D. S.; Watson, D. A.; Glatt, C. R.; Jones, R. T.; Galipeau, J.; Phoa, Y. T.; White, R. J.
1978-01-01
ODIN/RLV is a digital computing system for the synthesis and optimization of reusable launch vehicle preliminary designs. The system consists of a library of technology modules in the form of independent computer programs and an executive program, ODINEX, which operates on the technology modules. The technology module library contains programs for estimating all major military flight vehicle system characteristics, for example, geometry, aerodynamics, economics, propulsion, inertia and volumetric properties, trajectories and missions, steady state aeroelasticity and flutter, and stability and control. A general system optimization module, a computer graphics module, and a program precompiler are available as user aids in the ODIN/RLV program technology module library.
Image interpolation via regularized local linear regression.
Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang
2011-12-01
The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, replacing the OLS error norm with the moving least squares (MLS) error norm leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation.
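The closed-form, l2-penalized local fit at the heart of this approach is easy to illustrate in one dimension. The Python sketch below interpolates a missing sample with a ridge-regularized, locally weighted linear fit; the Gaussian weights, bandwidth, and penalty are assumptions, and the sketch omits the MLS weighting and manifold smoothness terms of the full method.

```python
import numpy as np

# Ridge-regularized local linear fit used to interpolate a missing sample.
def rllr_interpolate(x_known, y_known, x0, bandwidth=2.0, lam=0.1):
    w = np.exp(-((x_known - x0) / bandwidth) ** 2)      # local weights
    X = np.column_stack([np.ones_like(x_known), x_known - x0])
    W = np.diag(w)
    A = X.T @ W @ X + lam * np.eye(2)                   # regularized normal equations
    beta = np.linalg.solve(A, X.T @ W @ y_known)
    return beta[0]                                      # fitted value at x0

x = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 6.0])
y = np.sin(x)
print(rllr_interpolate(x, y, x0=3.0), np.sin(3.0))      # interpolate the gap at x = 3
```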
Lateral control system design for VTOL landing on a DD963 in high sea states. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bodson, M.
1982-01-01
The problem of designing lateral control systems for the safe landing of VTOL aircraft on small ships is addressed. A ship model is derived. The issues of estimation and prediction of ship motions are discussed, using optimal linear estimation techniques. The roll motion is the most important of the lateral motions, and it is found that it can be predicted for up to 10 seconds in perfect conditions. The automatic landing of the VTOL aircraft is considered, and a lateral controller, defined as a ship motion tracker, is designed using optimal control techniques. The tradeoffs between the tracking errors and the control authority are obtained. The important couplings between the lateral motions and controls are demonstrated, and it is shown that the adverse couplings between the sway and the roll motion at the landing pad are significant constraints in the tracking of the lateral ship motions. The robustness of the control system, including the optimal estimator, is studied using singular value analysis. Through a robustification procedure, a robust control system is obtained, and the usefulness of the singular values to define stability margins that take into account general types of unstructured modelling errors is demonstrated. The minimal destabilizing perturbations indicated by the singular value analysis are interpreted and related to the multivariable Nyquist diagrams.
Sulis, Andrea; Buscarinu, Paola; Soru, Oriana; Sechi, Giovanni M.
2014-01-01
The definition of a synthetic index for classifying the quality of water bodies is a key aspect in integrated planning and management of water resource systems. In previous works [1,2], a water system optimization modeling approach that requires a single quality index for stored water in reservoirs has been applied to a complex multi-reservoir system. Considering the same modeling field, this paper presents an improved quality index estimated both on the basis of the overall trophic state of the water body and on the basis of the density values of the most potentially toxic Cyanobacteria. The implementation of the index into the optimization model makes it possible to reproduce the conditions limiting water use due to excessive nutrient enrichment in the water body and to the health hazard linked to toxic blooms. The analysis of an extended limnological database (1996–2012) in four reservoirs of the Flumendosa-Campidano system (Sardinia, Italy) provides useful insights into the strengths and limitations of the proposed synthetic index. PMID:24759172
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
The Nature Index: a general framework for synthesizing knowledge on the state of biodiversity.
Certain, Grégoire; Skarpaas, Olav; Bjerke, Jarle-Werner; Framstad, Erik; Lindholm, Markus; Nilsen, Jan-Erik; Norderhaug, Ann; Oug, Eivind; Pedersen, Hans-Christian; Schartau, Ann-Kristin; van der Meeren, Gro I; Aslaksen, Iulie; Engen, Steinar; Garnåsjordet, Per-Arild; Kvaløy, Pål; Lillegård, Magnar; Yoccoz, Nigel G; Nybø, Signe
2011-04-22
The magnitude and urgency of the biodiversity crisis is widely recognized within scientific and political organizations. However, a lack of integrated measures for biodiversity has greatly constrained the national and international response to the biodiversity crisis. Thus, integrated biodiversity indexes will greatly facilitate information transfer from science toward other areas of human society. The Nature Index framework samples scientific information on biodiversity from a variety of sources, synthesizes this information, and then transmits it in a simplified form to environmental managers, policymakers, and the public. The Nature Index optimizes information use by incorporating expert judgment, monitoring-based estimates, and model-based estimates. The index relies on a network of scientific experts, each of whom is responsible for one or more biodiversity indicators. The resulting set of indicators is supposed to represent the best available knowledge on the state of biodiversity and ecosystems in any given area. The value of each indicator is scaled relative to a reference state, i.e., a predicted value assessed by each expert for a hypothetical undisturbed or sustainably managed ecosystem. Scaled indicator values can be aggregated or disaggregated over different axes representing spatiotemporal dimensions or thematic groups. A range of scaling models can be applied to allow for different ways of interpreting the reference states, e.g., optimal situations or minimum sustainable levels. Statistical testing for differences in space or time can be implemented using Monte-Carlo simulations. This study presents the Nature Index framework and details its implementation in Norway. The results suggest that the framework is a functional, efficient, and pragmatic approach for gathering and synthesizing scientific knowledge on the state of biodiversity in any marine or terrestrial ecosystem and has general applicability worldwide.
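The scaling-and-aggregation step described above can be sketched numerically: each indicator is scaled against its expert-defined reference state, truncated at 1, combined with weights, and the uncertainty is propagated by Monte-Carlo resampling. The Python example below does this for three invented indicators; the values, standard deviations, reference states, and weights are illustrative, not the Norwegian implementation.

```python
import numpy as np

# Toy Nature-Index-style aggregation with Monte-Carlo uncertainty propagation.
rng = np.random.default_rng(3)

indicators = {   # name: (observed value, sd, reference state, weight)
    "breeding birds":  (650.0, 60.0, 1000.0, 0.4),
    "old-growth area": ( 75.0, 10.0,  120.0, 0.3),
    "benthic fauna":   (  0.8,  0.1,    1.0, 0.3),
}

def nature_index(samples=10000):
    draws = []
    for _ in range(samples):
        scaled, weights = [], []
        for obs, sd, ref, w in indicators.values():
            value = rng.normal(obs, sd)
            scaled.append(min(max(value, 0.0) / ref, 1.0))   # scale to [0, 1]
            weights.append(w)
        draws.append(np.average(scaled, weights=weights))
    draws = np.array(draws)
    return draws.mean(), np.percentile(draws, [2.5, 97.5])

mean, ci = nature_index()
print(f"Nature Index = {mean:.2f}, 95% interval {ci.round(2)}")
```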
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Lemr, Karel; Černoch, Antonín; Miranowicz, Adam
2017-03-01
We propose and experimentally implement an efficient procedure based on entanglement swapping to determine the Bell nonlocality measure of Horodecki et al. [Phys. Lett. A 200, 340 (1995), 10.1016/0375-9601(95)00214-N] and the fully entangled fraction of Bennett et al. [Phys. Rev. A 54, 3824 (1996), 10.1103/PhysRevA.54.3824] of an arbitrary two-qubit polarization-encoded state. The nonlocality measure corresponds to the amount of violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality, optimized over all measurement settings. By simultaneously using two copies of a given state, we measure directly only six parameters. This is an experimental determination of these quantities without quantum state tomography or continuous monitoring of all measurement bases in the usual CHSH inequality tests. We analyze how well the measured degrees of Bell nonlocality and other entanglement witnesses (including the fully entangled fraction and a nonlinear entropic witness) of an arbitrary two-qubit state can estimate its entanglement. In particular, we measure these witnesses and estimate the negativity of various two-qubit Werner states. Our approach could be especially useful for quantum communication protocols based on entanglement swapping.
NASA Astrophysics Data System (ADS)
Miclosina, C. O.; Balint, D. I.; Campian, C. V.; Frunzaverde, D.; Ion, I.
2012-11-01
This paper deals with the optimization of axial hydraulic turbines of the Kaplan type. The optimization of the runner blade is presented systematically from two points of view: hydrodynamic and constructive. These aspects are combined in order to gain safer operation when unsteady effects occur in the runner of the turbine. The design and optimization of the runner blade is performed with QTurbo3D software developed at the Center for Research in Hydraulics, Automation and Thermal Processes (CCHAPT) from "Eftimie Murgu" University of Resita, Romania. QTurbo3D software offers the possibility to design the meridian channel of hydraulic turbines, design the blades, and optimize the runner blade. 3D modeling and motion analysis of the runner blade operating mechanism are accomplished using SolidWorks software. The purpose of the motion study is to obtain the forces, torques, or stresses in the runner blade operating mechanism, necessary to estimate its lifetime. This paper clearly states the importance of combining hydrodynamics with structural design in the optimization procedure of the runner of hydraulic turbines.
NASA Astrophysics Data System (ADS)
Hernandez, F.; Liang, X.
2017-12-01
Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it is able to outperform those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.
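A toy sketch of the analysis step described above may help fix ideas: candidate particles are scored on two objectives (misfit to the observation and departure from the background), the non-dominated set is kept, and a kernel density estimate over that Pareto front yields a non-Gaussian analysis distribution. Everything here, the two-cell state, the observation operator H, and the error magnitudes, is a placeholder rather than the OPTIMISTS configuration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical 2-D model state (e.g. soil moisture in two cells); the
# observation operator, background, and error magnitudes are stand-ins.
background = np.array([0.30, 0.55])
observation = 0.60                      # e.g. an observed streamflow proxy
H = lambda x: 0.5 * x[0] + 0.5 * x[1]   # toy observation operator

particles = background + 0.1 * rng.standard_normal((500, 2))

# Two objectives, as in the multi-objective analysis step.
obs_misfit = np.array([(H(p) - observation) ** 2 for p in particles])
bkg_departure = np.sum((particles - background) ** 2, axis=1)

def pareto_front(f1, f2):
    """Indices of non-dominated particles (both objectives minimized)."""
    keep = []
    for i in range(len(f1)):
        dominated = np.any((f1 <= f1[i]) & (f2 <= f2[i]) &
                           ((f1 < f1[i]) | (f2 < f2[i])))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(obs_misfit, bkg_departure)

# Give the front a probabilistic interpretation via kernel density estimation,
# yielding a (generally non-Gaussian) analysis distribution of the state.
kde = gaussian_kde(particles[front].T)
analysis_sample = kde.resample(1000, seed=2).T
print("Pareto-front size:", front.size)
print("analysis mean state:", analysis_sample.mean(axis=0).round(3))
```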
Lin, Huan-Ting; Okumura, Takashi; Yatsuda, Yukinori; Ito, Satoru; Nakauchi, Hiromitsu; Otsu, Makoto
2016-10-01
Stable gene transfer into target cell populations via integrating viral vectors is widely used in stem cell gene therapy (SCGT). Accurate vector copy number (VCN) estimation has become increasingly important. However, existing methods of estimation such as real-time quantitative PCR are more restricted in practicality, especially during clinical trials, given the limited availability of sample materials from patients. This study demonstrates the application of an emerging technology called droplet digital PCR (ddPCR) in estimating VCN states in the context of SCGT. Induced pluripotent stem cells (iPSCs) derived from a patient with X-linked chronic granulomatous disease were used as clonable target cells for transduction with alpharetroviral vectors harboring codon-optimized CYBB cDNA. Precise primer-probe design followed by multiplex analysis conferred assay specificity. Accurate estimation of per-cell VCN values was possible without reliance on a reference standard curve. Sensitivity was high and the dynamic range of detection was wide. Assay reliability was validated by observation of consistent, reproducible, and distinct VCN clustering patterns for clones of transduced iPSCs with varying numbers of transgene copies. Taken together, use of ddPCR appears to offer a practical and robust approach to VCN estimation with a wide range of clinical and research applications.
Lin, Huan-Ting; Okumura, Takashi; Yatsuda, Yukinori; Ito, Satoru; Nakauchi, Hiromitsu; Otsu, Makoto
2016-01-01
Stable gene transfer into target cell populations via integrating viral vectors is widely used in stem cell gene therapy (SCGT). Accurate vector copy number (VCN) estimation has become increasingly important. However, existing methods of estimation such as real-time quantitative PCR are more restricted in practicality, especially during clinical trials, given the limited availability of sample materials from patients. This study demonstrates the application of an emerging technology called droplet digital PCR (ddPCR) in estimating VCN states in the context of SCGT. Induced pluripotent stem cells (iPSCs) derived from a patient with X-linked chronic granulomatous disease were used as clonable target cells for transduction with alpharetroviral vectors harboring codon-optimized CYBB cDNA. Precise primer–probe design followed by multiplex analysis conferred assay specificity. Accurate estimation of per-cell VCN values was possible without reliance on a reference standard curve. Sensitivity was high and the dynamic range of detection was wide. Assay reliability was validated by observation of consistent, reproducible, and distinct VCN clustering patterns for clones of transduced iPSCs with varying numbers of transgene copies. Taken together, use of ddPCR appears to offer a practical and robust approach to VCN estimation with a wide range of clinical and research applications. PMID:27763786
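For readers unfamiliar with how ddPCR yields copy numbers without a standard curve, the arithmetic is a Poisson correction of the positive-droplet fraction. The sketch below uses invented droplet counts and assumes a two-copy autosomal reference locus; it illustrates the general calculation, not the specific assay described in the study.

```python
import math

def copies_per_droplet(n_positive, n_total):
    """Poisson correction for droplets containing more than one template."""
    fraction_negative = (n_total - n_positive) / n_total
    return -math.log(fraction_negative)

# Hypothetical droplet counts from one multiplexed well.
transgene_pos, reference_pos, n_droplets = 5200, 7400, 15000

lam_transgene = copies_per_droplet(transgene_pos, n_droplets)
lam_reference = copies_per_droplet(reference_pos, n_droplets)

# Assuming the reference assay targets a two-copy autosomal locus,
# the per-cell vector copy number follows from the ratio of concentrations.
vcn = 2.0 * lam_transgene / lam_reference
print(f"Estimated VCN per cell: {vcn:.2f}")
```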
A data assimilating model for estimating Southern Ocean biogeochemistry
NASA Astrophysics Data System (ADS)
Verdy, A.; Mazloff, M. R.
2017-09-01
A Biogeochemical Southern Ocean State Estimate (B-SOSE) is introduced that includes carbon and oxygen fields as well as nutrient cycles. The state estimate is constrained with observations while maintaining closed budgets and obeying dynamical and thermodynamic balances. Observations from profiling floats, shipboard data, underway measurements, and satellites are used for assimilation. The years 2008-2012 are chosen due to the relative abundance of oxygen observations from Argo floats during this time. The skill of the state estimate at fitting the data is assessed. The agreement is best for fields that are constrained with the most observations, such as surface pCO2 in Drake Passage (44% of the variance captured) and oxygen profiles (over 60% of the variance captured at 200 and 1000 m). The validity of adjoint method optimization for coupled physical-biogeochemical state estimation is demonstrated with a series of gradient check experiments. The method is shown to be mature and ready to synthesize in situ biogeochemical observations as they become more available. Documenting the B-SOSE configuration and diagnosing the strengths and weaknesses of the solution informs usage of this product as both a climate baseline and as a way to test hypotheses. Transport of Intermediate Waters across 32°S supplies significant amounts of nitrate to the Atlantic Ocean (5.57 ± 2.94 Tmol yr⁻¹) and Indian Ocean (5.09 ± 3.06 Tmol yr⁻¹), but much less nitrate reaches the Pacific Ocean (1.78 ± 1.91 Tmol yr⁻¹). Estimates of air-sea carbon dioxide fluxes south of 50°S suggest a mean uptake of 0.18 Pg C/yr for the time period analyzed.
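The "gradient check experiments" mentioned above compare the gradient delivered by the adjoint model against finite differences of the cost function. A minimal stand-alone version of such a check, using a toy least-squares cost whose analytic gradient plays the role of the adjoint gradient, might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy least-squares cost standing in for the state-estimate misfit function,
# with its analytic gradient; in B-SOSE the gradient would come from the
# adjoint model rather than a closed form.
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
cost = lambda x: 0.5 * np.sum((A @ x - y) ** 2)
adjoint_gradient = lambda x: A.T @ (A @ x - y)

def gradient_check(x, grad, eps=1e-6):
    """Relative agreement between adjoint and finite-difference gradients."""
    g_fd = np.array([
        (cost(x + eps * e) - cost(x - eps * e)) / (2 * eps)
        for e in np.eye(len(x))
    ])
    return np.linalg.norm(g_fd - grad) / np.linalg.norm(grad)

x0 = rng.standard_normal(5)
print("relative gradient error:", gradient_check(x0, adjoint_gradient(x0)))
```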
New numerical methods for open-loop and feedback solutions to dynamic optimization problems
NASA Astrophysics Data System (ADS)
Ghosh, Pradipto
The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and, unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions with very high accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development is that the resulting control law has an algebraic closed-form structure. The proposed method uses an optimal spatial statistical predictor called universal kriging to construct the surrogate model of a feedback controller, which is capable of quickly predicting an optimal control estimate based on current state (and time) information. With universal kriging, an approximation to the optimal feedback map is computed by conceptualizing a set of state-control samples from pre-computed extremals to be a particular realization of a jointly Gaussian spatial process. Feedback policies are computed for a variety of example dynamic optimization problems in order to evaluate the effectiveness of this methodology. This feedback synthesis approach is found to combine good numerical accuracy with low computational overhead, making it a suitable candidate for real-time applications. Particle swarm optimization and universal kriging are combined in a capstone example: a near-optimal, near-admissible, full-state feedback control law is computed and tested for the heat-load-limited atmospheric-turn guidance of an aeroassisted transfer vehicle. The performance of this explicit guidance scheme is found to be very promising; initial errors in atmospheric entry due to simulated thruster misfirings are accurately corrected while closely respecting the algebraic state-inequality constraint.
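The particle swarm mechanics summarized above (personal memory plus swarm memory, no initial guess, no gradients) can be illustrated in a few lines. This is a generic textbook PSO on a toy cost function, not the thesis implementation with runtime selection of the solution structure; the inertia and acceleration coefficients are conventional defaults.

```python
import numpy as np

rng = np.random.default_rng(4)

def pso(objective, bounds, n_particles=40, n_iters=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over a box with a basic particle swarm."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = lo + (hi - lo) * rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Each particle blends its own memory (pbest) with the swarm's (g).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, np.min(pbest_f)

# Toy stand-in for a trajectory-optimization cost (e.g. fuel use vs. target miss).
rosenbrock = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
best, best_f = pso(rosenbrock, ([-2, -2], [2, 2]))
print("best point:", best, "cost:", best_f)
```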
Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding
NASA Astrophysics Data System (ADS)
Susemihl, Alex; Meir, Ron; Opper, Manfred
2013-03-01
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
Noise-induced escape in an excitable system
NASA Astrophysics Data System (ADS)
Khovanov, I. A.; Polovinkin, A. V.; Luchinsky, D. G.; McClintock, P. V. E.
2013-03-01
We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We discuss, first, the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit location distribution. We compare the responses of a monostable resonator and monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.
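A direct numerical simulation of the kind used above for comparison can be sketched with an Euler-Maruyama integration of the FitzHugh-Nagumo Langevin equations, counting threshold crossings as escape events. The parameter values, noise intensity, and threshold below are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# FitzHugh-Nagumo in an excitable regime (parameter values are illustrative).
eps, a, b, I = 0.08, 0.7, 0.8, 0.0
D = 0.05              # noise intensity on the fast (voltage) variable
dt, t_max = 0.01, 500.0

def first_passage_time(v0=-1.2, w0=-0.6, threshold=0.0):
    """Euler-Maruyama integration until the voltage crosses the threshold."""
    v, w, t = v0, w0, 0.0
    noise_scale = np.sqrt(2.0 * D * dt)
    while t < t_max:
        dv = (v - v ** 3 / 3.0 - w + I) * dt + noise_scale * rng.standard_normal()
        dw = eps * (v + a - b * w) * dt
        v, w, t = v + dv, w + dw, t + dt
        if v > threshold:
            return t
    return np.nan                      # no escape observed within t_max

times = np.array([first_passage_time() for _ in range(50)])
escaped = times[~np.isnan(times)]
print(f"escape events: {escaped.size}/50")
if escaped.size:
    print(f"estimated escape rate: {1.0 / escaped.mean():.3g} per unit time")
```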
Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1987-01-01
This paper presents a differential signal model along with appropriate sampling techniques for least squares estimation of the frequency and frequency derivatives, and possibly the phase and amplitude, of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements, and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage when compared to previous algorithms wherein the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper, such a possible loss in optimality can be made small.
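The key computational point, a recursion whose per-measurement cost is fixed so that total work grows only linearly with the number of measurements, is the same one exploited by ordinary recursive least squares. The sketch below shows that generic recursion on a toy linear-in-parameters model; the paper's differential signal model and adaptive extended least squares details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

class RecursiveLeastSquares:
    """Textbook RLS: the cost per update is fixed, so total work grows
    linearly with the number of measurements."""
    def __init__(self, dim, p0=1e3):
        self.theta = np.zeros(dim)        # parameter estimate
        self.P = p0 * np.eye(dim)         # inverse-information matrix
    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)   # gain
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = self.P - np.outer(k, phi @ self.P)
        return self.theta

# Toy linear-in-parameters signal model (state dimension four, as in the
# abstract); the regressors and parameters are invented for illustration.
true_theta = np.array([2.0, -0.5, 0.1, 0.03])
rls = RecursiveLeastSquares(dim=4)
for _ in range(500):
    phi = rng.standard_normal(4)
    y = phi @ true_theta + 0.05 * rng.standard_normal()
    est = rls.update(phi, y)
print("estimate:", np.round(est, 3))
```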
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total cost of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
Superfast maximum-likelihood reconstruction for quantum tomography
NASA Astrophysics Data System (ADS)
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
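The structure of a projected-gradient reconstruction is easy to convey even without the acceleration that makes the cited algorithm fast: ascend the log-likelihood, then project back onto the set of density matrices by projecting the eigenvalues onto the probability simplex. The single-qubit example below, with a six-outcome Pauli POVM and simulated counts, is only an unaccelerated illustration of that skeleton, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def project_to_density_matrix(H):
    """Project a Hermitian matrix onto the set of density matrices by
    projecting its eigenvalues onto the probability simplex."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]                       # eigenvalues, descending
    css = np.cumsum(u)
    j = np.nonzero(u + (1.0 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[j] - 1.0) / (j + 1)
    p = np.maximum(w - tau, 0.0)
    return (V * p) @ V.conj().T

# Single-qubit POVM built from the six Pauli eigenstates (weight 1/3 each).
kets = [np.array(k, complex) for k in
        ([1, 0], [0, 1],
         [1 / np.sqrt(2), 1 / np.sqrt(2)], [1 / np.sqrt(2), -1 / np.sqrt(2)],
         [1 / np.sqrt(2), 1j / np.sqrt(2)], [1 / np.sqrt(2), -1j / np.sqrt(2)])]
povm = [np.outer(k, k.conj()) / 3.0 for k in kets]

true_rho = np.array([[0.8, 0.3 + 0.1j], [0.3 - 0.1j, 0.2]])
probs = np.real([np.trace(true_rho @ E) for E in povm])
counts = rng.multinomial(10_000, probs)
freqs = counts / counts.sum()

# Plain projected-gradient ascent on the log-likelihood (the cited work uses
# an accelerated variant; this is only the unaccelerated skeleton).
rho = np.eye(2, dtype=complex) / 2.0
step = 0.2
for _ in range(400):
    grad = sum(f * E / max(np.real(np.trace(rho @ E)), 1e-12)
               for f, E in zip(freqs, povm))
    rho = project_to_density_matrix(rho + step * grad)

print("overlap tr(rho_true rho_hat):",
      np.real(np.trace(true_rho @ rho)).round(4))
```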
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
Entangling measurements for multiparameter estimation with two qubits
NASA Astrophysics Data System (ADS)
Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco
2018-01-01
Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precision. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe needs to be partitioned over multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes (qubits) by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such a collective measurement, while for multiple phases no enhancement can be observed. We demonstrate this in a proof-of-principle photonics setup.
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is obtaining an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection-related seismicity.
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frames have much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
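The "dynamic fusion of the prior estimate and the maximum likelihood estimate" is, at its core, a precision-weighted update of the kind used in scalar Kalman filtering. The toy per-pixel velocity track below illustrates that update; the variances and process-noise term are invented and no actual video data are involved.

```python
import numpy as np

def fuse(prior, prior_var, ml_estimate, ml_var):
    """Precision-weighted fusion of a temporal prior with the current
    maximum-likelihood estimate (a scalar Kalman-style update)."""
    gain = prior_var / (prior_var + ml_var)
    post = prior + gain * (ml_estimate - prior)
    post_var = (1.0 - gain) * prior_var
    return post, post_var

# Toy per-pixel velocity track: the prior comes from the temporal predecessor,
# the ML estimate from matching against the new frame (values are made up).
rng = np.random.default_rng(8)
true_velocity, prior, prior_var = 2.0, 0.0, 4.0
for frame in range(10):
    ml = true_velocity + 0.5 * rng.standard_normal()   # noisy observation
    prior, prior_var = fuse(prior, prior_var + 0.05, ml, 0.25)
print(f"fused velocity estimate: {prior:.2f} (variance {prior_var:.3f})")
```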
Computational design of a Diels-Alderase from a thermophilic esterase: the importance of dynamics
NASA Astrophysics Data System (ADS)
Linder, Mats; Johansson, Adam Johannes; Olsson, Tjelvar S. G.; Liebeschuetz, John; Brinck, Tore
2012-09-01
A novel computational Diels-Alderase design, based on a relatively rare form of carboxylesterase from Geobacillus stearothermophilus, is presented and theoretically evaluated. The structure was found by mining the PDB for a suitable oxyanion hole-containing structure, followed by a combinatorial approach to find suitable substrates and rational mutations. Four lead designs were selected and thoroughly modeled to obtain realistic estimates of substrate binding and prearrangement. Molecular dynamics simulations and DFT calculations were used to optimize and estimate binding affinity and activation energies. A large quantum chemical model was used to capture the salient interactions in the crucial transition state (TS). Our quantitative estimation of kinetic parameters was validated against four experimentally characterized Diels-Alderases with good results. The final designs in this work are predicted to have rate enhancements of ≈10³-10⁶ and high predicted proficiencies. This work emphasizes the importance of considering protein dynamics in the design approach, and provides a quantitative estimate of how the TS stabilization observed in most de novo and redesigned enzymes is decreased compared to a minimal, `ideal' model. The presented design is highly interesting for further optimization and applications since it is based on a thermophilic enzyme (Topt = 70 °C).
Variational Lagrangian data assimilation in open channel networks
NASA Astrophysics Data System (ADS)
Wu, Qingfang; Tinka, Andrew; Weekly, Kevin; Beard, Jonathan; Bayen, Alexandre M.
2015-04-01
This article presents a data assimilation method in a tidal system, where data from both Lagrangian drifters and Eulerian flow sensors were fused to estimate water velocity. The system is modeled by first-order, hyperbolic partial differential equations subject to periodic forcing. The estimation problem can then be formulated as the minimization of the difference between the observed variables and model outputs, eventually providing the velocity and water stage of the hydrodynamic system. The governing equations are linearized and discretized using an implicit discretization scheme, resulting in linear equality constraints in the optimization program. The flow estimation can thus be formulated as an optimization problem and solved efficiently. The effectiveness of the proposed method was substantiated by a large-scale field experiment in the Sacramento-San Joaquin River Delta in California. A fleet of 100 sensors developed at the University of California, Berkeley, was deployed in Walnut Grove, CA, to collect a set of Lagrangian data, a time series of positions as the sensors moved through the water. Measurements were also taken from Eulerian sensors in the region, provided by the United States Geological Survey. It is shown that the proposed method can effectively integrate Lagrangian and Eulerian measurement data, resulting in a suitable estimate of the flow variables within the hydraulic system.
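Because the discretized dynamics enter as linear equality constraints, the assimilation reduces to an equality-constrained least-squares problem, which can be solved directly from its KKT system. The sketch below does this for a small random stand-in problem; the matrices have no hydrodynamic meaning and simply play the roles of the observation operator and the discretized constraints.

```python
import numpy as np

rng = np.random.default_rng(9)

def constrained_least_squares(C, d, A, b):
    """Minimize ||C x - d||^2 subject to A x = b via the KKT system."""
    n, m = C.shape[1], A.shape[0]
    kkt = np.block([[2.0 * C.T @ C, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * C.T @ d, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]          # state estimate; sol[n:] are the multipliers

# Toy stand-in: 8 unknown flow states, 5 noisy "drifter/flow-sensor" readings,
# 3 linear constraints playing the role of the discretized dynamics.
n = 8
x_true = rng.standard_normal(n)
C = rng.standard_normal((5, n))
d = C @ x_true + 0.01 * rng.standard_normal(5)
A = rng.standard_normal((3, n))
b = A @ x_true

x_hat = constrained_least_squares(C, d, A, b)
print("constraint residual:", np.linalg.norm(A @ x_hat - b).round(6))
print("estimation error:", np.linalg.norm(x_hat - x_true).round(3))
```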
Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.
Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either through conducting experiments or based on the information available in the literature. However, it is not easy to obtain all the required parameters for the models in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate parameters of physics-based redox battery models is proposed. Furthermore, this paper also demonstrates that the proposed approach can be used to study and analyze aspects of capacity loss/fade, kinetics, and transport phenomena of the RFB system.
Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells
Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.; ...
2018-03-27
Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either through conducting experiments or based on the information available in the literature. However, it is not easy to obtain all the required parameters for the models in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate parameters of physics-based redox battery models is proposed. Furthermore, this paper also demonstrates that the proposed approach can be used to study and analyze aspects of capacity loss/fade, kinetics, and transport phenomena of the RFB system.
Building versatile bipartite probes for quantum metrology
NASA Astrophysics Data System (ADS)
Farace, Alessandro; De Pasquale, Antonella; Adesso, Gerardo; Giovannetti, Vittorio
2016-01-01
We consider bipartite systems as versatile probes for the estimation of transformations acting locally on one of the subsystems. We investigate what resources are required for the probes to offer a guaranteed level of metrological performance, when the latter is averaged over specific sets of local transformations. We quantify such a performance via the average skew information (AvSk), a convex quantity which we compute in closed form for bipartite states of arbitrary dimensions, and which is shown to be strongly dependent on the degree of local purity of the probes. Our analysis contrasts and complements the recent series of studies focused on the minimum, rather than the average, performance of bipartite probes in local estimation tasks, which was instead determined by quantum correlations other than entanglement. We provide explicit prescriptions to characterize the most reliable states maximizing the AvSk, and elucidate the role of state purity, separability and correlations in the classification of optimal probes. Our results can help in the identification of useful resources for sensing, estimation and discrimination applications when complete knowledge of the interaction mechanism realizing the local transformation is unavailable, and access to pure entangled probes is technologically limited.
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution.
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-10-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary.
Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes.
Whiteley, Louise; Sahani, Maneesh
2008-03-06
Perception is an "inverse problem," in which the state of the world must be inferred from the sensory neural activity that results. However, this inference is both ill-posed (Helmholtz, 1856; Marr, 1982) and corrupted by noise (Green & Swets, 1989), requiring the brain to compute perceptual beliefs under conditions of uncertainty. Here we show that human observers performing a simple visual choice task under an externally imposed loss function approach the optimal strategy, as defined by Bayesian probability and decision theory (Berger, 1985; Cox, 1961). In concert with earlier work, this suggests that observers possess a model of their internal uncertainty and can utilize this model in the neural computations that underlie their behavior (Knill & Pouget, 2004). In our experiment, optimal behavior requires that observers integrate the loss function with an estimate of their internal uncertainty rather than simply requiring that they use a modal estimate of the uncertain stimulus. Crucially, they approach optimal behavior even when denied the opportunity to learn adaptive decision strategies based on immediate feedback. Our data thus support the idea that flexible representations of uncertainty are pre-existing, widespread, and can be propagated to decision-making areas of the brain.
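The distinction the authors draw, deciding from the full posterior combined with the loss function rather than from a modal estimate alone, can be made concrete with a toy detection version of the task. The loss values, the Gaussian internal-noise model, and the two stimulus levels below are illustrative assumptions, not the experimental design.

```python
from scipy.stats import norm

# Asymmetric loss: a false alarm costs more than a miss (illustrative values).
LOSS_FALSE_ALARM, LOSS_MISS = 4.0, 1.0

def optimal_decision(observation, sigma, prior_present=0.5):
    """Choose the action minimizing posterior expected loss, using the full
    posterior over the stimulus rather than only its most likely value."""
    like_present = norm.pdf(observation, loc=1.0, scale=sigma)
    like_absent = norm.pdf(observation, loc=0.0, scale=sigma)
    p_present = (prior_present * like_present) / (
        prior_present * like_present + (1 - prior_present) * like_absent)
    expected = {"present": LOSS_FALSE_ALARM * (1 - p_present),
                "absent": LOSS_MISS * p_present}
    return min(expected, key=expected.get), p_present

decision, belief = optimal_decision(observation=0.6, sigma=0.5)
print(f"decision: {decision}, P(present | observation) = {belief:.2f}")
```

With these numbers the posterior slightly favors "present", yet the loss-weighted choice is "absent", which is exactly the behavior a modal-estimate strategy would miss.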
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed through discriminating each boundary via Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through weight-adjustable optimization framework. Then to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues-uniqueness, distribution, and coherence from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most of the methods based on boundary, and the integrated result outperforms all state-of-the-art methods.
Robust Rate Maximization for Heterogeneous Wireless Networks under Channel Uncertainties
Xu, Yongjun; Hu, Yuan; Li, Guoquan
2018-01-01
Heterogeneous wireless networks are a promising technology for next generation wireless communication networks, which has been shown to efficiently reduce the blind areas of mobile communication and improve network coverage compared with traditional wireless communication networks. In this paper, a robust power allocation problem for a two-tier heterogeneous wireless network is formulated based on orthogonal frequency-division multiplexing technology. Under the consideration of imperfect channel state information (CSI), the robust sum-rate maximization problem is built while avoiding severe cross-tier interference to the macrocell user and maintaining the minimum rate requirement of each femtocell user. To be practical, both the channel estimation errors from the femtocells to the macrocell and the link uncertainties of each femtocell user are simultaneously considered in terms of user outage probabilities. The optimization problem is analyzed both with no CSI feedback, using a cumulative distribution function, and with partial CSI, assuming a Gaussian distribution of the channel estimation error. The robust optimization problem is converted into a convex optimization problem, which is solved using Lagrange dual theory and a subgradient algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm and the impact of channel uncertainties on the system performance. PMID:29466315
Oyama, Katsunori; Sakatani, Kaoru
2016-01-01
Simultaneous monitoring of brain activity with near-infrared spectroscopy (NIRS) and electroencephalography (EEG) allows spatiotemporal reconstruction of the hemodynamic response in terms of the concentration changes in oxyhemoglobin and deoxyhemoglobin that are associated with recorded brain activity such as cognitive functions. However, the accuracy of state estimation during mental arithmetic tasks often differs depending on the segment length used for sampling the NIRS and EEG signals. This study compared the results of a self-organizing map and ANOVA, which were both used to assess the accuracy of state estimation. We conducted an experiment with a mental arithmetic task performed by 10 participants. Segment lengths of 30 s, 1 min, and 2 min were compared for observation of the NIRS and EEG signals in each time frame. The optimal segment lengths differed for the NIRS and EEG signals when classifying feature vectors into the states of performing a mental arithmetic task and being at rest.
Multiensemble Markov models of molecular thermodynamics and kinetics.
Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank
2016-06-07
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.
Multiensemble Markov models of molecular thermodynamics and kinetics
Wu, Hao; Paul, Fabian; Noé, Frank
2016-01-01
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302
Yang, Yongliang; Modares, Hamidreza; Wunsch, Donald C; Yin, Yixin
2018-06-01
This paper develops optimal control protocols for the distributed output synchronization problem of leader-follower multiagent systems with an active leader. Agents are assumed to be heterogeneous with different dynamics and dimensions. The desired trajectory is assumed to be preplanned and is generated by the leader. Other follower agents autonomously synchronize to the leader by interacting with each other using a communication network. The leader is assumed to be active in the sense that it has a nonzero control input so that it can act independently and update its control to keep the followers away from possible danger. A distributed observer is first designed to estimate the leader's state and generate the reference signal for each follower. Then, the output synchronization of leader-follower systems with an active leader is formulated as a distributed optimal tracking problem, and inhomogeneous algebraic Riccati equations (AREs) are derived to solve it. The resulting distributed optimal control protocols not only minimize the steady-state error but also optimize the transient response of the agents. An off-policy reinforcement learning algorithm is developed to solve the inhomogeneous AREs online in real time and without requiring any knowledge of the agents' dynamics. Finally, two simulation examples are conducted to illustrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian
2010-12-01
In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint by taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by a standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.
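The LP formulation mentioned above works on occupancy measures: one nonnegative variable per state-action pair, stationarity (flow-balance) equalities, a normalization, and the average-cost constraint as an inequality. The two-state, two-action toy below shows the construction with scipy's linprog; the transition probabilities, rewards, power costs, and budget are made-up numbers, and the action-set reduction and state aggregation of the paper are not included.

```python
import numpy as np
from scipy.optimize import linprog

# Toy CMDP (average-reward formulation): 2 channel states x 2 actions.
# P[s, a, s'] transition probabilities, r[s, a] throughput, c[s, a] power cost.
P = np.array([[[0.9, 0.1], [0.4, 0.6]],
              [[0.2, 0.8], [0.7, 0.3]]])
r = np.array([[1.0, 2.0],
              [0.2, 1.5]])
c = np.array([[0.5, 2.0],
              [0.1, 1.8]])
power_budget = 1.0

nS, nA = r.shape
n_var = nS * nA                       # occupancy measures x(s, a)
idx = lambda s, a: s * nA + a

obj = -r.reshape(-1)                  # maximize reward -> minimize negative

# Flow balance: sum_a x(s', a) = sum_{s, a} P[s, a, s'] x(s, a) for each s'.
A_eq = np.zeros((nS + 1, n_var))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, idx(s, a)] += (1.0 if s == sp else 0.0) - P[s, a, sp]
A_eq[nS, :] = 1.0                     # occupancy measures sum to one
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

# Average power constraint: sum_{s,a} c(s,a) x(s,a) <= budget.
A_ub = c.reshape(1, -1)
b_ub = [power_budget]

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n_var, method="highs")
x = res.x.reshape(nS, nA)
policy = x / x.sum(axis=1, keepdims=True)   # randomized policy per state
print("occupancy measures:\n", x.round(3))
print("policy P[a|s]:\n", policy.round(3))
```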
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2015-06-01
Modern health management approaches for gas turbine engines (GTEs) aim to precisely estimate the health state of the GTE components to optimize maintenance decisions with respect to both economy and safety. In this research, we propose an advanced framework to identify the most likely degradation state of the turbine section in a GTE for prognostics and health management (PHM) applications. A novel nonlinear thermodynamic model is used to predict the performance parameters of the GTE given the measurements. The ratio between real efficiency of the GTE and simulated efficiency in the newly installed condition is defined as the health indicator and provided at each sequence. The symptom of nonrecoverable degradations in the turbine section, i.e. loss of turbine efficiency, is assumed to be the internal degradation state. A regularized auxiliary particle filter (RAPF) is developed to sequentially estimate the internal degradation state in nonuniform time sequences upon receiving sets of new measurements. The effectiveness of the technique is examined using the operating data over an entire time-between-overhaul cycle of a simple-cycle industrial GTE. The results clearly show the trend of degradation in the turbine section and the occasional fluctuations, which are well supported by the service history of the GTE. The research also suggests the efficacy of the proposed technique to monitor the health state of the turbine section of a GTE by implementing model-based PHM without the need for additional instrumentation.
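As a rough illustration of sequential state tracking with a regularization step, the sketch below runs a basic bootstrap-style particle filter with post-resampling jitter on a toy degradation model observed at nonuniform intervals. It is not the regularized auxiliary particle filter of the paper, and the drift, noise, and bandwidth values are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy degradation model: the health indicator (efficiency ratio) drifts down
# slowly; measurements are noisy. All values are illustrative only.
def propagate(particles, dt):
    drift = -1e-4 * dt
    return particles + drift + 5e-4 * np.sqrt(dt) * rng.standard_normal(particles.shape)

def likelihood(particles, measurement, sigma=0.01):
    return np.exp(-0.5 * ((measurement - particles) / sigma) ** 2)

def particle_filter_step(particles, measurement, dt, bandwidth=2e-3):
    particles = propagate(particles, dt)
    w = likelihood(particles, measurement)
    w /= w.sum()
    # Resample, then add a small regularizing kernel jitter so that particles
    # do not collapse onto a few points (the "regularized" ingredient).
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + bandwidth * rng.standard_normal(len(particles))

particles = 1.0 + 0.01 * rng.standard_normal(2000)     # initial health ~ 1.0
true_health, t = 1.0, 0.0
for _ in range(50):
    dt = rng.uniform(5.0, 20.0)                        # nonuniform sequences
    t += dt
    true_health += -1e-4 * dt
    z = true_health + 0.01 * rng.standard_normal()     # noisy health indicator
    particles = particle_filter_step(particles, z, dt)
print(f"t = {t:.0f}: estimated health {particles.mean():.4f} "
      f"(true {true_health:.4f})")
```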
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
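The decomposition of the BMA variance into within- and between-parameterization parts is simple to write down. The numbers below are invented, and the misfit-based weighting is just one plausible stand-in for the NLSE posterior probabilities used in the study.

```python
import numpy as np

# Hypothetical results from three parameterization schemes (GP variants):
# each gives an estimate of log-K at one location, its estimation variance,
# and a least-squares head misfit used to weight the scheme.
estimates = np.array([-4.1, -3.8, -4.4])   # conditional means, log10 K
variances = np.array([0.20, 0.35, 0.15])   # within-parameterization variance
misfits   = np.array([12.3, 14.8, 11.7])   # sum of squared head residuals

# Posterior model probabilities from the normalized misfits (one simple
# choice; the cited study defines its own NLSE-based weighting).
weights = np.exp(-0.5 * (misfits - misfits.min()))
weights /= weights.sum()

bma_mean = np.sum(weights * estimates)
within   = np.sum(weights * variances)
between  = np.sum(weights * (estimates - bma_mean) ** 2)

print(f"BMA estimate: {bma_mean:.2f}")
print(f"total variance: {within + between:.3f} "
      f"(within {within:.3f}, between {between:.3f})")
```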
Practical decoy state for quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Xiongfeng; Qi Bing; Zhao Yi
2005-07-15
Decoy states have recently been proposed as a useful method for substantially improving the performance of quantum key distribution (QKD). Here, we present a general theory of the decoy state protocol based on only two decoy states and one signal state. We perform optimization on the choice of intensities of the two decoy states and the signal state. Our result shows that a decoy state protocol with only two types of decoy states - the vacuum and a weak decoy state - asymptotically approaches the theoretical limit of the most general type of decoy state protocol (with an infinite number of decoy states). We also present a one-decoy-state protocol. Moreover, we provide estimations on the effects of statistical fluctuations and suggest that, even for long-distance (larger than 100 km) QKD, our two-decoy-state protocol can be implemented with only a few hours of experimental data. In conclusion, decoy state quantum key distribution is highly practical.
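For orientation, one commonly quoted form of the vacuum-plus-weak-decoy lower bound on the single-photon yield can be evaluated in a few lines. Treat this as a sketch: the formula is the standard two-decoy bound as usually stated in the literature, and the intensities, transmittance, and dark-count yield are illustrative numbers rather than values from the paper.

```python
import numpy as np

def y1_lower_bound(q_mu, q_nu, y0, mu, nu):
    """Lower bound on the single-photon yield for a vacuum + weak decoy
    protocol (one standard form of the bound): mu is the signal intensity,
    nu the weak-decoy intensity, q_mu and q_nu the observed gains, and y0
    the vacuum (dark-count) yield."""
    assert 0 < nu < mu
    return (mu / (mu * nu - nu ** 2)) * (
        q_nu * np.exp(nu)
        - q_mu * np.exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * y0
    )

# Illustrative numbers loosely in the range of a fiber QKD experiment.
mu, nu = 0.5, 0.1            # signal and weak-decoy mean photon numbers
eta, y0 = 0.01, 1e-5         # overall transmittance and dark-count yield
q_mu = 1 - (1 - y0) * np.exp(-eta * mu)   # simulated gains of the two states
q_nu = 1 - (1 - y0) * np.exp(-eta * nu)

y1_l = y1_lower_bound(q_mu, q_nu, y0, mu, nu)
q1_l = y1_l * mu * np.exp(-mu)            # implied bound on single-photon gain
print(f"Y1 lower bound: {y1_l:.3e},  Q1 lower bound: {q1_l:.3e}")
```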
Optimal designs based on the maximum quasi-likelihood estimator
Shen, Gang; Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359
OTTER: An Optimized Transit Tool And Easy Reference
2016-03-01
as stated in the CNO's Position Report: 2014. While a number of fuel-saving measures have been implemented in recent years, the effects of operational transit speed on fuel consumption have not been adequately... their estimated savings. As clearly seen, operating configuration has the most effect by far on fuel savings. Source: Fonte S (2009). In 2015
Optimal Sensor Scheduling for Multiple Hypothesis Testing
1981-09-01
Naval Research, under contract N00014-77-0532, is gratefully acknowledged. 2 Laboratory for Information and Decision Systems, MIT, Room 35-213, Cambridge... treat the more general problem [9,10]. However, two common threads connect these approaches: they obtain feedback laws mapping posterior distributions... The objective of a detection or identification algorithm is to produce correct estimates of the true state of a system. It is also beneficial if these
Generalized Distributed Consensus-based Algorithms for Uncertain Systems and Networks
2010-01-01
time linear systems with Markovian jumping parameters and additive disturbances. SIAM Journal on Control and Optimization, 44(4):1165–1191, 2005...time Markovian jump linear systems, in the presence of delayed mode observations. Proceedings of the 2008 IEEE American Control Conference, pages...Markovian Jump Linear System state estimation, 147; 6 Conclusions, 152; A Discrete-Time Coupled Matrix Equations, 156; A.1 Properties of a special
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, Emerson
2018-01-01
One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect on trajectory correction maneuver (TCM) design needed to correct for navigation errors. This approach differs from traditional covariance analysis based methods and makes no assumptions with regard to the propagation of the state dynamics. This allows for consideration of non-linearity in the propagation of state uncertainties. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine necessary TCM design. To implement this analysis approach, a tool set was developed which combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must dispose into heliocentric space by means of a lunar fly-by maneuver. As with payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit post lunar fly-by considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance prior to the maneuver to develop and re-optimize a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal maneuver state, vehicle constraints, and an a priori estimate of the state error covariance. In the analysis, the optimal nominal disposal target is determined. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions resulting in the maximum dispersion feasible for successful disposal via lunar fly-by. Success is defined based on the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal maneuver states to determine the success of disposal for provided trajectory achieved states. This is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper will discuss the application of the Delta-Delta-V analysis approach for performance evaluation as well as trajectory re-optimization so as to demonstrate the system's capability in meeting performance constraints. Additionally, further discussion of the implementation of the disposal analysis will be provided.
Pareto-optimal estimates that constrain mean California precipitation change
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
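As a concrete illustration of the Pareto-optimality idea used here, the sketch below identifies the non-dominated subensembles among a set of candidates scored on three skill measures (lower is better). The scores are random placeholders rather than CMIP5 results, and a real application would generate candidates with an evolutionary algorithm rather than scoring a fixed random set.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder: error of 50 candidate subensembles on three historical metrics
# (e.g., SST, zonal-wind, and precipitation climatology errors); lower is better.
scores = rng.random((50, 3))

def pareto_front(costs):
    """Return indices of non-dominated rows (minimization in every column)."""
    keep = []
    for i in range(costs.shape[0]):
        dominated = np.any(
            np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(scores)
print(f"{len(front)} Pareto-optimal subensembles out of {len(scores)}")
```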
Physiome-model-based state-space framework for cardiac deformation recovery.
Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng
2007-11-01
To more reliably recover cardiac information from noise-corrupted, patient-specific measurements, it is essential to employ meaningful constraining models and adopt appropriate optimization criteria to couple the models with the measurements. Although biomechanical models have been extensively used for myocardial motion recovery with encouraging results, the passive nature of such constraints limits their ability to fully account for the deformation caused by active forces of the myocytes. To overcome such limitations, we propose to adopt a cardiac physiome model as the prior constraint for cardiac motion analysis. The cardiac physiome model comprises an electric wave propagation model, an electromechanical coupling model, and a biomechanical model, which are connected through a cardiac system dynamics for a more complete description of the macroscopic cardiac physiology. Embedded within a multiframe state-space framework, the uncertainties of the model and the patient's measurements are systematically dealt with to arrive at optimal cardiac kinematic estimates and possibly beyond. Experiments have been conducted to compare our proposed cardiac-physiome-model-based framework with the solely biomechanical model-based framework. The results show that our proposed framework recovers more accurate cardiac deformation from synthetic data and obtains more sensible estimates from real magnetic resonance image sequences. With the active components introduced by the cardiac physiome model, cardiac deformations recovered from patient's medical images are more physiologically plausible.
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011 which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or minimizing the overall average response time to all persons, both in green field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model where average response time was optimised based on minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven base hybrid model, which was able to cover 97.75% of the population within 45 min and all of the population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either percentage of the population covered, or average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and minimum defined threshold of population coverage. Addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
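To make the base-location optimization concrete, here is a minimal greedy sketch of the maximal covering location problem on synthetic data: population points, candidate base sites, and a coverage radius standing in for the 45 min response threshold. The study above solves exact facility-location formulations on census data; the greedy heuristic below only illustrates the structure of the problem, and all inputs are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_xy = rng.uniform(0, 100, size=(2000, 2))      # population points (synthetic)
pop_w  = rng.integers(200, 800, size=2000)        # people per point
cand_xy = rng.uniform(0, 100, size=(60, 2))       # candidate base sites
radius = 15.0                                     # "reachable within 45 min" proxy

# coverage[i, j] = True if candidate j covers population point i
dist = np.linalg.norm(pop_xy[:, None, :] - cand_xy[None, :, :], axis=2)
coverage = dist <= radius

def greedy_mclp(coverage, weights, n_bases):
    """Greedy heuristic for the maximal covering location problem."""
    covered = np.zeros(coverage.shape[0], dtype=bool)
    chosen = []
    for _ in range(n_bases):
        # additional population covered by each candidate, given current bases
        gains = coverage[~covered].astype(float).T @ weights[~covered]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= coverage[:, best]
    return chosen, weights[covered].sum() / weights.sum()

bases, frac = greedy_mclp(coverage, pop_w, n_bases=7)
print(f"7 greedy bases cover {frac:.1%} of the synthetic population")
```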
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-consuming that the G-optimal design cannot in practice be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds with designs that perform well under both criteria. Under certain conditions searching the G-optimal design on the above Pareto frontier yields almost as good results as searching the G-optimal design in the whole design region. In doing so, the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
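The two criteria can be written down in a few lines. The sketch below evaluates a D-criterion (log-determinant of the information matrix) and a G-criterion (maximum prediction variance over a candidate grid) for a simple quadratic regression design; this linear-model setting is only a stand-in for the kriging situation discussed above, where the (empirical) kriging variance plays the role of the prediction variance, and the two designs compared are arbitrary.

```python
import numpy as np

def model_matrix(x):
    """Quadratic trend on [0, 1]: f(x) = (1, x, x^2)."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2])

def d_criterion(design):
    F = model_matrix(design)
    return np.linalg.slogdet(F.T @ F)[1]          # larger is better (D)

def g_criterion(design, grid):
    F = model_matrix(design)
    M_inv = np.linalg.inv(F.T @ F)
    F0 = model_matrix(grid)
    # prediction variance (up to sigma^2) at every grid point
    var = np.einsum("ij,jk,ik->i", F0, M_inv, F0)
    return var.max()                               # smaller is better (G)

grid = np.linspace(0, 1, 201)
design_a = np.array([0.0, 0.5, 1.0] * 2)           # replicated 3-point design
design_b = np.linspace(0, 1, 6)                    # equispaced design
for name, d in [("3-point", design_a), ("equispaced", design_b)]:
    print(name, "D:", round(d_criterion(d), 3), "G:", round(g_criterion(d, grid), 3))
```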
NASA Astrophysics Data System (ADS)
Cope, Robert Frank, III
1998-12-01
The electric utility industry in the United States is currently experiencing a new and different type of growing pain. It is the pain of having to restructure itself into a competitive business. Many industry experts are trying to explain how the nation as a whole, as well as individual states, will implement restructuring and handle its numerous "transition problems." One significant transition problem for federal and state regulators rests with determining a utility's stranded costs. Stranded generation facilities are assets which would be uneconomic in a competitive environment or costs for assets whose regulated book value is greater than market value. At issue is the methodology which will be used to estimate stranded costs. The two primary methods are known as "Top-Down" and "Bottom-Up." The "Top-Down" approach simply determines the present value of the losses in revenue as the market price for electricity changes over a period of time into the future. The problem with this approach is that it does not take into account technical issues associated with the generation and wheeling of electricity. The "Bottom-Up" approach computes the present value of specific strandable generation facilities and compares the resulting valuations with their historical costs. It is regarded as a detailed and difficult, but more precise, approach to identifying stranded assets and their associated costs. This dissertation develops a "Bottom-Up" quantitative, optimization-based approach to electric power wheeling within the state of Louisiana. It optimally evaluates all production capabilities and coordinates the movement of bulk power through transmission interconnections of competing companies in and around the state. Sensitivity analysis to this approach is performed by varying seasonal consumer demand, electric power imports, and transmission inter-connection cost parameters. Generation facility economic dispatch and transmission interconnection bulk power transfers, specific to each set of parameters, lead to the identification of stranded generation facilities. Stranded costs of non-dispatched and uneconomically dispatched generation facilities can then be estimated to indicate, arguably, the largest portion of restructuring transition costs as the industry is transformed from its present monopolistic structure to a competitive one.
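A worked numerical illustration of the "Top-Down" idea described above: discount the projected revenue shortfall (regulated revenue minus expected market revenue) year by year. All figures are invented for the example and have no connection to the Louisiana case study.

```python
# "Top-Down" stranded-cost estimate: present value of projected revenue losses.
# All numbers are illustrative placeholders.
years = range(1, 11)                         # 10-year transition window
regulated_revenue = 120e6                    # $/yr under the current regime
market_price = 0.035                         # $/kWh expected competitive price
energy_sold = 3.0e9                          # kWh/yr
discount_rate = 0.08

stranded_pv = sum(
    (regulated_revenue - market_price * energy_sold) / (1 + discount_rate) ** t
    for t in years
)
print(f"Top-down stranded cost estimate: ${stranded_pv / 1e6:.1f} million")
```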
Investing in breastfeeding - the world breastfeeding costing initiative.
Holla-Bhar, Radha; Iellamo, Alessandro; Gupta, Arun; Smith, Julie P; Dadhich, Jai Prakash
2015-01-01
Despite scientific evidence substantiating the importance of breastfeeding in child survival and development and its economic benefits, assessments show gaps in many countries' implementation of the 2003 WHO and UNICEF Global Strategy for Infant and Young Child Feeding (Global Strategy). Optimal breastfeeding is a particular example: initiation of breastfeeding within the first hour of birth, exclusive breastfeeding for the first six months; and continued breastfeeding for two years or more, together with safe, adequate, appropriate, responsive complementary feeding starting in the sixth month. While the understanding of "optimal" may vary among countries, there is a need for governments to facilitate an enabling environment for women to achieve optimal breastfeeding. Lack of financial resources for key programs is a major impediment, making economic perspectives important for implementation. Globally, while achieving optimal breastfeeding could prevent more than 800,000 under five deaths annually, in 2013, US$58 billion was spent on commercial baby food including milk formula. Support for improved breastfeeding is inadequately prioritized by policy and practice internationally. The World Breastfeeding Costing Initiative (WBCi) launched in 2013, attempts to determine the financial investment that is necessary to implement the Global Strategy, and to introduce a tool to estimate the costs for individual countries. The article presents detailed cost estimates for implementing the Global Strategy, and outlines the WBCi Financial Planning Tool. Estimates use demographic data from UNICEF's State of the World's Children 2013. The WBCi takes a programmatic approach to scaling up interventions, including policy and planning, health and nutrition care systems, community services and mother support, media promotion, maternity protection, WHO International Code of Marketing of Breastmilk Substitutes implementation, monitoring and research, for optimal breastfeeding practices. The financial cost of a program to implement the Global Strategy in 214 countries is estimated at US $17.5 billion ($130 per live birth). The major recurring cost is maternity entitlements. WBCi is a policy advocacy initiative to encourage integrated actions that enable breastfeeding. WBCi will help countries plan and prioritize actions and budget them accurately. International agencies and donors can also use the tool to calculate or track investments in breastfeeding.
Milando, Chad W; Martenies, Sheena E; Batterman, Stuart A
2016-09-01
In air quality management, reducing emissions from pollutant sources often forms the primary response to attaining air quality standards and guidelines. Despite the broad success of air quality management in the US, challenges remain. As examples: allocating emissions reductions among multiple sources is complex and can require many rounds of negotiation; health impacts associated with emissions, the ultimate driver for the standards, are not explicitly assessed; and long dispersion model run-times, which result from the increasing size and complexity of model inputs, limit the number of scenarios that can be evaluated, thus increasing the likelihood of missing an optimal strategy. A new modeling framework, called the "Framework for Rapid Emissions Scenario and Health impact ESTimation" (FRESH-EST), is presented to respond to these challenges. FRESH-EST estimates concentrations and health impacts of alternative emissions scenarios at the urban scale, providing efficient computations from emissions to health impacts at the Census block or other desired spatial scale. In addition, FRESH-EST can optimize emission reductions to meet specified environmental and health constraints, and a convenient user interface and graphical displays are provided to facilitate scenario evaluation. The new framework is demonstrated in an SO2 non-attainment area in southeast Michigan with two optimization strategies: the first minimizes emission reductions needed to achieve a target concentration; the second minimizes concentrations while holding constant the cumulative emissions across local sources (e.g., an emissions floor). The optimized strategies match outcomes in the proposed SO2 State Implementation Plan without the proposed stack parameter modifications or shutdowns. In addition, the lower health impacts estimated for these strategies suggest that FRESH-EST could be used to identify potentially more desirable pollution control alternatives in air quality management planning. Copyright © 2016 Elsevier Ltd. All rights reserved.
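The first optimization strategy mentioned above (minimize total emission reductions subject to a concentration target) has the structure of a small linear program when a linear source-receptor relationship is assumed. The sketch below uses scipy's linprog with made-up transfer coefficients; it is not the FRESH-EST implementation, only an illustration of that formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear source-receptor model: conc[r] = sum_s T[r, s] * emis[s]
T = np.array([[0.8, 0.3, 0.1],     # ug/m3 per ton/day at receptor 1
              [0.2, 0.6, 0.4]])    # receptor 2
emis0 = np.array([40.0, 25.0, 30.0])        # current emissions (ton/day)
target = np.array([30.0, 25.0])             # concentration targets (ug/m3)

# Decision variable: reduction r_s >= 0 at each source (ton/day).
# Minimize total reduction subject to T @ (emis0 - r) <= target, 0 <= r <= emis0.
c = np.ones(3)
A_ub = -T                                    # -T r <= target - T emis0
b_ub = target - T @ emis0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, e) for e in emis0])

print("reductions (ton/day):", np.round(res.x, 2), "total:", round(res.x.sum(), 2))
```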
NASA Astrophysics Data System (ADS)
Wu, Shao-xiong; Zhang, Yang; Yu, Chang-shui
2018-03-01
Quantum Fisher information (QFI) is an important feature for the precision of quantum parameter estimation based on the quantum Cramér-Rao inequality. When the quantum state satisfies the von Neumann-Landau equation, the local quantum uncertainty (LQU), a kind of quantum correlation present in a bipartite mixed state, guarantees a lower bound on the QFI in the optimal phase estimation protocol (Girolami et al., 2013). However, in open quantum systems there is generally no explicit relation between LQU and QFI. In this paper, we study the relation between LQU and QFI in an open system composed of two interacting two-level systems coupled to independent non-Markovian environments, with the entangled initial state embedded by a phase parameter θ. The analytical calculations show that the QFI does not depend on the phase parameter θ, and its decay can be restrained through enhancing the coupling strength or non-Markovianity. Meanwhile, the LQU is related to the phase parameter θ and shows a rich variety of behavior. In particular, we find that the LQU can well bound the QFI when the coupling between the two systems is switched off or the initial state is a Bell state.
U. S. PHOSPHATE INDUSTRY: REVISED PROSPECTS AND POTENTIAL.
McKelvey, Vincent E.
1985-01-01
Although the United States is the world's largest producer and exporter of phosphates, serious doubts have arisen in recent years that U. S. deposits could sustain this important role. The development of borehole mining; i. e. , extracting the phosphate matrix as a slurry through a drill hole, however, is cause for optimism. Commercial borehole mining is still years away, but the potential advantages are numerous and important. Recent surveys also suggest that offshore deposits and deeply buried onshore deposits much exceed previous estimates. On the basis of the new technology and revised resource estimates, one can easily see the potential for increased production from U. S. deposits.
Development of a real-time transport performance optimization methodology
NASA Technical Reports Server (NTRS)
Gilyard, Glenn
1996-01-01
The practical application of real-time performance optimization is addressed (using a wide-body transport simulation) based on real-time measurements and calculation of incremental drag from forced response maneuvers. Various controller combinations can be envisioned although this study used symmetric outboard aileron and stabilizer. The approach is based on navigation instrumentation and other measurements found on state-of-the-art transports. This information is used to calculate winds and angle of attack. Thrust is estimated from a representative engine model as a function of measured variables. The lift and drag equations are then used to calculate lift and drag coefficients. An expression for drag coefficient, which is a function of parasite drag, induced drag, and aileron drag, is solved from forced excitation response data. Estimates of the parasite drag, curvature of the aileron drag variation, and minimum drag aileron position are produced. Minimum drag is then obtained by repositioning the symmetric aileron. Simulation results are also presented which evaluate the effects of measurement bias and resolution.
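The drag-minimization step lends itself to a small numerical sketch: fit a quadratic drag model to forced-excitation data and reposition the symmetric aileron at the estimated minimum-drag deflection. The deflections, drag levels, and noise below are synthetic, not values from the wide-body transport simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic forced-excitation data: symmetric aileron deflection (deg) and the
# drag coefficient extracted from the lift/drag equations at each sample.
delta_a = np.linspace(-4, 4, 60)
cd_true = 0.0210 + 2.0e-4 * (delta_a - 1.2) ** 2        # drag bucket, minimum at 1.2 deg
cd_meas = cd_true + rng.normal(0, 2e-5, delta_a.size)    # measurement noise

# Fit C_D = a*delta^2 + b*delta + c and recover the minimum-drag deflection.
a, b, c = np.polyfit(delta_a, cd_meas, 2)
delta_opt = -b / (2 * a)
cd_min = np.polyval([a, b, c], delta_opt)

print(f"estimated minimum-drag aileron position: {delta_opt:.2f} deg, "
      f"C_D there: {cd_min:.5f}")
```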
Pavanello, Michele; Adamowicz, Ludwik
2009-01-21
Accurate variational Born-Oppenheimer calculations of the 1 (1)A(1) ('), 2 (1)A(1) ('), 2 (3)A(1) ('), and 1 (1)E(') states of the H(3) (+) ion at the ground-state equilibrium geometry are reported. The wave functions of the states are expanded in terms of explicitly correlated spherical Gaussian functions with shifted centers. In the variational optimization the analytical gradient of the energy with respect to the nonlinear exponential parameters of the Gaussians has been employed. The energies obtained in the calculations are the best variational estimates ever calculated for the four states. One-electron densities for the states, as well as a D(3h)-restricted potential energy surface of the ground state calculated around the equilibrium geometry, are also presented and discussed.
Generalized Grover's Algorithm for Multiple Phase Inversion States
NASA Astrophysics Data System (ADS)
Byrnes, Tim; Forster, Gary; Tessler, Louis
2018-02-01
Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on N states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity for this case of √(D/M^α), where D is the search space dimension, M is the number of target states, and α ≈ 1, which is close to the optimal scaling.
Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A
2013-06-27
The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa) and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for all of the traits was considered reached when all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, variable evaluated, and environment cultivated, ranging from 2 to 7. Only the expansion capacity traits in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision.
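The resampling logic described above can be sketched as follows: bootstrap subsets of r blocks from the 7 available, recompute the statistic of interest for each subset size, and look for the smallest r whose bootstrap estimates stay inside the confidence interval built from the full experiment. The data and the statistic (a simple block mean) are placeholders, and the study's criterion that all estimates fall inside the interval is reported here as the fraction that do.

```python
import numpy as np

rng = np.random.default_rng(3)
blocks = rng.normal(loc=30.0, scale=3.0, size=7)      # placeholder trait value per block

# Reference 95% confidence interval from the full 7-block experiment.
full_mean = blocks.mean()
full_se = blocks.std(ddof=1) / np.sqrt(len(blocks))
ci = (full_mean - 1.96 * full_se, full_mean + 1.96 * full_se)

def bootstrap_ok(r, n_boot=2000):
    """Fraction of bootstrap samples of r blocks whose mean falls in the CI."""
    means = np.array([rng.choice(blocks, size=r, replace=True).mean()
                      for _ in range(n_boot)])
    return np.mean((means >= ci[0]) & (means <= ci[1]))

for r in range(2, 8):
    print(f"{r} repetitions: {bootstrap_ok(r):.1%} of bootstrap estimates inside the CI")
```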
Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long
2001-01-01
This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders a reasonable forecasting skill. For example, when using September-October-November SST to predict the next season December-January-February precipitation, the spatial pattern correlation between the observed and predicted is positive in 46 years among the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
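One common way to realize "optimal weights depending on the mean square error of each individual forecast" is to weight each member inversely to its estimated error variance; the sketch below applies that rule to three hypothetical forecasts of the same field. This is an illustration of the weighting idea only, not the memorandum's spectral derivation.

```python
import numpy as np

# Three individual forecasts of the same precipitation anomaly field (flattened),
# together with their estimated mean-squared errors (all values invented).
forecasts = np.array([
    [1.2, -0.3, 0.8, 0.1],
    [0.9, -0.1, 1.1, 0.0],
    [1.5, -0.4, 0.6, 0.3],
])
mse = np.array([0.25, 0.40, 0.60])

# Inverse-MSE weights (normalized); lower-error members get larger weights.
w = (1.0 / mse) / np.sum(1.0 / mse)
ensemble = w @ forecasts

print("weights:", np.round(w, 3))
print("ensemble forecast:", np.round(ensemble, 3))
```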
Spatial variability in water-balance model performance in the conterminous United States
Hay, L.E.; McCabe, G.J.
2002-01-01
A monthly water-balance (WB) model was tested in 44 river basins from diverse physiographic and climatic regions across the conterminous United States (U.S.). The WB model includes the concepts of climatic water supply and climatic water demand, seasonality in climatic water supply and demand, and soil-moisture storage. Exhaustive search techniques were employed to determine the optimal set of precipitation and temperature stations, and the optimal set of WB model parameters to use for each basin. It was found that the WB model worked best for basins with: (1) a mean elevation less than 450 meters or greater than 2000 meters, and/or (2) monthly runoff that is greater than 5 millimeters (mm) more than 80 percent of the time. In a separate analysis, a multiple linear regression (MLR) was computed using the adjusted R-square values obtained by comparing measured and estimated monthly runoff of the original 44 river basins as the dependent variable, and combinations of various independent variables [streamflow gauge latitude, longitude, and elevation; basin area, the long-term mean and standard deviation of annual precipitation; temperature and runoff; and low-flow statistics (i.e., the percentage of months with monthly runoff that is less than 5 mm)]. Results from the MLR study showed that the reliability of a WB model for application in a specific region can be estimated from mean basin elevation and the percentage of months with gauged runoff less than 5 mm. The MLR equations were subsequently used to estimate adjusted R-square values for 1,646 gauging stations across the conterminous U.S. Results of this study indicate that WB models can be used reliably to estimate monthly runoff in the eastern U.S., mountainous areas of the western U.S., and the Pacific Northwest. Applications of monthly WB models in the central U.S. can lead to uncertain estimates of runoff.
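A minimal monthly water-balance step in the spirit described above (climatic supply, climatic demand, and a soil-moisture store) might look like the following. The parameter names, storage capacity, runoff fraction, and monthly climate values are placeholders, not the calibrated values used in the 44-basin study.

```python
def wb_month(precip, pet, soil, soil_max=150.0, runoff_frac=0.5):
    """One month of a simple water-balance model (all quantities in mm).

    Returns (runoff, actual_et, new_soil). Parameters are illustrative."""
    if precip >= pet:
        aet = pet
        soil = soil + (precip - pet)
        surplus = max(0.0, soil - soil_max)       # storage overflow
        soil = min(soil, soil_max)
    else:
        # withdrawal from storage scales with relative soil moisture
        withdrawal = min(soil, (pet - precip) * soil / soil_max)
        aet = precip + withdrawal
        soil -= withdrawal
        surplus = 0.0
    runoff = runoff_frac * surplus                # part of the surplus becomes runoff
    return runoff, aet, soil

# Run one year of invented monthly climate.
precip = [90, 80, 70, 50, 30, 20, 10, 15, 30, 60, 80, 95]
pet    = [15, 20, 35, 55, 80, 110, 130, 120, 90, 55, 25, 15]
soil = 75.0
for m, (p, e) in enumerate(zip(precip, pet), start=1):
    q, aet, soil = wb_month(p, e, soil)
    print(f"month {m:2d}: runoff {q:5.1f} mm, AET {aet:5.1f} mm, soil {soil:6.1f} mm")
```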
Olariu, Victor; Manesso, Erica; Peterson, Carsten
2017-06-01
Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
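The "shortest path in the free energy landscape" step can be illustrated on a toy two-gene landscape: discretize expression space on a grid, assign each move a cost derived from the free energy of the node it enters, and run Dijkstra's algorithm between two attractor-like states. The double-well surface below is made up for the illustration and is not one of the haematopoiesis or stem cell circuits studied above.

```python
import heapq
import numpy as np

# Toy free-energy surface F(x, y) with two wells (attractor-like states).
n = 61
xs = np.linspace(-2, 2, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
F = (X**2 - 1) ** 2 + 0.5 * Y**2 + 0.3 * X * Y

def dijkstra_path_cost(F, start, goal):
    """Minimal accumulated free-energy cost between two grid nodes."""
    rows, cols = F.shape
    dist = np.full((rows, cols), np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return d
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + max(F[ni, nj], 0.0)      # edge cost = energy of entered node
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return np.inf

start = (np.abs(xs + 1).argmin(), np.abs(xs).argmin())   # well near x = -1
goal  = (np.abs(xs - 1).argmin(), np.abs(xs).argmin())   # well near x = +1
print("optimal path cost between the wells:", round(dijkstra_path_cost(F, start, goal), 3))
```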
Comparative analysis for various redox flow batteries chemistries using a cost performance model
NASA Astrophysics Data System (ADS)
Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent
2015-10-01
The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. Systems aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system cost minimizing objective function determines stack design by optimizing the state of charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of < $350 kWh⁻¹ for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh⁻¹ for a 4-h application, and to $100 kWh⁻¹ for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future direction.
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2012-01-01
A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the un-calibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.
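A stripped-down version of the output-error idea: assume a simple calibration model (here just an airspeed bias) and a constant wind, then fit those parameters so that the reconstructed inertial velocity matches the measured one in a least-squares sense. The 2-D kinematics, the synthetic maneuver, and the parameter set are deliberate simplifications for illustration, not the X-48B calibration models or maneuver.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
N = 400
t = np.linspace(0, 120, N)

# Truth for a synthetic 2-D maneuver: airspeed, heading, and constant wind.
V_true = 60 + 5 * np.sin(2 * np.pi * t / 60)           # m/s
psi = np.deg2rad(20 * np.sin(2 * np.pi * t / 40))       # heading, rad
wind_true = np.array([3.0, -2.0])                       # north, east (m/s)

# Inertial velocity (assumed accurate) = air-relative velocity + wind.
v_north = V_true * np.cos(psi) + wind_true[0]
v_east  = V_true * np.sin(psi) + wind_true[1]

# Measured airspeed carries an unknown bias (the calibration parameter).
bias_true = 2.5
V_meas = V_true + bias_true + rng.normal(0, 0.2, N)

def residuals(p):
    bias, wn, we = p
    V_cal = V_meas - bias
    rn = V_cal * np.cos(psi) + wn - v_north
    re = V_cal * np.sin(psi) + we - v_east
    return np.concatenate([rn, re])

sol = least_squares(residuals, x0=[0.0, 0.0, 0.0])
print("estimated [airspeed bias, wind N, wind E]:", np.round(sol.x, 2))
```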
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using 4-kW stack data at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 kWh⁻¹ for the storage system is identified.
Mirzaeinejad, Hossein; Mirzaei, Mehdi; Rafatnia, Sadra
2018-06-11
This study deals with the enhancement of the directional stability of a vehicle turning at high speeds on various road conditions using integrated active steering and differential braking systems. In this respect, the minimum usage of intentional asymmetric braking force to compensate the drawbacks of active steering control with small reduction of vehicle longitudinal speed is desired. To this aim, a new optimal multivariable controller is analytically developed for integrated steering and braking systems based on the prediction of vehicle nonlinear responses. A fuzzy programming extracted from the nonlinear phase plane analysis is also used for managing the two control inputs in various driving conditions. With the proposed fuzzy programming, the weight factors of the control inputs are automatically tuned and softly changed. In order to simulate a real-world control system, some required information about the system states and parameters which cannot be directly measured is estimated using the Unscented Kalman Filter (UKF). Finally, simulation studies are carried out using a validated vehicle model to show the effectiveness of the proposed integrated control system in the presence of model uncertainties and estimation errors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Multiple Target Laser Designator (MTLD)
2007-03-01
Optimized Liquid Crystal Scanning Element; Optimize the Nonimaging Predictive Algorithm for Target Ranging, Tracking, and Position Estimation...commercial potential. 3.0 PROGRESS THIS QUARTER: 3.1 Optimization of Nonimaging Holographic Antenna for Target Tracking and Position Estimation (Task 6). In
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahowald, Natalie
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in earth science with limited numbers of simulations; and, c) will be (as part of the proposed research) significantly improved both by adding asynchronous parallelism, early truncation of unsuccessful simulations, and the improvement of both serial and parallel performance by the use of derivative and sensitivity information from global and local surrogate approximations S(x). The algorithm development and testing will be focused on the CLM-ME/N model application, but the methods are general and are expected to also perform well on optimization for parameter estimation of other climate models and other classes of continuous multimodal optimization problems arising from complex simulation models. In addition, this proposal will compile available datasets of emissions of methane, nitrous oxides and reactive nitrogen species and develop protocols for site level comparisons with the CLM-ME/N. Once the model parameters are optimized against site level data, the model will be simulated at the global level and compared to atmospheric concentration measurements for the current climate, and future emissions will be estimated using climate change as simulated by the CESM. This proposal combines experts in earth system modeling, optimization, computer science, and process level understanding of soil gas emissions in an interdisciplinary team in order to improve the modeling of methane and nitrogen gas emissions.
This proposal thus meets the requirements of the SciDAC RFP by integrating state-of-the-art computer science and earth system science to build an improved earth system model.
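To show what a basic surrogate-optimization loop looks like (far simpler than the asynchronous, parallel Stochastic RBF methods proposed above), the sketch below fits a radial-basis-function surrogate to a handful of expensive-looking evaluations, proposes the next point by minimizing the surrogate minus a distance-based exploration bonus, and repeats. The objective is a cheap two-parameter stand-in for a CLM-ME/N calibration misfit, and the bonus weight and candidate sampling are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)

def expensive_misfit(x):
    """Stand-in for an expensive model-vs-observation misfit (2 parameters)."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.05 * np.sin(15 * x[0])

# Initial design: a few random points in the unit square.
X = rng.random((6, 2))
y = np.array([expensive_misfit(x) for x in X])

for it in range(20):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    cand = rng.random((500, 2))                       # candidate points
    pred = surrogate(cand)
    # exploration bonus: prefer candidates far from points already evaluated
    dmin = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2), axis=1)
    score = pred - 0.3 * dmin
    x_next = cand[np.argmin(score)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_misfit(x_next))

best = X[np.argmin(y)]
print("best parameters found:", np.round(best, 3), "misfit:", round(y.min(), 4))
```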
Optimal Designs for the Rasch Model
ERIC Educational Resources Information Center
Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer
2012-01-01
In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
A practical globalization of one-shot optimization for optimal design of tokamak divertors
NASA Astrophysics Data System (ADS)
Blommaert, Maarten; Dekeyser, Wouter; Baelmans, Martine; Gauger, Nicolas R.; Reiter, Detlev
2017-01-01
In past studies, nested optimization methods were successfully applied to design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.
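Powell's damping of the BFGS update, mentioned above as the device for keeping the Hessian estimate well behaved, is compact enough to show directly. The sketch below implements the generic damped update of an approximate Hessian B from a step s and gradient change y; it is a textbook formulation, not the code used in the divertor-design solver, and the test vectors are arbitrary.

```python
import numpy as np

def damped_bfgs_update(B, s, y, damping=0.2):
    """Powell-damped BFGS update of the Hessian approximation B.

    s: step x_{k+1} - x_k,  y: gradient change g_{k+1} - g_k."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    # Powell's damping: blend y with Bs when the curvature s'y is too small.
    if sy < damping * sBs:
        theta = (1.0 - damping) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

# Example where the damping branch is active: a step with negative curvature.
B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.2])        # s'y < 0 would break the plain BFGS update
B_new = damped_bfgs_update(B, s, y)
print("updated B:\n", np.round(B_new, 3))
print("eigenvalues (stay positive):", np.round(np.linalg.eigvalsh(B_new), 3))
```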
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
Paszyńska, A.; Paszyński, M.; Jopek, K.; ...
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(Ne log Ne), where Ne is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.
Zhang, JunQi; Wang, Cheng; Zhou, MengChu
2015-10-01
Learning automata (LA) are powerful tools for reinforcement learning. A discretized pursuit LA is the most popular one among them. During an iteration its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, the learning becomes extremely slow because there are too many updates to be made at each iteration. The increased updates are mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed to perform both phases 1 and 3 with the computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence speed than the classical one when operating in stationary environments. This paper can promote the applications of LA toward the large-scale-action oriented area that requires efficient reinforcement learning tools with assured ε-optimality, fast convergence speed, and low computational complexity for each iteration.
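A minimal (continuous-probability) pursuit learning-automaton loop is sketched below to show the three phases named above: action selection, tracking of reward estimates, and movement of the probability vector toward the currently best-estimated action. The environment's reward probabilities and the learning rate are arbitrary, and this sketch does not include the discretized, fast bookkeeping that is the paper's contribution.

```python
import numpy as np

rng = np.random.default_rng(6)
reward_prob = np.array([0.35, 0.55, 0.80, 0.60])   # unknown to the automaton
r = len(reward_prob)

p = np.full(r, 1.0 / r)            # action probability vector
est = np.zeros(r)                  # running reward estimates
counts = np.zeros(r)
lam = 0.02                         # pursuit learning rate

for t in range(5000):
    a = rng.choice(r, p=p)                         # phase 1: select an action
    reward = rng.random() < reward_prob[a]         # environment response
    counts[a] += 1
    est[a] += (reward - est[a]) / counts[a]        # update the reward estimate
    best = int(np.argmax(est))                     # phase 2: best estimated action
    p = (1 - lam) * p + lam * np.eye(r)[best]      # phase 3: pursue the best action

print("estimated reward probabilities:", np.round(est, 2))
print("final action probabilities:    ", np.round(p, 2))
```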
Zhang, Zhijun; Zhu, Meihua; Ashraf, Muhammad; Broberg, Craig S; Sahn, David J; Song, Xubo
2014-12-01
Quantitative analysis of right ventricle (RV) motion is important for study of the mechanism of congenital and acquired diseases. Unlike the left ventricle (LV), motion estimation of the RV is more difficult because of its complex shape and thin myocardium. Although attempts of finite element models on MR images and speckle tracking on echocardiography have shown promising results on RV strain analysis, these methods can be improved since the temporal smoothness of the motion is not considered. The authors have proposed a temporally diffeomorphic motion estimation method in which a spatiotemporal transformation is estimated by optimization of a registration energy functional of the velocity field in their earlier work. The proposed motion estimation method is a fully automatic process for general image sequences. The authors apply the method, combined with a semiautomatic myocardium segmentation method, to the RV strain analysis of three-dimensional (3D) echocardiographic sequences of five open-chest pigs under different steady states. The authors compare the peak two-point strains derived by their method with those estimated from the sonomicrometry; the results show that they have high correlation. The motion of the right ventricular free wall is studied by using segmental strains. The baseline sequence results show that the segmental strains in their methods are consistent with results obtained by other image modalities such as MRI. The image sequences of pacing steady states show that segments with the largest strain variation coincide with the pacing sites. The high correlation of the peak two-point strains of their method and sonomicrometry under different steady states demonstrates that their RV motion estimation has high accuracy. The closeness of the segmental strains of their method to those from MRI shows the feasibility of their method in the study of RV function by using 3D echocardiography. The strain analysis of the pacing steady states shows the potential utility of their method in the study of RV diseases.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy
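The theoretical steady-state estimation error covariance that the tuner-selection routine minimizes can be obtained, for a given linear model and noise statistics, from the discrete algebraic Riccati equation. The sketch below computes it for a small made-up system with scipy; it illustrates the quantity being evaluated, not the NASA tuner-selection algorithm itself.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Small illustrative discrete-time model: x_{k+1} = A x_k + w,  y_k = C x_k + v
A = np.array([[0.98, 0.10],
              [0.00, 0.95]])
C = np.array([[1.0, 0.0]])                 # one sensor, two states
Q = np.diag([1e-4, 4e-4])                  # process noise covariance
R = np.array([[1e-2]])                     # measurement noise covariance

# Steady-state a priori error covariance P from the filtering Riccati equation
# (note the transposes: the estimation problem is the dual of the control one).
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)      # steady-state Kalman gain

print("steady-state error covariance P:\n", np.round(P, 5))
print("steady-state Kalman gain K:\n", np.round(K, 4))
```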
Problem of quantifying quantum correlations with non-commutative discord
NASA Astrophysics Data System (ADS)
Majtey, A. P.; Bussandri, D. G.; Osán, T. M.; Lamberti, P. W.; Valdés-Hernández, A.
2017-09-01
In this work we analyze a non-commutativity measure of quantum correlations recently proposed by Guo (Sci Rep 6:25241, 2016). By resorting to a systematic survey of a two-qubit system, we detected an undesirable behavior of such a measure related to its representation-dependence. In the case of pure states, this dependence manifests as a non-satisfactory entanglement measure whenever a representation other than the Schmidt's is used. In order to avoid this basis-dependence feature, we argue that a minimization procedure over the set of all possible representations of the quantum state is required. In the case of pure states, this minimization can be analytically performed and the optimal basis turns out to be that of Schmidt's. In addition, the resulting measure inherits the main properties of Guo's measure and, unlike the latter, it reduces to a legitimate entanglement measure in the case of pure states. Some examples involving general mixed states are also analyzed considering such an optimization. The results show that, in most cases of interest, the use of Guo's measure can result in an overestimation of quantum correlations. However, since Guo's measure has the advantage of being easily computable, it might be used as a qualitative estimator of the presence of quantum correlations.
Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space
NASA Astrophysics Data System (ADS)
Volkoff, T. J.; Whaley, K. B.
2014-12-01
We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.
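For pure states evolving under a fixed generator, the quantum Fisher information reduces to four times the variance of the generator, which is the quantity behind the metrological superposition-size measure discussed above. The short sketch below evaluates it for a GHZ/NOON-like state of N qubits with the collective generator J_z, showing the quadratic (Heisenberg-like) scaling; this is a textbook computation for illustration, not the paper's derivation or its Gaussian oscillator examples.

```python
import numpy as np

def qfi_pure(psi, H):
    """Quantum Fisher information of a pure state under generator H:
    F_Q = 4 (<H^2> - <H>^2)."""
    Hpsi = H @ psi
    mean = np.real(np.vdot(psi, Hpsi))
    mean_sq = np.real(np.vdot(Hpsi, Hpsi))
    return 4.0 * (mean_sq - mean**2)

for N in (2, 4, 8):
    dim = 2 ** N
    # Collective generator J_z is diagonal in the computational basis:
    # eigenvalue (n_zeros - n_ones)/2 for each basis state.
    diag = np.array([(N - 2 * bin(k).count("1")) / 2.0 for k in range(dim)])
    Jz = np.diag(diag)
    # GHZ / NOON-like state (|00...0> + |11...1>)/sqrt(2)
    psi = np.zeros(dim, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    print(f"N = {N}: QFI = {qfi_pure(psi, Jz):.1f}  (N^2 = {N**2})")
```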
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
Tool Wear Monitoring Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu
A tool wear monitoring approach that accounts for the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning operations. The variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the measured vibration signal and the signal estimated from a time-series model of the cutting dynamics is introduced as the diagnostic feature. It is found that the early tool wear state (flank wear below 40 µm) can be monitored, and that the optimal tool-exchange time and the tool wear state in actual turning can be judged from the change in this residual error. Moreover, the variation of the surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
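A minimal sketch of the residual-based diagnostic, assuming an autoregressive (AR) model fitted to vibration data from a sharp tool and used as the reference dynamics; the model order, the surrogate signals, and the noise levels are illustrative assumptions rather than the authors' experimental settings:

```python
import numpy as np

def fit_ar(signal, order):
    """Least-squares fit of an AR(order) model: x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
    X = np.column_stack([signal[order - k - 1:len(signal) - k - 1] for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def residual_rms(signal, coeffs):
    """RMS of the one-step-ahead prediction residual under the reference AR model."""
    order = len(coeffs)
    X = np.column_stack([signal[order - k - 1:len(signal) - k - 1] for k in range(order)])
    resid = signal[order:] - X @ coeffs
    return np.sqrt(np.mean(resid ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(20000) / 10000.0
    sharp = np.sin(2 * np.pi * 800 * t) + 0.1 * rng.standard_normal(t.size)
    # Worn-tool surrogate: extra harmonic content and noise mimic chipping-induced nonlinearity.
    worn = sharp + 0.3 * np.sin(2 * np.pi * 1600 * t) ** 3 + 0.2 * rng.standard_normal(t.size)
    coeffs = fit_ar(sharp, order=12)
    print("baseline residual RMS:", residual_rms(sharp, coeffs))
    print("worn-tool residual RMS:", residual_rms(worn, coeffs))  # noticeably larger
```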
Health-aware Model Predictive Control of Pasteurization Plant
NASA Astrophysics Data System (ADS)
Karimi Pour, Fatemeh; Puig, Vicenç; Ocampo-Martinez, Carlos
2017-01-01
In order to optimize the trade-off between component life and energy consumption, system health management must be integrated with the control modules. This paper proposes the integration of model predictive control (MPC) with a fatigue estimation approach that minimizes damage to the components of a pasteurization plant. Fatigue is assessed with the rainflow counting algorithm. Using data from this algorithm, a simplified model that characterizes the health of the system is developed and integrated with the MPC. The MPC objective is modified by adding an extra criterion that accounts for the accumulated damage; however, this extra criterion introduces a steady-state offset. By including an integral action in the MPC controller, the steady-state regulation error is eliminated. The proposed control scheme is validated in simulation using a simulator of a utility-scale pasteurization plant.
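A schematic form of the modified objective is given below; the damage increment d(·) supplied by the rainflow-calibrated health model, the weight λ, and the integral-state formulation are assumptions consistent with the abstract rather than the paper's exact equations:

```latex
% Health-aware MPC objective over prediction horizon N_p (schematic):
\min_{u_0,\dots,u_{N_p-1}} \;
  \sum_{k=0}^{N_p-1}
    \| x_k - x^{\mathrm{ref}} \|_Q^2
  + \| u_k \|_R^2
  + \| x^{I}_k \|_{Q_I}^2
  + \lambda\, D_k
\\
% Accumulated damage predicted by the simplified health model:
D_{k+1} = D_k + d(x_k, u_k)
\\
% Integral state removing the steady-state offset introduced by the damage term:
x^{I}_{k+1} = x^{I}_{k} + \bigl( r - y_k \bigr)
```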
Egea, Jose A; Henriques, David; Cokelaer, Thomas; Villaverde, Alejandro F; MacNamara, Aidan; Danciu, Diana-Patricia; Banga, Julio R; Saez-Rodriguez, Julio
2014-05-10
Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer nonlinear programming (MINLP) problems, and variable neighborhood search (VNS) for integer programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods.
Dosage optimization in positron emission tomography: state-of-the-art methods and future prospects
Karakatsanis, Nicolas A; Fokou, Eleni; Tsoumpas, Charalampos
2015-01-01
Positron emission tomography (PET) is widely used nowadays for tumor staging and therapy response assessment in the clinic. However, average PET radiation exposure has increased due to higher PET utilization. This study aims to review state-of-the-art PET tracer dosage optimization methods, accounting for the effects of body attenuation and scan protocol parameters on the counting rate. In particular, the relationship between the noise equivalent count rate (NECR) and the dosage (NECR-dosage curve) for a range of clinical PET systems and body attenuation sizes is systematically studied to prospectively estimate the minimum dosage required for a sufficiently high NECR. The optimization criterion can be determined either as a function of the peak of the NECR-dosage curve or as a fixed NECR score when NECR uniformity across a patient population is important. In addition, systematic NECR assessments within a controllable environment of realistic simulations and phantom experiments can lead to a NECR-dosage response model capable of predicting the optimal dosage for every individual PET scan. Unlike conventional guidelines, which suggest considerably larger dosage levels for obese patients, NECR-based optimization recommends: i) a moderate dosage achieving 90% of peak NECR for obese patients, ii) considerable dosage reduction for slimmer patients such that uniform NECR is attained across the patient population, and iii) prolongation of scans for PET/MR protocols, where longer PET acquisitions are affordable due to lengthy MR sequences, in which case motion compensation becomes important. Finally, the need for continuous adaptation of dosage optimization to emerging technologies is discussed. PMID:26550543
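The optimization logic can be illustrated with a generic peaked dosage-response curve; the parametric form, its coefficients, and the way the 90%-of-peak criterion is applied below are placeholders for the NECR-dosage models the review discusses, not measured data:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def necr(dose, a=1.0, b=0.12):
    """Placeholder peaked NECR-dosage curve (arbitrary units): rises, saturates, then drops."""
    return a * dose * np.exp(-b * dose)

# Dose at the NECR peak.
peak = minimize_scalar(lambda d: -necr(d), bounds=(0.01, 100.0), method="bounded")
dose_peak, necr_peak = peak.x, necr(peak.x)

# Smallest dose reaching 90% of the peak NECR (the moderate-dose criterion for obese patients).
dose_90 = brentq(lambda d: necr(d) - 0.9 * necr_peak, 1e-6, dose_peak)
print(f"peak at dose {dose_peak:.2f}; 90% of peak NECR already reached at dose {dose_90:.2f}")
```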
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
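The trade-off targeted by the calibration can be seen in the simplest adaptive setting: a scalar linear encoding model updated at a fixed learning rate. The gradient-style update, the noise levels, and the rates below are illustrative assumptions, not the paper's point-process or Gaussian-process derivations:

```python
import numpy as np

def adapt(mu, w_true=2.0, n=5000, noise=0.5, seed=0):
    """Adaptive estimate of a scalar encoding weight, updated at learning rate mu."""
    rng = np.random.default_rng(seed)
    w, history = 0.0, np.empty(n)
    for t in range(n):
        x = rng.standard_normal()
        y = w_true * x + noise * rng.standard_normal()
        w += mu * x * (y - x * w)          # gradient-style update at rate mu
        history[t] = w
    return history

for mu in (0.005, 0.05, 0.5):
    h = adapt(mu)
    conv = np.argmax(np.abs(h - 2.0) < 0.2)     # first step within 10% of the true weight
    sse = np.mean((h[-1000:] - 2.0) ** 2)       # steady-state squared error
    print(f"mu={mu:<6} convergence step ~{conv:<5} steady-state error {sse:.4f}")
```

Larger mu converges quickly but settles at a larger steady-state error, and smaller mu does the opposite, which is exactly the trade-off the analytical calibration resolves.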
Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method
NASA Astrophysics Data System (ADS)
Kenderi, Gábor; Fidlin, Alexander
2014-12-01
The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.
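The state-augmentation step can be sketched for a single-degree-of-freedom oscillator with unknown stiffness: the stiffness is appended to the state vector and estimated jointly with position and velocity by an extended Kalman filter. The system, noise levels, and discretization are illustrative assumptions, and the paper's nonparametric extension (time-variant spring-damper coefficients as augmented states) goes beyond this parametric sketch:

```python
import numpy as np

# 1DoF oscillator m*x'' + c*x' + k*x = u, with k unknown; augmented state z = [x, v, k].
m, c, k_true, dt = 1.0, 0.4, 25.0, 1e-3
rng = np.random.default_rng(1)

def step_truth(z, u):
    x, v = z
    return np.array([x + dt * v, v + dt * (-(k_true * x + c * v - u) / m)])

def f(z, u):
    """Augmented process model: Euler step of the oscillator plus random-walk stiffness."""
    x, v, k = z
    return np.array([x + dt * v, v + dt * (-(k * x + c * v - u) / m), k])

def F_jac(z):
    x, v, k = z
    return np.array([[1.0, dt, 0.0],
                     [-dt * k / m, 1.0 - dt * c / m, -dt * x / m],
                     [0.0, 0.0, 1.0]])

H = np.array([[1.0, 0.0, 0.0]])          # only displacement is measured
Q = np.diag([1e-9, 1e-7, 1e-3])          # last entry drives the parameter random walk (tuning knob)
R = np.array([[1e-6]])

z_est, P = np.array([0.0, 0.0, 10.0]), np.diag([1e-4, 1e-4, 100.0])
truth = np.array([0.05, 0.0])
for t in range(20000):
    u = np.sin(2 * np.pi * 1.5 * t * dt)              # persistent excitation
    truth = step_truth(truth, u)
    y = truth[0] + 1e-3 * rng.standard_normal()
    # EKF prediction (Jacobian evaluated at the previous estimate)
    Fk = F_jac(z_est)
    z_est, P = f(z_est, u), Fk @ P @ Fk.T + Q
    # EKF measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z_est = z_est + (K @ (np.array([y]) - H @ z_est)).ravel()
    P = (np.eye(3) - K @ H) @ P
print("estimated stiffness:", z_est[2], "true:", k_true)
```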
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates the model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates abrupt changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance (which leads to filter divergence) by adjusting the smoothing factor of the kernel smoothing algorithm; (3) it assimilates data recursively into the model and can thus detect possible time variation of parameters; and (4) it properly addresses various sources of uncertainty stemming from input, output, and parameter errors. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
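The kernel-smoothing ingredient that distinguishes the SEnKF from a plain augmented-state ensemble Kalman filter can be sketched in isolation: before each forecast, the parameter members are shrunk toward the ensemble mean and re-jittered so that their spread neither collapses nor jumps. The Liu-West-style shrinkage form and the smoothing factor below are assumptions used for illustration:

```python
import numpy as np

def kernel_smooth_parameters(theta_ens, h=0.1, rng=None):
    """Shrink parameter ensemble members toward their mean and add kernel jitter.

    theta_ens: (n_members, n_params) array of sampled parameters.
    h: smoothing factor; small h keeps members close to their current values,
       larger h pulls them toward the mean and adds more exploration noise.
    """
    rng = rng or np.random.default_rng()
    a = np.sqrt(1.0 - h ** 2)                   # shrinkage coefficient
    mean = theta_ens.mean(axis=0)
    cov = np.cov(theta_ens, rowvar=False)
    shrunk = a * theta_ens + (1.0 - a) * mean
    jitter = rng.multivariate_normal(np.zeros(mean.size),
                                     (h ** 2) * np.atleast_2d(cov),
                                     size=theta_ens.shape[0])
    return shrunk + jitter                      # total ensemble variance is preserved
```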
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Kirubarajan, T.
2017-05-01
Signal processing techniques are prevalent in a wide range of fields: control, target tracking, telecommunications, robotics, fault detection and diagnosis, and even stock market analysis, to name a few. Although first introduced in the 1950s, the Kalman filter (KF) remains the most popular method for signal processing and state estimation. The KF offers an optimal solution to the estimation problem under strict assumptions. Since then, a number of other estimation strategies and filters have been introduced to overcome robustness issues, such as the smooth variable structure filter (SVSF). In this paper, properties of the SVSF are explored in an effort to detect and diagnose faults in an electromechanical system. The results are compared with the KF method, and future work is discussed.
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
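For reference, the standard relations behind the value-function approach for affine systems are as follows; the cost weights are generic placeholders, and the paper additionally handles the strict-feedback-to-affine transformation and the tracking feedforward term:

```latex
% Affine dynamics and infinite-horizon cost:
\dot{x} = f(x) + g(x)\,u, \qquad
V(x_0) = \int_0^{\infty} \bigl( Q(x) + u^{\top} R\, u \bigr)\, dt
\\
% Hamilton-Jacobi-Bellman equation and the optimal feedback it implies:
0 = Q(x) + \nabla V^{\top} f(x)
  - \tfrac{1}{4}\, \nabla V^{\top} g(x) R^{-1} g(x)^{\top} \nabla V ,
\qquad
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V(x)
```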
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression, and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem, which avoids the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees, and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable that affects the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods, depending on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
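The state-dependent, kernel-weighted idea can be illustrated with the newsvendor example mentioned in the abstract: given past (state, demand) observations, the order for a query state is a weighted quantile of the observed demands at the newsvendor critical ratio. The kernel, bandwidth, and cost parameters below are illustrative assumptions:

```python
import numpy as np

def newsvendor_decision(states, demands, query, price=4.0, cost=1.0, bandwidth=0.5):
    """Kernel-weighted newsvendor order quantity for an observed exogenous state.

    The optimal order is the critical-ratio quantile of the demand distribution,
    estimated here from historical demands weighted by a Gaussian kernel on the
    distance between their states and the query state.
    """
    w = np.exp(-0.5 * ((states - query) / bandwidth) ** 2)
    w = w / w.sum()
    ratio = (price - cost) / price                 # critical ratio: underage / (underage + overage)
    order = np.argsort(demands)
    cum = np.cumsum(w[order])
    return demands[order][np.searchsorted(cum, ratio)]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    states = rng.uniform(0, 10, 2000)              # e.g., forecast wind speed
    demands = 5.0 + 2.0 * states + rng.normal(0, 3, 2000)
    print("order for state 7.5:", newsvendor_decision(states, demands, query=7.5))
```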
Fragment Capture Simulation for MANPADS Test Arena Optimization
2014-03-27
length and weighing 13 to 25 kilograms (Bureau of Political-Military Affairs, 2011). Examples of MANPADS include the SA-7b and the SA-14. A MANPADS...terrorist activities. An estimated 500,000-750,000 MANPADS remain stockpiled worldwide, making them a persistent threat to both military and civilian...aircraft (Schroeder, 2007). According to the U.S. State Department's Bureau of Political-Military Affairs, curbing the spread of MANPADS is a top priority