Liu, Qingshan; Guo, Zhishan; Wang, Jun
2012-02-01
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
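As a rough illustration of how projection-type optimization dynamics behave, the sketch below simulates a generic Euler-discretized projected gradient flow with a quadratic penalty for the equality constraint. This is NOT the authors' one-layer network; the objective, bounds, step size and penalty weight are all invented for illustration.

```python
import numpy as np

# minimize f(x) = (x1^2 + x2^2) / (x1 + x2)  (pseudoconvex for x1 + x2 > 0)
# subject to x1 + x2 = 2 and bounds 0.5 <= x_i <= 2.
# Generic projected gradient flow with a penalty term, not the paper's model.

def grad_f(x):
    s = x.sum()
    return (2.0 * x * s - (x ** 2).sum()) / s ** 2

def project_box(x, lo=0.5, hi=2.0):
    return np.clip(x, lo, hi)

x = np.array([1.8, 0.6])
step, penalty = 0.01, 50.0          # assumed design parameters
for _ in range(5000):
    g = grad_f(x) + penalty * (x.sum() - 2.0)   # penalty enforces the equality
    x = project_box(x - step * g)
# by symmetry the optimum is x* = (1, 1); the penalty leaves a small bias
```

The state converges near the constrained optimum, mirroring the abstract's claim that convergence holds once the design parameters (here the penalty weight) exceed a problem-dependent lower bound.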
Critical transition in the constrained traveling salesman problem.
Andrecut, M; Ali, M K
2001-04-01
We investigate the finite-size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). Computational experiments point to a critical transition (at ρ_c ≈ 85%) in the dependence of the excess of the mean optimal tour length over the Held-Karp lower bound on the density of obstacles.
Chance-Constrained Guidance With Non-Convex Constraints
NASA Technical Reports Server (NTRS)
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint, which means that finding the optimal guidance trajectory is, in general, intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints.
These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
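The decomposition described above can be made concrete with a small sketch: by Boole's inequality, a joint chance constraint with total risk bound Delta is replaced by per-step constraints whose risks sum to Delta, and each Gaussian single-step constraint becomes a deterministic tightening of the state bound. All numbers below are illustrative, not from the paper.

```python
from statistics import NormalDist

# Joint chance constraint P(x_t <= x_max for all t) >= 1 - Delta,
# decomposed via Boole's inequality into P(x_t > x_max) <= delta_t
# with sum(delta_t) = Delta (uniform allocation here).

Delta = 1e-3                        # overall risk bound (user-specified)
sigmas = [0.1, 0.2, 0.3]            # assumed std. dev. of x_t at each step
delta_t = Delta / len(sigmas)       # uniform risk allocation
z = NormalDist().inv_cdf(1.0 - delta_t)

x_max = 1.0
tightened = [x_max - s * z for s in sigmas]
# requiring mean(x_t) <= tightened[t] guarantees joint failure
# probability at most Delta (a sufficient, conservative condition)
```

Steps with larger uncertainty receive a tighter deterministic bound, which is exactly what makes the decomposed problem a sufficient condition for the original joint constraint.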
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial increase in the complexity of the optimization problems that can be handled efficiently.
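The constraint classes listed above (simple bounds, linear constraints, smooth nonlinear constraints) are exactly what a sequential quadratic programming solver accepts. A small stand-in using SciPy's SLSQP (NPSOL itself is proprietary; the problem data are made up):

```python
from scipy.optimize import minimize

# Smooth objective under simple bounds, one linear and one smooth
# nonlinear constraint, solved by an SQP method.

obj = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
cons = [
    {"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]},   # linear: x0 + x1 <= 3
    {"type": "ineq", "fun": lambda x: x[0] * x[1] - 0.5},   # nonlinear: x0*x1 >= 0.5
]
res = minimize(obj, x0=[0.8, 1.0], bounds=[(0.0, 2.0), (0.0, 2.0)],
               constraints=cons, method="SLSQP")
# the optimum is (1, 2): the unconstrained minimizer happens to be feasible
```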
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
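A minimal sketch of the idea: each outer iteration approximately minimizes an augmented Lagrangian with a derivative-free compass (pattern) search whose stopping rule is the pattern size, not a gradient norm. The test problem, the multiplier schedule and the tolerance schedule are assumptions for illustration, not Conn, Gould, and Toint's constants.

```python
# minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
f = lambda z: z[0] ** 2 + z[1] ** 2
h = lambda z: z[0] + z[1] - 1.0

def compass_search(phi, x0, step, tol):
    # derivative-free minimization; stops when the pattern size <= tol
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                if phi(y) < phi(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5          # shrink the pattern
    return x

lam, mu = 0.0, 1.0
x = [0.0, 0.0]
for k in range(8):
    aug = lambda z, lam=lam, mu=mu: f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
    x = compass_search(aug, x, step=0.5, tol=0.5 ** (k + 6))
    lam += mu * h(x)             # first-order multiplier update
    mu *= 2.0
# the true solution is x* = (0.5, 0.5) with multiplier lam* = -1
```

The inner minimization is never exact, yet the outer loop still converges, which is the point the abstract makes about "successive, inexact, bound constrained minimization".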
Fast alternating projection methods for constrained tomographic reconstruction
Liu, Li; Han, Yongxin
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity, with nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial-and-error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV step. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality and quantification. PMID:28253298
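The projection mechanics behind POCS can be shown on a toy problem: alternate projections onto an affine set {x : Ax = b} (standing in for a data-fidelity constraint) and the nonnegative orthant (standing in for the image constraint). Real CT adds a bounded-TV set; the matrices below are invented.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])
pinv = np.linalg.pinv(A)

def proj_affine(x):
    # orthogonal projection onto {x : A x = b}
    return x - pinv @ (A @ x - b)

def proj_nonneg(x):
    return np.maximum(x, 0.0)

x = np.array([-1.0, -1.0, -1.0])
for _ in range(200):
    x = proj_nonneg(proj_affine(x))
# x converges to a point in the intersection of both convex sets
```

For this data the iterates converge geometrically to (0, 1.5, 0.5), which satisfies both constraints.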
Resource Constrained Planning of Multiple Projects with Separable Activities
NASA Astrophysics Data System (ADS)
Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya
In this study we consider a resource-constrained planning problem for multiple projects with separable activities. The problem is to plan the processing of activities subject to resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of projects to be planned increases, the average computational time and the number of searched nodes are reduced.
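A generic branch-and-bound skeleton makes the two ingredients above concrete: an incumbent solution (cf. the minimum-slack-time initial solution) and a bound used to prune nodes (cf. the time/resource lower bound). A 0/1 knapsack is used here for brevity; it is NOT the authors' project-scheduling model.

```python
values  = [6, 10, 12]
weights = [1, 2, 3]
capacity = 5

def fractional_bound(i, cap, acc):
    # LP-relaxation upper bound: fill greedily, allow one fractional item
    for v, w in sorted(zip(values[i:], weights[i:]),
                       key=lambda t: t[0] / t[1], reverse=True):
        if w <= cap:
            acc, cap = acc + v, cap - w
        else:
            return acc + v * cap / w
    return acc

best = 0
def branch(i, cap, acc):
    global best
    if i == len(values):
        best = max(best, acc)
        return
    if fractional_bound(i, cap, acc) <= best:
        return                                   # bounding operation: prune
    if weights[i] <= cap:
        branch(i + 1, cap - weights[i], acc + values[i])  # take item i
    branch(i + 1, cap, acc)                               # skip item i

branch(0, capacity, 0)
# optimal value: take items with values 10 and 12 (total 22, weight 5)
```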
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
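The ill-conditioning the abstract describes can be seen in one dimension: minimize (x - 2)^2 subject to x <= 1 with the log barrier B(x) = (x - 2)^2 - mu*log(1 - x). The Hessian B'' grows roughly like 4/mu as mu shrinks, even as the barrier minimizers approach the constrained solution x* = 1. The damped Newton loop below is a sketch, not either of the paper's algorithms.

```python
def newton_barrier(mu, x=0.0, iters=50):
    for _ in range(iters):
        g = 2.0 * (x - 2.0) + mu / (1.0 - x)        # B'(x)
        h = 2.0 + mu / (1.0 - x) ** 2               # B''(x)
        step = g / h
        while x - step >= 1.0:                      # damping keeps x < 1
            step *= 0.5
        x -= step
    return x, 2.0 + mu / (1.0 - x) ** 2

results = {mu: newton_barrier(mu) for mu in (1e-1, 1e-2, 1e-3)}
x1, h1 = results[1e-1]
x3, h3 = results[1e-3]
# x3 is much closer to the solution x* = 1 than x1,
# but its barrier Hessian h3 is orders of magnitude larger
```

This trade-off between accuracy and conditioning is exactly what motivates the stabilized and modified barrier variants the abstract compares.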
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε_k-global minimization of a bound constrained optimization subproblem, where ε_k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of an indicator function, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
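A one-stage toy version of the dualization described above: control u has cost u^2, failure is the event w > u with w ~ N(0,1), and the chance constraint is P(w > u) <= Delta. Dualizing gives L(u, lam) = u^2 + lam*(P(fail|u) - Delta); the primal minimization is a grid scan and the multiplier is found by bisection (a simple stand-in for the paper's exponentially convergent root-finding). All numbers are illustrative.

```python
from statistics import NormalDist

Delta = 0.05
N = NormalDist()
grid = [i / 1000.0 for i in range(4001)]       # candidate controls in [0, 4]
tail = [1.0 - N.cdf(u) for u in grid]          # failure probability P(w > u)

def primal(lam):
    # index minimizing the Lagrangian u^2 + lam*(P(fail|u) - Delta)
    return min(range(len(grid)),
               key=lambda i: grid[i] ** 2 + lam * (tail[i] - Delta))

lo, hi = 0.0, 100.0                            # bracket for the dual variable
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tail[primal(mid)] > Delta:
        lo = mid                               # still too risky: raise the price of risk
    else:
        hi = mid
u_star = grid[primal(hi)]
# at the optimum the constraint is tight: u_star ~ Phi^{-1}(1 - Delta)
```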
NASA Astrophysics Data System (ADS)
Massioni, Paolo; Massari, Mauro
2018-05-01
This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems under linear dynamic equations, therefore perfectly fitting the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
2017-08-19
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
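A schematic analogue of the COBE idea (toy numbers, not the paper's fault model): estimate slip rates s from data d = G s + noise under point-wise bounds, then bound a moment-deficit-like scalar w·s by minimizing and maximizing it over bounded models whose misfit stays near the best fit.

```python
import numpy as np
from scipy.optimize import lsq_linear, minimize

rng = np.random.default_rng(1)
G = rng.normal(size=(30, 3))                 # toy "Green's function" matrix
s_true = np.array([1.0, 2.0, 0.5])
d = G @ s_true + 0.01 * rng.normal(size=30)

lo, hi = np.zeros(3), np.full(3, 3.0)        # assumed geologic slip-rate bounds
fit = lsq_linear(G, d, bounds=(lo, hi))      # bounded-variable least squares

w = np.ones(3)                               # toy moment weights
chi2 = 1.1 * np.sum((G @ fit.x - d) ** 2)    # allowed misfit level
cons = {"type": "ineq",
        "fun": lambda s: chi2 - np.sum((G @ s - d) ** 2)}
bnds = list(zip(lo, hi))
lo_m = minimize(lambda s: w @ s, fit.x, bounds=bnds,
                constraints=cons, method="SLSQP").fun
hi_m = -minimize(lambda s: -(w @ s), fit.x, bounds=bnds,
                 constraints=cons, method="SLSQP").fun
# [lo_m, hi_m] brackets the proxy w.s over all acceptable bounded models
```

With strong data constraints the interval is tight, mirroring the abstract's remark that COBE matches COBLE in that regime.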
Li, Peng; Huang, Chuanhe; Liu, Qin
2014-01-01
In vehicular ad hoc networks (VANETs), roadside unit (RSU) placement has been proposed to improve overall network performance in many ITS applications. This paper addresses the budget-constrained and delay-bounded placement problem (BCDP) for roadside units in vehicular ad hoc networks. There are two types of RSUs: cable-connected RSUs (c-RSUs) and wireless RSUs (w-RSUs). c-RSUs are interconnected through wired lines and form the backbone of VANETs, while w-RSUs connect to other RSUs through wireless communication and serve as an economical extension of the coverage of c-RSUs. The delay-bounded coverage range and deployment cost of these two types are totally different. Given a budget constraint and a delay bound, the problem is to find the optimal candidate sites with maximal delay-bounded coverage at which to place RSUs, such that a message from any c-RSU in the region can be disseminated to as many vehicles as possible within the given budget constraint and delay bound. We first prove that the BCDP problem is NP-hard. Then we propose several algorithms to solve the BCDP problem. Simulation results show the heuristic algorithms can significantly improve the coverage range and reduce the total deployment cost, compared with other heuristic methods. PMID:25436656
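One common heuristic shape for this flavor of problem is greedy budgeted coverage: repeatedly pick the site with the best marginal coverage per unit cost until the budget is exhausted. The costs, coverage sets and budget below are invented, and the sketch ignores the c-RSU/w-RSU distinction and delay bounds of the real BCDP.

```python
sites = {
    "A": (3, {1, 2, 3, 4}),   # (cost, vehicles covered within the delay bound)
    "B": (2, {3, 4, 5}),
    "C": (2, {6, 7}),
    "D": (1, {1, 7}),
}
budget = 4

chosen, covered, spent = [], set(), 0
while True:
    best, best_ratio = None, 0.0
    for name, (cost, cov) in sites.items():
        if name in chosen or spent + cost > budget:
            continue
        gain = len(cov - covered)          # marginal coverage of this site
        if gain and gain / cost > best_ratio:
            best, best_ratio = name, gain / cost
    if best is None:
        break
    chosen.append(best)
    spent += sites[best][0]
    covered |= sites[best][1]
# greedy picks D (ratio 2.0) then B (ratio 1.5): 5 vehicles for cost 3
```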
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs [linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
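A simplified take on why constraining the estimate helps (NOT the paper's method, which encodes the bounds as LMIs inside a single semidefinite program): fit an inertia matrix by least squares, symmetrize, then project onto the positive semidefinite cone by eigenvalue clipping. The dynamics are reduced to tau = J @ alpha (no gyroscopic term) for brevity, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
J_true = np.array([[5.0, 0.2, 0.0],
                   [0.2, 4.0, 0.1],
                   [0.0, 0.1, 3.0]])
alphas = rng.normal(size=(50, 3))                  # angular accelerations
taus = alphas @ J_true.T + 0.05 * rng.normal(size=(50, 3))  # noisy torques

# unconstrained least squares: solve alphas @ J^T ~ taus
J_ls, *_ = np.linalg.lstsq(alphas, taus, rcond=None)
J_sym = 0.5 * (J_ls.T + J_ls)                      # symmetrize

w, V = np.linalg.eigh(J_sym)
J_psd = (V * np.maximum(w, 1e-6)) @ V.T            # clip eigenvalues >= 0
```

The two-step projection is only a heuristic; solving the whole problem as one semidefinite program, as the abstract describes, handles the physical constraints and the data fit jointly.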
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
DallAnese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
Crystallization and preliminary X-ray analysis of membrane-bound pyrophosphatases.
Kellosalo, Juho; Kajander, Tommi; Honkanen, Riina; Goldman, Adrian
2013-02-01
Membrane-bound pyrophosphatases (M-PPases) are enzymes that enhance the survival of plants, protozoans and prokaryotes in energy-constraining stress conditions. These proteins use pyrophosphate, a waste product of cellular metabolism, as an energy source for sodium or proton pumping. To study the structure and function of these enzymes we have crystallized two membrane-bound pyrophosphatases recombinantly produced in Saccharomyces cerevisiae: the sodium-pumping enzyme of Thermotoga maritima (TmPPase) and the proton-pumping enzyme of Pyrobaculum aerophilum (PaPPase). Extensive crystal optimization has allowed us to grow crystals of TmPPase that diffract to a resolution of 2.6 Å. The decisive step in this optimization was in-column detergent exchange during the two-step purification procedure. Dodecyl maltoside was used for high-temperature solubilization of TmPPase and then exchanged to a series of different detergents. After extensive screening, the new detergent, octyl glucose neopentyl glycol, was found to be optimal for TmPPase but not PaPPase.
Constraining the braneworld with gravitational wave observations.
McWilliams, Sean T
2010-04-09
Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the approximately 1 microm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l < or = 5 microm.
Constraining the Braneworld with Gravitational Wave Observations
NASA Technical Reports Server (NTRS)
McWilliams, Sean T.
2011-01-01
Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, L, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining L via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain L at the approximately 1 micron level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of L less than or equal to 5 microns.
Conjecture Mapping to Optimize the Educational Design Research Process
ERIC Educational Resources Information Center
Wozniak, Helen
2015-01-01
While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volume of data generated and the context-bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…
Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.
Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei
2015-08-01
In this brief, the use of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed that incorporates the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
Adaptive Multi-Agent Systems for Constrained Optimization
NASA Technical Reports Server (NTRS)
Macready, William; Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of) the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing, which focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
Control of linear uncertain systems utilizing mismatched state observers
NASA Technical Reports Server (NTRS)
Goldstein, B.
1972-01-01
The control of linear continuous dynamical systems is investigated as a problem of limited state feedback control. The equations which describe the structure of an observer are developed, constrained to time-invariant systems. The optimal control problem is formulated, accounting for the uncertainty in the design parameters. Expressions for bounds on closed loop stability are also developed. The results indicate that very little uncertainty may be tolerated before divergence occurs in the recursive computation algorithms, and the derived stability bound yields extremely conservative estimates of regions of allowable parameter variations.
NASA Astrophysics Data System (ADS)
Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi
Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. To guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky-bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky-bucket-constrained sessions. However, the delay bounds for leaky-bucket-constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. To solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling policy like GPS, and controls the service rate in order to lower the delay bounds for leaky-bucket-constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS, for leaky-bucket-constrained sessions with deterministic delay requirements. This CAC algorithm determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with that for GPS in terms of schedulable region and computational complexity.
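For context, the classical GPS delay bound the abstract refers to is simple to compute: a session constrained by a leaky bucket with burst size sigma and token rate rho, served at a guaranteed rate g >= rho, has worst-case queueing delay sigma / g (packetization terms ignored). The rates and weights below are illustrative only.

```python
sigma = 12_000.0        # leaky-bucket burst size, bits
rho = 1.0e6             # token rate, bits/s
link = 10.0e6           # link capacity, bits/s
weights = {"voice": 0.5, "video": 0.3, "data": 0.2}

g = {s: w * link for s, w in weights.items()}   # guaranteed GPS rates
assert g["voice"] >= rho                        # stability condition
delay_bound = sigma / g["voice"]                # worst-case delay, seconds
# 12000 bits / 5 Mbit/s = 2.4 ms
```

ORC-GPS tightens such bounds by shaping the service rate over time rather than serving at a constant weight.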
Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel; Chew, Peter A.
This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts of speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and solve the data fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches, and model requirements are satisfied in a post-processing step.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to allow relaxed step acceptance conditions and improved convergence performance. It also avoids the choice of an upper bound on the memory length, a choice that brings obvious disadvantages to traditional non-monotone techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second-order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem, and the results illustrate its effectiveness. Comprehensive comparisons with existing methods are also presented.
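As background for the memory bound mentioned above, the classic non-monotone (Grippo-style) Armijo test accepts a step whenever the objective falls sufficiently below the maximum of the last M iterates; the sketch below is illustrative only, with `M`, `c`, and `shrink` as assumed parameter names rather than anything taken from the paper.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, history, M=5, c=1e-4, shrink=0.5, max_iter=50):
    """Classic non-monotone Armijo backtracking.

    A step is accepted when f(x + alpha*d) falls sufficiently below the
    maximum of the last M objective values rather than f(x) alone.  The
    memory bound M is exactly the parameter the filter variant described
    above seeks to avoid choosing.
    """
    ref = max(history[-M:])          # non-monotone reference value
    slope = c * (grad(x) @ d)        # required decrease per unit step
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= ref + alpha * slope:
            return alpha
        alpha *= shrink              # backtrack
    return alpha

# usage: minimize f(x) = x^2 from x = 3 along the steepest-descent direction
f = lambda x: float(x @ x)
grad = lambda x: 2 * x
x = np.array([3.0])
d = -grad(x)
alpha = nonmonotone_armijo(f, grad, x, d, history=[f(x)])
```

The full step (alpha = 1) overshoots to f = 9, equal to the reference value, so one backtrack to alpha = 0.5 lands exactly at the minimizer here.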
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute the optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights, and it provides comprehensive feedback solutions online, though it is trained offline.
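Control bounds of the kind handled above are commonly encoded through a Lyshevski-type nonquadratic integrand, which yields a tanh-saturated control law. The sketch below shows that generic mechanism under that assumption; it is not a reproduction of this paper's controller, and `bounded_control`, `grad_term`, and `lam` are illustrative names.

```python
import numpy as np

def bounded_control(grad_term, lam):
    """Saturated control law u = -lam * tanh(grad_term / (2*lam)).

    grad_term plays the role of the costate/value-gradient term from the
    HJB stationarity condition; the tanh keeps |u| within the bound lam
    for any argument, which is how a nonquadratic cost enforces the
    input constraint.
    """
    return -lam * np.tanh(grad_term / (2.0 * lam))

def nonquadratic_cost(u, lam):
    """Closed form of 2 * integral_0^u lam * atanh(v/lam) dv.

    This penalty is zero at u = 0 and grows without bound as |u| -> lam,
    discouraging controls near the saturation limit.
    """
    return 2 * lam * u * np.arctanh(u / lam) + lam**2 * np.log(1 - (u / lam) ** 2)

# the control saturates smoothly at the bound lam = 1
u_small = bounded_control(2.0, 1.0)    # -tanh(1), well inside the bound
u_large = bounded_control(1e6, 1.0)    # pinned at the bound
cost = nonquadratic_cost(0.5, 1.0)     # positive penalty
```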
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramér-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
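For reference, the PAPR constrained above is a simple statistic of the time-domain OFDM waveform obtained by an IFFT of the subcarrier coefficients; a minimal sketch (illustrative only, not the paper's design code):

```python
import numpy as np

def papr_db(symbol_freq):
    """Peak-to-average power ratio (dB) of one OFDM symbol.

    symbol_freq: complex subcarrier coefficients; the time-domain
    waveform is their IFFT, as in any OFDM transmitter.
    """
    x = np.fft.ifft(symbol_freq)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# identical-phase subcarriers add coherently: worst-case PAPR of N (here 64)
n = 64
worst = papr_db(np.ones(n))
rng = np.random.default_rng(0)
typical = papr_db(np.exp(2j * np.pi * rng.random(n)))  # random phases do better
```

With all subcarriers in phase the IFFT collapses to a single impulse, giving the worst case 10·log10(N) dB, which is why waveform designs bound the PAPR of the optimized coefficients.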
NASA Astrophysics Data System (ADS)
Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid
2018-01-01
Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. 
This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.
One- and two-objective approaches to an area-constrained habitat reserve site selection problem
Stephanie Snyder; Charles ReVelle; Robert Haight
2004-01-01
We compare several ways to model a habitat reserve site selection problem in which an upper bound on the total area of the selected sites is included. The models are cast as optimization coverage models drawn from the location science literature. Classic covering problems typically include a constraint on the number of sites that can be selected. If potential reserve...
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We discuss different chance-constraint reformulations and solution approaches for the problem. First, we present an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
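A hedged sketch of the basic mechanism behind such analytical reformulations, assuming a scalar constraint with Gaussian uncertainty (the paper applies the idea to linearized AC power-flow quantities; the function name and numbers below are illustrative):

```python
from statistics import NormalDist

def tightened_bound(b, sigma, eps):
    """Deterministic equivalent of P(x + xi <= b) >= 1 - eps, xi ~ N(0, sigma^2).

    The chance constraint holds iff x <= b - z * sigma, where z is the
    (1 - eps) standard-normal quantile: the nominal bound is tightened
    by an uncertainty margin.
    """
    z = NormalDist().inv_cdf(1.0 - eps)
    return b - z * sigma

# illustrative numbers: a 100 MW line limit, 5 MW forecast-error std,
# and a 5% allowed violation probability
limit = tightened_bound(100.0, 5.0, 0.05)
```

The margin z·sigma shrinks to zero as the uncertainty vanishes, recovering the deterministic constraint.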
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that reproduced the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm.
JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
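The Pareto-optimality test underlying this kind of ensemble selection can be sketched in a few lines; this is a generic nondominated filter for minimization, not JuPOETs code (which is written in Julia), and the point set is illustrative.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors.

    This is the acceptance test at the core of Pareto-based ensemble
    methods: a candidate belongs on the tradeoff surface only if no
    other point dominates it.
    """
    return [p for p in points if not any(dominates(q, p) for q in points)]

# two conflicting objectives; (3, 3) is dominated by (2, 2)
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(pts)
```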
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
Optimal Coordinated EV Charging with Reactive Power Support in Constrained Distribution Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paudyal, Sumit; Ceylan, Oğuzhan; Bhattarai, Bishnu P.
Electric vehicle (EV) charging/discharging can take place in any P-Q quadrant, which means EVs could support reactive power to the grid while charging the battery. In controlled charging schemes, the distribution system operator (DSO) coordinates the charging of EV fleets to ensure the grid's operating constraints are not violated. In effect, this means the DSO sets upper bounds on power limits for EV charging. In this work, we demonstrate that if EVs inject reactive power into the grid while charging, the DSO could issue higher upper bounds on the active power limits for the EVs for the same set of grid constraints. We demonstrate the concept on a 33-node test feeder with 1,500 EVs. Case studies show that in constrained distribution grids with coordinated charging, the average cost of EV charging could be reduced if the charging takes place in the fourth P-Q quadrant compared to charging with unity power factor.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-03-31
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a sub-optimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
Computing an upper bound on contact stress with surrogate duality
NASA Astrophysics Data System (ADS)
Xuan, Zhaocheng; Papadopoulos, Panayiotis
2016-07-01
We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator a linear function and the denominator a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation in the matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is obtained by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum, since the fraction function is pseudoconcave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which is much smaller than the dimension of the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
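Dinkelbach's algorithm mentioned above reduces a fractional program to a sequence of parametric subproblems. A minimal sketch over a finite candidate set follows; the paper's subproblem is instead a simplex-constrained maximization, and the toy objective below is an assumption for illustration.

```python
def dinkelbach(N, D, candidates, tol=1e-10, max_iter=100):
    """Dinkelbach's algorithm for max N(x)/D(x) over a finite set, D > 0.

    Each iteration solves the parametric problem max_x N(x) - lam * D(x)
    and updates lam to the ratio at the maximizer; lam converges
    monotonically to the optimal ratio, and the iteration stops when the
    parametric optimum reaches zero.
    """
    lam = 0.0
    for _ in range(max_iter):
        x = max(candidates, key=lambda c: N(c) - lam * D(c))
        if abs(N(x) - lam * D(x)) < tol:
            return x, lam
        lam = N(x) / D(x)
    return x, lam

# toy fractional program: maximize (x + 1) / (x^2 + 1) on a grid over [-2, 2];
# the continuous optimum is x = sqrt(2) - 1 with ratio (1 + sqrt(2)) / 2
grid = [i / 1000 for i in range(-2000, 2001)]
x_star, ratio = dinkelbach(lambda x: x + 1, lambda x: x * x + 1, grid)
```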
A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality
NASA Astrophysics Data System (ADS)
Cheung, KW; So, HC; Ma, W.-K.; Chan, YT
2006-12-01
The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
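To make the least-squares idea concrete, the classic unweighted linearization of TOA range equations is sketched below; the CWLS estimator in the paper additionally weights the rows and enforces the quadratic relation linking the position to its squared norm. Anchor positions and the target are illustrative.

```python
import numpy as np

def toa_linear_ls(anchors, ranges):
    """Linearized least-squares position fix from TOA range measurements.

    Squaring r_i^2 = ||x - s_i||^2 and subtracting the first equation
    eliminates the ||x||^2 term, leaving a linear system in x:
        2 (s_i - s_0)^T x = r_0^2 - r_i^2 + ||s_i||^2 - ||s_0||^2.
    """
    s = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2 * (s[1:] - s[0])
    b = (r[0] ** 2 - r[1:] ** 2) + (s[1:] ** 2).sum(axis=1) - (s[0] ** 2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# noise-free check: three anchors, target at (2, 1)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = np.array([2.0, 1.0])
ranges = [np.linalg.norm(target - np.array(a)) for a in anchors]
est = toa_linear_ls(anchors, ranges)
```

With exact ranges the linear system is consistent and the fix recovers the target exactly; with noisy ranges the least-squares solution is biased, which is what the constrained weighted formulation corrects.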
NASA Astrophysics Data System (ADS)
Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang
2018-04-01
A novel fuzzy dynamical system approach to the control design of flexible joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator model and the control scheme are deterministic and not based on heuristic IF-THEN rules. Next, a fuzzy-based performance index is proposed. An optimal design problem for a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained by solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Morel, M. R.; Chamis, C. C.
1991-01-01
A methodology is developed to tailor fabrication and material parameters of metal-matrix laminates for maximum loading capacity under thermomechanical loads. The stresses during the thermomechanical response are minimized subject to failure constraints and bounds on the laminate properties. The thermomechanical response of the laminate is simulated using nonlinear composite mechanics. Evaluations of the method on a graphite/copper symmetric cross-ply laminate were performed. The cross-ply laminate required different optimum fabrication procedures than a unidirectional composite. Also, the consideration of the thermomechanical cycle had a significant effect on the predicted optimal process.
Diffusion-limited mixing by incompressible flows
NASA Astrophysics Data System (ADS)
Miles, Christopher J.; Doering, Charles R.
2018-05-01
Incompressible flows can be effective mixers by appropriately advecting a passive tracer to produce small filamentation length scales. In addition, diffusion is generally perceived as beneficial to mixing due to its ability to homogenize a passive tracer. However we provide numerical evidence that, in cases where advection and diffusion are both actively present, diffusion may produce negative effects by limiting the mixing effectiveness of incompressible optimal flows. This limitation appears to be due to the presence of a limiting length scale given by a generalised Batchelor length (Batchelor 1959 J. Fluid Mech. 5 113–33). This length scale limitation may in turn affect long-term mixing rates. More specifically, we consider local-in-time flow optimisation under energy and enstrophy flow constraints with the objective of maximising the mixing rate. We observe that, for enstrophy-bounded optimal flows, the strength of diffusion may not impact the long-term mixing rate. For energy-constrained optimal flows, however, an increase in the strength of diffusion can decrease the mixing rate. We provide analytical lower bounds on mixing rates and length scales achievable under related constraints (point-wise bounded speed and rate-of-strain) by extending the work of Lin et al (2011 J. Fluid Mech. 675 465–76) and Poon (1996 Commun. PDE 21 521–39).
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The input constraints, coupled with the inability to identify the uncertainties accurately, motivate the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher
2013-10-01
This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
Finding viable models in SUSY parameter spaces with signal specific discovery potential
NASA Astrophysics Data System (ADS)
Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi
2013-08-01
Recent results from ATLAS giving a Higgs mass of 125.5 GeV further constrain already highly constrained supersymmetric models such as pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested large effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on relic dark matter density on constrained models, aiming at predicting superpartner masses and establishing the likelihood of SUSY models compared with that of the Standard Model vis-à-vis experimental data. In this paper a framework for efficient search for discoverable, non-excluded regions of different SUSY spaces giving a specific experimental signature of interest is presented. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide the search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account. These include recent bounds on relic dark matter density, the Higgs sector, and rare B-meson decays. A clustering algorithm is applied to classify selected models according to expected phenomenology, enabling automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalists with a viable tool that helps to target the experimental signatures to search for, once a class of models of interest is established. As an example, a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search 105209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.
Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
2016-09-01
This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
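The Chebyshev-based enforcement of voltage chance constraints can be illustrated concretely. Assuming a linearized map from forecast errors to a bus voltage (the sensitivity row, covariance values, and 1.05 p.u. limit below are invented numbers, not from the paper), the one-sided Chebyshev (Cantelli) inequality turns a probabilistic limit into a deterministically tightened one:

```python
import numpy as np

def chebyshev_margin(sigma2, eps):
    """One-sided Chebyshev (Cantelli): P(e > t) <= sigma^2 / (sigma^2 + t^2),
    so t = sigma * sqrt((1 - eps) / eps) enforces P(e > t) <= eps."""
    return np.sqrt(sigma2 * (1.0 - eps) / eps)

# Hypothetical linearized model: a bus voltage deviates by a @ e for
# forecast-error vector e. In the paper's setting the covariance would
# be updated online from data.
a = np.array([0.002, 0.0015])       # p.u. voltage change per MW of error
Sigma = np.diag([4.0, 9.0])         # forecast-error covariance (MW^2)
sigma2_v = a @ Sigma @ a            # induced voltage variance
v_max = 1.05                        # deterministic upper voltage limit (p.u.)
v_max_tight = v_max - chebyshev_margin(sigma2_v, eps=0.05)
```

Enforcing the nominal voltage below `v_max_tight` then guarantees the original limit holds with probability at least 0.95, for any error distribution with that covariance.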
Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George
2017-08-15
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
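An LPNN realizes constrained optimization as a dynamical system: variable neurons follow the negative gradient of the Lagrangian while the multiplier neurons follow its positive gradient, and equilibria coincide with KKT points. The following Euler-integrated sketch solves a toy equality-constrained problem, not the TDOA formulation itself:

```python
import numpy as np

# Toy problem: minimize ||x - c||^2 subject to 1^T x = 1.
# Lagrangian L(x, lam) = ||x - c||^2 + lam * (1^T x - 1).
c = np.array([0.8, 0.5, 0.3])
x = np.zeros(3)     # variable neurons
lam = 0.0           # Lagrangian (multiplier) neuron
dt = 0.01           # Euler step for the neuron dynamics
for _ in range(5000):
    dx = -(2.0 * (x - c) + lam)       # dx/dt = -dL/dx
    dlam = x.sum() - 1.0              # dlam/dt = +dL/dlam
    x += dt * dx
    lam += dt * dlam
```

The state settles at the analytic KKT point x* = c − ((1ᵀc − 1)/3)·1 = (0.6, 0.3, 0.1); local stability of such equilibria is exactly what the brief analyzes for its TDOA network.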
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into an upper bound on the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.
The 2-D magnetotelluric inverse problem solved with optimization
NASA Astrophysics Data System (ADS)
van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven
2011-02-01
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
NASA Astrophysics Data System (ADS)
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the actor NN and the critic NN are tuned simultaneously, without the requirement for knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the actor NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.
Yan, Zheng; Wang, Jun
2014-03-01
This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with the unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities, the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those in which a single group of tags is assumed to be in the reading area, and only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
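The identification-time distribution described above can be reproduced in miniature with an absorbing discrete-time Markov chain. The chain below uses a simplified slotted-ALOHA success model and invented parameters, not the paper's exact protocol, but it shows how the CDF of the identification time yields the probability of losing tags for a given sojourn time:

```python
import numpy as np

def id_time_cdf(n, r, max_slots):
    """State = number of tags already identified (absorbing at n).
    Per slot, each of the k remaining tags transmits with probability r;
    a tag is identified only when exactly one transmits (singleton slot)."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n):
        k = n - i                               # tags still unidentified
        q = k * r * (1 - r) ** (k - 1)          # singleton-slot probability
        P[i, i + 1] = q
        P[i, i] = 1 - q
    P[n, n] = 1.0
    dist = np.zeros(n + 1)
    dist[0] = 1.0
    cdf = []
    for _ in range(max_slots):
        dist = dist @ P
        cdf.append(dist[n])                     # P(all identified by slot t)
    return np.array(cdf)

cdf = id_time_cdf(n=5, r=0.2, max_slots=200)
loss_prob = 1 - cdf[49]   # chance of losing tags with a 50-slot sojourn time
```

Evaluating `1 - cdf[sojourn]` for different split sizes is exactly the criterion the paper optimizes when choosing how to partition the tag set.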
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n − 1)/(k − 1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.
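For context, Charikar's greedy peeling algorithm gives a 2-approximation for the unconstrained densest-subgraph problem and is a common building block for size-constrained variants such as DalkS. A compact sketch (illustrative only, not one of the paper's algorithms):

```python
def densest_subgraph(adj):
    """Greedy peeling (Charikar): repeatedly delete a minimum-degree
    vertex and keep the intermediate subgraph of highest edge density.
    adj: dict node -> set of neighbours (undirected, no self-loops)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    nodes = set(adj)
    m = sum(len(vs) for vs in adj.values()) // 2   # current edge count
    best_density, best_set = m / len(nodes), set(nodes)
    while len(nodes) > 1:
        u = min(nodes, key=lambda v: len(adj[v]))  # min-degree vertex
        for w in adj[u]:
            adj[w].discard(u)
        m -= len(adj[u])
        nodes.remove(u)
        density = m / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
    return best_density, best_set
```

The size-constrained variants are harder precisely because this peeling sweep cannot simply stop at a prescribed vertex count without losing its approximation guarantee.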
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It is then followed by a review of different simulation optimization approaches to address constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Power-constrained supercomputing
NASA Astrophysics Data System (ADS)
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. 
Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
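The LP formulation sketched in the dissertation abstract, choosing for each phase of computation a mix of (DVFS state, thread count) configurations that minimizes total time under a power cap, linearizes neatly because the average-power constraint Σ wᵢtᵢpᵢ ≤ P·Σ wᵢtᵢ can be rewritten as Σ wᵢtᵢ(pᵢ − P) ≤ 0. A toy two-phase instance (all timings and wattages invented) using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Each phase can run in one of several (time, power) configurations;
# w picks a convex mix per phase. "Energy above the cap" must be
# non-positive, which makes the average-power constraint linear.
configs = [                      # (phase, seconds, watts) -- assumed numbers
    (0, 1.0, 120.0), (0, 1.5, 70.0),
    (1, 2.0, 110.0), (1, 3.0, 60.0),
]
p_cap = 90.0
times = np.array([t for _, t, _ in configs])
A_ub = [[t * (p - p_cap) for _, t, p in configs]]        # energy over cap
A_eq = [[1.0 if ph == k else 0.0 for ph, _, _ in configs] for k in (0, 1)]
res = linprog(times, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0, 1.0],
              bounds=[(0.0, 1.0)] * len(configs))
schedule_time = res.fun          # minimal makespan meeting the power cap
```

The LP answer is a fractional schedule, which is why the dissertation also introduces an integer-linear variant whose schedules commit each section to a single configuration.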
Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali
2017-09-01
In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems with saturated control signals. By employing two neural networks (NNs), the solution of the Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about the system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of the NNs' weights, the identification error, and the system states is guaranteed using Lyapunov's direct method. Finally, simulations are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
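For comparison with the Newton-type subproblem solvers described above, the classical multiplicative (Lee–Seung) updates minimize the same Kullback–Leibler objective in the matrix case. A minimal sketch, assuming a strictly positive data matrix V:

```python
import numpy as np

def kl_nmf(V, r, iters=200, seed=0):
    """Multiplicative (Lee-Seung) updates for min_{W,H >= 0} KL(V || WH).
    The KL objective is monotonically non-increasing under these updates;
    V must be strictly positive here to keep the ratios well defined."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ (V / (W @ H))) / W.sum(axis=0, keepdims=True).T
        W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1, keepdims=True).T
    return W, H
```

Multiplicative updates are simple but can converge slowly near sparse solutions, which is precisely the regime where the paper's bound-constrained Newton and quasi-Newton solvers pay off.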
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan
2016-01-01
Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.
Bounds on OPE coefficients from interference effects in the conformal collider
NASA Astrophysics Data System (ADS)
Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.
2017-11-01
We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which is encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large-N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫ ϕ W². We speculate that these bounds also apply in de Sitter space. In this case our results constrain inflationary observables, such as the amplitude for chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form ϕ W W*.
NASA Astrophysics Data System (ADS)
Daneshian, Jahanbakhsh; Ramezani Dana, Leila; Sadler, Peter
2017-01-01
Benthic foraminifera species commonly outnumber planktic species in the type area of the Lower Miocene Qom Formation, in north central Iran, where it records the Tethyan link between the eastern Mediterranean and Indo-Pacific provinces. Because measured sections preserve very different sequences of first and last occurrences of these species, no single section provides a completely suitable baseline for correlation. To resolve this problem, we combined bioevents from three stratigraphic sections into a single composite sequence by constrained optimization (CONOP). The composite section arranges the first and last appearance events (FADs and LADs) of 242 foraminifera in an optimal order that minimizes the implied diachronism between sections. The composite stratigraphic ranges of the planktic foraminifera support a practical biozonation which reveals substantial local changes of accumulation rate during Aquitanian to Burdigalian times. Traditional biozone boundaries emerge little changed, but an order of magnitude more correlations can be interpolated. The top of the section at Dobaradar is younger than previously thought and younger than the sections at Dochah and Tigheh Reza-Abad. The latter two sections probably extend further back into the Aquitanian than the Dobaradar section, but likely include a hiatus near the base of the Burdigalian. The bounding contacts with the Upper Red and Lower Red Formations are shown to be diachronous.
Constraining the noncommutative spectral action via astrophysical observations.
Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi
2010-09-03
The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.
NASA Astrophysics Data System (ADS)
Jia, Chaoqing; Hu, Jun; Chen, Dongyan; Liu, Yurong; Alsaadi, Fuad E.
2018-07-01
In this paper, we discuss the event-triggered resilient filtering problem for a class of time-varying systems subject to stochastic uncertainties and successive packet dropouts. The event-triggered mechanism is employed in the hope of reducing the communication burden and saving network resources. The stochastic uncertainties are considered to describe the modelling errors, and the phenomenon of successive packet dropouts is characterized by a random variable obeying the Bernoulli distribution. The aim of the paper is to provide a resilient event-based filtering approach for the addressed time-varying systems such that, for all stochastic uncertainties, successive packet dropouts, and filter gain perturbations, an optimized upper bound of the filtering error covariance is obtained by designing the filter gain. Finally, simulations are provided to demonstrate the effectiveness of the proposed robust optimal filtering strategy.
Density of convex intersections and applications
Rautenberg, C. N.; Rösel, S.
2017-01-01
In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems.
Robust ADP Design for Continuous-Time Nonlinear Systems With Output Constraints.
Fan, Bo; Yang, Qinmin; Tang, Xiaoyu; Sun, Youxian
2018-06-01
In this paper, a novel robust adaptive dynamic programming (RADP)-based control strategy is presented for the optimal control of a class of output-constrained continuous-time unknown nonlinear systems. Our contribution includes a step forward beyond the usual optimal control result to show that the output of the plant is always within user-defined bounds. To achieve the new results, an error transformation technique is first established to generate an equivalent nonlinear system, whose asymptotic stability guarantees both the asymptotic stability and the satisfaction of the output restriction of the original system. Furthermore, RADP algorithms are developed to solve the transformed nonlinear optimal control problem with completely unknown dynamics as well as a robust design to guarantee the stability of the closed-loop systems in the presence of unavailable internal dynamic state. Via small-gain theorem, asymptotic stability of the original and transformed nonlinear system is theoretically guaranteed. Finally, comparison results demonstrate the merits of the proposed control policy.
Estimating the Inertia Matrix of a Spacecraft
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Keim, Jason; Shields, Joel
2007-01-01
A paper presents a method of utilizing flight data aboard a spacecraft that includes reaction wheels for attitude control to estimate the inertia matrix of the spacecraft. The required data are digitized samples of (1) the spacecraft attitude in an inertial reference frame as measured, for example, by use of a star tracker and (2) speeds of rotation of the reaction wheels, the moments of inertia of which are deemed to be known. Starting from the classical equations for conservation of angular momentum of a rigid body, the inertia-matrix-estimation problem is formulated as a constrained least-squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities. The explicit bounds reflect physical bounds on the inertia matrix and reduce the volume of data that must be processed to obtain a solution. The resulting minimization problem is a semidefinite optimization problem that can be solved efficiently, with guaranteed convergence to the global optimum, by use of readily available algorithms. In a test case involving a model attitude platform rotating on an air bearing, it is shown that, relative to a prior method, the present method produces better estimates from fewer data.
On Efficient Deployment of Wireless Sensors for Coverage and Connectivity in Constrained 3D Space.
Wu, Chase Q; Wang, Li
2017-10-10
Sensor networks have been used in a rapidly increasing number of applications in many fields. This work generalizes a sensor deployment problem to place a minimum set of wireless sensors at candidate locations in constrained 3D space to k-cover a given set of target objects. By exhausting the combinations of discreteness/continuousness constraints on either sensor locations or target objects, we formulate four classes of sensor deployment problems in 3D space: deploy sensors at Discrete/Continuous Locations (D/CL) to cover Discrete/Continuous Targets (D/CT). We begin with the design of an approximation algorithm for DLDT and then reduce DLCT, CLDT, and CLCT to DLDT by discretizing continuous sensor locations or target objects into a set of divisions without sacrificing sensing precision. Furthermore, we consider a connected version of each problem, in which the deployed sensors must form a connected network, and design an approximation algorithm to minimize the number of deployed sensors with a connectivity guarantee. For performance comparison, we design and implement an optimal solution and a genetic algorithm (GA)-based approach. Extensive simulation results show that the proposed deployment algorithms consistently outperform the GA-based heuristic, achieve close-to-optimal performance in small-scale problem instances, and achieve overall performance significantly better than the theoretical upper bound.
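The discrete-location/discrete-target (DLDT) variant described above is a set-multicover problem, for which a greedy heuristic is the textbook approximation. The sketch below is illustrative only, not the paper's algorithm; the data layout (`candidates` as a location-to-coverage map) and the function name are assumptions.

```python
def greedy_k_cover(candidates, targets, k):
    """Greedy approximation: pick sensors until every target is covered k times.

    candidates: dict mapping candidate location -> set of targets it covers
    targets: iterable of target ids
    k: required coverage multiplicity
    """
    need = {t: k for t in targets}          # remaining coverage demand per target
    chosen = []
    remaining = dict(candidates)
    while any(v > 0 for v in need.values()):
        # pick the candidate that satisfies the most outstanding demand
        best, gain = None, 0
        for loc, cov in remaining.items():
            g = sum(1 for t in cov if need.get(t, 0) > 0)
            if g > gain:
                best, gain = loc, g
        if best is None:
            raise ValueError("infeasible: some target cannot be k-covered")
        chosen.append(best)
        for t in remaining.pop(best):
            if need.get(t, 0) > 0:
                need[t] -= 1
    return chosen
```

For plain set cover (k = 1) this greedy rule carries the classical logarithmic approximation guarantee; each sensor is used at most once, so k-coverage requires k distinct sensors per target.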
Towards improving searches for optimal phylogenies.
Ford, Eric; St John, Katherine; Wheeler, Ward C
2015-01-01
Finding the optimal evolutionary history for a set of taxa is a challenging computational problem, even when restricting possible solutions to be "tree-like" and focusing on the maximum-parsimony optimality criterion. This has led to much work on using heuristic tree searches to find approximate solutions. We present an approach for finding exact optimal solutions that employs and complements the current heuristic methods for finding optimal trees. Given a set of taxa and a set of aligned sequences of characters, there may be subsets of characters that are compatible, and for each such subset there is an associated (possibly partially resolved) phylogeny with edges corresponding to each character state change. These perfect phylogenies serve as anchor trees for our constrained search space. We show that, for sequences with compatible sites, the parsimony score of any tree [Formula: see text] is at least the parsimony score of the anchor trees plus the number of inferred changes between [Formula: see text] and the anchor trees. As the maximum-parsimony optimality score is additive, the sum of the lower bounds on compatible character partitions provides a lower bound on the complete alignment of characters. This yields a region in the space of trees within which the best tree is guaranteed to be found; limiting the search for the optimal tree to this region can significantly reduce the number of trees that must be examined in a search of the space of trees. We analyze this method empirically using four different biological data sets as well as surveying 400 data sets from the TreeBASE repository, demonstrating the effectiveness of our technique in reducing the number of steps in exact heuristic searches for trees under the maximum-parsimony optimality criterion. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
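The parsimony scores that these bounds compare can be computed per character with Fitch's small-parsimony algorithm. The sketch below is a minimal illustration, not the authors' implementation; the nested-tuple tree encoding and the function name are assumptions.

```python
def fitch_score(tree, leaf_states):
    """Fitch small-parsimony score of a rooted binary tree for one character.

    tree: nested tuples of leaf names, e.g. (('t1', 't2'), ('t3', 't4'))
    leaf_states: dict mapping leaf name -> character state
    Returns (candidate_state_set_at_root, number_of_inferred_changes).
    """
    if isinstance(tree, str):                    # leaf: its state set is fixed
        return {leaf_states[tree]}, 0
    left, right = tree
    ls, lc = fitch_score(left, leaf_states)
    rs, rc = fitch_score(right, leaf_states)
    inter = ls & rs
    if inter:                                    # children agree: no change here
        return inter, lc + rc
    return ls | rs, lc + rc + 1                  # disagreement costs one change
```

Because the maximum-parsimony score of an alignment is the sum of per-character scores, such per-character scores are exactly the quantities that add up in the lower bound described above.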
Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space
2015-05-01
ARL-TR-7294 • MAY 2015. US Army Research Laboratory. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...
Sample-Based Motion Planning in High-Dimensional and Differentially-Constrained Systems
2010-02-01
[Figure-list residue: Reachable Set; LittleDog Robot; Dog bounding up stairs.] A motion planning algorithm implemented on LittleDog, a quadruped robot, successfully planned bounding trajectories over extremely...
Event-Triggered Adaptive Dynamic Programming for Continuous-Time Systems With Control Constraints.
Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo
2016-08-31
In this paper, an event-triggered near-optimal control structure is developed for nonlinear continuous-time systems with control constraints. Due to the saturating actuators, a nonquadratic cost function is introduced and the Hamilton-Jacobi-Bellman (HJB) equation for constrained nonlinear continuous-time systems is formulated. In order to solve the HJB equation, an actor-critic framework is presented. The critic network is used to approximate the cost function and the action network is used to estimate the optimal control law. In addition, in the proposed method, the control signal is transmitted in an aperiodic manner to reduce the computational and transmission cost. Both networks are updated only at the trigger instants determined by the event-triggered condition. A detailed Lyapunov analysis is provided to guarantee that the closed-loop event-triggered system is ultimately bounded. Three case studies are used to demonstrate the effectiveness of the proposed method.
Energy-constrained two-way assisted private and quantum capacities of quantum channels
NASA Astrophysics Data System (ADS)
Davis, Noah; Shirokov, Maksim E.; Wilde, Mark M.
2018-06-01
With the rapid growth of quantum technologies, knowing the fundamental characteristics of quantum systems and protocols is essential for their effective implementation. A particular communication setting that has received increased focus is related to quantum key distribution and distributed quantum computation. In this setting, a quantum channel connects a sender to a receiver, and their goal is to distill either a secret key or entanglement, along with the help of arbitrary local operations and classical communication (LOCC). In this work, we establish a general theory of energy-constrained, LOCC-assisted private and quantum capacities of quantum channels, which are the maximum rates at which an LOCC-assisted quantum channel can reliably establish a secret key or entanglement, respectively, subject to an energy constraint on the channel input states. We prove that the energy-constrained squashed entanglement of a channel is an upper bound on these capacities. We also explicitly prove that a thermal state maximizes a relaxation of the squashed entanglement of all phase-insensitive, single-mode input bosonic Gaussian channels, generalizing results from prior work. After doing so, we prove that a variation of the method introduced by Goodenough et al. [New J. Phys. 18, 063005 (2016), 10.1088/1367-2630/18/6/063005] leads to improved upper bounds on the energy-constrained secret-key-agreement capacity of a bosonic thermal channel. We then consider a multipartite setting and prove that two known multipartite generalizations of the squashed entanglement are in fact equal. We finally show that the energy-constrained, multipartite squashed entanglement plays a role in bounding the energy-constrained LOCC-assisted private and quantum capacity regions of quantum broadcast channels.
NASA Astrophysics Data System (ADS)
Lauterbach, S.; Fina, M.; Wagner, W.
2018-04-01
Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields for e.g. flange-web-intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization, constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently, voxel dose constraints are viewed as soft constraints, included as part of the objective function, and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators applicable to quadratic IMRT constrained optimization were first constructed, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, and it was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
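As a hedged illustration of the splitting idea behind ADMM (not the paper's graph-form implementation), the toy below solves a one-dimensional box-constrained quadratic by alternating a proximal step on the smooth term, a projection onto the constraint set, and a dual update; the function name and parameter values are assumptions.

```python
def admm_box_qp(target=3.0, lo=0.0, hi=1.0, rho=1.0, iters=100):
    """Minimize 0.5*(x - target)**2 subject to lo <= x <= hi via ADMM.

    Splitting: minimize 0.5*(x - target)**2 + I_[lo,hi](z)  subject to x = z,
    where I is the indicator function of the box.
    """
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: proximal step on the quadratic (closed form here)
        x = (target + rho * (z - u)) / (1.0 + rho)
        # z-update: projection onto the box constraint
        z = min(hi, max(lo, x + u))
        # dual update on the scaled multiplier
        u += x - z
    return x, z
```

The hard constraint is handled exactly by the projection step rather than by a penalty in the objective, which is the distinction the abstract draws between hard- and soft-constraint formulations.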
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice.
Howes, Andrew; Duggan, Geoffrey B; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L
2016-07-01
It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are boundedly optimal. Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and external resources are boundedly optimal where optimality is defined relative to experience, cognitive constraints, and reward. The theory makes predictions that are tested against data, not fitted to it. The experiments use a no-choice/choice utility learning paradigm where the no-choice phase is used to elicit a profile of each participant's performance across the strategy space and the choice phase is used to test predicted choices within this space. They show that the majority of individuals select strategies that are boundedly optimal. Further, individual differences in what people choose to do are successfully predicted by the analysis. Two issues are discussed: (a) the performance of the minority of participants who did not find boundedly optimal adaptations, and (b) the possibility that individuals anticipate what, with practice, will become a bounded optimal strategy, rather than what is boundedly optimal during training. Copyright © 2015 Cognitive Science Society, Inc.
The design of multirate digital control systems
NASA Technical Reports Server (NTRS)
Berg, M. C.
1986-01-01
The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are compared in the context of the two design problems.
Constrained minimization problems for the reproduction number in meta-population models.
Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N
2018-02-14
The basic reproduction number ([Formula: see text]) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ), reported an increase of 70% in [Formula: see text] when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number ([Formula: see text]), which consists of partial derivatives of [Formula: see text] with respect to the proportions immune [Formula: see text] in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of [Formula: see text] sub-populations are obtained, and the bounds for optimal solutions are derived for [Formula: see text] sub-populations. This is done for general mixing functions and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both [Formula: see text] and [Formula: see text] are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
Prediction uncertainty and optimal experimental design for learning dynamical systems.
Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in the health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
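Because both the objective and the constraint are monotone in the threshold, the constrained optimum can be found by a simple sweep: lower the threshold only as far as needed to meet the sensitivity floor. The sketch below illustrates this for minimizing the positive-prediction rate; it is not the Super Learner ensemble itself, and the function and variable names are assumptions.

```python
def min_positives_threshold(scores, labels, min_sensitivity):
    """Highest threshold (predict positive if score >= t) meeting a sensitivity floor.

    scores: predicted risk scores; labels: 1 = true positive class.
    Returns (threshold, sensitivity, positive_prediction_rate), or None if
    even the lowest threshold cannot reach the required sensitivity.
    """
    n_pos = sum(labels)
    # sweep candidate thresholds from highest to lowest observed score
    for t in sorted(set(scores), reverse=True):
        preds = [s >= t for s in scores]
        tp = sum(1 for p, y in zip(preds, labels) if p and y == 1)
        sens = tp / n_pos
        rate = sum(preds) / len(scores)
        if sens >= min_sensitivity:
            # sensitivity is nondecreasing as t falls, so the first feasible
            # threshold minimizes the rate of positive predictions
            return t, sens, rate
    return None
```

The same sweep handles the dual problem (maximize sensitivity subject to a cap on positive predictions) by stopping at the last threshold whose rate stays under the cap.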
Computation and analysis for a constrained entropy optimization problem in finance
NASA Astrophysics Data System (ADS)
He, Changhong; Coleman, Thomas F.; Li, Yuying
2008-12-01
In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of calibration.
Savin, Cristina; Dayan, Peter; Lengyel, Máté
2014-01-01
A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments. PMID:24586137
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-10-01
We extend the recent bounds of Sason and Verdú relating Rényi entropy and Bayesian hypothesis testing (arXiv:1701.01974) to the quantum domain and show that they have a number of different applications. First, we obtain a sharper bound relating the optimal probability of correctly distinguishing elements of an ensemble of states to that of the pretty good measurement, and an analogous bound for optimal and pretty good entanglement recovery. Second, we obtain bounds relating optimal guessing and entanglement recovery to the fidelity of the state with a product state, which then leads to tight tripartite uncertainty and monogamy relations.
Exact Fundamental Limits of the First and Second Hyperpolarizabilities
NASA Astrophysics Data System (ADS)
Lytel, Rick; Mossman, Sean; Crowell, Ethan; Kuzyk, Mark G.
2017-08-01
Nonlinear optical interactions of light with materials originate in the microscopic response of the molecular constituents to excitation by an optical field, and are expressed by the first (β ) and second (γ ) hyperpolarizabilities. Upper bounds to these quantities were derived seventeen years ago using approximate, truncated state models that violated completeness and unitarity, and far exceed those achieved by potential optimization of analytical systems. This Letter determines the fundamental limits of the first and second hyperpolarizability tensors using Monte Carlo sampling of energy spectra and transition moments constrained by the diagonal Thomas-Reiche-Kuhn (TRK) sum rules and filtered by the off-diagonal TRK sum rules. The upper bounds of β and γ are determined from these quantities by applying error-refined extrapolation to perfect compliance with the sum rules. The method yields the largest diagonal component of the hyperpolarizabilities for an arbitrary number of interacting electrons in any number of dimensions. The new method provides design insight to the synthetic chemist and nanophysicist for approaching the limits. This analysis also reveals that the special cases which lead to divergent nonlinearities in the many-state catastrophe are not physically realizable.
NASA Astrophysics Data System (ADS)
Burrage, Clare; Sakstein, Jeremy
2018-03-01
Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly-nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well-constrained but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained but the symmetron coupling has yet to be explored. We also summarize the current bounds on f(R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's-inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
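The Jensen's-inequality bound mentioned above rests on the fact that, for a convex recourse cost f, replacing the scenarios by their mean can only decrease the expected cost: f(E[x]) <= E[f(x)]. The toy below is a minimal numerical illustration under assumed names and a toy cost, not the paper's planning model.

```python
def jensen_lower_bound(f, scenarios, probs):
    """For convex f, f(E[x]) <= E[f(x)] (Jensen's inequality).

    Solving a single 'average scenario' subproblem therefore gives a cheap
    lower bound on the expected operating cost across all scenarios.
    """
    mean = sum(p * x for p, x in zip(probs, scenarios))
    bound = f(mean)                       # one averaged subproblem
    exact = sum(p * f(x) for p, x in zip(probs, scenarios))
    return bound, exact
```

In the two-phase scheme, bounds of this kind seed the Benders master problem so that the decomposition starts from, and stays above, a valid lower bound.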
Plessow, Philipp N
2018-02-13
This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions rely almost exclusively on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.
Consideration of plant behaviour in optimal servo-compensator design
NASA Astrophysics Data System (ADS)
Moase, W. H.; Manzie, C.
2016-07-01
Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.
NASA Astrophysics Data System (ADS)
Jana, Soumya; Chakravarty, Girish Kumar; Mohanty, Subhendra
2018-04-01
The observations of gravitational waves from the binary neutron star merger event GW170817 and the subsequent observation of its electromagnetic counterparts from the gamma-ray burst GRB 170817A provide us a significant opportunity to study theories of gravity beyond general relativity. An important outcome of these observations is that they constrain the difference between the speed of gravity and the speed of light to less than 10^-15 c. Also, the time delay between the arrivals of gravitational waves at different detectors constrains the speed of gravity at the Earth to be in the range 0.55 c
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
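A hedged sketch of the POD ingredient only (names and data are illustrative): the snippet below builds a reduced basis from solution snapshots via the SVD, then applies a simple clipping step as a crude stand-in for the paper's KKT-based enforcement of user-defined bounds.

```python
import numpy as np

# Illustrative POD sketch: the C-ROM of the paper enforces bounds through
# the KKT conditions; here a plain clip() stands in for that step.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((50, 10))   # 50-dim states, 10 snapshots

# POD basis: dominant left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                            # keep the 3 dominant modes

def constrained_reconstruct(x, lo=-2.0, hi=2.0):
    a = basis.T @ x                         # reduced coordinates
    x_rom = basis @ a                       # ROM reconstruction
    return np.clip(x_rom, lo, hi)           # enforce user-defined bounds

x_c = constrained_reconstruct(snapshots[:, 0])
```

Clipping is only a projection after the fact; the point of the paper's KKT formulation is that the bounds are respected by the reduced dynamics themselves.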
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow
Publications | Grid Modernization | NREL
Pump-dump iterative squeezing of vibrational wave packets.
Chang, Bo Y; Sola, Ignacio R
2005-12-22
The free motion of a nonstationary vibrational wave packet in an electronic potential is a source of interesting quantum properties. In this work we propose an iterative scheme that allows continuous stretching and squeezing of a wave packet in the ground or in an excited electronic state, by switching the wave function between both potentials with pi pulses at certain times. Using a simple model of displaced harmonic oscillators and delta pulses, we derive the analytical solution and the conditions for its possible implementation and optimization in different molecules and electronic states. We show that the main constraining parameter is the pulse bandwidth. Although in principle the degree of squeezing (or stretching) is not bounded, the physical resources increase quadratically with the number of iterations, while the achieved squeezing only increases linearly.
Constrained spacecraft reorientation using mixed integer convex programming
NASA Astrophysics Data System (ADS)
Tam, Margaret; Glenn Lightsey, E.
2016-10-01
A constrained attitude guidance (CAG) system is developed using convex optimization to autonomously achieve spacecraft pointing objectives while meeting the constraints imposed by on-board hardware. These constraints include bounds on the control input and slew rate, as well as pointing constraints imposed by the sensors. The pointing constraints consist of inclusion and exclusion cones that dictate permissible orientations of the spacecraft in order to keep objects in or out of the field of view of the sensors. The optimization scheme drives a body vector towards a target inertial vector along a trajectory that consists solely of permissible orientations in order to achieve the desired attitude for a given mission mode. The non-convex rotational kinematics are handled by discretization, which also ensures that the quaternion stays unity norm. In order to guarantee an admissible path, the pointing constraints are relaxed. Depending on how strict the pointing constraints are, the degree of relaxation is tuneable. The use of binary variables permits the inclusion of logical expressions in the pointing constraints in the case that a set of sensors has redundancies. The resulting mixed integer convex programming (MICP) formulation generates a steering law that can be easily integrated into an attitude determination and control (ADC) system. A sample simulation of the system is performed for the Bevo-2 satellite, including disturbance torques and actuator dynamics which are not modeled by the controller. Simulation results demonstrate the robustness of the system to disturbances while meeting the mission requirements with desirable performance characteristics.
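The unit-norm point can be illustrated with a small sketch (not the paper's MICP formulation): propagating the attitude quaternion with an exact incremental rotation per discrete step preserves unit norm by construction.

```python
import math

# Illustrative discretized quaternion kinematics (not the paper's MICP
# formulation): each step applies an exact incremental rotation, so the
# quaternion stays unit-norm by construction.

def quat_mul(q, r):
    # Hamilton product of two quaternions (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def step(q, omega, dt):
    # exact rotation for constant body rate omega (rad/s) over interval dt
    n = math.sqrt(sum(w * w for w in omega))
    if n == 0.0:
        return q
    ang = n * dt
    s = math.sin(ang / 2.0) / n
    dq = (math.cos(ang / 2.0), omega[0] * s, omega[1] * s, omega[2] * s)
    return quat_mul(q, dq)

q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = step(q, (0.1, 0.2, -0.05), 0.01)
norm = math.sqrt(sum(c * c for c in q))   # remains 1 up to rounding
```

A naive Euler step on the kinematic differential equation would drift off the unit sphere; discretizing with exact incremental rotations avoids any renormalization step.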
Luo, Biao; Liu, Derong; Wu, Huai-Ning
2018-06-01
Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied yet. To solve this problem, an adaptive optimal control approach is developed by using the value iteration-based Q-learning (VIQL) with the critic-only structure. Most of the existing constrained control methods require the use of a certain performance index and are only suitable for linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
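The value-iteration flavor of Q-learning can be shown in a minimal tabular sketch (the toy MDP below is illustrative; the paper's VIQL is data-based and uses a neural-network critic rather than a table).

```python
# Minimal tabular value-iteration Q-learning sketch (toy MDP; illustrative,
# not the paper's data-based neural-network implementation).
# Sweep the Bellman update Q(s,a) <- r(s,a) + gamma * max_a' Q(s',a').

states, actions = range(3), range(2)
transition = [[1, 2], [2, 0], [2, 2]]          # transition[s][a] = next state
reward = [[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]]  # reward[s][a]
gamma = 0.9

Q = [[0.0 for _ in actions] for _ in states]
for _ in range(200):                           # synchronous sweeps to convergence
    Q = [[reward[s][a] + gamma * max(Q[transition[s][a]])
          for a in actions] for s in states]

greedy_policy = [max(actions, key=lambda a: Q[s][a]) for s in states]
```

Once the Q-function has converged, the greedy policy read off from it is optimal for the toy MDP; the paper's controller is obtained analogously from the converged critic via gradient descent.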
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, in-exact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
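For the simple binary-detection special case mentioned above, the expected-utility-maximizing rule reduces to thresholding the likelihood ratio; the Gaussian model and numbers below are a toy illustration, not the paper's joint detection-estimation framework.

```python
import math
import random

# Toy Gaussian sketch: for simple binary detection, the expected-utility-
# maximizing decision rule thresholds the likelihood ratio
# p(x | signal) / p(x | noise).  Model and parameters are illustrative.

def gauss_pdf(x, mu, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def decide_signal_present(x, mu_signal=1.0, threshold=1.0):
    likelihood_ratio = gauss_pdf(x, mu_signal) / gauss_pdf(x, 0.0)
    return likelihood_ratio > threshold

random.seed(1)
n = 10000
hit_rate = sum(decide_signal_present(random.gauss(1.0, 1.0)) for _ in range(n)) / n
false_alarm_rate = sum(decide_signal_present(random.gauss(0.0, 1.0)) for _ in range(n)) / n
# sweeping the threshold traces out the ROC curve; this is one operating point
```

Varying the threshold trades hit rate against false-alarm rate, which is exactly the utility-versus-disutility reading of ROC-type curves used in the paper.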
Moreno-Salinas, David; Pascoal, Antonio; Aranda, Joaquin
2013-08-12
In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning. In the set-up adopted, a set of autonomous vehicles carries a network of acoustic units that measure the elevation and azimuth angles between a target and each of the receivers on board the vehicles. It is assumed that the angle measurements are corrupted by white Gaussian noise, the variance of which is distance-dependent. Using tools from estimation theory, the problem is converted into that of minimizing, by proper choice of the sensor positions, the trace of the inverse of the Fisher Information Matrix (also called the Cramer-Rao Bound matrix) to determine the sensor configuration that yields the minimum possible covariance of any unbiased target estimator. It is shown that the optimal configuration of the sensors depends explicitly on the intensity of the measurement noise, the constraints imposed on the sensor configuration, the target depth and the probabilistic distribution that defines the prior uncertainty in the target position. Simulation examples illustrate the key results derived.
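The A-optimality criterion can be illustrated with a deliberately simplified sketch: 2-D range-only sensors with unit noise variance, rather than the paper's elevation/azimuth measurements with distance-dependent noise. The FIM is the sum of outer products of unit line-of-sight vectors, and minimizing the trace of its inverse rewards well-spread geometries.

```python
import math

# Simplified A-optimality comparison (2-D range-only sensors, unit noise;
# not the paper's angle measurements with distance-dependent variance).
# FIM = sum of u u^T over unit line-of-sight vectors u from sensor to
# target; smaller trace(FIM^-1) means lower achievable estimator variance.

def fim(sensors, target=(0.0, 0.0)):
    f = [[0.0, 0.0], [0.0, 0.0]]
    for sx, sy in sensors:
        dx, dy = target[0] - sx, target[1] - sy
        r = math.hypot(dx, dy)
        ux, uy = dx / r, dy / r
        f[0][0] += ux * ux; f[0][1] += ux * uy
        f[1][0] += uy * ux; f[1][1] += uy * uy
    return f

def trace_inv(f):
    # trace of the inverse of a 2x2 matrix: the A-optimality score
    det = f[0][0] * f[1][1] - f[0][1] * f[1][0]
    return (f[0][0] + f[1][1]) / det

clustered = [(5.0, 0.1), (5.0, -0.1), (5.0, 0.0)]     # nearly collinear
spread = [(5.0, 0.0), (-2.5, 4.33), (-2.5, -4.33)]    # ~120 degrees apart
```

The clustered configuration is nearly uninformative about one coordinate, so its A-optimality score blows up, while the spread configuration attains the well-conditioned optimum.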
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value, which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium-sized problems considered in this article.
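A sketch of the greedy heuristic assessed above (the coefficients below are illustrative, not a wind-farm dataset): repeatedly add the node with the largest marginal gain, i.e. its own coefficient plus edge coefficients to already-selected nodes, up to k selections.

```python
# Greedy heuristic for a toy 0-1 quadratic knapsack instance.  Negative
# edge coefficients model wake-effect power losses between turbine
# locations; values here are illustrative only.

def greedy_qkp(node, edge, k):
    # node: {i: value}; edge: {frozenset({i, j}): value}
    chosen = []
    while len(chosen) < k:
        def gain(i):
            return node[i] + sum(edge.get(frozenset({i, j}), 0.0) for j in chosen)
        best = max((i for i in node if i not in chosen), key=gain)
        if gain(best) <= 0.0:
            break
        chosen.append(best)
    value = sum(node[i] for i in chosen) + sum(
        edge.get(frozenset({i, j}), 0.0)
        for a, i in enumerate(chosen) for j in chosen[a + 1:])
    return chosen, value

node = {0: 5.0, 1: 4.0, 2: 4.0, 3: 1.0}
edge = {frozenset({0, 1}): -3.0, frozenset({1, 2}): -0.5}
chosen, value = greedy_qkp(node, edge, k=2)   # picks 0, then 2 (no wake penalty)
```

Comparing the greedy objective value against an upper bound, as the article does, gives a certificate on the optimality gap without solving the QKP exactly.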
Effective Teaching of Economics: A Constrained Optimization Problem?
ERIC Educational Resources Information Center
Hultberg, Patrik T.; Calonge, David Santandreu
2017-01-01
One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
The Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear-programming computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints consists of equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution and is written in ANSI Standard FORTRAN 77.
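To make the problem class concrete, here is a minimal runnable illustration of bound-constrained minimization using a simple projected-gradient loop. This is NOT VMACO's variable-metric algorithm, just a sketch of the kind of problem it solves.

```python
# Illustrative sketch of the problem class VMACO addresses (minimize a
# function subject to constraints); this projected-gradient loop handles
# only simple bounds and is not VMACO's variable-metric method.

def project(x, lo, hi):
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def minimize_bounded(grad, x, lo, hi, step=0.1, iters=500):
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# minimize f(x, y) = (x - 3)^2 + (y + 1)^2 subject to 0 <= x, y <= 2;
# the unconstrained minimum (3, -1) is infeasible, so the constrained
# solution sits on the bounds at (2, 0).
grad_f = lambda v: [2.0 * (v[0] - 3.0), 2.0 * (v[1] + 1.0)]
x_opt = minimize_bounded(grad_f, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
```

A variable-metric method such as VMACO would additionally build a quasi-Newton approximation of the Hessian and handle general equality and inequality constraints, not just bounds.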
Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.
Capozziello, S; Lambiase, G; Saridakis, E N
2017-01-01
We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
NASA Astrophysics Data System (ADS)
Wang, C.; Gordon, R. G.; Zheng, L.
2016-12-01
Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007) constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010) results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates, results in a standard deviation of trends of only 13°, a statistically significant reduction from the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. 
We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.
Adaptive, Distributed Control of Constrained Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N-Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
Kok, H P; de Greef, M; Bel, A; Crezee, J
2009-08-01
In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. 
Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591
Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal
ERIC Educational Resources Information Center
Steinley, Douglas; Hubert, Lawrence
2008-01-01
This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
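The dynamic-programming recursion behind order-constrained clustering can be sketched directly (illustrative 1-D data; the paper's setting is more general): once objects are in a fixed order, the optimal split into k contiguous clusters minimizing within-cluster sum of squares is found exactly, with no k-means restarts.

```python
# DP for order-constrained clustering: split an ordered sequence into k
# contiguous clusters minimizing the within-cluster sum of squares.
# Data below are illustrative.

def ssq(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def order_constrained_kmeans(xs, k):
    n = len(xs)
    INF = float("inf")
    # cost[j][i] = best cost of splitting the first i objects into j clusters
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            cost[j][i] = min(cost[j - 1][t] + ssq(xs[t:i])
                             for t in range(j - 1, i))
    return cost[k][n]

data = [1.0, 1.1, 0.9, 5.0, 5.1, 9.0, 9.2]
best_cost = order_constrained_kmeans(data, 3)
```

Because the DP enumerates all contiguous partitions implicitly, the returned cost is a global optimum subject to the order constraint, which is the guarantee the paper builds on.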
Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test
NASA Astrophysics Data System (ADS)
Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng
2017-04-01
Various models of quantum gravity imply the Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10^45-level bound on the Kempf-Mangano-Mann proposal and a 10^27-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have huge room for improvement in the future.
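For orientation, a widely used GUP parametrization (shown here in a generic Kempf-Mangano-Mann-style form; the paper's two proposals differ in their exact conventions) modifies the uncertainty relation as

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left[ 1 + \beta_0 \left( \frac{\Delta p}{M_{\mathrm{Pl}} c} \right)^{2} \right],
```

where experimental bounds are typically quoted on the dimensionless parameter beta_0, which vanishes in the standard Heisenberg limit.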
The Mystery of Io's Warm Polar Regions: Implications for Heat Flow
NASA Technical Reports Server (NTRS)
Matson, D. L.; Veeder, G. J.; Johnson, T. V.; Blaney, D. L.; Davies, A. G.
2002-01-01
Unexpectedly warm polar temperatures further support the idea that Io is covered virtually everywhere by cooling lava flows. This implies a new heat flow component. Io's heat flow remains constrained between a lower bound of approximately 2.5 W m^-2 and an upper bound of approximately 13 W m^-2. Additional information is contained in the original extended abstract.
Quantum mechanics of a constrained particle
NASA Astrophysics Data System (ADS)
da Costa, R. C. T.
1981-04-01
The motion of a particle rigidly bounded to a surface is discussed, considering the Schrödinger equation of a free particle constrained to move, by the action of an external potential, in an infinitely thin sheet of the ordinary three-dimensional space. Contrary to what seems to be the general belief expressed in the literature, this limiting process gives a perfectly well-defined result, provided that we take some simple precautions in the definition of the potentials and wave functions. It can then be shown that the wave function splits into two parts: the normal part, which contains the infinite energies required by the uncertainty principle, and a tangent part which contains "surface potentials" depending both on the Gaussian and mean curvatures. An immediate consequence of these results is the existence of different quantum mechanical properties for two isometric surfaces, as can be seen from the bound state which appears along the edge of a folded (but not stretched) plane. The fact that this surface potential is not a bending invariant (cannot be expressed as a function of the components of the metric tensor and their derivatives) is also interesting from the more general point of view of the quantum mechanics in curved spaces, since it can never be obtained from the classical Lagrangian of an a priori constrained particle without substantial modifications in the usual quantization procedures. Similar calculations are also presented for the case of a particle bounded to a curve. The properties of the constraining spatial potential, necessary to a meaningful limiting process, are discussed in some detail, and, as expected, the resulting Schrödinger equation contains a "linear potential" which is a function of the curvature.
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)
2012-01-01
This paper presents a novel risk-constrained multi-stage decision making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers so that the rover drive is minimized and the probability of a mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate and systematic trade-offs for the EDL system vs. the mobility system, and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). To this end, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computational tools from the image processing community. Preliminary trade-off results are presented.
Constrained growth flips the direction of optimal phenological responses among annual plants.
Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas
2016-03-01
Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help explain phenological patterns along productivity gradients, and link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
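A toy calculation illustrates why a single switch (bang-bang) time emerges. Assuming exponential vegetative growth at rate r over a season of length T, with seed output proportional to vegetative mass times remaining time (a drastic simplification of the paper's allocation model), the optimal switch time has the closed form t* = T - 1/r, which a numeric grid search confirms:

```python
# Toy bang-bang phenology: grow vegetatively until switch time t, then put all
# growth into reproduction, so fitness is V(t) * (T - t) with V(t) = exp(r*t).
# Setting d/dt [exp(r*t) * (T - t)] = 0 gives the analytic optimum t* = T - 1/r.
import math

def fitness(t, r, T):
    return math.exp(r * t) * (T - t)

def best_switch(r, T, n=100000):
    """Brute-force grid search for the fitness-maximizing switch time."""
    ts = [T * i / n for i in range(n + 1)]
    return max(ts, key=lambda t: fitness(t, r, T))

r, T = 0.5, 10.0
t_num = best_switch(r, T)
print(round(t_num, 3), T - 1.0 / r)  # numeric optimum matches t* = T - 1/r = 8.0
```

Raising r (productivity) moves t* later in this unconstrained toy; the paper's point is that growth constraints can reverse the sign of such responses.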
Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization
2017-06-01
solving the monolith, we develop a method for producing lower bounds to the optimal objective function value. To do this, we solve a new integer...as developing and analyzing methods for producing lower bounds to the optimal objective function value of the seminal problem monolith, which this...length of the window decreases, the end effects of the model typically increase (Zerr, 2016). There are four primary methods for correcting end
HHAI methyltransferase (blue ribbon) bound to oligonucleotide (strands with bonds colored yellow and green) containing a pseudorotationally constrained sugar analogue at the target position (orange bonds with cyan atoms). The south-constrained pseudosugar is rotated about its flanking phosphodiester bonds, 90° from its initial position in B-form DNA, but short of a completely
COPS: Large-scale nonlinearly constrained optimization problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondarenko, A.S.; Bortz, D.M.; More, J.J.
2000-02-10
The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
Probing Models of Dark Matter and the Early Universe
NASA Astrophysics Data System (ADS)
Orlofsky, Nicholas David
This thesis discusses models for dark matter (DM) and their behavior in the early universe. An important question is how phenomenological probes can directly search for signals of DM today. Another topic of investigation is how the DM and other processes in the early universe must evolve. Then, astrophysical bounds on early universe dynamics can constrain DM. We will consider these questions in the context of three classes of DM models--weakly interacting massive particles (WIMPs), axions, and primordial black holes (PBHs). Starting with WIMPs, we consider models where the DM is charged under the electroweak gauge group of the Standard Model. Such WIMPs, if generated by a thermal cosmological history, are constrained by direct detection experiments. To avoid present or near-future bounds, the WIMP model or cosmological history must be altered in some way. This may be accomplished by the inclusion of new states that coannihilate with the WIMP or a period of non-thermal evolution in the early universe. Future experiments are likely to probe some of these altered scenarios, and a non-observation would require a high degree of tuning in some of the model parameters in these scenarios. Next, axions, as light pseudo-Nambu-Goldstone bosons, are susceptible to quantum fluctuations in the early universe that lead to isocurvature perturbations, which are constrained by observations of the cosmic microwave background (CMB). We ask what it would take to allow axion models in the face of these strong CMB bounds. We revisit models where inflationary dynamics modify the axion potential and discuss how isocurvature bounds can be relaxed, elucidating the difficulties in these constructions. Avoiding disruption of inflationary dynamics provides important limits on the parameter space. Finally, PBHs have received interest in part due to observations by LIGO of merging black hole binaries. 
We ask how these PBHs could arise through inflationary models and investigate the opportunity for corroboration through experimental probes of gravitational waves at pulsar timing arrays. We provide examples of theories that are already ruled out, theories that will soon be probed, and theories that will not be tested in the foreseeable future. The models that are most strongly constrained are those with relatively broad primordial power spectra.
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
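As a minimal illustration of the robust counterpart idea (for the box uncertainty set only, with illustrative names; the paper's formulations are more general), a linear constraint a·x <= b with each coefficient a_i ranging over [abar_i - d_i, abar_i + d_i] holds for every realization exactly when abar·x + sum_i d_i*|x_i| <= b:

```python
# Box-uncertainty robust counterpart of one linear constraint: the worst-case
# left-hand side over the box is abar.x plus the deviations times |x_i|.

def robust_feasible_box(abar, delta, b, x):
    """True iff a.x <= b for EVERY a with a_i in [abar_i - delta_i, abar_i + delta_i]."""
    worst = sum(ab * xi for ab, xi in zip(abar, x))
    worst += sum(d * abs(xi) for d, xi in zip(delta, x))
    return worst <= b

# x = (1, 1) is nominally feasible for a = (1, 1), b = 2.5, but not robustly so
# once each coefficient may grow by 0.5:
print(robust_feasible_box([1.0, 1.0], [0.5, 0.5], 2.5, [1.0, 1.0]))  # False
print(robust_feasible_box([1.0, 1.0], [0.5, 0.5], 3.0, [1.0, 1.0]))  # True
```

The probability bounds studied in the paper quantify how conservative such worst-case reformulations are when the uncertain coefficients are in fact random.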
NASA Astrophysics Data System (ADS)
Yu, Nam Yul
2017-12-01
The principle of compressed sensing (CS) can be applied in a cryptosystem by providing the notion of security. In this paper, we study the computational security of a CS-based cryptosystem that encrypts a plaintext with a partial unitary sensing matrix embedding a secret keystream. The keystream is obtained by a keystream generator of stream ciphers, where the initial seed becomes the secret key of the CS-based cryptosystem. For security analysis, the total variation distance, bounded by the relative entropy and the Hellinger distance, is examined as a security measure for the indistinguishability. By developing upper bounds on the distance measures, we show that the CS-based cryptosystem can be computationally secure in terms of the indistinguishability, as long as the keystream length for each encryption is sufficiently large with low compression and sparsity ratios. In addition, we consider a potential chosen plaintext attack (CPA) from an adversary, which attempts to recover the key of the CS-based cryptosystem. Associated with the key recovery attack, we show that the computational security of our CS-based cryptosystem is brought by the mathematical intractability of a constrained integer least-squares (ILS) problem. For a sub-optimal, but feasible key recovery attack, we consider a successive approximate maximum-likelihood detection (SAMD) and investigate the performance by developing an upper bound on the success probability. Through theoretical and numerical analyses, we demonstrate that our CS-based cryptosystem can be secure against the key recovery attack through the SAMD.
Beyond Positivity Bounds and the Fate of Massive Gravity
NASA Astrophysics Data System (ADS)
Bellazzini, Brando; Riva, Francesco; Serra, Javi; Sgarlata, Francesco
2018-04-01
We constrain effective field theories by going beyond the familiar positivity bounds that follow from unitarity, analyticity, and crossing symmetry of the scattering amplitudes. As interesting examples, we discuss the implications of the bounds for the Galileon and ghost-free massive gravity. The combination of our theoretical bounds with the experimental constraints on the graviton mass implies that the latter is either ruled out or unable to describe gravitational phenomena, let alone to consistently implement the Vainshtein mechanism, down to the relevant scales of fifth-force experiments, where general relativity has been successfully tested. We also show that the Galileon theory must contain symmetry-breaking terms that are at most one-loop suppressed compared to the symmetry-preserving ones. We comment as well on other interesting applications of our bounds.
Robust on-off pulse control of flexible space vehicles
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi
1993-01-01
The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as the orbital transfer vehicles, orbital maneuvering vehicles, and space station will extensively use reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is used for a three-mass spring model of flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among the maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed, robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for the feedback sliding mode controller design; robustness to parameter variations with bounded control inputs is investigated.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver, PVSOLVE, into a widely popular finite-element production code, such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this class of problems; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Fuel-Efficient Descent and Landing Guidance Logic for a Safe Lunar Touchdown
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
2011-01-01
The landing of a crewed lunar lander on the surface of the Moon will be the climax of any Moon mission. At touchdown, the landing mechanism must absorb the load imparted on the lander due to the vertical component of the lander's touchdown velocity. Also, a large horizontal velocity must be avoided because it could cause the lander to tip over, risking the life of the crew. To be conservative, the worst-case lander's touchdown velocity is always assumed in designing the landing mechanism, making it very heavy. Fuel-optimal guidance algorithms for soft planetary landing have been studied extensively. In most of these studies, the lander is constrained to touchdown with zero velocity. With bounds imposed on the magnitude of the engine thrust, the optimal control solutions typically have a "bang-bang" thrust profile: the thrust magnitude "bangs" instantaneously between its maximum and minimum magnitudes. But the descent engine might not be able to throttle between its extremes instantaneously. There is also a concern about the acceptability of "bang-bang" control to the crew. In our study, the optimal control of a lander is formulated with a cost function that penalizes both the touchdown velocity and the fuel cost of the descent engine. In this formulation, there is not a requirement to achieve a zero touchdown velocity. Only a touchdown velocity that is consistent with the capability of the landing gear design is required. Also, since the nominal throttle level for the terminal descent sub-phase is well below the peak engine thrust, no bound on the engine thrust is used in our formulated problem. Instead of a bang-bang-type solution, the optimal thrust generated is a continuous function of time. With this formulation, we can easily derive analytical expressions for the optimal thrust vector, touchdown velocity components, and other system variables. These expressions provide insights into the "physics" of the optimal landing and terminal descent maneuver.
These insights could help engineers to achieve a better "balance" between the conflicting needs of achieving a safe touchdown velocity, a low-weight landing mechanism, low engine fuel cost, and other design goals. In comparing the computed optimal control results with the preflight landing trajectory design of the Apollo-11 mission, we noted interesting similarities between the two missions.
Liang, X B; Wang, J
2000-01-01
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the bounded feasible region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
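The flavor of such bound-constrained recurrent dynamics can be sketched with a projected gradient flow, dx/dt = P(x - grad f(x)) - x, where P clips the state to the feasible box. The discretization below is an illustrative sketch under those assumed dynamics, not the paper's exact network model:

```python
# Euler discretization of the projected gradient flow dx/dt = P(x - grad) - x.
# Trajectories started inside the box stay inside it (each step is a convex
# combination of x and a projected point), mirroring the "primal" property.

def project(x, lo, hi):
    """Clip each coordinate to its bound interval."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def neural_flow(grad, x0, lo, hi, dt=0.01, steps=5000):
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        p = project([xi - gi for xi, gi in zip(x, g)], lo, hi)
        x = [xi + dt * (pi - xi) for xi, pi in zip(x, p)]
    return x

# minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 over the box [0, 2] x [0, 2];
# the constrained optimum sits at the box corner (2, 0)
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x = neural_flow(grad, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
print([round(v, 3) for v in x])  # ~ [2.0, 0.0]
```

At the corner, x = P(x - grad f(x)) holds, so it is an equilibrium of the flow, in the spirit of the regularity property described above.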
ERIC Educational Resources Information Center
Brusco, Michael J.; Stahl, Stephanie
2005-01-01
There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…
Spherical cows in the sky with fab four
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaloper, Nemanja; Sandora, McCullen, E-mail: kaloper@physics.ucdavis.edu, E-mail: mesandora@ucdavis.edu
2014-05-01
We explore spherically symmetric static solutions in a subclass of unitary scalar-tensor theories of gravity, called the 'Fab Four' models. The weak field large distance solutions may be phenomenologically viable, but only if the Gauss-Bonnet term is negligible. Only in this limit will the Vainshtein mechanism work consistently. Further, classical constraints and unitarity bounds constrain the models quite tightly. Nevertheless, in the limits where the range of individual terms at large scales is respectively Kinetic Braiding, Horndeski, and Gauss-Bonnet, the horizon scale effects may occur while the theory satisfies Solar system constraints and, marginally, unitarity bounds. On the other hand, to bring the cutoff down to below a millimeter constrains all the couplings scales such that 'Fab Fours' can't be heard outside of the Solar system.
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
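The homotopy idea itself is easy to illustrate on a scalar root-finding toy (an illustrative sketch, unrelated to the compensator equations): deform from a trivially solvable problem to the target one and track the solution path by integrating a simple differential equation:

```python
# Homotopy continuation: define H(x, t) = (1 - t)*(x - x0) + t*f(x), whose root
# path x(t) obeys dx/dt = -(dH/dt)/(dH/dx); integrate from the trivial solution
# x(0) = x0 to the target solution f(x(1)) = 0.

def homotopy_solve(f, df, x0, steps=1000):
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        # Euler predictor along the path
        dH_dx = (1.0 - t) + t * df(x)
        dH_dt = f(x) - (x - x0)
        x -= dt * dH_dt / dH_dx
        # one Newton corrector step keeps the path accurate at t + dt
        t2 = t + dt
        H = (1.0 - t2) * (x - x0) + t2 * f(x)
        x -= H / ((1.0 - t2) + t2 * df(x))
    # polish the endpoint with a few Newton iterations on f itself
    for _ in range(5):
        x -= f(x) / df(x)
    return x

# solve x^3 + x - 10 = 0 (root x = 2), starting the path at the trivial x0 = 0
root = homotopy_solve(lambda x: x**3 + x - 10, lambda x: 3 * x**2 + 1, 0.0)
print(round(root, 6))  # → 2.0
```

The abstract's point carries over: the hard part is not the integration itself but the behavior of the path (turning points, multiple branches), which for the constrained LQG problem reveals the number and nature of solutions.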
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
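The final estimation step, least squares over mixture weights constrained to be nonnegative and to sum to one, can be sketched with projected gradient descent on synthetic data (illustrative only; the authors' MLE model and SAXS intensities are not reproduced here):

```python
# Estimate simplex-constrained mixture weights w so that B @ w matches the
# measured vector y, via gradient steps followed by projection onto the simplex.

def project_simplex(v):
    """Euclidean projection onto {w : w_i >= 0, sum w_i = 1}."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def fit_weights(B, y, steps=2000, lr=0.01):
    m, k = len(B), len(B[0])
    w = [1.0 / k] * k
    for _ in range(steps):
        r = [sum(B[i][j] * w[j] for j in range(k)) - y[i] for i in range(m)]
        g = [2 * sum(B[i][j] * r[i] for i in range(m)) for j in range(k)]
        w = project_simplex([wj - lr * gj for wj, gj in zip(w, g)])
    return w

# two "basis intensities" mixed 70/30 (synthetic toy data)
B = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
true_w = [0.7, 0.3]
y = [sum(B[i][j] * true_w[j] for j in range(2)) for i in range(3)]
w = fit_weights(B, y)
print([round(v, 2) for v in w])  # ~ [0.7, 0.3]
```

The subset-selection step in the paper addresses exactly the failure mode this toy hides: when basis intensities are nearly collinear, the weights become poorly identifiable, which the Cramér-Rao bound quantifies.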
Wavelet evolutionary network for complex-constrained portfolio rebalancing
NASA Astrophysics Data System (ADS)
Suganya, N. C.; Vijayalakshmi Pai, G. A.
2012-07-01
Portfolio rebalancing deals with resetting the proportion of different assets in a portfolio with respect to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost constraints. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportion of investment in the portfolio of assets and to rebalance it after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The result obtained using WEN is compared with that of its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and shows that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.
Resolving the global transpiration flux is critical to constraining global carbon cycle models because carbon uptake by photosynthesis in terrestrial plants (Gross Primary Productivity, GPP) is directly related to water lost through transpiration. Quantifying GPP globally is cha...
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
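A one-dimensional toy conveys how subsets of readily computable probability yield arbitrarily tight failure-probability bounds (all names, the requirement function g, and the standard-normal uncertainty model are assumptions for illustration, not the paper's framework):

```python
# Partition the parameter range into boxes, classify each box as certainly
# failed / certainly safe / undecided from interval bounds on g(p) = p^2 - 4,
# and accumulate each box's probability mass into lower and upper bounds on
# the failure probability P(g(p) > 0).
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g_bounds(lo, hi):
    """Interval enclosure of g(p) = p^2 - 4 on [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 4.0, sq_hi - 4.0

def failure_prob_bounds(lo=-8.0, hi=8.0, n=4096):
    # tail mass outside [lo, hi] is conceded to the upper bound
    p_lower, p_upper = 0.0, 1.0 - (normal_cdf(hi) - normal_cdf(lo))
    w = (hi - lo) / n
    for i in range(n):
        a, b = lo + i * w, lo + (i + 1) * w
        gl, gu = g_bounds(a, b)
        mass = normal_cdf(b) - normal_cdf(a)
        if gl > 0.0:            # box certainly fails
            p_lower += mass
            p_upper += mass
        elif gu > 0.0:          # undecided box counts only toward the upper bound
            p_upper += mass
    return p_lower, p_upper

lb, ub = failure_prob_bounds()
exact = 2.0 * (1.0 - normal_cdf(2.0))   # P(|p| > 2) for a standard normal
print(lb <= exact <= ub)  # the exact probability lies between the bounds
```

Refining the partition shrinks only the undecided mass, so the bracket tightens without ever assuming more about g than its interval enclosures, which is the desensitization the paper emphasizes.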
Δ isobars and nuclear saturation
NASA Astrophysics Data System (ADS)
Ekström, A.; Hagen, G.; Morris, T. D.; Papenbrock, T.; Schwartz, P. D.
2018-02-01
We construct a nuclear interaction in chiral effective field theory with explicit inclusion of the Δ-isobar Δ(1232) degree of freedom at all orders up to next-to-next-to-leading order (NNLO). We use pion-nucleon (πN) low-energy constants (LECs) from a Roy-Steiner analysis of πN scattering data, optimize the LECs in the contact potentials up to NNLO to reproduce low-energy nucleon-nucleon scattering phase shifts, and constrain the three-nucleon interaction at NNLO to reproduce the binding energy and point-proton radius of ^4He. For heavier nuclei we use the coupled-cluster method to compute binding energies, radii, and neutron skins. We find that radii and binding energies are much improved for interactions with explicit inclusion of Δ(1232), while Δ-less interactions produce nuclei that are not bound with respect to breakup into α particles. The saturation of nuclear matter is significantly improved, and its symmetry energy is consistent with empirical estimates.
Experimental quantum key distribution with finite-key security analysis for noisy channels.
Bacco, Davide; Canale, Matteo; Laurenti, Nicola; Vallone, Giuseppe; Villoresi, Paolo
2013-01-01
In quantum key distribution implementations, each session is typically chosen long enough so that the secret key rate approaches its asymptotic limit. However, this choice may be constrained by the physical scenario, as in the prospective use with satellites, where the passage of one terminal over the other is restricted to a few minutes. Here we demonstrate experimentally the extraction of secure keys leveraging an optimal design of the prepare-and-measure scheme, according to recent finite-key theoretical tight bounds. The experiment is performed in different channel conditions, and assuming two distinct attack models: individual attacks and general quantum attacks. The required number of exchanged qubits is then obtained as a function of the key size and of the ambient quantum bit error rate. The results indicate that viable conditions for effective symmetric, and even one-time-pad, cryptography are achievable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Minimum Total-Squared-Correlation Quaternary Signature Sets: New Bounds and Optimal Designs
2009-12-01
Li, Yongming; Tong, Shaocheng
2017-12-01
In this paper, an adaptive fuzzy output constrained control design approach is addressed for multi-input multi-output uncertain stochastic nonlinear systems in nonstrict-feedback form. The nonlinear systems addressed in this paper possess unstructured uncertainties, unknown gain functions and unknown stochastic disturbances. Fuzzy logic systems are utilized to tackle the problem of unknown nonlinear uncertainties. The barrier Lyapunov function technique is employed to solve the output constrained problem. In the framework of backstepping design, an adaptive fuzzy control design scheme is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
Wind Farm Turbine Type and Placement Optimization
NASA Astrophysics Data System (ADS)
Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan
2016-09-01
The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.
Use of constrained optimization in the conceptual design of a medium-range subsonic transport
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Constrained parameter optimization was used to perform the optimal conceptual design of a medium range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. A comparison was made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.
DEGAS: Dynamic Exascale Global Address Space Programming Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds, and/or communication avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.
Liu, Derong; Li, Hongliang; Wang, Ding
2015-06-01
In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
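The neighborhood-convergence claim above can be illustrated with a toy problem. The sketch below uses the classic discounted setting (the paper treats the undiscounted case under a different assumption, so this is an analogy only), with hypothetical random dynamics: if each value-iteration update is corrupted by an error of at most ε, the iterates stay within ε/(1−γ) of the optimal value function.

```python
import numpy as np

# Hypothetical 3-state, 2-action discounted MDP (illustrative only; the paper's
# setting is undiscounted infinite-horizon control).
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(2, 3))   # P[a, s, :] = transition probs
R = rng.uniform(0.0, 1.0, size=(2, 3))       # R[a, s]   = rewards
gamma, eps = 0.9, 0.01                        # discount factor, per-step error

def bellman(V):
    return np.max(R + gamma * P @ V, axis=0)  # exact Bellman optimality update

V_exact = np.zeros(3)
for _ in range(2000):
    V_exact = bellman(V_exact)                # converges to V*

V_approx = np.zeros(3)
for _ in range(2000):
    noise = rng.uniform(-eps, eps, size=3)    # bounded approximation error
    V_approx = bellman(V_approx) + noise      # approximate value iteration

gap = np.max(np.abs(V_approx - V_exact))
# classic "finite neighborhood" bound: gap <= eps / (1 - gamma)
assert gap <= eps / (1 - gamma) + 1e-6
```

The contraction property of the Bellman operator is what turns the per-step error ε into the geometric-series bound ε/(1−γ); the paper's contribution is obtaining analogous bounds without the discount-induced contraction.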
Neutrino Mass Bounds from 0νββ Decays and Large Scale Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keum, Y.-Y.; Department of Physics, National Taiwan University, Taipei, Taiwan 10672; Ichiki, K.
2008-05-21
We investigate how the total mass sum of neutrinos can be constrained from the neutrinoless double beta decay and cosmological probes: the cosmic microwave background (WMAP 3-year results) and large scale structures, including the 2dFGRS and SDSS data sets. First we discuss briefly the current status of neutrino mass bounds from neutrino beta decays and cosmological constraints within the flat ΛCDM model. In addition, we explore the interacting neutrino dark-energy model, where the evolution of neutrino masses is determined by a quintessence scalar field, which is responsible for cosmic acceleration today. Assuming the flatness of the universe, the constraint we can derive from the current observation is Σmν < 0.87 eV at the 95% confidence level, which is consistent with Σmν < 0.68 eV in the flat ΛCDM model.
Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling
Cordell, Lindrith
1994-01-01
Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
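For reference, the homogeneity relation underlying the first step is commonly written in the standard Euler-deconvolution form (stated here in its textbook form, which is not necessarily the exact variant used in this work):

```latex
(x - x_0)\,\frac{\partial T}{\partial x}
+ (y - y_0)\,\frac{\partial T}{\partial y}
+ (z - z_0)\,\frac{\partial T}{\partial z}
= N\,(B - T)
```

where $T$ is the observed field, $(x_0, y_0, z_0)$ is the equivalent source location, $B$ is a regional background level, and $N$ is the structural index (the degree of homogeneity of the field). Solving this relation iteratively at the largest residual data value is what produces the equivalent line or point sources described above.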
Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems
Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia; ...
2017-09-05
Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical data, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.
Optimal load scheduling in commercial and residential microgrids
NASA Astrophysics Data System (ADS)
Ganji Tanha, Mohammad Mahdi
Residential and commercial electricity customers use more than two thirds of the total energy consumed in the United States, representing a significant resource of demand response. Price-based demand response, in which loads adjust to changes in electricity prices, is realized through optimal load scheduling (OLS). In this study, an efficient model for OLS is developed for residential and commercial microgrids that include aggregated loads in single units and communal loads. Single-unit loads, which include fixed, adjustable and shiftable loads, are controllable by the unit occupants. Communal loads, which include pool pumps, elevators and central heating/cooling systems, are shared among the units. In order to optimally schedule residential and commercial loads, a community-based optimal load scheduling (CBOLS) is proposed in this thesis. The CBOLS schedule considers hourly market prices, occupants' comfort level, and microgrid operation constraints. The CBOLS objective in residential and commercial microgrids is the constrained minimization of the total cost of supplying the aggregator load, defined as the microgrid load minus the microgrid generation. This problem is represented by a large-scale mixed-integer optimization for supplying single-unit and communal loads. The Lagrangian relaxation methodology is used to relax the linking communal load constraint and decompose the independent single-unit functions into subproblems which can be solved in parallel. The optimal solution is acceptable if the aggregator load limit and the duality gap are within the bounds. If either criterion is not satisfied, the Lagrangian multiplier is updated and a new optimal load schedule is regenerated until both constraints are satisfied. The proposed method is applied to several case studies and the results are presented for the Galvin Center load on the 16th floor of the IIT Tower in Chicago.
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Yen, Hong-Hsu
2009-01-01
In wireless sensor networks, data aggregation routing could reduce the number of data transmissions so as to achieve energy efficient transmission. However, data aggregation introduces data retransmission that is caused by co-channel interference from neighboring sensor nodes. This kind of co-channel interference could result in extra energy consumption and significant latency from retransmission, which would jeopardize the benefits of data aggregation. One possible solution to circumvent data retransmission caused by co-channel interference is to assign different channels to every sensor node that is within each other's interference range on the data aggregation tree. By associating each radio with a different channel, a sensor node could receive data from all the children nodes on the data aggregation tree simultaneously. This could reduce the latency from the data source nodes back to the sink so as to meet the user's delay QoS. Since the number of radios on each sensor node and the number of non-overlapping channels are all limited resources in wireless sensor networks, a challenging question here is to minimize the total transmission cost under a limited number of non-overlapping channels in multi-radio wireless sensor networks. This channel constrained data aggregation routing problem in multi-radio wireless sensor networks is an NP-hard problem. I first model this problem as a mixed integer and linear programming problem where the objective is to minimize the total transmission cost subject to the data aggregation routing, channel and radio resource constraints. The solution approach is based on the Lagrangean relaxation technique to relax some constraints into the objective function and then to derive a set of independent subproblems. By optimally solving these subproblems, one can not only calculate a lower bound for the original primal problem but also obtain useful information for constructing primal feasible solutions.
By incorporating these Lagrangean multipliers as the link arc weight, the optimization-based heuristics are proposed to get energy-efficient data aggregation tree with better resource (channel and radio) utilization. From the computational experiments, the proposed optimization-based approach is superior to existing heuristics under all tested cases.
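The Lagrangean-relaxation loop described above (relax a coupling constraint, solve separable subproblems, update the multiplier by subgradient steps, and repair a feasible solution guided by the duals) can be sketched on a toy covering problem. All data here are made up; this is not the paper's sensor-network model:

```python
import numpy as np

# Toy separable problem:  minimize c.x  subject to  a.x >= b,  x in {0,1}^n
c = np.array([4.0, 3.0, 6.0, 5.0])    # per-node transmission costs (hypothetical)
a = np.array([2.0, 1.0, 3.0, 2.0])    # per-node contribution to the coupling constraint
b = 4.0
lam, step = 0.0, 0.5                   # multiplier for the relaxed constraint

for k in range(200):
    # Relaxed problem L(lam) = min_x (c - lam*a).x + lam*b is separable,
    # so each binary subproblem is solved independently:
    x = (c - lam * a < 0).astype(float)
    lower = (c - lam * a) @ x + lam * b             # dual (lower) bound, by weak duality
    subgrad = b - a @ x                             # violation of the relaxed constraint
    lam = max(0.0, lam + step / (k + 1) * subgrad)  # diminishing-step subgradient update

# Primal feasible solution via a greedy repair heuristic (cheapest coverage first):
order = np.argsort(c / a)
x_feas, cover = np.zeros(4), 0.0
for i in order:
    if cover < b:
        x_feas[i], cover = 1.0, cover + a[i]
upper = c @ x_feas
assert lower <= upper + 1e-6    # the duality gap brackets the true optimum
```

The gap between `lower` and `upper` is exactly the stopping criterion the abstracts above refer to: if it is too large, the multiplier is updated and the loop continues.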
Temperature of Earth's core constrained from melting of Fe and Fe0.9Ni0.1 at high pressures
NASA Astrophysics Data System (ADS)
Zhang, Dongzhou; Jackson, Jennifer M.; Zhao, Jiyong; Sturhahn, Wolfgang; Alp, E. Ercan; Hu, Michael Y.; Toellner, Thomas S.; Murphy, Caitlin A.; Prakapenka, Vitali B.
2016-08-01
The melting points of fcc- and hcp-structured Fe0.9Ni0.1 and Fe are measured up to 125 GPa using laser heated diamond anvil cells, synchrotron Mössbauer spectroscopy, and a recently developed fast temperature readout spectrometer. The onset of melting is detected by a characteristic drop in the time-integrated synchrotron Mössbauer signal which is sensitive to atomic motion. The thermal pressure experienced by the samples is constrained by X-ray diffraction measurements under high pressures and temperatures. The obtained best-fit melting curves of fcc-structured Fe and Fe0.9Ni0.1 fall within the wide region bounded by previous studies. We are able to derive the γ-ɛ-l triple point of Fe and the quasi triple point of Fe0.9Ni0.1 to be 110 ± 5GPa, 3345 ± 120K and 116 ± 5GPa, 3260 ± 120K, respectively. The measured melting temperatures of Fe at similar pressure are slightly higher than those of Fe0.9Ni0.1 while their one sigma uncertainties overlap. Using previously measured phonon density of states of hcp-Fe, we calculate melting curves of hcp-structured Fe and Fe0.9Ni0.1 using our (quasi) triple points as anchors. The extrapolated Fe0.9Ni0.1 melting curve provides an estimate for the upper bound of Earth's inner core-outer core boundary temperature of 5500 ± 200K. The temperature within the liquid outer core is then approximated with an adiabatic model, which constrains the upper bound of the temperature at the core side of the core-mantle boundary to be 4000 ± 200K. We discuss a potential melting point depression caused by light elements and the implications of the presented core-mantle boundary temperature bounds on phase relations in the lowermost part of the mantle.
Constraining new physics models with isotope shift spectroscopy
NASA Astrophysics Data System (ADS)
Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias
2017-07-01
Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure, with predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization, for problems with design-depending loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
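The interpolating-RBF ingredient can be sketched in a few lines: a cubic kernel plus a linear polynomial tail, which keeps the interpolation system nonsingular for points that are not collinear. The function and sample points below are made up, and CONORBIT itself adds trust regions and constraint-model margins on top of this:

```python
import numpy as np

def fit_rbf(X, y):
    # Cubic RBF phi(r) = r^3 with a linear polynomial tail for well-posedness
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), X])
    A = np.block([[r**3, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
    return X, coef

def eval_rbf(model, x):
    X, coef = model
    n = X.shape[0]
    r = np.linalg.norm(X - x, axis=-1)
    return coef[:n] @ r**3 + coef[n] + coef[n + 1:] @ x

# An "expensive" black-box objective stands in for a simulation (hypothetical):
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
X = np.array([[0., 0.], [1., 1.], [-1., 0.5], [2., -1.], [0.5, -0.5], [1.2, -0.4]])
model = fit_rbf(X, np.array([f(xi) for xi in X]))

for xi in X:  # the surrogate reproduces the sampled values exactly
    assert abs(eval_rbf(model, xi) - f(xi)) < 1e-6
```

In a trust-region method, such a cheap surrogate is minimized within the trust region in place of the expensive function, and the model is refit as new evaluations arrive.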
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
On optimal soft-decision demodulation
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
Wozencraft and Kennedy have suggested that the appropriate demodulator criterion of goodness is the cut-off rate of the discrete memoryless channel created by the modulation system; the criterion of goodness adopted in this note is the symmetric cut-off rate which differs from the former criterion only in that the signals are assumed equally likely. Massey's necessary condition for optimal demodulation of binary signals is generalized to M-ary signals. It is shown that the optimal demodulator decision regions in likelihood space are bounded by hyperplanes. An iterative method is formulated for finding these optimal decision regions from an initial good guess. For additive white Gaussian noise, the corresponding optimal decision regions in signal space are bounded by hypersurfaces with hyperplane asymptotes; these asymptotes themselves bound the decision regions of a demodulator which, in several examples, is shown to be virtually optimal. In many cases, the necessary condition for demodulator optimality is also sufficient, but a counterexample to its general sufficiency is given.
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
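A compact numerical sketch of the optimized gradient method (OGM) on a least-squares problem follows. The update coefficients follow the published OGM recursion of Kim and Fessler, including the modified factor in the final θ update, but the problem data are arbitrary and this should be read as an illustration rather than a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5)); b = rng.standard_normal(20)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)                 # Lipschitz constant of grad f
xstar = np.linalg.lstsq(A, b, rcond=None)[0]
fstar = f(xstar)                               # optimal value

N = 50
x = y = np.zeros(5); th = 1.0
for i in range(N):
    y_new = x - grad(x) / L                    # plain gradient step
    fac = 8.0 if i == N - 1 else 4.0           # modified factor on the last step
    th_new = (1 + np.sqrt(1 + fac * th ** 2)) / 2
    # OGM momentum: Nesterov-style term plus an extra gradient-correction term
    x = y_new + (th - 1) / th_new * (y_new - y) + th / th_new * (y_new - x)
    y, th = y_new, th_new

gap = min(f(x), f(y)) - fstar
bound = L * np.linalg.norm(xstar) ** 2 / (2 * th ** 2)  # ~ OGM worst-case level
assert -1e-9 <= gap <= bound                   # well inside the guarantee here
```

The extra `th / th_new * (y_new - x)` term is what distinguishes OGM from the fast gradient method and yields the factor-of-two-smaller worst-case bound discussed above.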
Optimal bounds and extremal trajectories for time averages in dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles
2017-11-01
For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
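The bounding framework can be seen in a one-dimensional toy problem. For dx/dt = x - x^3 and Phi(x) = x^2, any differentiable auxiliary function V gives avg(Phi) <= sup_x [Phi(x) + V'(x)(x - x^3)]; the choice V(x) = x^2/2 below is an assumption made for illustration (not taken from the paper), and it makes the bound exactly 1, attained by the equilibria x = +/-1.

```python
import numpy as np

f = lambda x: x - x ** 3        # dynamics dx/dt = f(x); equilibria at 0, +/-1
phi = lambda x: x ** 2          # quantity whose long-time average is bounded

# Auxiliary-function bound: avg(phi) <= sup_x [phi(x) + V'(x) f(x)] for any V.
# With the assumed choice V(x) = x^2/2, the bracket is 2x^2 - x^4, whose max is 1.
xs = np.linspace(-3.0, 3.0, 100001)
bound = np.max(phi(xs) + xs * f(xs))

# Long-time average along one trajectory (forward Euler), which tends to x = 1.
x, dt, n, total = 0.1, 1e-3, 400_000, 0.0
for _ in range(n):
    x += dt * f(x)
    total += phi(x) * dt
avg = total / (n * dt)          # just below the bound of 1
```

The numerically computed time average sits below, and close to, the auxiliary-function bound, illustrating the strong duality described in the abstract.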
Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails
NASA Astrophysics Data System (ADS)
Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.
2017-02-01
An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy-duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer to strongly excite a guided wave mode with energy concentrated in the web (web mode) of a rail is required. A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments that required a total of 500 SAFE-3D analyses in the design space was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy as more than 500 analyses were performed. The optimal design was then manufactured and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
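The sample-fit-optimize workflow described above can be shown in miniature. In the sketch below, a one-dimensional invented objective stands in for a SAFE-3D analysis, an evenly spaced design for Latin hypercube sampling, and a grid search on the surrogate for Nelder-Mead; only the overall pattern (sample an expensive model, fit a Gaussian RBF surface, optimize the cheap surface) mirrors the paper.

```python
import numpy as np

# Miniature surrogate-based design loop: sample an "expensive" objective, fit a
# Gaussian radial-basis-function (RBF) surface, then optimize the cheap surface.
expensive = lambda x: (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)   # assumed objective

X = np.linspace(0.0, 1.0, 15)           # design of experiments (15 "analyses")
y = expensive(X)

eps = 8.0                               # RBF shape parameter (hand-picked)
K = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)   # interpolation weights

grid = np.linspace(0.0, 1.0, 2001)      # cheap exhaustive search on the surface
surface = np.exp(-(eps * (grid[:, None] - X[None, :])) ** 2) @ w
x_opt = grid[np.argmin(surface)]
```

The surrogate optimum lands in the basin of the true minimum after only 15 samples of the expensive function, which is the economy the paper exploits with its 500 SAFE-3D runs.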
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2017-09-01
A constrained optimization approach with faster convergence is proposed to recover the complex object field in near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back propagated to the image plane using the Fresnel back-propagation method. This approach provides high-resolution images compared with the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach due to the subtraction of two DC terms in the cost function. We report this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object to retrieve the high-resolution image without DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image. We also demonstrate this technique by imaging yeast cells.
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single loop feedback systems with parameter uncertainty, to satisfy time domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
Unveiling ν secrets with cosmological data: Neutrino masses and mass hierarchy
NASA Astrophysics Data System (ADS)
Vagnozzi, Sunny; Giusarma, Elena; Mena, Olga; Freese, Katherine; Gerbino, Martina; Ho, Shirley; Lattanzi, Massimiliano
2017-12-01
Using some of the latest cosmological data sets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, Mν, within the assumption of a background flat ΛCDM cosmology. In the most conservative scheme, combining Planck cosmic microwave background temperature anisotropies and baryon acoustic oscillation (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level upper bound we find is Mν < 0.151 eV. The addition of Planck high-ℓ polarization data, which, however, might still be contaminated by systematics, further tightens the bound to Mν < 0.118 eV. A proper model comparison treatment shows that the two aforementioned combinations disfavor the inverted hierarchy at ∼64% C.L. and ∼71% C.L., respectively. In addition, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full-shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, with the analysis method commonly adopted their constraining power remains weaker than that of the extracted BAO signal. Our work uses only cosmological data; imposing the constraint Mν > 0.06 eV from oscillation data would raise the quoted upper bounds by O(0.1σ) and would not affect our conclusions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
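As a baseline for the comparison the report describes, the standard Taylor-derived fourth-order central-difference stencil can be checked numerically; the test function and step sizes below are arbitrary choices, and the optimized coefficients themselves are not reproduced here.

```python
import numpy as np

# Standard Taylor-derived 4th-order central difference for f'(x):
# f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
def d1_taylor4(f, x, h):
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x0 = 0.7                                  # arbitrary test point
errs = [abs(d1_taylor4(np.sin, x0, h) - np.cos(x0)) for h in (0.1, 0.05)]
order = np.log2(errs[0] / errs[1])        # observed convergence order, near 4
```

Halving h cuts the error by roughly 16x, confirming the fourth-order Taylor accuracy that holds in the infinitesimal-h limit; the report's optimized operators trade some of this asymptotic order for accuracy at the coarse grids actually used.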
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
Optimal synchronization in space
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-02-01
In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to linear equality constrained optimization. It can be applied to large linear equality constrained problems due to its low storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
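A minimal sketch of the idea, assuming a null-space projector to keep iterates feasible and a fixed step size in place of the exact line search used in the paper's analysis; the test problem and parameters are made up.

```python
import numpy as np

def feasible_dy_cg(grad, A, x0, n_iter=200, step=1e-2):
    """Sketch of a feasible Dai-Yuan CG iteration for min f(x) s.t. Ax = b.

    Gradients are projected onto null(A), so every iterate stays feasible as
    long as x0 is feasible. A fixed step replaces the exact line search that
    the paper's convergence analysis assumes.
    """
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)  # null-space projector
    x = x0.astype(float)
    g = P @ grad(x)
    d = -g
    for _ in range(n_iter):
        x = x + step * d
        g_new = P @ grad(x)
        denom = d @ (g_new - g)
        beta = (g_new @ g_new) / denom if abs(denom) > 1e-12 else 0.0  # DY formula
        d = -g_new + beta * d
        g = g_new
    return x

# Example: minimize ||x - c||^2 subject to sum(x) = 1 (a made-up test problem)
c = np.array([3.0, 1.0, 2.0])
A_eq = np.ones((1, 3))
x = feasible_dy_cg(lambda v: 2 * (v - c), A_eq, np.full(3, 1.0 / 3.0))
```

Because the search direction always lies in null(A), the equality constraint holds at every iterate, which is the "always feasible" property highlighted in the abstract.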
Trajectory optimization and guidance law development for national aerospace plane applications
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1988-01-01
The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic pressure constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic pressure constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic pressure constrained case.
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to draw valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95%, irrespective of changes in the future market value of assets.
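Deterministic convex counterparts of the kind mentioned here typically rest on the Gaussian quantile reformulation of a chance constraint; a sketch with made-up portfolio numbers, plus a Monte Carlo sanity check:

```python
import numpy as np

# Gaussian chance constraint and its deterministic equivalent:
# P(w^T r >= b) >= 0.95  <=>  w^T mu - z95 * sqrt(w^T Sigma w) >= b,
# where z95 = 1.6449 is the 95% normal quantile. Numbers are illustrative.
mu = np.array([0.05, 0.08])
Sigma = np.array([[0.010, 0.002],
                  [0.002, 0.040]])
w = np.array([0.7, 0.3])
z95 = 1.6449

b = w @ mu - z95 * np.sqrt(w @ Sigma @ w)   # tightest b the constraint allows

rng = np.random.default_rng(0)
r = rng.multivariate_normal(mu, Sigma, size=200_000)
prob = np.mean(r @ w >= b)                  # should come out near 0.95
```

The simulated probability matches the 95% target, which is the mechanism by which a chance constraint becomes an ordinary convex inequality in the decision variables.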
NASA Technical Reports Server (NTRS)
Postma, Barry Dirk
2005-01-01
This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system, while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
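The end point of such a mapping is a quadratic unconstrained binary optimization problem: minimize x^T Q x over binary vectors x. A toy instance (Q invented here), brute-forced in place of an AQO:

```python
import itertools
import numpy as np

# A toy QUBO instance: minimize x^T Q x over binary vectors x. Brute force
# stands in for the adiabatic quantum optimizer; Q is made up for illustration.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def qubo_energy(x):
    v = np.array(x)
    return v @ Q @ v

best = min(itertools.product([0, 1], repeat=3), key=qubo_energy)
```

Here the negative diagonal rewards setting bits while the positive off-diagonal terms penalize setting adjacent bits together, so the minimizer picks the two non-interacting bits.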
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai
2011-01-01
In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.
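The backward sweep at the heart of such finite-horizon schemes can be shown in tabular form, stripped of the neural-network approximators; the one-dimensional system, costs, and horizon below are invented for illustration.

```python
import numpy as np

# Tabular backward dynamic programming for a finite-horizon problem:
# x_{k+1} = x_k + u_k, u_k in {-1, 0, 1}, stage cost x^2 + 0.1*|u|,
# terminal cost x^2, horizon N. All numbers are invented.
states = np.arange(-10, 11)
controls = np.array([-1, 0, 1])
N = 12

V = states.astype(float) ** 2                 # terminal cost V_N(x)
policy = []
for _ in range(N):                            # stages N-1 down to 0
    Q = np.full((len(states), len(controls)), np.inf)
    for j, u in enumerate(controls):
        nxt = states + u
        ok = (nxt >= states[0]) & (nxt <= states[-1])
        Q[ok, j] = states[ok] ** 2 + 0.1 * abs(u) + V[nxt[ok] - states[0]]
    policy.append(controls[np.argmin(Q, axis=1)])
    V = np.min(Q, axis=1)
policy.reverse()                              # policy[k]: state index -> control

x, traj = 5, [5]                              # roll out the optimal policy
for k in range(N):
    x = int(x + policy[k][x - states[0]])
    traj.append(x)
```

The rolled-out policy drives the state to the origin and then holds it there; the ADP algorithms in the paper approximate exactly this value function and policy with neural networks when the state space is continuous.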
Bounds on light gluinos from the BEBC beam dump experiment
NASA Astrophysics Data System (ADS)
Cooper-Sarkar, A. M.; Parker, M. A.; Sarkar, S.; Aderholz, M.; Bostock, P.; Clayton, E. F.; Faccini-Turluer, M. L.; Grässler, H.; Guy, J.; Hulth, P. O.; Hultqvist, K.; Idschok, U.; Klein, H.; Kreutzmann, H.; Krstic, J.; Mobayyen, M. M.; Morrison, D. R. O.; Nellen, B.; Schmid, P.; Schmitz, N.; Talebzadeh, M.; Venus, W.; Vignaud, D.; Walck, Ch.; Wachsmuth, H.; Wünsch, B.; WA66 Collaboration
1985-10-01
Observational upper limits on anomalous neutral-current events in a proton beam dump experiment are used to constrain the possible hadroproduction and decay of light gluinos. These results require a gluino mass m(g̃) ≳ 4 GeV for the range of squark masses m(q̃) considered.
Constraining properties of disintegrating exoplanets
NASA Astrophysics Data System (ADS)
Veras, D.; Carter, P. J.; Leinhardt, Z. M.; Gänsicke, B. T.
2017-09-01
Evaporating and disintegrating planets provide unique insights into chemical makeup and physical constraints. The striking variability, depth (∼10-60%) and shape of the photometric transit curves due to the disintegrating minor planet orbiting white dwarf WD 1145+017 have galvanised the post-main-sequence exoplanetary science community. We have performed the first tidal disruption simulations of this planetary object, and have succeeded in constraining its mass, density, eccentricity and physical nature. We illustrate how our simulations can bound these properties, and be used in the future for other exoplanetary systems.
Constraining axion dark matter with Big Bang Nucleosynthesis
Blum, Kfir; D'Agnolo, Raffaele Tito; Lisanti, Mariangela; ...
2014-08-04
We show that Big Bang Nucleosynthesis (BBN) significantly constrains axion-like dark matter. The axion acts like an oscillating QCD θ angle that redshifts in the early Universe, increasing the neutron-proton mass difference at neutron freeze-out. An axion-like particle that couples too strongly to QCD results in the underproduction of helium-4 during BBN and is thus excluded. The BBN bound overlaps with much of the parameter space that would be covered by proposed searches for a time-varying neutron EDM. The QCD axion does not couple strongly enough to affect BBN.
Degree-constrained multicast routing for multimedia communications
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugeng; Li, Guidan
2005-02-01
Multicast services have been increasingly used by many multimedia applications. As one of the key techniques to support multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA is better than BSPTA at finding unsaturated links and/or unsaturated nodes to generate multicast trees. The performance of BMSTA is affected by the variation of degree constraints.
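Neither BMSTA nor BSPTA is specified in the abstract, so the sketch below shows only the generic flavor of such heuristics: Prim-style greedy tree growth that refuses edges violating a node's degree bound (graph and bound invented).

```python
import heapq

def degree_constrained_tree(n, edges, root, max_deg):
    """Prim-style greedy sketch: grow a tree from root, always taking the
    cheapest edge into a new node, but refuse edges that would push the
    tree-degree of a node past max_deg. Returns a list of (u, v, w) edges,
    or None if the degree caps leave some node unreachable this way."""
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    deg = {v: 0 for v in range(n)}
    in_tree, tree, heap = {root}, [], []
    for w, v in adj[root]:
        heapq.heappush(heap, (w, root, v))
    while heap and len(in_tree) < n:
        w, u, v = heapq.heappop(heap)
        if v in in_tree or deg[u] >= max_deg:
            continue
        in_tree.add(v)
        deg[u] += 1
        deg[v] += 1
        tree.append((u, v, w))
        for w2, v2 in adj[v]:
            if v2 not in in_tree:
                heapq.heappush(heap, (w2, v, v2))
    return tree if len(in_tree) == n else None

# Hub node 0 would have degree 3 in the unconstrained minimum spanning tree;
# capping degrees at 2 forces the heavier edge (2, 3) into the tree instead.
edges = [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 2), (2, 3, 2)]
tree = degree_constrained_tree(4, edges, root=0, max_deg=2)
```

The example shows the central trade-off of the problem: the degree cap excludes the cheapest tree, so the heuristic must accept a slightly heavier one that respects every node's multicast capability.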
Li, Yongming; Ma, Zhiyao; Tong, Shaocheng
2017-09-01
The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the problem of nonmeasurable states. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
Stretched hydrogen molecule from a constrained-search density-functional perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valone, Steven M; Levy, Mel
2009-01-01
Constrained-search density functional theory gives valuable insights into the fundamentals of density functional theory. It provides exact results and bounds on the ground- and excited-state density functionals. An important advantage of the theory is that it gives guidance in the construction of functionals. Here we engage constrained-search theory to explore issues associated with the functional behavior of 'stretched bonds' in molecular hydrogen. A constrained search is performed with familiar valence bond wavefunctions ordinarily used to describe molecular hydrogen. The effective, one-electron hamiltonian is computed and compared to the corresponding uncorrelated, Hartree-Fock effective hamiltonian. Analysis of the functional suggests the need to construct different functionals for the same density and to allow a competition among these functionals. As a result, the correlation energy functional is composed explicitly of energy gaps from the different functionals.
Spectroscopic factors near the r-process path using (d,p) measurements at two energies
NASA Astrophysics Data System (ADS)
Walter, D.; Cizewski, J. A.; Baugher, T.; Ratkiewicz, A.; Manning, B.; Pain, S. D.; Nunes, F. M.; Ahn, S.; Cerizza, G.; Thornsberry, C.; Jones, K. L.
2016-09-01
To determine spectroscopic factors, it is necessary to use a nuclear reaction model that depends on the bound-state potential. A poorly constrained potential can drastically increase uncertainties in extracted spectroscopic factors. Mukhamedzhanov and Nunes have proposed a technique to mitigate this uncertainty by combining transfer reaction measurements at two energies. At peripheral reaction energies (∼5 MeV/u), the external contribution of the wave function can be reliably extracted, and then combined with the higher energy reaction (∼40 MeV/u) with a larger contribution from the interior. The two measurements constrain the single-particle asymptotic normalization coefficient (ANC) and enable spectroscopic factors to be determined with uncertainties dominated by the cross section measurements rather than by the bound-state potential. Published measurements of 86Kr(d,p) at 5.5 MeV/u have been combined with recent results at 35 MeV/u at the NSCL using the ORRUBA and SIDAR arrays of silicon-strip detectors. Preliminary analysis shows that the single-particle ANC can be constrained. The details of the analysis and prospects for measurements with rare isotope beams will be presented. This research by the ORRUBA Collaboration is supported in part by the NSF and the U.S. DOE.
The cost-constrained traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokkappa, P.R.
1990-10-01
The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
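The knapsack bounding idea can be illustrated in its simplest fractional form: relax the tour structure entirely and pack cities greedily by value-to-cost ratio, yielding an upper bound on the achievable value. The per-city costs below are stand-ins; the report's actual bound is derived from the travel-cost structure.

```python
def knapsack_upper_bound(values, costs, budget):
    """Fractional-knapsack relaxation for the CCTSP: ignore sequencing and
    travel costs, and greedily pack cities by value/cost ratio (taking a
    fraction of the last city). The result upper-bounds the value of any
    selection whose city costs fit the budget. Purely illustrative: the
    report's bound accounts for travel costs as well."""
    items = sorted(zip(values, costs), key=lambda vc: vc[0] / vc[1], reverse=True)
    total = 0.0
    for v, c in items:
        if c <= budget:
            budget -= c
            total += v
        else:
            total += v * budget / c   # fractional part of the last city
            break
    return total
```

Because the relaxation can only overestimate what a feasible subtour achieves, a branch-and-bound search can safely prune any branch whose bound falls below the best tour found so far.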
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
CLFs-based optimization control for a class of constrained visual servoing systems.
Song, Xiulan; Miaomiao, Fu
2017-03-01
In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish closed-loop stability of the constrained visual servoing system under the optimized control law. One merit of the presented method is that it does not require online calculation of the pseudo-inverse of the image Jacobian matrix or of the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
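The Fibonacci method mentioned above is a classic derivative-free search for minimizing a unimodal function of one variable, which is presumably how the scalar controller parameters are tuned online. A minimal, generic sketch (not the paper's implementation; both interior points are re-evaluated each step for clarity, whereas classic versions reuse one evaluation):

```python
def fibonacci_search(f, lo, hi, n=30):
    """Minimize a unimodal function f on [lo, hi] using Fibonacci-ratio
    interior points; the bracketing interval shrinks by F[k-1]/F[k] per step."""
    # generate Fibonacci numbers F[0..n]
    F = [1, 1]
    while len(F) < n + 1:
        F.append(F[-1] + F[-2])
    for k in range(n, 2, -1):
        # interior points split [lo, hi] in Fibonacci proportions
        x1 = lo + (F[k - 2] / F[k]) * (hi - lo)
        x2 = lo + (F[k - 1] / F[k]) * (hi - lo)
        if f(x1) <= f(x2):
            hi = x2  # minimum lies in [lo, x2]
        else:
            lo = x1  # minimum lies in [x1, hi]
    return 0.5 * (lo + hi)
```

For example, `fibonacci_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)` returns a value very close to 2.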
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms is a set of variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
Energy efficient LED layout optimization for near-uniform illumination
NASA Astrophysics Data System (ADS)
Ali, Ramy E.; Elgala, Hany
2016-09-01
In this paper, we consider the problem of designing an energy-efficient light-emitting diode (LED) layout while satisfying the illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
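The abstract does not state the program explicitly; assuming each virtual LED $j$ contributes irradiance $a_{ij}$ at grid point $i$, a linear program of the kind described might read

```latex
\begin{aligned}
\min_{x \ge 0}\quad & \sum_{j=1}^{n} p_j\, x_j \\
\text{s.t.}\quad & E_{\min} \le \sum_{j=1}^{n} a_{ij}\, x_j \le E_{\max}, \qquad i = 1,\dots,m ,
\end{aligned}
```

with $x_j$ the luminous intensity of virtual LED $j$, $p_j$ its power coefficient, and $[E_{\min}, E_{\max}]$ the near-uniformity band. All symbols here are assumptions for illustration, not the authors' notation.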
Constrained optimization of sequentially generated entangled multiqubit states
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique
2009-08-01
We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
NASA Technical Reports Server (NTRS)
Hargrove, A.
1982-01-01
Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.
NASA Astrophysics Data System (ADS)
Mashaal, Heylal; Gordon, Jeffrey M.
2014-10-01
Solar rectifying antennas constitute a distinct solar power conversion paradigm in which sunlight's spatial coherence is a basic constraining factor. In this presentation, we derive the fundamental thermodynamic limit for coherence-limited blackbody (principally solar) power conversion. Our results represent a natural extension of the eponymous Landsberg limit, originally derived for converters that are not constrained by the radiation's coherence and are irradiated at maximum concentration (i.e., with a view factor of unity to the solar disk). We proceed by first expanding Landsberg's results to arbitrary solar view factor (i.e., arbitrary concentration and/or angular confinement), and then demonstrate how the results are modified when the converter can only process coherent radiation. The results are independent of the specific power conversion mechanism, and hence are valid for diffraction-limited as well as quantum converters, not just classical heat engines or the geometric optics regime. The derived upper bounds bode favorably for rectifying antennas as potentially high-efficiency solar converters.
On the Miller-Tucker-Zemlin Based Formulations for the Distance Constrained Vehicle Routing Problems
NASA Astrophysics Data System (ADS)
Kara, Imdat
2010-11-01
The Vehicle Routing Problem (VRP) is an extension of the well-known Traveling Salesman Problem (TSP) and has many practical applications in the fields of distribution and logistics. When the VRP includes distance-based constraints, it is called the Distance Constrained Vehicle Routing Problem (DVRP). However, the literature addressing the DVRP is scarce. In this paper, existing two-index integer programming formulations with Miller-Tucker-Zemlin based subtour elimination constraints are reviewed. The existing formulations are simplified, and the resulting formulation is presented as formulation F1. It is shown that the distance bounding constraints of formulation F1 may not generate the distance traveled up to the related node. To remedy this, we redefine the auxiliary variables of the formulation and propose a second formulation, F2, with new and easy-to-use distance bounding constraints. Adaptation of the second formulation to cases with additional restrictions, such as a minimal distance traveled by each vehicle, or to other objectives, such as minimizing the longest distance traveled, is discussed.
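Kara's exact constraints are not reproduced in the abstract; for orientation, a generic MTZ-style distance-propagation constraint for the DVRP takes the big-M form

```latex
y_j \;\ge\; y_i + d_{ij} - M\,(1 - x_{ij}), \qquad i \neq j ,
```

where $x_{ij} \in \{0,1\}$ selects arc $(i, j)$, $d_{ij}$ is its length, $y_i$ is intended to equal the distance traveled up to node $i$, and $M$ is a sufficiently large constant. The abstract's observation is that loose variants of such constraints only bound $y_i$ without forcing it to equal the accumulated distance, which motivates the redefined auxiliary variables of formulation F2. The notation here is illustrative, not the paper's.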
Constraining the top-Higgs sector of the standard model effective field theory
NASA Astrophysics Data System (ADS)
Cirigliano, V.; Dekens, W.; de Vries, J.; Mereghetti, E.
2016-08-01
Working in the framework of the Standard Model effective field theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization-group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model effective field theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.
Focus: Bounded Rationality and the History of Science. Introduction.
Cowles, Henry M; Deringer, William; Dick, Stephanie; Webster, Colin
2015-09-01
Historians of science see knowledge and its claimants as constrained by myriad factors. These limitations range from the assumptions and commitments of scientific practitioners to the material and ideational contexts of their practice. The precise nature of such limits and the relations among them remains an open question in the history of science. The essays in this Focus section address this question by examining one influential portrayal of constraints, Herbert Simon's theory of "bounded rationality," as well as the responses to which it has given rise over the last half century.
Astrophysics and cosmology confront the 17 keV neutrino
NASA Technical Reports Server (NTRS)
Kolb, Edward W.; Turner, Michael S.
1991-01-01
A host of astrophysical and cosmological arguments severely constrain the properties of a 17 keV Dirac neutrino. Such a neutrino must have interactions beyond those of the standard electroweak theory to reduce its cosmic abundance (through decay or annihilation) by a factor of two hundred. A predicament arises because the additional helicity states of the neutrino necessary to construct a Dirac mass must have interactions strong enough to evade the astrophysical bound from SN 1987A, but weak enough to avoid violating the bound from primordial nucleosynthesis.
Viscosity bound versus the universal relaxation bound
NASA Astrophysics Data System (ADS)
Hod, Shahar
2017-10-01
For gauge theories with an Einstein gravity dual, the AdS/CFT correspondence predicts a universal value for the ratio of the shear viscosity to the entropy density, η/s = 1/4π. The holographic calculations have motivated the formulation of the celebrated KSS conjecture, according to which all fluids conform to the lower bound η/s ≥ 1/4π. The bound on η/s may be regarded as a lower bound on the relaxation properties of perturbed fluids, and it has been the focus of much recent attention. In particular, it was argued that for a class of field theories with a Gauss-Bonnet gravity dual, the shear viscosity to entropy density ratio η/s could violate the conjectured KSS bound. In the present paper we argue that the proposed violations of the KSS bound are strongly constrained by Bekenstein's generalized second law (GSL) of thermodynamics. In particular, it is shown that physical consistency of the Gauss-Bonnet theory with the GSL requires its coupling constant to be bounded by λ_GB ≲ 0.063. We further argue that the genuine physical bound on the relaxation properties of physically consistent fluids is ℑω(k > 2πT) > πT, where ω and k are, respectively, the proper frequency and the wavenumber of a perturbation mode in the fluid.
Agrawal, Piyush; Tkatchenko, Alexandre; Kronik, Leeor
2013-08-13
We propose a nonempirical, pair-wise or many-body dispersion-corrected, optimally tuned range-separated hybrid functional. This functional retains the advantages of the optimal-tuning approach in the prediction of the electronic structure. At the same time, it gains accuracy in the prediction of binding energies for dispersively bound systems, as demonstrated on the S22 and S66 benchmark sets of weakly bound dimers.
Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen
2018-01-01
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion of simulated annealing. The redesigned F decreases adaptively as the number of iterations increases, without introducing extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
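The Metropolis criterion invoked above is standard simulated-annealing machinery: accept any improvement, and accept a worsening move with probability exp(-Δ/T). A generic sketch follows; the specific redesign of the amplitude factor F in BSAISA is not reproduced here.

```python
import math
import random

def metropolis_accept(delta, temperature, rng=random):
    """Metropolis acceptance rule: improvements (delta <= 0) are always
    accepted; a worsening move of size delta > 0 is accepted with
    probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

As the temperature is lowered, worsening moves are accepted less and less often, which is the same qualitative effect as the adaptively decreasing F described above.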
Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin
2013-01-01
Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, developed to maximize nanomaterial production while minimizing the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both nanomaterial production and workplace exposure control. The CCNLP method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control measure guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to an increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for optimizing the growing MNM manufacturing under workplace safety constraints and uncertainties. PMID:23531490
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (that is, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
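For a single linear equality constraint, the standard (regular-case) projection matrix is P = I - a aᵀ/‖a‖², and the feasible-point flow x' = -P ∇f(x) preserves a·x along solutions. The sketch below Euler-integrates this flow in two dimensions; it illustrates the general idea only, the paper's contribution being a projection matrix that remains well defined when regularity fails.

```python
def projected_gradient_flow(grad, a, x0, step=0.01, iters=2000):
    """Euler-integrate x' = -P grad(f)(x), where P = I - a a^T / |a|^2
    projects onto the null space of the single equality constraint a.x = const.
    The update direction lies in that null space, so a feasible x0 stays
    feasible along the whole trajectory."""
    a1, a2 = a
    norm2 = a1 * a1 + a2 * a2
    x1, x2 = x0
    for _ in range(iters):
        g1, g2 = grad((x1, x2))
        ag = (a1 * g1 + a2 * g2) / norm2  # gradient component along a
        x1 -= step * (g1 - a1 * ag)       # remaining component is P g
        x2 -= step * (g2 - a2 * ag)
    return x1, x2
```

For instance, minimizing x₁² + x₂² subject to x₁ + x₂ = 1 from the feasible start (1, 0) converges to (0.5, 0.5), with the constraint held exactly at every step.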
Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice
ERIC Educational Resources Information Center
Howes, Andrew; Duggan, Geoffrey B.; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L.
2016-01-01
It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are "boundedly optimal." Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and…
Constrained multi-objective optimization of storage ring lattices
NASA Astrophysics Data System (ADS)
Husain, Riyasat; Ghodke, A. D.
2018-03-01
The storage ring lattice optimization is a class of constrained multi-objective optimization problem in which, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. Convergence and computation time are of great concern for the optimization algorithms, as several objectives must be optimized and a number of accelerator parameters varied over a large span, subject to several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: the genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front in reasonable computation time between two conflicting objectives, beam emittance and the dispersion function in the straight section. Differential evolution was used extensively for the optimization of the linear and nonlinear lattices of Indus-2, exploring various operational modes within the magnet power supply capabilities.
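Differential evolution, as compared above, is simple to state: each population member is perturbed by a scaled difference of two others, crossed over with its parent, and replaced only if the trial is no worse. A minimal single-objective DE/rand/1/bin sketch for box-constrained minimization (generic, not the lattice-optimization setup):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomial crossover, greedy selection. bounds is a list of (lo, hi)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy replacement
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On the 2-D sphere function over [-5, 5]² this converges to near the origin; multi-objective variants of DE additionally maintain a Pareto archive, which is the setting of the paper.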
Thermally-Constrained Fuel-Optimal ISS Maneuvers
NASA Technical Reports Server (NTRS)
Bhatt, Sagar; Svecz, Andrew; Alaniz, Abran; Jang, Jiann-Woei; Nguyen, Louis; Spanos, Pol
2015-01-01
Optimal Propellant Maneuvers (OPMs) are now being used to rotate the International Space Station (ISS) and have saved hundreds of kilograms of propellant over the last two years. The savings are achieved by commanding the ISS to follow a pre-planned attitude trajectory optimized to take advantage of environmental torques. The trajectory is obtained by solving an optimal control problem. Prior to use on orbit, OPM trajectories are screened to ensure a static sun vector (SSV) does not occur during the maneuver. The SSV is an indicator that the ISS hardware temperatures may exceed thermal limits, causing damage to the components. In this paper, thermally-constrained fuel-optimal trajectories are presented that avoid an SSV and can be used throughout the year while still reducing propellant consumption significantly.
Interferometric tests of Planckian quantum geometry models
Kwon, Ohkyung; Hogan, Craig J.
2016-04-19
The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.
Computational rationality: linking mechanism and behavior through bounded utility maximization.
Lewis, Richard L; Howes, Andrew; Singh, Satinder
2014-04-01
We propose a framework for including information-processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds, and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing "levels" approaches to explanation, and to other optimality-based modeling approaches. Copyright © 2014 Cognitive Science Society, Inc.
Efficient traffic grooming in SONET/WDM BLSR Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, A S; Billah, A B; Wang, B
2004-04-02
In this paper, we study traffic grooming in SONET/WDM BLSR networks under the uniform all-to-all traffic model, with the objective of reducing total network costs (wavelength and electronic multiplexing costs), in particular minimizing the number of ADMs while using the optimal number of wavelengths. We derive a new, tighter lower bound for the number of wavelengths when the number of nodes is a multiple of 4, and show that this lower bound is achievable. All previous ADM lower bounds except perhaps that in were derived under the assumption that the magnitude of the traffic streams (r) is one unit (r = 1) with respect to the wavelength capacity granularity g. We then derive new, more general, and tighter lower bounds for the number of ADMs subject to the use of the optimal number of wavelengths, and propose heuristic algorithms (a circle construction algorithm and a circle grooming algorithm) that aim to minimize the number of ADMs while using the optimal number of wavelengths in BLSR networks. Both the bounds and the algorithms are applicable to any value of r and to different wavelength granularities g. Performance evaluation shows that, wherever applicable, our lower bounds are at least as good as existing bounds and much tighter in many cases. The proposed heuristic grooming algorithms perform very well with traffic streams of larger magnitude; the resulting number of ADMs is very close to the corresponding lower bounds derived in this paper.
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. 
Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
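The convexification referred to above hinges on relaxing the nonconvex lower thrust-magnitude bound with a slack variable; schematically,

```latex
\rho_1 \le \lVert \mathbf{T}(t) \rVert \le \rho_2
\quad\longrightarrow\quad
\lVert \mathbf{T}(t) \rVert \le \Gamma(t), \qquad \rho_1 \le \Gamma(t) \le \rho_2 ,
```

which is second-order-cone representable; under the conditions established in the cited PDG work the relaxation is lossless, i.e. ‖T‖ = Γ at the optimum. This is a schematic of the published approach, not a restatement of its full constraint set.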
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
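In quadrature-method-of-moments terms, the construction described above can be sketched as a linear program over nonnegative weights $w_j$ placed at candidate abscissas $x_j$:

```latex
\min_{w \ge 0}\quad \sum_j c_j\, w_j
\qquad \text{s.t.}\quad \sum_j w_j\, x_j^{k} = \mu_k, \quad k = 0, \dots, K ,
```

where $\mu_k$ are the prescribed moments and the linear cost $c$ encodes the entropy-inspired preference among moment-consistent representations. The exact cost used by the authors is not given in the abstract; this form is an assumption for illustration.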
NASA Astrophysics Data System (ADS)
Dunckel, Anne E.; Cardenas, M. Bayani; Sawyer, Audrey H.; Bennett, Philip C.
2009-12-01
Microbial mats have spatially heterogeneous structured communities that manifest visually through vibrant color zonation often associated with environmental gradients. We report the first use of high-resolution thermal infrared imaging to map temperature at four hot springs within the El Tatio Geyser Field, Chile. Thermal images with millimeter resolution show drastic variability and pronounced patterning in temperature, with changes on the order of 30°C within a square decimeter. Paired temperature and visual images show that zones with specific coloration occur within distinct temperature ranges. Unlike previous studies where maximum, minimum, and optimal temperatures for microorganisms are based on isothermally-controlled laboratory cultures, thermal imaging allows for mapping thousands of temperature values in a natural setting. This allows for efficiently constraining natural temperature bounds for visually distinct mat zones. This approach expands current understanding of thermophilic microbial communities and opens doors for detailed analysis of biophysical controls on microbial ecology.
Sustained State-Independent Quantum Contextual Correlations from a Single Ion
NASA Astrophysics Data System (ADS)
Leupold, F. M.; Malinowski, M.; Zhang, C.; Negnevitsky, V.; Alonso, J.; Home, J. P.; Cabello, A.
2018-05-01
We use a single trapped-ion qutrit to demonstrate the quantum-state-independent violation of noncontextuality inequalities using a sequence of randomly chosen quantum nondemolition projective measurements. We concatenate 53 × 10^6 sequential measurements of 13 observables, and unambiguously violate an optimal noncontextual bound. We use the same data set to characterize imperfections including signaling and repeatability of the measurements. The experimental sequence was generated in real time with a quantum random number generator integrated into our control system to select the subsequent observable with a latency below 50 μs, which can be used to constrain contextual hidden-variable models that might describe our results. The state-recycling experimental procedure is resilient to noise and independent of the qutrit state, substantiating the fact that the contextual nature of quantum physics is connected to measurements and not necessarily to designated states. The use of extended sequences of quantum nondemolition measurements finds applications in the fields of sensing and quantum information.
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
On the realization of the bulk modulus bounds for two-phase viscoelastic composites
NASA Astrophysics Data System (ADS)
Andreasen, Casper Schousboe; Andreassen, Erik; Jensen, Jakob Søndergaard; Sigmund, Ole
2014-02-01
Materials with good vibration damping properties and high stiffness are of great industrial interest. In this paper the bounds for viscoelastic composites are investigated and material microstructures that realize the upper bound are obtained by topology optimization. These viscoelastic composites can be realized by additive manufacturing technologies followed by an infiltration process. Viscoelastic composites consisting of a relatively stiff elastic phase, e.g. steel, and a relatively lossy viscoelastic phase, e.g. silicone rubber, have non-connected stiff regions when optimized for maximum damping. In order to ensure manufacturability of such composites the connectivity of the matrix is ensured by imposing a conductivity constraint and the influence on the bounds is discussed.
Diameter-Constrained Steiner Tree
NASA Astrophysics Data System (ADS)
Ding, Wei; Lin, Guohui; Xue, Guoliang
Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D₀, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D₀. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves of the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. It is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.
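The diameter used here is the maximum total edge weight over paths between two leaves. A minimal illustration (function names are ours, not from the paper) that computes this quantity for a small tree:

```python
from itertools import combinations

def leaf_diameter(n, edges):
    """Max total edge weight over all paths between two leaves of a tree.

    n: number of nodes (0..n-1); edges: list of (u, v, weight)."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    leaves = [u for u in range(n) if len(adj[u]) == 1]

    def dist(src, dst):
        # DFS along the unique tree path from src to dst
        stack = [(src, -1, 0.0)]
        while stack:
            node, parent, d = stack.pop()
            if node == dst:
                return d
            for nxt, w in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, d + w))

    return max(dist(a, b) for a, b in combinations(leaves, 2))

# star with center 0 and terminals 1..3 as leaves
print(leaf_diameter(4, [(0, 1, 2.0), (0, 2, 3.0), (0, 3, 1.5)]))  # 5.0
```

A diameter-constrained tree is then simply one for which this value is at most D₀.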
A Model-Free No-arbitrage Price Bound for Variance Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu
2013-08-01
We propose a numerical approximation for an optimization problem motivated by its application in finance: finding the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. We then propose a gradient projection algorithm, combined with a finite difference scheme, to solve the optimization problem. We prove general convergence and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.
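The two ingredients of the abstract, restriction to a bounded domain and a gradient projection step, can be sketched generically. This toy sketch uses finite differences only to approximate the gradient of a test objective, not the paper's PDE discretization; names and step sizes are ours:

```python
def project(x, lo, hi):
    # projection onto the box [lo, hi]^n (the bounded computational domain)
    return [min(max(xi, lo), hi) for xi in x]

def fd_grad(f, x, h=1e-6):
    # central finite-difference approximation of the gradient
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def projected_gradient(f, x0, lo, hi, step=0.1, iters=500):
    x = project(list(x0), lo, hi)
    for _ in range(iters):
        g = fd_grad(f, x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# minimize (x-2)^2 + (y+1)^2 over the box [0, 1]^2:
# the unconstrained minimizer (2, -1) projects to the corner (1, 0)
f = lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2
x = projected_gradient(f, [0.5, 0.5], 0.0, 1.0)
print([round(v, 3) for v in x])  # [1.0, 0.0]
```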
2011-03-01
Karystinos and D. A. Pados, "New bounds on the total squared correlation and optimum design of DS-CDMA binary signature sets," IEEE Trans. Commun., vol. 51, pp. 48-51, Jan. 2003. [99] C. Ding, M. Golin, and T. Kløve, "Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary signature sets," Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [100] V. P. Ipatov, "On the Karystinos-Pados bounds and optimal binary DS-CDMA
2015-07-09
49, pp. 873-885, Apr. 2003. [23] G. N. Karystinos and D. A. Pados, "New bounds on the total squared correlation and optimum design of DS-CDMA binary...bounds on DS-CDMA binary signature sets," Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [25] V. P. Ipatov, "On the Karystinos-Pados...bounds and optimal binary DS-CDMA signature ensembles," IEEE Commun. Letters, vol. 8, pp. 81-83, Feb. 2004. [26] G. N. Karystinos and D. A. Pados
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
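In symbols, the duality described above rests on the fact that the long-time average of dV/dt vanishes along bounded trajectories. A sketch of the resulting bound, in our notation:

```latex
% For \dot{x} = f(x), a quantity of interest \Phi, and any differentiable
% auxiliary function V, boundedness gives \overline{\nabla V \cdot f} = 0, so
\overline{\Phi} \;=\; \overline{\Phi + \nabla V \cdot f}
\;\le\; \sup_{x}\,\big[\Phi(x) + \nabla V(x)\cdot f(x)\big].
% Minimizing the right-hand side over V is a convex problem (a semidefinite
% program via sum-of-squares when f and \Phi are polynomial), and the strong
% duality proved in the paper says this minimization is arbitrarily sharp.
```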
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
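The squared-variable substitution can be illustrated with a toy gradient method. This sketch folds the sum-to-one constraint in by normalization, a simplification of the paper's equality-constrained quasi-Newton treatment; names, starting point, and step sizes are ours:

```python
def minimize_on_simplex(f, n, step=0.05, iters=2000, h=1e-6):
    # substitute x_i = y_i^2 / sum_j y_j^2: nonnegativity is automatic and
    # the sum-to-one constraint holds by construction (normalization here
    # stands in for the paper's explicit handling of the equality constraint)
    y = [1.0 + 0.1 * i for i in range(n)]  # arbitrary start

    def g(y):
        s = sum(v * v for v in y)
        return f([v * v / s for v in y])

    for _ in range(iters):
        grad = []
        for i in range(n):
            yp = list(y); ym = list(y)
            yp[i] += h; ym[i] -= h
            grad.append((g(yp) - g(ym)) / (2 * h))
        y = [yi - step * gi for yi, gi in zip(y, grad)]
    s = sum(v * v for v in y)
    return [v * v / s for v in y]

# minimize the sum of squares over the probability simplex:
# the optimum is the uniform distribution
x = minimize_on_simplex(lambda x: sum(v * v for v in x), 3)
print([round(v, 2) for v in x])
```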
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
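For intuition, the budget-constrained objective can be checked by brute force on a tiny instance. This enumeration is exponential in the node count, so it is illustration only; the paper's mixed-integer formulations are what scale. All names here are ours:

```python
from itertools import combinations

def best_connected_subgraph(nodes, edges, profit, cost, terminals, budget):
    """Exhaustive search: maximize total profit over connected node subsets
    that contain all terminals and respect the budget."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)

    def connected(s):
        s = set(s)
        seen, stack = {min(s)}, [min(s)]
        while stack:
            u = stack.pop()
            for v in (adj[u] & s) - seen:
                seen.add(v); stack.append(v)
        return seen == s

    best, best_set = None, None
    for r in range(1, len(nodes) + 1):
        for s in combinations(nodes, r):
            if not set(terminals) <= set(s):
                continue
            if sum(cost[u] for u in s) > budget or not connected(s):
                continue
            p = sum(profit[u] for u in s)
            if best is None or p > best:
                best, best_set = p, set(s)
    return best, best_set

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
profit = {0: 5, 1: 1, 2: 4, 3: 2}
cost = {0: 2, 1: 1, 2: 2, 3: 2}
print(best_connected_subgraph(nodes, edges, profit, cost, [0], 5))
# (10, {0, 1, 2})
```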
An ideal observer analysis of visual working memory.
Sims, Chris R; Jacobs, Robert A; Knill, David C
2012-10-01
Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis (one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit) provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM. PsycINFO Database Record (c) 2012 APA, all rights reserved.
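The rate-distortion bound invoked here has a closed form for a Gaussian source under squared error, D(R) = σ²2^(−2R). A small sketch of the resulting set-size effect; the even split of capacity across items is one simple allocation scheme (the paper also considers alternatives), and the numbers are illustrative:

```python
def gaussian_distortion(sigma2, rate_bits):
    # rate-distortion bound for a Gaussian source under squared error:
    # D(R) = sigma^2 * 2^(-2R)
    return sigma2 * 2 ** (-2 * rate_bits)

def per_item_distortion(sigma2, capacity_bits, n_items):
    # evenly splitting a fixed total capacity across n stored items
    return gaussian_distortion(sigma2, capacity_bits / n_items)

# with a fixed total capacity, per-item recall error grows with set size
for n in (1, 2, 4, 8):
    print(n, per_item_distortion(1.0, 8.0, n))
```

The monotone increase in distortion with the number of items is the qualitative signature the ideal observer analysis predicts, without assuming any fixed item limit.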
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
Shape optimization of self-avoiding curves
NASA Astrophysics Data System (ADS)
Walker, Shawn W.
2016-04-01
This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
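The Lagrangian formulation corresponds to an encoder that trades distortion against code length. A one-stage scalar sketch of that encoding rule (the paper's setting is multistage and vector-valued; the codebook, probabilities, and λ values below are illustrative):

```python
import math

def ec_encode(x, codebook, probs, lam):
    """Entropy-constrained encoding rule: pick the codeword index minimizing
    the Lagrangian cost  distortion + lambda * codeword length,
    with the ideal code length -log2(prob) from entropy coding."""
    def cost(i):
        return (x - codebook[i]) ** 2 - lam * math.log2(probs[i])
    return min(range(len(codebook)), key=cost)

codebook = [-1.0, 0.0, 1.0]
probs = [0.1, 0.8, 0.1]   # entropy coding makes the middle codeword cheap
# with lambda = 0 the nearest codeword wins; with a larger lambda the
# more probable (cheaper to code) codeword can win despite higher distortion
print(ec_encode(0.6, codebook, probs, lam=0.0))  # 2 (nearest is 1.0)
print(ec_encode(0.6, codebook, probs, lam=1.0))  # 1 (0.0 is cheaper to code)
```

Sweeping λ traces out the operational distortion-rate trade-off that the iterative descent design optimizes.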
Energetic Materials Optimization via Constrained Search
2015-06-01
3. Optimization Methodology. Our optimization problem is formulated as a constrained maximization: max_{x ∈ CCS} P(x) s.t. TED(x) − 9.75 ≥ 0, SV(x) − 9 ≥ 0, 5 − SA(x) ≥ 0, (1) where TED(x) is the total energy of detonation (TED) of compound x from the chosen chemical subspace (CCS) of chemical compound...max problem, max_{x ∈ CCS} min_{λ ∈ ℝ³₊} P(x) − λᵀC(x), (2) where C(x) is the vector of constraint violations, i.e., η(9.75 − TED(x)), η(9 − SV(x)), η(SA(x) − 5).
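A sketch of the penalized objective from Eqs. (1)-(2). Here λ is fixed at placeholder values rather than optimized over ℝ³₊, and η is taken to be the positive-part function, both assumptions of this sketch:

```python
def hinge(v):
    # eta(v): positive part, measuring how much a constraint is violated
    return max(v, 0.0)

def violations(ted, sv, sa):
    # C(x) from Eq. (1): require TED(x) >= 9.75, SV(x) >= 9, SA(x) <= 5
    return [hinge(9.75 - ted), hinge(9.0 - sv), hinge(sa - 5.0)]

def penalized_objective(p, ted, sv, sa, lam=(10.0, 10.0, 10.0)):
    # inner objective of the min-max problem (2): P(x) - lambda^T C(x)
    c = violations(ted, sv, sa)
    return p - sum(l * v for l, v in zip(lam, c))

# feasible candidate: no penalty, objective equals P(x)
print(penalized_objective(3.0, ted=10.0, sv=9.5, sa=4.0))  # 3.0
# infeasible candidate: detonation energy short by 0.75, penalized by 7.5
print(penalized_objective(5.0, ted=9.0, sv=9.5, sa=4.0))   # -2.5
```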
A Decomposition Approach for Shipboard Manpower Scheduling
2009-01-01
generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower...to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates
Optimality of Thermal Expansion Bounds in Three Dimensions
Watts, Seth E.; Tortorelli, Daniel A.
2015-02-20
In this short note, we use topology optimization to design multi-phase isotropic three-dimensional composite materials with extremal combinations of isotropic thermal expansion and bulk modulus. In so doing, we provide evidence that the theoretical bounds for this combination of material properties are optimal. This has been shown in two dimensions, but not heretofore in three dimensions. Finally, we also show that restricting the design space by enforcing material symmetry by construction does not prevent one from obtaining extremal designs.
Efficiency bounds of molecular motors under a trade-off figure of merit
NASA Astrophysics Data System (ADS)
Zhang, Yanchao; Huang, Chuankun; Lin, Guoxing; Chen, Jincan
2017-05-01
On the basis of the theory of irreversible thermodynamics and an elementary model of molecular motors converting chemical energy from ATP hydrolysis into mechanical work exerted against an external force, the efficiencies of the molecular motors at two different optimization configurations for a trade-off figure of merit, representing the best compromise between the useful energy and the lost energy, are calculated. The upper and lower bounds for the efficiency at the two different optimization configurations are determined. It is found that the optimal efficiencies at the two different optimization configurations are always larger than 1/2.
Irakli, Maria; Kleisiaris, Fotis; Kadoglidou, Kalliopi; Katsantonis, Dimitrios
2018-06-13
Rice by-products are extensively abundant agricultural wastes from the rice industry. This study was designed to optimize experimental conditions for maximum recovery of free and bound phenolic compounds from rice by-products. Optimized conditions were determined using response surface methodology based on total phenolic content (TPC), ABTS radical scavenging activity and ferric reducing power (FRAP). A Box-Behnken design was used to investigate the effects of ethanol concentration, extraction time and temperature, and NaOH concentration, hydrolysis time and temperature for the free and bound fractions, respectively. The optimal conditions for the free phenolics were 41-56% ethanol, 40 °C, 10 min, whereas for the bound phenolics they were 2.5-3.6 M NaOH, 80 °C, 120 min. Under these conditions, free TPC, ABTS and FRAP values in the bran were approximately 2-times higher than in the husk. However, bound TPC and FRAP values in the husk were 1.9- and 1.2-times higher than those in the bran, respectively, while the bran fraction showed the highest ABTS value. Ferulic acid was most evident in the bran, whereas p-coumaric acid was mostly found in the husk. Findings from this study demonstrate that rice by-products could be exploited as valuable sources of bioactive components that could be used as ingredients of functional foods and nutraceuticals.
E-novo: an automated workflow for efficient structure-based lead optimization.
Pearce, Bradley C; Langley, David R; Kang, Jia; Huang, Hongwei; Kulkarni, Amit
2009-07-01
An automated E-Novo protocol designed as a structure-based lead optimization tool was prepared through Pipeline Pilot with existing CHARMm components in Discovery Studio. A scaffold core having 3D binding coordinates of interest is generated from a ligand-bound protein structural model. Ligands of interest are generated from the scaffold using an R-group fragmentation/enumeration tool within E-Novo, with their cores aligned. The ligand side chains are conformationally sampled and are subjected to core-constrained protein docking, using a modified CHARMm-based CDOCKER method to generate top poses along with CDOCKER energies. In the final stage of E-Novo, a physics-based binding energy scoring function ranks the top ligand CDOCKER poses using a more accurate Molecular Mechanics-Generalized Born with Surface Area method. Correlation of the calculated ligand binding energies with experimental binding affinities were used to validate protocol performance. Inhibitors of Src tyrosine kinase, CDK2 kinase, beta-secretase, factor Xa, HIV protease, and thrombin were used to test the protocol using published ligand crystal structure data within reasonably defined binding sites. In-house Respiratory Syncytial Virus inhibitor data were used as a more challenging test set using a hand-built binding model. Least squares fits for all data sets suggested reasonable validation of the protocol within the context of observed ligand binding poses. The E-Novo protocol provides a convenient all-in-one structure-based design process for rapid assessment and scoring of lead optimization libraries.
Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Sen, S. K.
2007-01-01
Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real world applications. Deterministic methods such as the gradient algorithms as well as the randomized methods such as the genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable at least from the quality (accuracy) of the results point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, so there are no serious differences in this regard. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is best suited to the problem at hand. Also, for a function that is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one decide which method, out of several available methods, should be employed to obtain the best (least error) output.
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling-based influence maximization method on the linear threshold (LT) model is presented. The method samples the routes in the possible worlds of the social network, and uses the Chernoff bound to estimate the number of samples needed so that the estimation error is constrained within a given bound. Then the activation probabilities of the routes in the possible worlds are calculated and used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select an appropriate seed node set that spreads more influence than other similar methods.
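The sample count implied by a Chernoff-Hoeffding bound can be computed directly. A toy sketch, with a uniform coin flip standing in for sampling possible-world routes; the specific bound used and the parameter values are assumptions of this sketch:

```python
import math
import random

def hoeffding_samples(eps, delta):
    # Chernoff-Hoeffding bound: N >= ln(2/delta) / (2 eps^2) samples keep
    # the estimation error of a [0,1]-valued mean within eps w.p. >= 1-delta
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_activation(p_active, eps=0.02, delta=0.01, seed=7):
    # Monte Carlo estimate of a route's activation probability using the
    # bound-derived sample count (toy stand-in for possible-world sampling)
    rng = random.Random(seed)
    n = hoeffding_samples(eps, delta)
    hits = sum(rng.random() < p_active for _ in range(n))
    return n, hits / n

n, est = estimate_activation(0.3)
print(n, round(est, 3))  # n = 6623; estimate close to 0.3 w.h.p.
```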
On optimal soft-decision demodulation. [in digital communication system
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
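The symmetric cutoff rate being maximized has a direct expression for uniform inputs. A small sketch, checked against the known closed form for the binary symmetric channel (function names are ours):

```python
import math

def symmetric_cutoff_rate(P):
    """Cutoff rate of a discrete memoryless channel with uniform inputs.

    P[i][j] = probability of output j given input i.
    R0 = -log2 sum_j ( (1/M) sum_i sqrt(P[i][j]) )^2
    """
    M = len(P)
    J = len(P[0])
    total = 0.0
    for j in range(J):
        s = sum(math.sqrt(P[i][j]) for i in range(M)) / M
        total += s * s
    return -math.log2(total)

# binary symmetric channel: known closed form 1 - log2(1 + 2 sqrt(p(1-p)))
p = 0.1
bsc = [[1 - p, p], [p, 1 - p]]
print(round(symmetric_cutoff_rate(bsc), 4))
print(round(1 - math.log2(1 + 2 * math.sqrt(p * (1 - p))), 4))  # same value
```

A J-ary soft-decision demodulator induces such a channel, and the paper's optimality criterion is precisely that the chosen decision regions maximize this quantity.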
Test of the combined method for extracting spectroscopic factors in N =50 nuclei
NASA Astrophysics Data System (ADS)
Walter, David; Cizewski, J. A.; Baugher, T.; Ratkiewicz, A.; Pain, S. D.; Nunes, F. M.; Ahn, S.; Cerizza, G.; Jones, K. L.; Manning, B.; Thornsberry, C.
2017-09-01
The single-particle properties of nuclei near shell closures and r-process waiting points can be observed using single-nucleon transfer reactions with beams of rare isotopes. However, approximations have to be made about the final bound state to extract spectroscopic information. An approach to constrain the bound state potential has been proposed by Mukhamedzhanov and Nunes. At peripheral reaction energies (~5 MeV/u), the ANC for the nucleus can be extracted, and is combined with the same reaction at higher energies (~40 MeV/u). These combined measurements can constrain the shape of the bound state potential, and the spectroscopic factor can be reliably extracted. To test this method, the 86Kr(d,p) reaction was performed in inverse kinematics with a 35 MeV/u beam at the National Superconducting Cyclotron Laboratory (NSCL) with the ORRUBA and SIDAR arrays of silicon strip detectors coupled to the S800 spectrometer. Successful results supported the measurement of a radioactive ion beam of 84Se at 45 MeV/u at the NSCL, to be measured at the end of 2017. Results from the 86Kr(d,p) measurement will be presented as well as preparations for the upcoming 84Se(d,p) measurement. This work is supported in part by the National Science Foundation and U.S. D.O.E.
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented in two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared at www.ptomethod.org. PMID:26678849
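The proportional update at the heart of PTO can be sketched in a few lines. Stress values are taken as given, standing in for an FEA solve; the exponent, the blending parameter, and the omission of clamping densities to [0, 1] are simplifications of ours, not the paper's exact scheme:

```python
def pto_update(x, stress, target_volume, p=2.0, alpha=0.5):
    # proportional topology optimization step: distribute the material
    # budget across elements in proportion to stress^p (no sensitivities
    # needed), then blend with the current layout for stability; a real
    # implementation also clamps densities to [0, 1] and iterates with an
    # FEA stress solve between updates
    total = sum(s ** p for s in stress)
    proposed = [target_volume * len(x) * (s ** p) / total for s in stress]
    return [alpha * xi + (1 - alpha) * pi for xi, pi in zip(x, proposed)]

# four elements at uniform density 0.5; material flows to stressed elements
# while the total material budget (volume fraction 0.5) is preserved
new = pto_update([0.5] * 4, [1.0, 2.0, 3.0, 4.0], 0.5)
print([round(v, 3) for v in new])
print(round(sum(new), 3))  # 2.0
```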
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while introduction of the entropy diversity constraint further improved results.
Veeraraghavan, Srikant; Mazziotti, David A
2014-03-28
We present a density matrix approach for computing global solutions of restricted open-shell Hartree-Fock theory, based on semidefinite programming (SDP), that gives upper and lower bounds on the Hartree-Fock energy of quantum systems. While wave function approaches to Hartree-Fock theory yield an upper bound to the Hartree-Fock energy, we derive a semidefinite relaxation of Hartree-Fock theory that yields a rigorous lower bound on the Hartree-Fock energy. We also develop an upper-bound algorithm in which Hartree-Fock theory is cast as an SDP with a nonconvex constraint on the rank of the matrix variable. Equality of the upper- and lower-bound energies guarantees that the computed solution is the globally optimal solution of Hartree-Fock theory. The work extends a previously presented method for closed-shell systems [S. Veeraraghavan and D. A. Mazziotti, Phys. Rev. A 89, 010502(R) (2014)]. For strongly correlated systems the SDP approach provides an alternative to the locally optimized Hartree-Fock energies and densities with a certificate of global optimality. Applications are made to the potential energy curves of C2, CN, Cr2, and NO2.
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
LISA pathfinder appreciably constrains collapse models
NASA Astrophysics Data System (ADS)
Helou, Bassam; Slagmolen, B. J. J.; McClelland, David E.; Chen, Yanbei
2017-04-01
Spontaneous collapse models are phenomenological theories formulated to address major difficulties in macroscopic quantum mechanics. We place significant bounds on the parameters of the leading collapse models, the continuous spontaneous localization (CSL) model and the Diosi-Penrose (DP) model, by using LISA Pathfinder's measurement, at a record accuracy, of the relative acceleration noise between two free-falling macroscopic test masses. In particular, we bound the CSL collapse rate to be at most (2.96 ± 0.12) × 10^-8 s^-1. This competitive bound explores a new frequency regime, 0.7 to 20 mHz, and overlaps with the lower bound 10^(-8±2) s^-1 proposed by Adler in order for the CSL collapse noise to be substantial enough to explain the phenomenology of quantum measurement. Moreover, we bound the regularization cutoff scale used in the DP model to prevent divergences to be at least 40.1 ± 0.5 fm, which is larger than the size of any nucleus. Thus, we rule out the DP model if the cutoff is the size of a fundamental particle.
Planck limits on non-canonical generalizations of large-field inflation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu
2017-04-01
In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f^equil_NL, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f^equil_NL corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications (published) during the first two years of this grant:
• A. Cornelio, E. Loli Piccolomini, and J. G. Nagy. Constrained Variable Projection Method for Blind Deconvolution.
• A. Cornelio, E. Loli Piccolomini, and J. G. Nagy. Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σ_j, σ_k).
Limiting the effective mass and new physics parameters from 0νββ
NASA Astrophysics Data System (ADS)
Awasthi, Ram Lal; Dasgupta, Arnab; Mitra, Manimala
2016-10-01
In the light of the recent results from KamLAND-Zen (KLZ) and GERDA Phase-II, we update the bounds on the effective mass and the new physics parameters relevant for neutrinoless double beta decay (0νββ). In addition to the light Majorana neutrino exchange, we analyze beyond-standard-model contributions that arise in left-right symmetry and R-parity violating supersymmetry. The improved limit from KLZ constrains the effective mass of light neutrino exchange down to the sub-eV regime, 0.06 eV. Using the correlation between the 136Xe and 76Ge half-lives, we show that the KLZ limit individually rules out the positive claim of observation of 0νββ for all nuclear matrix element compilations. For left-right symmetry and R-parity violating supersymmetry, the KLZ bound implies a factor of 2 improvement on the effective mass and the new physics parameters. Future ton-scale experiments, such as nEXO, will further constrain these models, in particular ruling out the standard as well as the Type-II dominated LRSM inverted hierarchy scenarios.
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computations of combustion problems involving large numbers of species and reactions, with a detailed description of the chemistry, can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species, and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
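The greedy strategy can be sketched generically: at each step, add whichever candidate most reduces an error estimate for the enlarged set. The toy species weights and `toy_error` function below are assumptions standing in for the RCCE dimension-reduction error, which in the paper requires reactor simulations to evaluate.

```python
def greedy_select(candidates, error, k):
    # Forward-greedy selection: grow the constrained set one species at a
    # time, each time adding the candidate that minimizes the error of the
    # enlarged set.
    selected = []
    for _ in range(k):
        pool = [c for c in candidates if c not in selected]
        best = min(pool, key=lambda c: error(selected + [c]))
        selected.append(best)
    return selected

# Toy stand-in for the dimension-reduction error: it shrinks most when the
# heavily weighted species are constrained first (values are assumptions).
weights = {'CH4': 1.0, 'O2': 0.9, 'CO': 3.0, 'H2': 2.0, 'H2O': 0.5}

def toy_error(subset):
    return 10.0 - sum(weights[s] for s in subset)

picked = greedy_select(list(weights), toy_error, k=3)
```

The greedy choice is near-optimal only when the error behaves roughly additively in the selected species, which is the working assumption the paper tests empirically.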
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Jae Hyeok; Essig, Rouven; McDermott, Samuel D.
We consider the constraints from Supernova 1987A on particles with small couplings to the Standard Model. We discuss a model with a fermion coupled to a dark photon, with various mass relations in the dark sector; millicharged particles; dark-sector fermions with inelastic transitions; the hadronic QCD axion; and an axion-like particle that couples to Standard Model fermions with couplings proportional to their mass. In the fermion cases, we develop a new diagnostic for assessing when such a particle is trapped at large mixing angles. Our bounds for a fermion coupled to a dark photon constrain small couplings and masses < 200 MeV, and do not decouple for low fermion masses. They exclude parameter space that is otherwise unconstrained by existing accelerator-based and direct-detection searches. In addition, our bounds are complementary to proposed laboratory searches for sub-GeV dark matter, and do not constrain several "thermal" benchmark-model targets. For a millicharged particle, we exclude charges between 10^(-9) and a few times 10^(-6) in units of the electron charge; this excludes parameter space to higher millicharges and masses than previous bounds. For the QCD axion and an axion-like particle, we apply several updated nuclear physics calculations and include the energy dependence of the optical depth to accurately account for energy loss at large couplings. We rule out a hadronic axion of mass between 0.1 and a few hundred eV, or equivalently bound the PQ scale between a few times 10^4 and 10^8 GeV, closing the hadronic axion window. For an axion-like particle, our bounds disfavor decay constants between a few times 10^5 GeV up to a few times 10^8 GeV. In all cases, our bounds differ from previous work by more than an order of magnitude across the entire parameter space. We also provide estimated systematic errors due to the uncertainties of the progenitor.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today's scientific applications produce vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor always guarantees compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
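The XOR-leading-zero idea can be illustrated on the bit patterns of two doubles. `shared_prefix_bits` and the offset demonstration below are sketches of the principle, not the compressor's actual code: the longer the shared leading-bit prefix of two consecutive values, the fewer XOR bits need to be stored.

```python
import struct

def shared_prefix_bits(a, b):
    # Length of the shared leading-bit prefix of two IEEE-754 doubles; the
    # longer the prefix, the fewer XOR bits a compressor must store.
    ia, = struct.unpack('<Q', struct.pack('<d', a))
    ib, = struct.unpack('<Q', struct.pack('<d', b))
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

full = shared_prefix_bits(1.5, 1.5)    # identical values share all 64 bits
none = shared_prefix_bits(1.5, -1.5)   # sign bit differs: nothing shared
# Shifting both values by the same offset can align their binary exponents
# and lengthen the shared prefix (the "shifting offset" idea, illustratively):
before = shared_prefix_bits(1.99, 2.01)             # straddles an exponent boundary
after = shared_prefix_bits(1.99 + 0.5, 2.01 + 0.5)  # same binade after the shift
```

Values just below and just above a power of two share almost no leading bits despite being numerically close; a common offset moves both into the same binade, which is what the optimized shifting offset exploits.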
Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W
2009-03-01
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.
Baryon-baryon interactions and spin-flavor symmetry from lattice quantum chromodynamics
NASA Astrophysics Data System (ADS)
Wagman, Michael L.; Winter, Frank; Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Nplqcd Collaboration
2017-12-01
Lattice quantum chromodynamics is used to constrain the interactions of two octet baryons at the SU(3) flavor-symmetric point, with quark masses that are heavier than those in nature (equal to the physical strange quark mass and corresponding to a pion mass of ≈806 MeV). Specifically, the S-wave scattering phase shifts of two-baryon systems at low energies are obtained with the application of Lüscher's formalism, mapping the energy eigenvalues of two interacting baryons in a finite volume to the two-particle scattering amplitudes below the relevant inelastic thresholds. The leading-order low-energy scattering parameters in the two-nucleon systems that were previously obtained at these quark masses are determined with a refined analysis, and the scattering parameters in two other channels containing the Σ and Ξ baryons are constrained for the first time. It is found that the values of these parameters are consistent with an approximate SU(6) spin-flavor symmetry in the nuclear and hypernuclear forces that is predicted in the large-Nc limit of QCD. The two distinct SU(6)-invariant interactions between two baryons are constrained for the first time at this value of the quark masses, and their values indicate an approximate accidental SU(16) symmetry. The SU(3) irreps containing the NN(1S0), NN(3S1) and (Ξ0n + Ξ−p)/√2 (3S1) channels unambiguously exhibit a single bound state, while the irrep containing the Σ+p(3S1) channel exhibits a state that is consistent with either a bound state or a scattering state close to threshold. These results are in agreement with the previous conclusions of the NPLQCD collaboration regarding the existence of two-nucleon bound states at this value of the quark masses.
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
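For reference, the original Oja rule that the letter modifies can be sketched as below: it drives the weight vector toward the principal component of the inputs while asymptotically constraining its L2-norm to one (the letter's variant instead constrains the L1-norm by changing the decay term; see the paper for that rule). The toy data, seed, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy zero-mean inputs whose first coordinate carries most of the variance,
# so the principal component is (±1, 0).
X = rng.normal(size=(5000, 2)) * np.array([3.0, 1.0])

w = rng.normal(size=2)
eta = 0.01  # learning rate (assumed value)
for x in X:
    y = float(w @ x)
    w += eta * y * (x - y * w)  # Hebbian growth minus norm-constraining decay

l2 = float(np.linalg.norm(w))
```

Note that the decay term `y * w` is what makes the constraint local: each weight is updated from quantities available at its own synapse, which is the property the letter preserves while switching the constrained norm.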
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor, compared with the credibility level, for system efficiency. These results can be effective in supporting reasonable irrigation water resources management and agricultural production.
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-05-01
Application of the multi-arm space robot will be more effective than single arm especially when the target is tumbling. This paper investigates the application of particle swarm optimization (PSO) strategy to coordinated trajectory planning of the dual-arm space robot in free-floating mode. In order to overcome the dynamics singularities issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of dual-arm space robot. The joint trajectories are parametrized with Bézier curve to simplify the calculation. Constrained PSO scheme with adaptive inertia weight is implemented to find the optimal solution of joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
Deterministic Reconfigurable Control Design for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.
1998-01-01
In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and an Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown, a right outboard elevon jam at 25 deg. at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and reconfigurable controllers show the benefits of reconfiguration. Single jam aerosurface failures were considered, and failure detection and identification is considered accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
Vehicle routing problem with time windows using natural inspired algorithms
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.
2018-03-01
The distribution of goods needs a strategy that minimizes the total cost of operational activities, subject to several constraints: the capacity of the vehicles and the service times of the customers. The Vehicle Routing Problem with Time Windows (VRPTW) is therefore a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, namely the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. The computational results show that these algorithms perform well in finding the minimized total distance. A higher population number yields better computational performance. The improved Cat Swarm Optimization with Crow Search outperforms the hybrid of the Bat Algorithm and Simulated Annealing in dealing with big data.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
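A minimal prediction-correction tracker for a time-varying least-squares cost illustrates the idea; the drifting target `c(t)`, the finite-difference extrapolation used as the prediction, and the step sizes are assumptions for illustration, not the article's operators.

```python
import math

def c(t):
    # Drifting minimizer of the time-varying cost f(x; t) = ||x - c(t)||^2.
    return (math.sin(t), math.cos(t))

def track(dt=0.1, steps=60, corr_iters=3, lr=0.4):
    x = [0.0, 0.0]
    prev = [0.0, 0.0]
    err = float('inf')
    for k in range(steps):
        t_next = (k + 1) * dt
        # Prediction: extrapolate the iterate along its last displacement.
        pred = [2 * x[i] - prev[i] for i in range(2)]
        prev, x = list(x), pred
        # Correction: a few gradient steps on the cost at the new time.
        target = c(t_next)
        for _ in range(corr_iters):
            x = [x[i] - lr * 2.0 * (x[i] - target[i]) for i in range(2)]
        err = math.hypot(x[0] - target[0], x[1] - target[1])
    return err

final_err = track()
```

With the prediction step, the correction only has to remove the small extrapolation residual at each sampling time, rather than chase the full drift of the optimizer.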
Constrained optimization of image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1973-01-01
A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
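The conversion described above can be sketched on min x² + y² subject to x + y = 1: search over (x, y, λ) for a zero of the squared residual of the necessary (KKT) conditions, turning the constrained problem into an unconstrained minimization. The tiny genetic algorithm below is an illustrative assumption (seeded for repeatability), not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def kkt_residual(z):
    # Squared norm of the first-order necessary conditions for
    # min x^2 + y^2  subject to  x + y = 1  (solution: x = y = 0.5, lam = -1).
    x, y, lam = z
    return (2 * x + lam) ** 2 + (2 * y + lam) ** 2 + (x + y - 1) ** 2

def mini_ga(fitness, dim=3, pop_size=60, gens=200, keep=15):
    pop = rng.uniform(-2.0, 2.0, size=(pop_size, dim))
    for g in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[:keep]]           # selection
        parents = elite[rng.integers(0, keep, size=(pop_size, 2))]
        children = parents.mean(axis=1)                  # blend crossover
        sigma = 1.0 * 0.97 ** g                          # shrinking mutation
        pop = children + rng.normal(scale=sigma, size=(pop_size, dim))
        pop[:keep] = elite                               # elitism
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmin(scores))]

z_best = mini_ga(kkt_residual)
```

Driving the residual to zero simultaneously satisfies stationarity and feasibility, which is the essence of converting the necessary conditions into an unconstrained minimization.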
Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouri, Drew Philip; Surowiec, Thomas M.
Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. In this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain
2003-10-24
We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2σ, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.
Petawatt laser absorption bounded
Levy, Matthew C.; Wilks, Scott C.; Tabak, Max; Libby, Stephen B.; Baring, Matthew G.
2014-01-01
The interaction of petawatt (10^15 W) lasers with solid matter forms the basis for advanced scientific applications such as table-top particle accelerators, ultrafast imaging systems and laser fusion. Key metrics for these applications relate to absorption, yet conditions in this regime are so nonlinear that it is often impossible to know the fraction of absorbed light f, and even the range of f is unknown. Here using a relativistic Rankine-Hugoniot-like analysis, we show for the first time that f exhibits a theoretical maximum and minimum. These bounds constrain nonlinear absorption mechanisms across the petawatt regime, forbidding high absorption values at low laser power and low absorption values at high laser power. For applications needing to circumvent the absorption bounds, these results will accelerate a shift from solid targets towards structured and multilayer targets, and lead to the development of new materials. PMID:24938656
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike in a traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is memory cost optimized in LiveOS, but so is energy cost, and this is achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264
On computing the global time-optimal motions of robotic manipulators in the presence of obstacles
NASA Technical Reports Server (NTRS)
Shiller, Zvi; Dubowsky, Steven
1991-01-01
A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the globally optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the search space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
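The pruning idea behind such a branch and bound search can be sketched in miniature. The grid costs and the lower bound below are invented for illustration; the paper's method searches manipulator position space with travel-time lower bounds, not a toy cost grid.

```python
import heapq

def grid_bnb(cost, start, goal):
    """Branch-and-bound search for a cheapest monotone path on a cost grid.

    Partial paths are expanded best-first; a branch is pruned when its cost
    so far plus an admissible lower bound on the remainder cannot beat the
    incumbent (best complete) solution.
    """
    rows, cols = len(cost), len(cost[0])
    cheapest = min(min(row) for row in cost)

    def lower_bound(r, c):
        # Admissible estimate: remaining steps times the cheapest cell cost.
        return (abs(goal[0] - r) + abs(goal[1] - c)) * cheapest

    incumbent, best_path = float("inf"), None
    heap = [(cost[start[0]][start[1]], start, [start])]
    while heap:
        g, (r, c), path = heapq.heappop(heap)
        if g + lower_bound(r, c) >= incumbent:
            continue  # prune: this branch cannot improve on the incumbent
        if (r, c) == goal:
            incumbent, best_path = g, path
            continue
        for dr, dc in ((1, 0), (0, 1)):  # move down or right only
            nr, nc = r + dr, c + dc
            if nr < rows and nc < cols:
                heapq.heappush(heap, (g + cost[nr][nc], (nr, nc), path + [(nr, nc)]))
    return incumbent, best_path

grid = [[1, 9, 1],
        [1, 1, 1],
        [9, 9, 1]]
val, path = grid_bnb(grid, (0, 0), (2, 2))
print(val, path)
```

The tighter the lower bound, the earlier branches are cut, which is why the paper invests in a series of travel-time estimates of increasing accuracy.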
CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.
2011-01-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W
2012-06-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Release of bound procyanidins from cranberry pomace by alkaline hydrolysis
USDA-ARS?s Scientific Manuscript database
Procyanidins in plant products are present as extractable or unextractable/bound forms. We optimized alkaline hydrolysis conditions to liberate bound procyanidins from dried cranberry pomace. Five mL of sodium hydroxide (2, 4, or 6N) was added to 0.5 g of cranberry pomace in screw top glass tubes,...
On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound
NASA Astrophysics Data System (ADS)
Li, Ruihu; Li, Xueliang; Guo, Luobin
2015-12-01
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c
Partial branch and bound algorithm for improved data association in multiframe processing
NASA Astrophysics Data System (ADS)
Poore, Aubrey B.; Yan, Xin
1999-07-01
A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have shown themselves to yield near optimal answers in real-time. The necessary improvement in the quality of these solutions warrants a continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature with branch-and-bound being most efficient. Thus, the development of methods less than a full branch-and-bound are needed for improving the quality. Such methods as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique along with adequate branching and ordering rules are developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The result shows that the branch-and-bound framework greatly improves the resolution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.
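The way Lagrangian relaxation supplies the lower bounds used inside such a branch and bound framework can be illustrated on a tiny assignment problem. The cost matrix is invented, and the full problem is solved by enumeration only because it is small; the paper's multiframe data association problems are multidimensional and far larger.

```python
import itertools

C = [[4, 2, 8],
     [4, 3, 7],
     [3, 1, 6]]  # hypothetical assignment costs (rows to columns)
n = len(C)

def lagrangian_bound(u):
    """Lower bound obtained by relaxing the column constraints with
    multipliers u; each row is then assigned independently."""
    value = sum(u)
    cols = []
    for i in range(n):
        j = min(range(n), key=lambda j: C[i][j] - u[j])
        value += C[i][j] - u[j]
        cols.append(j)
    return value, cols

# Subgradient ascent on the dual function.
u = [0.0] * n
best_bound = float("-inf")
for step in range(200):
    bound, cols = lagrangian_bound(u)
    best_bound = max(best_bound, bound)
    g = [1 - cols.count(j) for j in range(n)]  # subgradient of the dual
    if all(x == 0 for x in g):
        break  # relaxed solution is feasible, hence optimal
    size = 1.0 / (step + 1)
    u = [u[j] + size * g[j] for j in range(n)]

# Exact optimum by enumeration (the role branch and bound plays at scale).
opt = min(sum(C[i][p[i]] for i in range(n)) for p in itertools.permutations(range(n)))
print(best_bound, opt)
```

By weak duality the dual value never exceeds the primal optimum, so any multiplier vector yields a valid pruning bound for subproblems in the search tree.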
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programs in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome them a new method based on the bounded dual simplex method is proposed for determining the fuzzy optimal solution of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. An application of the algorithm to bounded transportation problems with fuzzy supplies and demands is also presented. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
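A crisp (non-fuzzy) bounded-variable LP of the kind the method targets can be sketched with a generic solver. The coefficients below are hypothetical, and this is only a sketch: the paper's contribution is a bounded dual simplex that operates directly on trapezoidal fuzzy numbers, which a standard solver does not reproduce.

```python
from scipy.optimize import linprog

# Crisp analogue: maximize 3*x1 + 2*x2
# subject to x1 + x2 <= 4, with box bounds 0 <= x1 <= 3 and 1 <= x2 <= 2.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1]], b_ub=[4],
              bounds=[(0, 3), (1, 2)],
              method="highs")
print(res.x, -res.fun)
```

Handling the box bounds inside the simplex pivoting rules, rather than as explicit constraint rows, is what keeps the bounded (dual) simplex tableau small.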
What Information Theory Says About Best Response and About Binding Contracts
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is the information-theoretic extension of conventional full- rationality game theory to bounded rational games. Here PD theory is used to investigate games in which the players use bounded rational best-response strategies. This investigation illuminates how to determine the optimal organization chart for a corporation, or more generally how to order the sequence of moves of the players / employees so as to optimize an overall objective function. It is then shown that in the continuum-time limit, bounded rational best response games result in a variant of the replicator dynamics of evolutionary game theory. This variant is then investigated for team games, in which the players share the same utility function, by showing that such continuum- limit bounded rational best response is identical to Newton-Raphson iterative optimization of the shared utility function. Next PD theory is used to investigate changing the coordinate system of the game, i.e., changing the mapping from the joint move of the players to the arguments in the utility functions. Such a change couples those arguments, essentially by making each players move be an offered binding contract.
NASA Astrophysics Data System (ADS)
Montina, Alberto; Wolf, Stefan
2014-07-01
We consider the process consisting of preparation, transmission through a quantum channel, and subsequent measurement of quantum states. The communication complexity of the channel is the minimal amount of classical communication required for classically simulating it. Recently, we reduced the computation of this quantity to a convex minimization problem with linear constraints. Every solution of the constraints provides an upper bound on the communication complexity. In this paper, we derive the dual maximization problem of the original one. The feasible points of the dual constraints, which are inequalities, give lower bounds on the communication complexity, as illustrated with an example. The optimal values of the two problems turn out to be equal (zero duality gap). By this property, we provide necessary and sufficient conditions for optimality in terms of a set of equalities and inequalities. We use these conditions and two reasonable but unproven hypotheses to derive the lower bound n × 2^(n-1) for a noiseless quantum channel with capacity equal to n qubits. This lower bound can have interesting consequences in the context of the recent debate on the reality of the quantum state.
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
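The advantage of mixing two deterministic actions under a chance constraint can be seen in a minimal two-action example. All numbers here are invented for illustration and are not from the paper.

```python
# Two deterministic actions (illustrative numbers):
#   action A: cost 1.0, constraint-violation probability 0.0
#   action B: cost 0.0, constraint-violation probability 0.2
# Chance constraint: overall violation probability <= 0.1.
cost = {"A": 1.0, "B": 0.0}
p_viol = {"A": 0.0, "B": 0.2}
threshold = 0.1

# A deterministic strategy must satisfy the constraint on its own,
# so only action A is admissible and the best deterministic cost is 1.0.
feasible = [a for a in cost if p_viol[a] <= threshold]
det_cost = min(cost[a] for a in feasible)

# Mixed strategy: play B with probability q and A with probability 1 - q.
# The violation probability is linear in q, so the largest feasible q is:
q = (threshold - p_viol["A"]) / (p_viol["B"] - p_viol["A"])
mixed_cost = q * cost["B"] + (1 - q) * cost["A"]
print(det_cost, q, mixed_cost)
```

Mixing halves the expected cost while keeping the overall violation probability exactly at the threshold, which mirrors the paper's observation that at most two deterministic actions ever need to be mixed.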
On constraining the speed of gravitational waves following GW150914
NASA Astrophysics Data System (ADS)
Blas, D.; Ivanov, M. M.; Sawicki, I.; Sibiryakov, S.
2016-05-01
We point out that the observed time delay between the detection of the signal at the Hanford and Livingston LIGO sites from the gravitational wave event GW150914 places an upper bound on the speed of propagation of gravitational waves, c_gw ≲ 1.7 in units of the speed of light. Combined with the lower bound from the absence of gravitational Cherenkov losses by cosmic rays, which rules out most subluminal velocities, this gives a model-independent double-sided constraint 1 ≲ c_gw ≲ 1.7. We compare this result to model-specific constraints from pulsar timing and cosmology.
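A back-of-the-envelope version of the upper bound can be reproduced with rounded numbers (these figures are approximate and do not reproduce the paper's full treatment of the unknown source direction, which loosens the bound to about 1.7):

```python
# Light needs roughly 10 ms to travel between the Hanford and Livingston
# sites, and the GW150914 signal arrived with a delay of about 6.9 ms.
# For a source aligned with the baseline,
#   c_gw / c <~ (light travel time between sites) / (observed delay).
light_travel_ms = 10.0   # site separation divided by c, in milliseconds
observed_delay_ms = 6.9  # measured arrival-time difference
bound = light_travel_ms / observed_delay_ms
print(round(bound, 2))
```

Marginalizing over possible source directions weakens this aligned-source estimate, which is why the paper quotes c_gw ≲ 1.7 rather than ~1.45.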
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
Turning Around along the Cosmic Web
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Yepes, Gustavo
2016-12-01
A bound violation designates a case in which the turnaround radius of a bound object exceeds the upper limit imposed by the spherical collapse model based on the standard ΛCDM paradigm. Given that the turnaround radius of a bound object is a stochastic quantity and that the spherical model overly simplifies the true gravitational collapse, which actually proceeds anisotropically along the cosmic web, the rarity of the occurrence of a bound violation may depend on the web environment. Assuming a Planck cosmology, we numerically construct the bound-zone peculiar velocity profiles along the cosmic web (filaments and sheets) around the isolated groups with virial mass M_v ≥ 3 × 10^13 h^-1 M_⊙ identified in the Small MultiDark Planck simulations and determine the radial distances at which their peculiar velocities equal the Hubble expansion speed as the turnaround radii of the groups. It is found that although the average turnaround radii of the isolated groups are well below the spherical bound limit on all mass scales, the bound violations are not forbidden for individual groups, and the cosmic web has an effect of reducing the rarity of the occurrence of a bound violation. Explaining that the spherical bound limit on the turnaround radius in fact represents the threshold distance up to which the intervention of the external gravitational field in the bound-zone peculiar velocity profiles around the nonisolated groups stays negligible, we discuss the possibility of using the threshold distance scale to constrain locally the equation of state of dark energy.
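The spherical-collapse upper limit referred to above is commonly written as R_t,max = (3GM / (Λc²))^(1/3). It can be evaluated at the paper's minimum group mass as an order-of-magnitude illustration (constants are rounded, and the exact prefactor conventions vary between papers):

```python
# Spherical-collapse upper bound on the turnaround radius in LambdaCDM,
#   R_t,max = (3 * G * M / (Lambda * c^2)) ** (1/3),
# evaluated for a virial mass of 3e13 h^-1 solar masses with h ~ 0.7.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
Lam = 1.1e-52          # m^-2, cosmological constant (approximate)
M_sun = 1.989e30       # kg
h = 0.7
M = 3e13 * M_sun / h   # kg

r_max_m = (3 * G * M / (Lam * c * c)) ** (1.0 / 3.0)
r_max_mpc = r_max_m / 3.086e22  # metres per megaparsec
print(round(r_max_mpc, 1))
```

The result is a few megaparsecs, so a group whose measured turnaround radius exceeds this scale would constitute the bound violation the paper studies.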
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite growing efficiency of the optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed optimization methods family. They possess many attractive features such as: ease of the implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe the GATool, evolutionary algorithm and the software package, used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on the problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT.
We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for the constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of the normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.
Global optimization algorithm for heat exchanger networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quesada, I.; Grossmann, I.E.
This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator that involves linear and nonlinear estimators for fractional and bilinear terms which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method only requires few nodes in the branch and bound search.
Solution techniques for transient stability-constrained optimal power flow – Part II
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu; ...
2017-06-28
Transient stability-constrained optimal power flow is an important emerging problem, with power systems pushed to their limits for economic benefits, denser and larger interconnected systems, and reduced inertia due to the expected proliferation of renewable energy resources. In this study, two more approaches are presented: single machine equivalent and computational intelligence. Various application areas and future directions in this research area are also discussed. Finally, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
Solution techniques for transient stability-constrained optimal power flow – Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu
Transient stability-constrained optimal power flow is an important emerging problem, with power systems pushed to their limits for economic benefits, denser and larger interconnected systems, and reduced inertia due to the expected proliferation of renewable energy resources. In this study, two more approaches are presented: single machine equivalent and computational intelligence. Various application areas and future directions in this research area are also discussed. Finally, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.
Relaxation-optimized transfer of spin order in Ising spin chains
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis; Glaser, Steffen J.; Khaneja, Navin
2005-12-01
In this paper, we present relaxation optimized methods for the transfer of bilinear spin correlations along Ising spin chains. These relaxation optimized methods can be used as a building block for the transfer of polarization between distant spins on a spin chain, a problem that is ubiquitous in multidimensional nuclear magnetic resonance spectroscopy of proteins. Compared to standard techniques, significant reduction in relaxation losses is achieved by these optimized methods when transverse relaxation rates are much larger than the longitudinal relaxation rates and comparable to couplings between spins. We derive an upper bound on the efficiency of the transfer of the spin order along a chain of spins in the presence of relaxation and show that this bound can be approached by the relaxation optimized pulse sequences presented in the paper.
Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs
NASA Technical Reports Server (NTRS)
Miele, Angelo; Lee, W. Y.; Wu, G. D.
1990-01-01
Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
Methods for finding transition states on reduced potential energy surfaces
NASA Astrophysics Data System (ADS)
Burger, Steven K.; Ayers, Paul W.
2010-06-01
Three new algorithms are presented for determining transition state (TS) structures on the reduced potential energy surface, that is, for problems in which a few important degrees of freedom can be isolated. All three methods use constrained optimization to rapidly find the TS without an initial Hessian evaluation. The algorithms highlight how efficiently the TS can be located on a reduced surface, where the rest of the degrees of freedom are minimized. The first method uses a nonpositive definite quasi-Newton update for the reduced degrees of freedom. The second uses Shepard interpolation to fit the Hessian and starts from a set of points that bound the TS. The third directly uses a finite difference scheme to calculate the reduced degrees of freedom of the Hessian of the entire system, and searches for the TS on the full potential energy surface. All three methods are tested on an epoxide hydrolase cluster, and the ring formations of cyclohexane and cyclobutenone. The results indicate that all the methods are able to converge quite rapidly to the correct TS, but that the finite difference approach is the most efficient.
Methods for finding transition states on reduced potential energy surfaces.
Burger, Steven K; Ayers, Paul W
2010-06-21
Three new algorithms are presented for determining transition state (TS) structures on the reduced potential energy surface, that is, for problems in which a few important degrees of freedom can be isolated. All three methods use constrained optimization to rapidly find the TS without an initial Hessian evaluation. The algorithms highlight how efficiently the TS can be located on a reduced surface, where the rest of the degrees of freedom are minimized. The first method uses a nonpositive definite quasi-Newton update for the reduced degrees of freedom. The second uses Shepard interpolation to fit the Hessian and starts from a set of points that bound the TS. The third directly uses a finite difference scheme to calculate the reduced degrees of freedom of the Hessian of the entire system, and searches for the TS on the full potential energy surface. All three methods are tested on an epoxide hydrolase cluster, and the ring formations of cyclohexane and cyclobutenone. The results indicate that all the methods are able to converge quite rapidly to the correct TS, but that the finite difference approach is the most efficient.
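The third approach (a finite-difference Hessian of the full surface combined with Newton steps toward a stationary point) can be sketched on a two-dimensional model surface. The surface, step sizes, and starting point below are illustrative choices, not the paper's test systems.

```python
import numpy as np

def f(p):
    # Model surface with a saddle (transition state) at the origin:
    # a double well in x joined to a harmonic valley in y.
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def grad(p, h=1e-5):
    # Central finite differences for the gradient.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hessian(p, h=1e-4):
    # Finite-difference Hessian of the full (here 2D) surface.
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei = np.zeros(2); ei[i] = h
            ej = np.zeros(2); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

# Plain Newton steps converge to the stationary point nearest the guess;
# started near the saddle, they locate the transition state with no
# analytic Hessian available.
p = np.array([0.3, 0.4])
for _ in range(30):
    p = p - np.linalg.solve(hessian(p), grad(p))
print(p, np.linalg.eigvalsh(hessian(p)))
```

At convergence the Hessian has exactly one negative eigenvalue, the standard signature of a first-order saddle (transition state).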
Centralized mission planning and scheduling system for the Landsat Data Continuity Mission
Kavelaars, Alicia; Barnoy, Assaf M.; Gregory, Shawna; Garcia, Gonzalo; Talon, Cesar; Greer, Gregory; Williams, Jason; Dulski, Vicki
2014-01-01
Satellites in Low Earth Orbit provide missions with closer range for studying aspects such as geography and topography, but often require efficient utilization of space and ground assets. Optimizing schedules for these satellites amounts to a complex planning puzzle since it requires operators to face issues such as discontinuous ground contacts, limited onboard memory storage, constrained downlink margin, and shared ground antenna resources. To solve this issue for the Landsat Data Continuity Mission (LDCM, Landsat 8), all the scheduling exchanges for science data requests, ground/space station contacts, and spacecraft maintenance and control will be coordinated through a centralized Mission Planning and Scheduling (MPS) engine, based upon GMV's scheduling system flexplan. The synchronization between all operational functions must be strictly maintained to ensure efficient mission utilization of ground and spacecraft activities while working within the bounds of the space and ground resources, such as the Solid State Recorder (SSR) and available antennas. This paper outlines the functionalities that the centralized planning and scheduling system has in its operational control and management of the Landsat 8 spacecraft.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
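The response-surface-plus-MCS chain described above can be sketched in a few lines: a quadratic surface with single terms, squares, and a two-factor interaction is sampled under Gaussian input uncertainty to produce dispersed outputs. All coefficients, input names, and uncertainty levels below are invented for illustration; they are not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical quadratic response surface with single terms, squared terms,
# and one two-factor interaction (coefficients invented for illustration).
def regression_rate(x1, x2):
    return 0.8 + 0.30*x1 + 0.15*x2 + 0.05*x1*x2 - 0.10*x1**2 - 0.04*x2**2

# Monte Carlo dispersion: propagate Gaussian uncertainty on the two inputs
# through the surface to get a dispersed response distribution.
n = 20_000
x1 = rng.normal(0.5, 0.05, n)   # e.g., a normalized mixture fraction +/- uncertainty
x2 = rng.normal(0.3, 0.08, n)   # e.g., a normalized operating condition
r = regression_rate(x1, x2)
lo, hi = np.percentile(r, [2.5, 97.5])   # 95% dispersion bounds on the response
assert lo < r.mean() < hi
```

The same loop, run over a grid of nominal mixture points, yields the dispersed-response contours that the ternary and Expanded-Durov diagrams visualize.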
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
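The distortion measure described above, the first-order entropy of the residual image, can be sketched directly. This is a minimal illustration; the function name and toy data are ours, not from the paper.

```python
import numpy as np

def first_order_entropy(residual):
    """First-order (memoryless) entropy, in bits/sample, of an integer residual array."""
    _, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy residuals: a quantizer that models the data well leaves a tightly
# peaked residual, hence a low first-order entropy (cheap to entropy-code).
rng = np.random.default_rng(0)
flat = rng.integers(-8, 9, size=(64, 64))          # poorly modeled residual
peaked = rng.binomial(4, 0.5, size=(64, 64)) - 2   # tightly peaked residual
assert first_order_entropy(peaked) < first_order_entropy(flat)
```

Using this entropy as the distortion measure inside the residual VQ design is what couples the quantizer to the first-order entropy coder that follows it.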
Constrained simultaneous multi-state reconfigurable wing structure configuration optimization
NASA Astrophysics Data System (ADS)
Snyder, Matthew
A reconfigurable aircraft is capable of in-flight shape change to increase mission performance or provide multi-mission capability. Reconfigurability has always been a consideration in aircraft design, from the Wright Flyer, to the F-14, and most recently the Lockheed-Martin folding wing concept. The Wright Flyer used wing-warping for roll control, the F-14 had a variable-sweep wing to improve supersonic flight capabilities, and the Lockheed-Martin folding wing demonstrated radical in-flight shape change. This dissertation will examine two questions that aircraft reconfigurability raises, especially as reconfiguration increases in complexity. First, is there an efficient method to develop a lightweight structure which supports all the loads generated by each configuration? Second, can this method include the capability to propose a sub-structure topology that weighs less than other considered designs? The first question requires a method that will design and optimize multiple configurations of a reconfigurable aerostructure. Three options exist; this dissertation will show that one is better than the others. Simultaneous optimization considers all configurations and their respective load cases and constraints at the same time. Another method is sequential optimization, which considers each configuration of the vehicle one after the other, with the optimum design variable values from the first configuration becoming the lower bounds for subsequent configurations. This process repeats for each considered configuration and the lower bounds update as necessary. The third approach is aggregate combination: this method keeps the thickness or area of each member for the most critical configuration, the configuration that requires the largest cross-section. This research will show that simultaneous optimization produces a lower weight and different topology for the considered structures when compared to the sequential and aggregate techniques.
To answer the second question, the developed optimization algorithm combines simultaneous optimization with a new method for determining the optimum location of the structural members of the sub-structure. The method proposed here considers an over-populated structural model, one in which there are initially more members than necessary. Using a unique iterative process, the optimization algorithm removes members from the design if they do not carry enough load to justify their presence. The initial set of members includes ribs, spars and a series of cross-members that diagonally connect the ribs and spars. The final result is a different structure, which is lower weight than one developed from sequential optimization or aggregate combination, and suggests the primary load paths. Chapter 1 contains background information on reconfigurable aircraft and a description of the new reconfigurable air vehicle being considered by the Air Vehicles Directorate of the Air Force Research Laboratory. This vehicle serves as a platform to test the proposed optimization process. Chapters 2 and 3 overview the optimization method and Chapter 4 provides some background analysis which is unique to this particular reconfigurable air vehicle. Chapter 5 contains the results of the optimizations and demonstrates how changing constraints or initial configuration impacts the final weight and topology of the wing structure. The final chapter contains conclusions and comments on some future work which would further enhance the effectiveness of the simultaneous reconfigurable structural topology optimization process developed and used in this dissertation.
Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Yamaguchi, Yuya
2015-09-01
We investigate the vacuum stability in a scale invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos, N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition λ_φ > 0 on the singlet scalar quartic coupling gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted in the N_ν = 1 case: M_{Z'} ≲ 3.7 TeV.
Constrained variational calculus for higher order classical field theories
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; de León, Manuel; Martín de Diego, David
2010-11-01
We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
Dynamic optimization and its relation to classical and quantum constrained systems
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo
2017-08-01
We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the function S. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
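The consistency relation referred to above is the Hamilton-Jacobi-Bellman equation, which in standard control notation (ours, not necessarily the paper's) reads:

```latex
% HJB equation for the optimal value function J(x,t), with state dynamics
% \dot{x} = f(x,u) and running cost L(x,u); the minimizing u is the
% closed-loop optimal control law.
-\frac{\partial J}{\partial t}
  = \min_{u}\left[\, L(x,u) + \frac{\partial J}{\partial x}\, f(x,u) \,\right]
```

Identifying S(x, t) with J(x, t) is what links the Schrödinger-equation substitution to this dynamic programming equation.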
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of the optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of the optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is whether and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ, two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.
Observational Role of Dark Matter in f(R) Models for Structure Formation
NASA Astrophysics Data System (ADS)
Verma, Murli Manohar; Yadav, Bal Krishna
The fixed points for the dynamical system in the phase space have been calculated with dark matter in the f(R) gravity models. The stability conditions of these fixed points are obtained in the ongoing accelerated phase of the universe, and the values of the Hubble parameter and Ricci scalar are obtained for various evolutionary stages of the universe. We present a range of some modifications of general relativistic action consistent with the ΛCDM model. We elaborate upon the fact that the upcoming cosmological observations would further constrain the bounds on the possible forms of f(R) with greater precision that could in turn constrain the search for dark matter in colliders.
Constraining Light-Quark Yukawa Couplings from Higgs Distributions.
Bishara, Fady; Haisch, Ulrich; Monni, Pier Francesco; Re, Emanuele
2017-03-24
We propose a novel strategy to constrain the bottom and charm Yukawa couplings by exploiting Large Hadron Collider (LHC) measurements of transverse momentum distributions in Higgs production. Our method does not rely on the reconstruction of exclusive final states or heavy-flavor tagging. Compared to other proposals, it leads to an enhanced sensitivity to the Yukawa couplings due to distortions of the differential Higgs spectra from emissions which either probe quark loops or are associated with quark-initiated production. We derive constraints using data from LHC run I, and we explore the prospects of our method at future LHC runs. Finally, we comment on the possibility of bounding the strange Yukawa coupling.
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Erhan Tercan, A.
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order-relations constraints, and on solving the constrained indicator kriging system by sequential quadratic programming. A computer code written in the Matlab environment implements the developed algorithm, and the method is applied to the thickness data.
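The order-relation constraints can be sketched as a small constrained quadratic program. The paper minimizes the sum of kriging variances; as a simpler stand-in, the sketch below projects raw indicator estimates onto the nearest valid CCDF by least squares. Function names and data are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def correct_order_relations(raw_ccdf):
    """Project raw indicator-kriging estimates onto the nearest valid CCDF:
    values bounded in [0, 1] and nondecreasing across cutoffs."""
    raw = np.asarray(raw_ccdf, dtype=float)
    n = raw.size
    res = minimize(
        lambda f: np.sum((f - raw) ** 2),       # stay close to the kriged values
        x0=np.clip(raw, 0.0, 1.0),
        method="SLSQP",                          # sequential quadratic programming
        bounds=[(0.0, 1.0)] * n,
        # order relations: F(z_{k+1}) - F(z_k) >= 0 for every cutoff pair
        constraints={"type": "ineq", "fun": lambda f: np.diff(f)},
    )
    return res.x

# Raw MIK output violating the order relations at the 3rd and last cutoffs
fixed = correct_order_relations([0.10, 0.35, 0.30, 0.70, 1.05])
assert np.all(np.diff(fixed) >= -1e-6) and fixed.max() <= 1.0 + 1e-9
```

In the paper's formulation the same SQP machinery acts on the kriging weights directly, rather than post-correcting the estimates as done here.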
Constraining Light-Quark Yukawa Couplings from Higgs Distributions
NASA Astrophysics Data System (ADS)
Bishara, Fady; Haisch, Ulrich; Monni, Pier Francesco; Re, Emanuele
2017-03-01
We propose a novel strategy to constrain the bottom and charm Yukawa couplings by exploiting Large Hadron Collider (LHC) measurements of transverse momentum distributions in Higgs production. Our method does not rely on the reconstruction of exclusive final states or heavy-flavor tagging. Compared to other proposals, it leads to an enhanced sensitivity to the Yukawa couplings due to distortions of the differential Higgs spectra from emissions which either probe quark loops or are associated with quark-initiated production. We derive constraints using data from LHC run I, and we explore the prospects of our method at future LHC runs. Finally, we comment on the possibility of bounding the strange Yukawa coupling.
Time-response shaping using output to input saturation transformation
NASA Astrophysics Data System (ADS)
Chambon, E.; Burlion, L.; Apkarian, P.
2018-03-01
For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, it may be observed that the obtained controller does not enforce time-domain requirements, among them the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is shown that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy-bounded disturbance. An application to a linear ball and beam model is presented.
Constraining astrophysical neutrino flavor composition from leptonic unitarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Xun-Jie; He, Hong-Jian; Rodejohann, Werner, E-mail: xunjie.xu@gmail.com, E-mail: hjhe@tsinghua.edu.cn, E-mail: werner.rodejohann@mpi-hd.mpg.de
2014-12-01
The recent IceCube observation of ultra-high-energy astrophysical neutrinos has begun the era of neutrino astronomy. In this work, using the unitarity of the leptonic mixing matrix, we derive nontrivial unitarity constraints on the flavor composition of astrophysical neutrinos detected by IceCube. Applying leptonic unitarity triangles, we deduce these unitarity bounds from geometrical conditions, such as triangular inequalities. These new bounds generally hold for three flavor neutrinos, and are independent of any experimental input or the pattern of lepton mixing. We apply our unitarity bounds to derive general constraints on the flavor compositions for three types of astrophysical neutrino sources (and their general mixture), and compare them with the IceCube measurements. Furthermore, we prove that for any sources without ν_τ neutrinos, a detected ν_μ flux ratio < 1/4 will require an initial flavor composition with more ν_e neutrinos than ν_μ neutrinos.
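The flavor-composition arithmetic underlying such bounds can be sketched with averaged oscillation probabilities, P(α→β) = Σ_i |U_αi|²|U_βi|². The |U|² values below are the tribimaximal pattern, used purely for illustration rather than the fitted mixing matrix the constraints are independent of.

```python
import numpy as np

# |U|^2 in the tribimaximal pattern (illustrative only).
# Rows: flavors e, mu, tau; columns: mass states 1, 2, 3.
U2 = np.array([[2/3, 1/3, 0.0],
               [1/6, 1/3, 1/2],
               [1/6, 1/3, 1/2]])

# Averaged oscillation probabilities P(alpha -> beta) over astrophysical baselines
P = U2 @ U2.T

# Flavor ratios at Earth for a pion-decay source (f_e, f_mu, f_tau) = (1/3, 2/3, 0)
pion_source = np.array([1/3, 2/3, 0.0])
at_earth = pion_source @ P
assert np.allclose(at_earth.sum(), 1.0)   # flavor fractions still sum to one
```

With this pattern a pion-decay source arrives at Earth with fractions close to (1/3, 1/3, 1/3), the benchmark against which the IceCube flux ratios are compared.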
NASA Technical Reports Server (NTRS)
Dahl, Roy W.; Keating, Karen; Salamone, Daryl J.; Levy, Laurence; Nag, Barindra; Sanborn, Joan A.
1987-01-01
This paper presents an algorithm (WHAMII) designed to solve the Artificial Intelligence Design Challenge at the 1987 AIAA Guidance, Navigation and Control Conference. The problem under consideration is a stochastic generalization of the traveling salesman problem in which travel costs can incur a penalty with a given probability. The variability in travel costs leads to a probability constraint with respect to violating the budget allocation. Given the small size of the problem (eleven cities), an approach is considered that combines partial tour enumeration with a heuristic city insertion procedure. For computational efficiency during both the enumeration and insertion procedures, precalculated binomial probabilities are used to determine an upper bound on the actual probability of violating the budget constraint for each tour. The actual probability is calculated for the final best tour, and additional insertions are attempted until the actual probability exceeds the bound.
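Under a simplified model in which every leg of a tour independently incurs the same penalty with the same probability, the budget-violation probability that WHAMII bounds can be computed exactly from the binomial distribution. This is a hedged sketch: the function name and the uniform-penalty assumption are ours, not the paper's.

```python
from math import comb

def prob_budget_violation(base_cost, n_legs, p_penalty, penalty, budget):
    """Probability that tour cost exceeds the budget when each of n_legs legs
    independently incurs an extra `penalty` with probability p_penalty."""
    if base_cost > budget:
        return 1.0
    # Smallest number of penalized legs that breaks the budget
    k_min = max(0, int((budget - base_cost) // penalty) + 1)
    return sum(
        comb(n_legs, k) * p_penalty**k * (1 - p_penalty) ** (n_legs - k)
        for k in range(k_min, n_legs + 1)
    )

# A 10-leg tour, base cost 90, budget 100, penalty 5 with probability 0.2:
# the budget is violated as soon as 3 or more legs are penalized.
p = prob_budget_violation(90, 10, 0.2, 5, 100)
assert abs(p - 0.3222) < 1e-3
```

Precomputing such tail probabilities once, as the algorithm does, lets each candidate tour be screened against the chance constraint without re-evaluating the full distribution.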
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; the resulting networks are known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while achieving high system throughput in stringently resource-constrained WVSNs.
Robust allocation of a defensive budget considering an attacker's private information.
Nikoofal, Mohammad E; Zhuang, Jun
2012-05-01
Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker, when the attacker's valuation of targets, the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To the best of our knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with the budget of uncertainty and the price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
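The baseline decomposition via Boole's inequality can be sketched for Gaussian-distributed voltages: since P(any violation) ≤ Σ P(violation_i), splitting a joint risk budget ε evenly across m limits and enforcing each with risk ε/m is sufficient, at the cost of the conservatism the paper's tighter bound reduces. Function name, numbers, and the even risk split are illustrative.

```python
from statistics import NormalDist

def deterministic_limits(sigmas, v_max, eps):
    """Per-node deterministic voltage limits that guarantee the joint chance
    constraint P(all nodes <= v_max) >= 1 - eps, via Boole's inequality."""
    m = len(sigmas)
    z = NormalDist().inv_cdf(1 - eps / m)   # per-constraint quantile after risk split
    # Each Gaussian voltage must keep a z*sigma back-off from the hard limit
    return [v_max - z * s for s in sigmas]

# Three nodes, 5% joint overvoltage risk: the back-off grows with forecast sigma
limits = deterministic_limits([0.01, 0.02, 0.015], 1.05, 0.05)
assert limits[1] < limits[2] < limits[0]   # noisier node gets the tighter limit
```

These tightened limits then replace the original voltage bounds in the deterministic AC optimal power flow.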
NASA Astrophysics Data System (ADS)
Sanghyun, Ahn; Seungwoong, Ha; Kim, Soo Yong
2016-06-01
A vital challenge for many socioeconomic systems is determining the optimum use of limited information. Traffic systems, wherein the range of resources is limited, are a particularly good example of this challenge. Based on bounded information accessibility due to, for example, high costs or technical limitations, we develop a new optimization strategy to improve the efficiency of a traffic system with signals and intersections. Numerous studies, including the study by Chowdhury and Schadschneider (whose method we denote by ChSch), have attempted to achieve the maximum vehicle speed or the minimum wait time for a given traffic condition. In this paper, we introduce a modified version of ChSch with an independently functioning, decentralized control system. With the new model, we determine the optimization strategy under bounded information accessibility, which proves the existence of an optimal point for phase transitions in the system. The paper also provides insight that can be applied by traffic engineers to create more efficient traffic systems by analyzing the area and symmetry of local sites. We support our results with a statistical analysis using empirical traffic data from Seoul, Korea.
Performance bounds for nonlinear systems with a nonlinear ℒ2-gain property
NASA Astrophysics Data System (ADS)
Zhang, Huan; Dower, Peter M.
2012-09-01
Nonlinear ℒ2-gain is a finite gain concept that generalises the notion of conventional (linear) finite ℒ2-gain, admitting the application of ℒ2-gain analysis tools to a broader class of nonlinear systems. The computation of tight comparison function bounds for this nonlinear ℒ2-gain property is important in applications such as small gain design. This article presents an approximation framework for these comparison function bounds through the formulation and solution of an optimal control problem. Key to the solution of this problem is the lifting of an ℒ2-norm input constraint, which is facilitated via the introduction of an energy saturation operator. This admits the solution of the optimal control problem of interest via dynamic programming and associated numerical methods, leading to the computation of the proposed bounds. Two examples are presented to demonstrate this approach.
Thermodynamics of Computational Copying in Biochemical Systems
NASA Astrophysics Data System (ADS)
Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein
2017-04-01
Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.
Midfield wireless powering of subwavelength autonomous devices.
Kim, Sanghoek; Ho, John S; Poon, Ada S Y
2013-05-17
We obtain an analytical bound on the efficiency of wireless power transfer to a weakly coupled device. The optimal source is solved for a multilayer geometry in terms of a representation based on the field equivalence principle. The theory reveals that optimal power transfer exploits the properties of the midfield to achieve efficiencies far greater than conventional coil-based designs. As a physical realization of the source, we present a slot array structure whose performance closely approaches the theoretical bound.
Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2000-01-01
Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.
Optimized tomography of continuous variable systems using excitation counting
NASA Astrophysics Data System (ADS)
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved which satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Manhard, Mary Kate; Harkins, Kevin D; Gochberg, Daniel F; Nyman, Jeffry S; Does, Mark D
2017-03-01
MRI of cortical bone has the potential to offer new information about fracture risk. Current methods are typically performed with 3D acquisitions, which suffer from long scan times and are generally limited to extremities. This work proposes using 2D UTE with half pulses for quantitatively mapping bound and pore water in cortical bone. Half-pulse 2D UTE methods were implemented on a 3T Philips Achieva scanner using an optimized slice-select gradient waveform, with preparation pulses to selectively image bound or pore water. The 2D methods were quantitatively compared with previously implemented 3D methods in the tibia in five volunteers. The mean difference between bound and pore water concentration acquired from 3D and 2D sequences was 0.6 and 0.9 mol ¹H/L bone (3 and 12%, respectively). While 2D pore water methods tended to slightly overestimate concentrations relative to 3D methods, differences were less than scan-rescan uncertainty and expected differences between healthy and fracture-prone bones. Quantitative bound and pore water concentration mapping in cortical bone can be accelerated by 2 orders of magnitude using 2D protocols with optimized half-pulse excitation. Magn Reson Med 77:945-950, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks
Bistra Dilkina; Rachel Houtman; Carla P. Gomes; Claire A. Montgomery; Kevin S. McKelvey; Katherine Kendall; Tabitha A. Graves; Richard Bernstein; Michael K. Schwartz
2016-01-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a...
NASA Technical Reports Server (NTRS)
Downie, John D.
1995-01-01
Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved in updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.
Variational Gaussian approximation for Poisson data
NASA Astrophysics Data System (ADS)
Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen
2018-02-01
The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
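The lower bound described above (the model evidence bound obtained by minimizing the Kullback-Leibler divergence) can be made concrete in one dimension. The sketch below is an illustrative univariate instance, not the paper's multivariate formulation: y ~ Poisson(exp(x)) with prior x ~ N(mu0, s0²) and variational family q(x) = N(m, s2).

```python
import math

def elbo(y, m, s2, mu0=0.0, s0=1.0):
    """Evidence lower bound for y ~ Poisson(exp(x)), x ~ N(mu0, s0^2),
    with variational approximation q(x) = N(m, s2)."""
    # E_q[log p(y|x)] = y*m - E_q[e^x] - log(y!), with E_q[e^x] = exp(m + s2/2)
    expected_loglik = y * m - math.exp(m + s2 / 2.0) - math.lgamma(y + 1)
    # KL(q || prior) between two univariate Gaussians, in closed form
    kl = 0.5 * (s2 / s0**2 + (m - mu0) ** 2 / s0**2 - 1.0 + math.log(s0**2 / s2))
    return expected_loglik - kl

# Any (m, s2) gives a valid lower bound on the log-evidence; the optimal
# Gaussian approximation is the maximizer over (m, s2).
b1 = elbo(3, 1.0, 0.1)  # a choice closer to the posterior mode for y = 3
b2 = elbo(3, 0.0, 1.0)  # the prior itself as the variational candidate
```

Maximizing this bound over (m, s2), e.g. by the alternating maximization the abstract describes, tightens the approximation; here b1 exceeds b2, so the first candidate is the better Gaussian approximation of the two.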
NASA Astrophysics Data System (ADS)
Tyson, Jon
2009-03-01
We prove a concise factor-of-2 estimate for the failure rate of optimally distinguishing an arbitrary ensemble of mixed quantum states, generalizing work of Holevo [Theor. Probab. Appl. 23, 411 (1978)] and Curlander [Ph.D. Thesis, MIT, 1979]. A modification to the minimal principle of Concha and Poor [Proceedings of the 6th International Conference on Quantum Communication, Measurement, and Computing (Rinton, Princeton, NJ, 2003)] is used to derive a suboptimal measurement which has an error rate within a factor of 2 of the optimal by construction. This measurement is quadratically weighted and has appeared as the first iterate of a sequence of measurements proposed by Ježek et al. [Phys. Rev. A 65, 060301 (2002)]. Unlike the so-called pretty good measurement, it coincides with Holevo's asymptotically optimal measurement in the case of nonequiprobable pure states. A quadratically weighted version of the measurement bound of Barnum and Knill [J. Math. Phys. 43, 2097 (2002)] is proven. Bounds on the distinguishability of syndromes in the sense of Schumacher and Westmoreland [Phys. Rev. A 56, 131 (1997)] appear as a corollary. An appendix relates our bounds to the trace-Jensen inequality.
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log2N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (lnN-1)/π ≈ 0.221 log2N and the upper bound of 0.433 log2N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm restraints. With these restraints, one can find Laurent polynomials for various k (queries) and N (database sizes), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, ensuring that further improvements will likely be made to reach the theorized lower bound.
NASA Astrophysics Data System (ADS)
Milton, Graeme W.; Camar-Eddine, Mohamed
2018-05-01
For a composite containing one isotropic elastic material, with positive Lamé moduli, and void, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average stress, σ0, Gibiansky, Cherkaev, and Allaire provided a sharp lower bound Wf(σ0) on the minimum compliance energy σ0:ɛ0, in which ɛ0 is the average strain. Here we show these bounds also provide sharp bounds on the possible (σ0, ɛ0)-pairs that can coexist in such composites, and thus solve the weak G-closure problem for 3d-printed materials. The materials we use to achieve the extremal (σ0, ɛ0)-pairs are denoted as near optimal pentamodes. We also consider two-phase composites containing this isotropic elasticity material and a rigid phase with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average strain, ɛ0. For such composites, Allaire and Kohn provided a sharp lower bound W˜f(ɛ0) on the minimum elastic energy σ0:ɛ0. We show that these bounds also provide sharp bounds on the possible (σ0, ɛ0)-pairs that can coexist in such composites of the elastic and rigid phases, and thus solve the weak G-closure problem in this case too. The materials we use to achieve these extremal (σ0, ɛ0)-pairs are denoted as near optimal unimodes.
Astrophysical Model Selection in Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in the fields of medicine, industrial nondestructive testing and security inspection. Material decomposition is an important issue for a spectral CT to discriminate materials. Because of the spectrum overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is constrained on coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is executed. Both numerical and physical experiments yield visibly better reconstructions than a general direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thus improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as for cases with more than two energy channels.
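The quantity being constrained above, total variation, has a simple discrete form; a minimal sketch (the anisotropic discrete TV, shown on a toy 2×2 image, not the paper's ADMM solver) illustrates why minimizing it suppresses decomposition noise:

```python
def total_variation(img):
    """Anisotropic discrete TV of a 2-D image (list of rows): the sum of
    absolute differences between horizontal and vertical neighbours."""
    tv = 0.0
    rows, cols = len(img), len(img[0])
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:
                tv += abs(img[i + 1][j] - img[i][j])
            if j + 1 < cols:
                tv += abs(img[i][j + 1] - img[i][j])
    return tv

# A flat coefficient image has zero TV; pixel-to-pixel noise raises it,
# so a TV constraint penalizes noisy decompositions while leaving
# piecewise-constant (edge-preserving) structure untouched.
flat = [[1.0, 1.0], [1.0, 1.0]]
noisy = [[1.0, 1.3], [0.8, 1.1]]
tv_flat, tv_noisy = total_variation(flat), total_variation(noisy)
```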
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, thus gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. There have been several suggested penalty functions in the literature, each with its own strengths and weaknesses.
A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
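As one concrete instance of the penalty-function idea discussed above, an exterior quadratic penalty (one common choice among the several in the literature; the toy problem and names are illustrative, not from the study) converts a constrained problem into an unconstrained one the GA can score directly:

```python
def penalized_objective(x, f, constraints, r=100.0):
    """Exterior quadratic penalty: adds r * sum(max(0, g_i(x))^2) for
    constraints written as g_i(x) <= 0, so infeasible points score worse
    while feasible points keep their original objective value."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + r * violation

# Toy problem: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
feasible = penalized_objective(1.5, f, [g])    # no violation: plain f(1.5)
infeasible = penalized_objective(0.5, f, [g])  # violation 0.5 is penalized
```

The penalty weight r trades feasibility pressure against objective fidelity, which is exactly the tuning difficulty that motivates the statistical comparison of penalty functions described above.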
Constraining star formation through redshifted CO and CII emission in archival CMB data
NASA Astrophysics Data System (ADS)
Switzer, Eric
LCDM is a strikingly successful paradigm to explain the CMB anisotropy and its evolution into observed galaxy clustering statistics. The formation and evolution of galaxies within this context is more complex and only partly characterized. Measurements of the average star formation and its precursors over cosmic time are required to connect theories of galaxy evolution to LCDM evolution. The fine structure transition in CII at 158 µm traces star formation rates and the ISM radiation environment. Cold, molecular gas fuels star formation and is traced well by a ladder of CO emission lines. Catalogs of emission lines in individual galaxies have provided the most information about CII and CO to date but are subject to selection effects. Intensity mapping is an alternative approach to measuring line emission. It surveys the sum of all line radiation as a function of redshift, and requires angular resolution to reach cosmologically interesting scales, but not to resolve individual sources. It directly measures moments of the luminosity function from all emitting objects. Intensity mapping of CII and CO can perform an unbiased census of stars and cold gas across cosmic time. We will use archival COBE-FIRAS and Planck data to bound or measure cosmologically redshifted CII and CO line emission through 1) the monopole spectrum, 2) cross-power between FIRAS/Planck and public galaxy survey catalogs from BOSS and the 2MASS redshift surveys, 3) auto-power of the FIRAS/Planck data itself. FIRAS is unique in its spectral range and all-sky coverage, provided by the space-borne FTS architecture. In addition to sensitivity to a particular emission line, intensity mapping is sensitive to all other contributions to surface brightness. We will remove CMB and foreground spatial and spectral templates using models from WMAP and Planck data. Interlopers and residual foregrounds additively bias the auto-power and monopole, but both can still be used to provide rigorous upper bounds. 
The cross-power with galaxy surveys directly constrains the redshifted line emission. Residual foregrounds and interlopers increase errors but do not add bias. There are 300 resolution elements of the 7 degree FIRAS top-hat inside the BOSS quasar survey, spanning 66 spectral pixels to z ≈ 2. While the FIRAS noise per voxel is 200 times brighter than the expected peak cosmological CII emission, √N averaging of the spatial and spectral modes above results in a gain of 140. Intensity mapping is in its infancy, with predictions for the surface brightness of line emission ranging over an order of magnitude, and limited knowledge of the intensity-weighted bias. Even if only upper bounds are possible, they complement existing measurements of individual galaxies, which can constitute a lower bound because they measure only a portion of the luminosity function. FIRAS and Planck provide unique opportunities to pursue CII and CO intensity mapping with well-characterized instruments that overlap with galaxy surveys in angular coverage and redshift. We will re-analyze the FIRAS data to optimize sensitivity and robustness, developing a spectral line response model, splitting the data into sub-missions to isolate noise properties, and re-evaluating data cuts. The tools and results here will support future survey concepts with significantly lower noise, such as PIXIE, PRISM, SPHEREX and proposed suborbital experiments designed specifically for intensity mapping. There is a growing appreciation that many phenomena could lie just below the published FIRAS bounds. The proposed work is an early step toward this new science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Herrero-Garcia, Juan; Schwetz, Thomas, E-mail: emb@kth.se, E-mail: juhg@kth.se, E-mail: schwetz@fysik.su.se
We show that a positive signal in a dark matter (DM) direct detection experiment can be used to place a lower bound on the DM capture rate in the Sun, independent of the DM halo. For a given particle physics model and DM mass we obtain a lower bound on the capture rate independent of the local DM density, velocity distribution, galactic escape velocity, as well as the scattering cross section. We illustrate this lower bound on the capture rate by assuming that upcoming direct detection experiments will soon obtain a significant signal. When comparing the lower bound on the capture rate with limits on the high-energy neutrino flux from the Sun from neutrino telescopes, we can place upper limits on the branching fraction of DM annihilation channels leading to neutrinos. With current data from IceCube and Super-Kamiokande non-trivial limits can be obtained for spin-dependent interactions and direct annihilations into neutrinos. In some cases annihilations into ττ or bb also start getting constrained. For spin-independent interactions current constraints are weak, but they may become interesting for data from future neutrino telescopes.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
1989-01-01
A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
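The envelope computed above uses the standard Kreisselmeir-Steinhauser form, KS_ρ(g) = (1/ρ) ln Σ exp(ρ g_i), a smooth, conservative surrogate for the maximum of the function set. A minimal sketch (the sample values are invented for illustration):

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeir-Steinhauser envelope: a differentiable upper bound on
    max(values) that tightens as rho grows (within ln(n)/rho of the max)."""
    m = max(values)  # subtract the max before exponentiating, for stability
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

# Envelope of a combined set of reduced objectives and constraints;
# minimizing ks(...) unconstrained drives down the worst member smoothly.
g = [-0.2, 0.35, 0.1]
env = ks(g)  # slightly above max(g) = 0.35
```

Because the envelope is smooth everywhere, ordinary unconstrained minimizers can be applied to it directly, which is the point of the indirect method described above.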
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability-constrained, geometrically nonlinear shallow trusses with snap-through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load that causes snap-through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in the literature are presented to verify the method.
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
Optimal lifting ascent trajectories for the space shuttle
NASA Technical Reports Server (NTRS)
Rau, T. R.; Elliott, J. R.
1972-01-01
The performance gains which are possible through the use of optimal trajectories for a particular space shuttle configuration are discussed. The spacecraft configurations and aerodynamic characteristics are described. Shuttle mission payload capability is examined with respect to the optimal orbit inclination for unconstrained, constrained, and nonlifting conditions. The effects of velocity loss and heating rate on the optimal ascent trajectory are investigated.
Hatton, Leslie; Warr, Gregory
2015-01-01
That the physicochemical properties of amino acids constrain the structure, function and evolution of proteins is not in doubt. However, principles derived from information theory may also set bounds on the structure (and thus also the evolution) of proteins. Here we analyze the global properties of the full set of proteins in release 13-11 of the SwissProt database, showing by experimental test of predictions from information theory that their collective structure exhibits properties that are consistent with their being guided by a conservation principle. This principle (Conservation of Information) defines the global properties of systems composed of discrete components each of which is in turn assembled from discrete smaller pieces. In the system of proteins, each protein is a component, and each protein is assembled from amino acids. Central to this principle is the inter-relationship of the unique amino acid count and total length of a protein and its implications for both average protein length and occurrence of proteins with specific unique amino acid counts. The unique amino acid count is simply the number of distinct amino acids (including those that are post-translationally modified) that occur in a protein, and is independent of the number of times that the particular amino acid occurs in the sequence. Conservation of Information does not operate at the local level (it is independent of the physicochemical properties of the amino acids) where the influences of natural selection are manifest in the variety of protein structure and function that is well understood. Rather, this analysis implies that Conservation of Information would define the global bounds within which the whole system of proteins is constrained; thus it appears to be acting to constrain evolution at a level different from natural selection, a conclusion that appears counter-intuitive but is supported by the studies described herein.
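The unique amino acid count central to the analysis above is straightforward to compute; a minimal illustration (the sequence is invented for the example, and post-translationally modified residues would simply count as additional distinct symbols):

```python
def unique_amino_acid_count(sequence):
    """Number of distinct residues in a protein sequence, independent of
    how many times each residue occurs (the quantity the analysis above
    relates to total protein length)."""
    return len(set(sequence))

# 10 residues but only 6 distinct ones: M, K, V, L, A, G
count = unique_amino_acid_count("MKVLAAGLLK")
```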
Artifact reduction in short-scan CBCT by use of optimization-based reconstruction
Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y; Pan, Xiaochuan
2017-01-01
Increasing interest in optimization-based reconstruction exists in research on, and applications of, cone-beam computed tomography (CBCT) because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp–Davis–Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction, specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. The results of this investigation reveal that appropriately designed optimization-based reconstruction, including image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration. PMID:27046218
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system with a much smaller search space than the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. We proposed solving the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to obtain a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so that problems with a larger number of constraints and larger networks can be easily adapted and solved.
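The penalty-function idea — trading the integer constraint for a continuous term — can be sketched as follows (a toy knapsack-style contact selection with an illustrative ramped x(1-x) integrality penalty and a rounding/repair step, not the paper's formulation; a real pipeline would refine the continuous result with SQP):

```python
def solve_schedule(values, times, budget, iters=4000, step=0.01,
                   rho=10.0, mu_max=10.0):
    """Relax binary pick/no-pick variables to [0,1], replace integrality by a
    ramped x*(1-x) penalty, run projected gradient descent, then round and
    greedily repair any remaining capacity violation."""
    n = len(values)
    x = [0.5] * n
    for k in range(iters):
        mu = mu_max * k / iters  # ramp the integrality penalty upward
        over = max(0.0, sum(t * xi for t, xi in zip(times, x)) - budget)
        for i in range(n):
            # value pull + integrality push toward the nearest bound + capacity push
            g = -values[i] + mu * (1 - 2 * x[i]) + 2 * rho * over * times[i]
            x[i] = min(1.0, max(0.0, x[i] - step * g))
    picked = [1 if xi > 0.5 else 0 for xi in x]
    # repair: drop lowest value/time picks until the capacity constraint holds
    while sum(t * p for t, p in zip(times, picked)) > budget:
        worst = min((values[i] / times[i], i) for i in range(n) if picked[i])[1]
        picked[worst] = 0
    return picked

# four candidate contacts with values, durations, and a 5-unit budget
picked = solve_schedule(values=[6, 5, 4, 3], times=[3, 2, 2, 1], budget=5)
print(picked)
```

The rounding/repair step guarantees a feasible binary schedule even when the continuous iterate hovers near the capacity boundary.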
Representations of Intervals and Optimal Error Bounds.
1980-07-01
[OCR fragment; most of the scanned abstract is unrecoverable.] Keywords: geometric and harmonic means, excess width. Work Unit Number 3 (Numerical Analysis and Computer Science). Sponsored by the United States Army. The report presents an example of an optimal point and error bound, following which the general theory is discussed.
NASA Astrophysics Data System (ADS)
Yassaghi, A.; Naeimi, A.
2011-08-01
Analysis of the Gachsar structural sub-zone has been carried out to constrain the structural evolution of the central Alborz range, situated in the central Alpine-Himalayan orogenic system. The sub-zone, bounded by the northward-dipping Kandovan Fault to the north and the southward-dipping Taleghan Fault to the south, is transversely cut by several sinistral faults. The Kandovan Fault, which separates the Eocene rocks in its footwall from the Paleozoic-Mesozoic units in its hanging wall, is interpreted as an inverted basin-bounding fault. Structural evidence includes the presence of a thin-skinned imbricate thrust system propagated from a detachment zone that acts as a footwall shortcut thrust, the development of large synclines in the fault footwall, and back thrusts and pop-up structures on the fault hanging wall. Kinematics of the inverted Kandovan Fault and its accompanying structures constrain the N-S shortening direction proposed for the Alborz range until the Late Miocene. The transverse sinistral faults, which lie at an acute angle of 15° to a major magnetic lineament representing a basement fault, are interpreted to have developed as synthetic Riedel shears in the cover sequences during reactivation of the basement fault. This overprinting of the transverse faults on the earlier inverted extensional fault has occurred since the Late Miocene, when the south Caspian basin block attained a SSW movement relative to central Iran. Therefore, recent deformation in the range is a result of basement transverse-fault reactivation.
Program Aids Analysis And Optimization Of Design
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1994-01-01
NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.
Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)
2002-01-01
In this work, we have focused on fast bound methods for large scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties, valid in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce vibration in rotating structures. However, most existing research models use a complex modulus approach to model the viscoelastic material, and an additional iterative approach, available only in the frequency domain, has to be used to include the material's frequency dependency. It is therefore meaningful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF), which capture the frequency dependency in both the time and frequency domains. Also, unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material, serving as both the self-sensing sensor and the actuator under a linear quadratic regulator (LQR) controller. After comparison with verified data, this newly proposed finite element model is validated and can be used for future research.
Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
Report presents extension of theory of redundancy of binary prefix code of Huffman type, including derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
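The classical entropy bounds H ≤ L < H + 1 on the expected length L of a Huffman code are easy to verify numerically (a textbook Huffman construction, independent of the report's sharper probability-dependent bounds; the source distribution is illustrative):

```python
import heapq
import math
from itertools import count

def huffman_lengths(probs):
    """Return codeword lengths of an optimal binary (Huffman) prefix code."""
    tie = count()  # tie-breaker so the heap never compares the symbol lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # merge the two least-probable nodes
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # every symbol below gains one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
H = -sum(p * math.log2(p) for p in probs)
print(H <= L < H + 1)   # True: the redundancy L - H lies in [0, 1)
```

For this source L = 2.2 bits against H ≈ 2.12 bits, so the redundancy is ≈ 0.08 — much closer to 0 than to 1, as the report notes is typical.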
Hacker, David E; Hoinka, Jan; Iqbal, Emil S; Przytycka, Teresa M; Hartman, Matthew C T
2017-03-17
Highly constrained peptides such as the knotted peptide natural products are promising medicinal agents because of their impressive biostability and potent activity. Yet, libraries of highly constrained peptides are challenging to prepare. Here, we present a method which utilizes two robust, orthogonal chemical steps to create highly constrained bicyclic peptide libraries. This technology was optimized to be compatible with in vitro selections by mRNA display. We performed side-by-side monocyclic and bicyclic selections against a model protein (streptavidin). Both selections resulted in peptides with mid-nanomolar affinity, and the bicyclic selection yielded a peptide with remarkable protease resistance.
NASA Astrophysics Data System (ADS)
Lee, Dae Young
The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraints, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, design optimization systems are rarely integrated with operational optimization and with maximizing the utility of the spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique called the "Attitude Sphere" is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes both the design and the operation of the EPS, the Attitude Determination and Control System (ADCS), and the communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances.
To address this issue, for the ADCS operations, controllers based on Model Predictive Control, which are effective for constraint handling, were developed and implemented. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness and capability of the methodology to enhance CADRE's capabilities, and its ability to be applied to a variety of missions.
Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?
Newberry, Mitchell G.; Savage, Van M.
2016-01-01
Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691
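The junction-level material-cost optimality tested here reduces, for the power-loss/volume trade-off, to Murray's law: the exponent x solving r_parent^x = Σ r_child^x should be near 3 at an optimal junction. A minimal sketch (bisection on a single junction; the radii are illustrative, not data from Angicart):

```python
def junction_exponent(r_parent, r_children, lo=0.5, hi=6.0, tol=1e-9):
    """Solve sum(r_c**x) == r_parent**x for x by bisection.
    Murray's law (power-loss/material-cost optimality) predicts x = 3."""
    def f(x):
        return sum(r ** x for r in r_children) - r_parent ** x
    a, b = lo, hi
    # f is decreasing in x when every child radius is below the parent radius
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# a symmetric, Murray-optimal junction: r_child = r_parent / 2**(1/3)
r0 = 1.0
rc = r0 / 2 ** (1 / 3)
x = junction_exponent(r0, [rc, rc])
print(round(x, 3))   # 3.0
```

Fitting this exponent at measured junctions, and comparing its distribution against constrained-random alternatives, is the kind of junction-level test the study performs at scale.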
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Baryon-baryon interactions and spin-flavor symmetry from lattice quantum chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagman, Michael L.; Winter, Frank; Chang, Emmanuel
Lattice quantum chromodynamics is used to constrain the interactions of two octet baryons at the SU(3) flavor-symmetric point, with quark masses that are heavier than those in nature (equal to the physical strange quark mass and corresponding to a pion mass of ≈ 806 MeV). Specifically, the S-wave scattering phase shifts of two-baryon systems at low energies are obtained with the application of Lüscher's formalism, mapping the energy eigenvalues of two interacting baryons in a finite volume to the two-particle scattering amplitudes below the relevant inelastic thresholds. The values of the leading-order low-energy scattering parameters in the irreducible representations of SU(3) are consistent with an approximate SU(6) spin-flavor symmetry in the nuclear and hypernuclear forces that is predicted in the large-N_c limit of QCD. The two distinct SU(6)-invariant interactions between two baryons are constrained at this value of the quark masses, and their values indicate an approximate accidental SU(16) symmetry. The SU(3) irreducible representations containing the NN (¹S₀), NN (³S₁) and (Ξ⁰n + Ξ⁻p)/√2 (³S₁) channels unambiguously exhibit a single bound state, while the irreducible representation containing the Σ⁺p (³S₁) channel exhibits a state that is consistent with either a bound state or a scattering state close to threshold. These results are in agreement with the previous conclusions of the NPLQCD collaboration regarding the existence of two-nucleon bound states at this value of the quark masses.
A hierarchical transition state search algorithm
NASA Astrophysics Data System (ADS)
del Campo, Jorge M.; Köster, Andreas M.
2008-07-01
A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.
Bounding the space of holographic CFTs with chaos
Perlmutter, Eric
2016-10-13
In this study, thermal states of quantum systems with many degrees of freedom are subject to a bound on the rate of onset of chaos, including a bound on the Lyapunov exponent, λ_L ≤ 2π/β. We harness this bound to constrain the space of putative holographic CFTs and their would-be dual theories of AdS gravity. First, by studying out-of-time-order four-point functions, we discuss how λ_L = 2π/β in ordinary two-dimensional holographic CFTs is related to properties of the OPE at strong coupling. We then rule out the existence of unitary, sparse two-dimensional CFTs with large central charge and a set of higher spin currents of bounded spin; this implies the inconsistency of weakly coupled AdS₃ higher spin gravities without infinite towers of gauge fields, such as the SL(N) theories. This fits naturally with the structure of higher-dimensional gravity, where finite towers of higher spin fields lead to acausality. On the other hand, unitary CFTs with classical W_∞[λ] symmetry, dual to 3D Vasiliev or hs[λ] higher spin gravities, do not violate the chaos bound, instead exhibiting no chaos: λ_L = 0. Independently, we show that such theories violate unitarity for |λ| > 2. These results encourage a tensionless string theory interpretation of the 3D Vasiliev theory.
Information technologies for taking into account risks in business development programme
NASA Astrophysics Data System (ADS)
Kalach, A. V.; Khasianov, R. R.; Rossikhina, L. V.; Zybin, D. G.; Melnik, A. A.
2018-05-01
The paper describes information technologies for taking risks into account in a business development programme, relying on an algorithm for assessing programme project risks and an algorithm for forming the programme with constrained financing of high-risk projects taken into account. A lower-bound estimation method is suggested for subsets of solutions. The corresponding theorem and lemma and their proofs are given.
Top ten models constrained by b → sγ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewett, J.L.
1994-12-01
The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints that may be placed on these models by the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.
Bounded-Influence Inference in Regression.
1984-02-01
[Abstract fragment] …be viewed as a generalization of the classical F-test. By means of the influence function, their robustness properties are investigated, and optimally robust tests that maximize the asymptotic power within each class, under the side condition of a bounded influence function, are constructed.
Thermodynamic constraint on the depth of the global tropospheric circulation.
Thompson, David W J; Bony, Sandrine; Li, Ying
2017-08-01
The troposphere is the region of the atmosphere characterized by low static stability, vigorous diabatic mixing, and widespread condensational heating in clouds. Previous research has argued that in the tropics, the upper bound on tropospheric mixing and clouds is constrained by the rapid decrease with height of the saturation water vapor pressure and hence radiative cooling by water vapor in clear-sky regions. Here the authors contend that the same basic physics play a key role in constraining the vertical structure of tropospheric mixing, tropopause temperature, and cloud-top temperature throughout the globe. It is argued that radiative cooling by water vapor plays an important role in governing the depth and amplitude of large-scale dynamics at extratropical latitudes.
Constraints on Massive Axion-Like Particles from X-ray Observations of NGC1275
NASA Astrophysics Data System (ADS)
Chen, Linhan; Conlon, Joseph P.
2018-06-01
If axion-like particles (ALPs) exist, photons can convert to ALPs on passage through regions containing magnetic fields. The magnetised intracluster medium of large galaxy clusters provides a region that is highly efficient at ALP-photon conversion. X-ray observations of Active Galactic Nuclei (AGNs) located within galaxy clusters can be used to search for and constrain ALPs, as photon-ALP conversion would lead to energy-dependent quasi-sinusoidal modulations in the X-ray spectrum of an AGN. We use Chandra observations of the central AGN of the Perseus Cluster, NGC1275, to place bounds on massive ALPs up to m_a ∼ 10⁻¹¹ eV, extending previous work that used this dataset to constrain massless ALPs.
NASA Astrophysics Data System (ADS)
Jungman, Gerard
1992-11-01
Yukawa-coupling-constant unification together with the known fermion masses is used to constrain SO(10) models. We consider the case of one (heavy) generation, with the tree-level relation m_b = m_τ, calculating the limits on the intermediate scales due to the known limits on fermion masses. This analysis extends previous analyses which addressed only the simplest symmetry-breaking schemes. In the case where the low-energy model is the standard model with one Higgs doublet, there are very strong constraints due to the known limits on the top-quark mass and the τ-neutrino mass. The two-Higgs-doublet case is less constrained, though we can make progress in constraining this model also. We identify those parameters to which the viability of the model is most sensitive. We also discuss the "triviality" bounds on m_t obtained from the analysis of the Yukawa renormalization-group equations. Finally we address the role of a speculative constraint on the τ-neutrino mass, arising from the cosmological implications of anomalous B+L violation in the early Universe.
Tests of gravity with future space-based experiments
NASA Astrophysics Data System (ADS)
Sakstein, Jeremy
2018-03-01
Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
Vyumvuhore, Raoul; Tfayli, Ali; Duplan, Hélène; Delalleau, Alexandre; Manfait, Michel; Baillet-Guffroy, Arlette
2013-07-21
Skin hydration plays an important role in the optimal physical properties and physiological functions of the skin. Despite the advancements in the last decade, dry skin remains the most common characteristic of human skin disorders. Thus, it is important to understand the effect of hydration on Stratum Corneum (SC) components. In this respect, we correlate the variations of unbound and bound water content in the SC with structural and organizational changes in lipids and proteins using a non-invasive technique: Raman spectroscopy. Raman spectra were acquired on human SC at different relative humidity (RH) levels (4-75%). The content of different types of water, bound and free, was measured using the second derivative and curve fitting of the Raman bands in the range of 3100–3700 cm⁻¹. Changes in lipidic order were evaluated using the νC-C and νC-H bands. To analyze the effect of RH on the protein structure, we examined the Amide I region, the Fermi doublet of tyrosine, and the asymmetric CH₃ stretching vibration. The contributions of totally bound water were found not to vary with humidity, while partially bound water varied at three different rates. Unbound water increased greatly when all sites for bound water were saturated. Lipid organization as well as protein deployment was found to be optimal at intermediate RH values (around 60%), which correspond to the maximum of SC water binding capacity. This analysis highlights the relationship between bound water, the SC barrier state and the protein structure and elucidates the optimal conditions. Moreover, our results showed that increased content of unbound water in the SC induces disorder in the structures of lipids and proteins.
Akwabi-Ameyaw, Adwoa; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Wisely, G Bruce
2011-10-15
To further explore the optimum placement of the acid moiety in conformationally constrained analogs of GW 4064 1a, a series of stilbene replacements were prepared. The benzothiophene 1f and the indole 1g display the optimal orientation of the carboxylate for enhanced FXR agonist potency. Copyright © 2011 Elsevier Ltd. All rights reserved.
Finite Energy and Bounded Actuator Attacks on Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M
As control system networks are being connected to enterprise-level networks for remote monitoring, operation, and system-wide performance optimization, these same connections are providing vulnerabilities that can be exploited by malicious actors for attack, financial gain, and theft of intellectual property. Much effort in cyber-physical system (CPS) protection has focused on protecting the borders of the system through traditional information security techniques. Less effort has been applied to the protection of cyber-physical systems from intelligent attacks launched after an attacker has defeated the information security protections to gain access to the control system. In this paper, attacks on actuator signals are analyzed from a system-theoretic context. The threat surface is classified into finite energy and bounded attacks. These two broad classes encompass a large range of potential attacks. The effect of these attacks on linear quadratic (LQ) control is analyzed, and the optimal actuator attacks for both finite and infinite horizon LQ control are derived, so that the worst-case attack signals are obtained. The closed-loop system under the optimal attack signals is given, and a numerical example illustrating the effect of an optimal bounded attack is provided.
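A scalar sketch of the setting (not the paper's derivation): an LQR-controlled plant subject to a bounded additive actuator attack that always pushes the state away from the origin, inflating the LQ cost. The system, weights, and attack magnitude are illustrative assumptions:

```python
def lqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR gain via Riccati difference-equation iteration."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

def lq_cost(a, b, q, r, k, attack, x0=1.0, horizon=50):
    """Accumulated LQ cost when the actuator signal is corrupted by `attack`."""
    x, cost = x0, 0.0
    for t in range(horizon):
        u = -k * x
        cost += q * x * x + r * u * u
        x = a * x + b * (u + attack(t, x))
    return cost

a, b, q, r = 1.0, 1.0, 1.0, 1.0
k = lqr_gain(a, b, q, r)
nominal = lq_cost(a, b, q, r, k, attack=lambda t, x: 0.0)
# a bounded (|w| <= 0.5) attack that always drives the state away from zero
attacked = lq_cost(a, b, q, r, k, attack=lambda t, x: 0.5 if x >= 0 else -0.5)
print(attacked > nominal)   # True
```

Because the attack is bounded, the closed loop stays bounded as well, but the state is held away from the origin and the running cost grows with the horizon — the qualitative effect the paper's optimal bounded attacks maximize.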
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br
We push the limits of the direct use of partially entangled states to perform quantum teleportation by presenting several protocols, in many different scenarios, that achieve the optimal efficiency possible. We review and put into a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real-world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols developed here achieve such a bound. Highlights: optimal direct teleportation protocols using partially entangled states directly; all strategies of direct teleportation put into a single formalism; extension of these techniques to multipartite partially entangled states; upper bounds for the optimal efficiency of these protocols.
Performance analysis of optimal power allocation in wireless cooperative communication systems
NASA Astrophysics Data System (ADS)
Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li
2013-03-01
Cooperative communication has recently been proposed in wireless communication systems to exploit the inherent spatial diversity of relay channels. Amplify-and-forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple relay nodes using AF protocols, and we investigate the optimal allocation of power at the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. First, we derive a closed-form SER formulation for MPSK signals using the moment generating function and statistical approximations valid at high signal-to-noise ratio (SNR) for the system under study. We then find a tight lower bound that converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique based on mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.
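As an illustrative sketch of the power-allocation idea, a toy high-SNR SER bound for a single-relay AF link can be grid-searched for the best source/relay power split and compared with equal allocation. The harmonic-mean relayed-SNR form, the unit channel gains, and the noise level below are assumptions for illustration, not the paper's MGF-based MPSK expressions:

```python
def ser_upper_bound(p_src, p_rel, g_sd=1.0, g_sr=1.0, g_rd=1.0, n0=0.1):
    """Toy high-SNR SER bound for a single-relay AF link (illustrative
    stand-in for the paper's closed-form MPSK expression)."""
    snr_direct = p_src * g_sd / n0
    # harmonic-mean style relayed SNR, a common AF approximation
    snr_relay = (p_src * g_sr * p_rel * g_rd) / (n0 * (p_src * g_sr + p_rel * g_rd))
    return 1.0 / ((1.0 + snr_direct) * (1.0 + snr_relay))

def optimize_power(total=2.0, steps=200):
    """Grid-search the source/relay split minimizing the bound (OPA)."""
    return min((ser_upper_bound(a, total - a), a)
               for a in (total * k / steps for k in range(1, steps)))

ser_opa, p_opt = optimize_power()
ser_epa = ser_upper_bound(1.0, 1.0)   # equal power allocation (EPA)
assert ser_opa <= ser_epa             # OPA never does worse than EPA
```

On this toy bound the optimum shifts power toward the source, mirroring the qualitative OPA-vs-EPA gap reported in the simulations.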
An approach to optimal semi-active control of vibration energy harvesting based on MEMS
NASA Astrophysics Data System (ADS)
Rojas, Rafael A.; Carcaterra, Antonio
2018-07-01
In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective function is the absorption of the maximum amount of energy in a given time interval from a vibrating environment. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is an optimal control technique, Krotov's method, that has not yet been applied to engineering problems except in quantum dynamics. This approach leads to new maximum bounds on energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic, and capacitive circuits.
Class-specific Error Bounds for Ensemble Classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prenger, R; Lemmond, T; Varshney, K
2009-10-06
The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
On optimal strategies in event-constrained differential games
NASA Technical Reports Server (NTRS)
Heymann, M.; Rajan, N.; Ardema, M.
1985-01-01
Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
New Bounds on the Total-Squared-Correlation of Quaternary Signature Sets and Optimal Designs
2010-03-01
NASA Astrophysics Data System (ADS)
He, W.; Ju, W.; Chen, H.; Peters, W.; van der Velde, I.; Baker, I. T.; Andrews, A. E.; Zhang, Y.; Launois, T.; Campbell, J. E.; Suntharalingam, P.; Montzka, S. A.
2016-12-01
Carbonyl sulfide (OCS) is a promising novel atmospheric tracer for studying carbon cycle processes. OCS shares a similar uptake pathway with CO2 during photosynthesis but is not released through a respiration-like process, and thus can be used to partition Gross Primary Production (GPP) from Net Ecosystem-atmosphere CO2 Exchange (NEE). This study uses joint atmospheric observations of OCS and CO2 to constrain GPP and ecosystem respiration (Re). Flask data from tower and aircraft sites over North America are collected. We employ our recently developed CarbonTracker (CT)-Lagrange carbon assimilation system, which is based on the CT framework and the Weather Research and Forecasting - Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, and the Simple Biosphere model with simulated OCS (SiB3-OCS), which provides prior GPP, Re, and plant OCS uptake fluxes. Plant OCS fluxes derived from both a process model and a GPP-scaled model are tested in our inversion. To investigate the ability of OCS to constrain GPP and to understand the uncertainty propagated from OCS modeling errors to the constrained fluxes in a dual-tracer system including OCS and CO2, two inversion schemes are implemented and compared: (1) a two-step scheme, which first optimizes GPP using OCS observations, and then simultaneously optimizes GPP and Re using CO2 observations with the OCS-constrained GPP from the first step as prior; and (2) a joint scheme, which simultaneously optimizes GPP and Re using OCS and CO2 observations. We evaluate the result using GPP estimated from space-borne solar-induced fluorescence observations and a data-driven GPP upscaled from FLUXNET data with a statistical model (Jung et al., 2011). Preliminary results for the year 2010 show that the joint inversion makes simulated mole fractions more consistent with observations for both OCS and CO2. However, the uncertainty of the OCS simulation is larger than that of CO2.
The two-step and joint schemes perform similarly in improving consistency with observations for OCS, indicating that OCS can provide an independent constraint in the joint inversion. Optimization yields lower total GPP and Re but higher NEE when tested with prior CO2 fluxes from two biosphere models. This study provides in-depth insight into the role of joint atmospheric OCS and CO2 observations in constraining CO2 fluxes.
What Information Theory Says about Bounded Rational Best Response
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2005-01-01
Probability Collectives (PC) provides the information-theoretic extension of conventional full-rationality game theory to bounded rational games. Here an explicit solution to the equations giving the bounded rationality equilibrium of a game is presented. Then PC is used to investigate games in which the players use bounded rational best-response strategies. Next it is shown that in the continuum-time limit, bounded rational best response games result in a variant of the replicator dynamics of evolutionary game theory. It is then shown that for team (shared-payoff) games, this variant of replicator dynamics is identical to Newton-Raphson iterative optimization of the shared utility function.
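The bounded-rational (Boltzmann) best response and the continuum-limit replicator update described above can be sketched in a few lines; the payoff vector, inverse temperature, and Euler step size are illustrative assumptions:

```python
import math

def bounded_rational_response(payoffs, beta):
    """Boltzmann (bounded-rational) best response: beta -> infinity recovers
    the exact best response, beta -> 0 the uniform mixture."""
    w = [math.exp(beta * u) for u in payoffs]
    z = sum(w)
    return [x / z for x in w]

def replicator_step(p, payoffs, dt=0.1):
    """One Euler step of the replicator dynamics p_i' = p_i (u_i - <u>)."""
    avg = sum(pi * ui for pi, ui in zip(p, payoffs))
    return [pi + dt * pi * (ui - avg) for pi, ui in zip(p, payoffs)]

p = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    p = replicator_step(p, [1.0, 0.5, 0.0])
# probability mass concentrates on the highest-payoff strategy
assert p[0] > 0.95
```

For a shared-payoff (team) game every player iterates against the same utility, which is where the Newton-Raphson correspondence mentioned in the abstract arises.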
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2011-01-01
This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the s-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin, since it depends on the length of the time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound on the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.
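For intuition, the delay margin of the simplest input-delay loop can be read off from the phase margin, and a first-order Pade approximation of the delay checked against it. The integrator loop L(s) = k/s below is an illustrative stand-in, not the paper's adaptive system:

```python
import cmath
import math

def delay_margin_integrator(k):
    """For the loop L(s) = k/s the gain crossover is w_c = k rad/s and the
    phase margin is 90 degrees, so the time delay margin is (pi/2)/w_c."""
    return (math.pi / 2) / k

def pade1(tau, s):
    """First-order Pade approximation of the delay term exp(-tau*s)."""
    return (1 - tau * s / 2) / (1 + tau * s / 2)

k = 2.0
tau_max = delay_margin_integrator(k)        # pi/4 ~ 0.785 s
# at the crossover frequency, a delay equal to the margin should consume
# (about) the whole 90-degree phase margin; Pade introduces a small error
phase_lag = -cmath.phase(pade1(tau_max, 1j * k))
assert abs(phase_lag - math.pi / 2) < 0.3   # close to pi/2, Pade error small
```

The gap between the Pade phase lag and the exact value pi/2 is exactly the kind of approximation error the paper's comparison of the three estimation methods quantifies.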
New Mathematical Strategy Using Branch and Bound Method
NASA Astrophysics Data System (ADS)
Tarray, Tanveer Ahmad; Bhat, Muzafar Rasool
In this paper, the problem of optimal allocation in stratified random sampling is considered in the presence of nonresponse. The problem is formulated as a nonlinear programming problem (NLPP) and is solved using the branch and bound method. The results are also obtained using LINGO.
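A minimal sketch of branch and bound for integer allocation in stratified sampling, minimizing the estimator variance under a total sample-size budget. The stratum weights, standard deviations, budget, and the optimistic bounding rule are illustrative assumptions, not the paper's NLPP:

```python
def variance(n, W, S):
    """Stratified-sampling estimator variance (fpc terms ignored)."""
    return sum((w * s) ** 2 / nh for nh, w, s in zip(n, W, S))

def branch_and_bound(W, S, budget):
    """Depth-first branch and bound over integer allocations n_h >= 1
    with sum(n_h) <= budget."""
    best = {"v": float("inf"), "n": None}

    def lower_bound(assigned, remaining_idx):
        # optimistic: every unassigned stratum gets the whole leftover budget
        v = sum((W[i] * S[i]) ** 2 / nh for i, nh in enumerate(assigned))
        left = budget - sum(assigned)
        return v + sum((W[i] * S[i]) ** 2 / max(left, 1) for i in remaining_idx)

    def recurse(h, n):
        if h == len(W):
            v = variance(n, W, S)
            if v < best["v"]:
                best["v"], best["n"] = v, n[:]
            return
        spent = sum(n)
        # leave at least one sample for each later stratum
        for nh in range(1, budget - spent - (len(W) - h - 1) + 1):
            if lower_bound(n + [nh], range(h + 1, len(W))) >= best["v"]:
                continue  # prune: cannot beat the incumbent
            recurse(h + 1, n + [nh])

    recurse(0, [])
    return best["n"], best["v"]

n_opt, v_opt = branch_and_bound([0.5, 0.3, 0.2], [10.0, 20.0, 30.0], 30)
assert sum(n_opt) <= 30
```

The bounding rule is valid (it underestimates the variance of any completion), so pruning never discards the optimum; a tighter bound via a continuous Neyman-allocation relaxation would prune more aggressively.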
New Hardness Results for Diophantine Approximation
NASA Astrophysics Data System (ADS)
Eisenbrand, Friedrich; Rothvoß, Thomas
We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ε, and a denominator bound N ∈ ℕ+. One has to decide whether there exists an integer Q, called the denominator, with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ε. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^{c/log log n} for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ+ such that the distances of Q·α_i to the nearest integer are bounded by ε is hard to approximate within a factor 2^n unless P = NP.
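The decision version can be checked directly by brute force over Q, which is essentially the best one can hope for given the hardness results. The instances below are illustrative:

```python
import math
from fractions import Fraction

def dist_to_int(x):
    """Exact distance from a rational x to its nearest integer."""
    f = x - math.floor(x)
    return min(f, 1 - f)

def good_denominator(alpha, eps, N):
    """Decision version of simultaneous Diophantine approximation: return
    the smallest Q in 1..N with ||Q*alpha_i|| <= eps for all i, else None.
    Brute force over Q; the hardness results say nothing essentially
    faster is expected unless P = NP."""
    for q in range(1, N + 1):
        if all(dist_to_int(q * a) <= eps for a in alpha):
            return q
    return None

alpha = [Fraction(1, 3), Fraction(2, 5)]
assert good_denominator(alpha, Fraction(0), 15) == 15   # lcm-type instance
assert good_denominator(alpha, Fraction(1, 10), 14) is None
```

Using `Fraction` keeps the nearest-integer distances exact, so the feasibility check involves no floating-point tolerance.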
Level-set techniques for facies identification in reservoir modeling
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.; McLaughlin, Dennis
2011-03-01
In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical, ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In the traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
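The constrained tuning idea can be sketched on a toy problem: minimize the IAE of a disturbance response subject to a cap on the sensitivity peak Ms. The first-order plant G(s) = 1/(s+1), the Ms limit of 1.7, and the gain grid are illustrative assumptions standing in for the reduced HAWCStab2 model:

```python
def sim_iae(kp, ki, t_end=20.0, dt=0.01):
    """IAE of the output for a unit step load disturbance on G(s) = 1/(s+1)
    under PI control (forward-Euler simulation, setpoint 0)."""
    y, integ, iae, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        e = -y
        u = kp * e + ki * integ
        y += dt * (-y + u + 1.0)   # plant x' = -x + u + d, disturbance d = 1
        integ += dt * e
        iae += dt * abs(e)
        t += dt
    return iae

def max_sensitivity(kp, ki):
    """Peak |S(jw)| = |1/(1 + C(jw)G(jw))| over a log-spaced frequency grid."""
    ms = 0.0
    for k in range(-200, 201):
        w = 10 ** (k / 100)        # 0.01 .. 100 rad/s
        s = complex(0.0, w)
        L = (kp + ki / s) / (s + 1)
        ms = max(ms, abs(1 / (1 + L)))
    return ms

# constrained grid search: minimize IAE subject to Ms <= 1.7
best = min((sim_iae(kp, ki), kp, ki)
           for kp in [0.5 * i for i in range(1, 11)]
           for ki in [0.5 * i for i in range(1, 11)]
           if max_sensitivity(kp, ki) <= 1.7)
assert max_sensitivity(best[1], best[2]) <= 1.7
```

The grid search plays the role of the constrained optimizer; for the real turbine model one would substitute a gradient-based or derivative-free NLP solver over the same cost and constraints.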
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
Robust Constrained Blackbox Optimization with Surrogates
2015-05-21
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation design under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, considering only the single best model, variances that stem from uncertainty in the model structure will be ignored.
Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability. However, considering only the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate. A very high extraction rate will cause the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA models are used.
Optimal Control of Evolution Mixed Variational Inclusions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx
2013-12-15
Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.
NASA Astrophysics Data System (ADS)
Li, Qiang; Zhang, Ying; Lin, Jingran; Wu, Sissi Xiaoxiao
2017-09-01
Consider a full-duplex (FD) bidirectional secure communication system, where two communication nodes, named Alice and Bob, simultaneously transmit and receive confidential information from each other, and an eavesdropper, named Eve, overhears the transmissions. Our goal is to maximize the sum secrecy rate (SSR) of the bidirectional transmissions by optimizing the transmit covariance matrices at Alice and Bob. To tackle this SSR maximization (SSRM) problem, we develop an alternating difference-of-concave (ADC) programming approach to alternately optimize the transmit covariance matrices at Alice and Bob. We show that the ADC iteration has a semi-closed-form beamforming solution and is guaranteed to converge to a stationary solution of the SSRM problem. Besides the SSRM design, this paper also deals with a robust SSRM transmit design under a moment-based random channel state information (CSI) model, where only some roughly estimated first- and second-order statistics of Eve's CSI are available, but the exact distribution or other higher-order statistics are not known. This moment-based error model is new and different from the widely used bounded-sphere error model and the Gaussian random error model. Under the considered CSI error model, the robust SSRM is formulated as an outage probability-constrained SSRM problem. By leveraging Lagrangian duality theory and DC programming, a tractable safe solution to the robust SSRM problem is derived. The effectiveness and robustness of the proposed designs are demonstrated through simulations.
Lower Bounds for Possible Singular Solutions for the Navier-Stokes and Euler Equations Revisited
NASA Astrophysics Data System (ADS)
Cortissoz, Jean C.; Montero, Julio A.
2018-03-01
In this paper we give optimal lower bounds for the blow-up rate of the \dot{H}^s(T^3)-norm of possible singular solutions, for 1/2 < s < 5/2.
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
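The reward-information trade-off described above has a standard self-consistent solution, p(a|s) ∝ p(a) exp(β U(s,a)) with p(a) the input-averaged marginal, which can be found by a Blahut-Arimoto-style fixed-point iteration. This is a sketch of the abstract decision-level problem, not the paper's synaptic weight update rule; the matching-reward matrix and β values are illustrative:

```python
import math

def bounded_rational_policy(U, p_s, beta, iters=200):
    """Fixed point of max E[U] - (1/beta) I(S;A):
    p(a|s) ~ p(a) * exp(beta * U[s][a]),  p(a) = sum_s p(s) p(a|s)."""
    nS, nA = len(U), len(U[0])
    p_a = [1.0 / nA] * nA
    for _ in range(iters):
        cond = []
        for s in range(nS):
            w = [p_a[a] * math.exp(beta * U[s][a]) for a in range(nA)]
            z = sum(w)
            cond.append([x / z for x in w])
        p_a = [sum(p_s[s] * cond[s][a] for s in range(nS)) for a in range(nA)]
    return cond

U = [[1.0, 0.0], [0.0, 1.0]]            # matching reward: act = state
low = bounded_rational_policy(U, [0.5, 0.5], beta=0.1)
high = bounded_rational_policy(U, [0.5, 0.5], beta=10.0)
# tight information budget (small beta) -> nearly uniform, state-independent;
# loose budget (large beta) -> nearly deterministic best response
assert abs(low[0][0] - 0.5) < 0.1 and high[0][0] > 0.95
```

The penalization of the neuron's deviation from its average firing behavior in the paper is the neural analogue of the p(a) term that anchors the policy to its marginal here.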
Uncertainty relations as Hilbert space geometry
NASA Technical Reports Server (NTRS)
Braunstein, Samuel L.
1994-01-01
Precision measurements involve the accurate determination of parameters through repeated measurements of identically prepared experimental setups. For many parameters there is a 'natural' choice for the quantum observable which is expected to give optimal information, and from this observable one can construct a Heisenberg uncertainty principle (HUP) bound on the precision attainable for the parameter. However, the classical statistics of multiple sampling directly gives us tools to construct bounds for the precision available for the parameters of interest (even when no obvious natural quantum observable exists, such as for phase or time); it is found that these direct bounds are more restrictive than those of the HUP. The implication is that the natural quantum observables typically do not encode the optimal information (even for observables such as position and momentum); we show how this can be understood simply in terms of the Hilbert space geometry. Another striking feature of these bounds on parameter uncertainty is that, for a large enough number of repetitions of the measurements, all quantum states are 'minimum uncertainty' states, not just Gaussian wave-packets. Thus, these bounds tell us what precision is achievable, as well as merely what is allowed.
Number-unconstrained quantum sensing
NASA Astrophysics Data System (ADS)
Mitchell, Morgan W.
2017-12-01
Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.
Colbert, Alison M; Goshin, Lorie S; Durand, Vanessa; Zoucha, Rick; Sekula, L Kathleen
2016-12-01
Health priorities of women after incarceration remain poorly understood, constraining development of interventions targeted at their health during that time. We explored the experience of health and health care after incarceration in a focused ethnography of 28 women who had been released from prison or jail within the past year and were living in community corrections facilities. The women's outlook on health was rooted in a newfound core optimism, but this was constrained by their pressing health-related issues; stress and uncertainty; and the pressures of the criminal justice system. These external forces threatened to cause collapse of women's core optimism. Findings support interventions that capitalize on women's optimism and address barriers specific to criminal justice contexts. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
An, Y; Liang, J; Liu, W
2015-06-15
Purpose: We propose to apply a probabilistic framework, namely chance-constrained optimization, to intensity-modulated proton therapy (IMPT) planning subject to range and patient setup uncertainties. The purpose is to hedge against the influence of uncertainties and improve the robustness of treatment plans. Methods: IMPT plans were generated for a typical prostate patient. Nine dose distributions are computed: the nominal one, one each for ±5mm setup uncertainties along the three cardinal axes, and one each for ±3.5% range uncertainty. These nine dose distributions are supplied to the solver CPLEX as chance constraints to explicitly control plan robustness under these representative uncertainty scenarios with a certain probability. This probability is determined by the tolerance level. We make the chance-constrained model tractable by converting it to a mixed integer optimization problem. The quality of plans derived from this method is evaluated using dose-volume histogram (DVH) indices such as tumor dose homogeneity (D5% – D95%) and coverage (D95%) and normal tissue sparing such as V70 of rectum and V65 and V40 of bladder. We also compare the results from this novel method with the conventional PTV-based method to further demonstrate its effectiveness. Results: Our model can yield clinically acceptable plans within 50 seconds. The chance-constrained optimization produces IMPT plans with comparable target coverage, better target dose homogeneity, and better normal tissue sparing compared to the PTV-based optimization [D95% CTV: 67.9 vs 68.7 (Gy), D5% – D95% CTV: 11.9 vs 18 (Gy), V70 rectum: 0.0% vs 0.33%, V65 bladder: 2.17% vs 9.33%, V40 bladder: 8.83% vs 21.83%]. It also simultaneously makes the plan more robust [width of DVH band at D50%: 2.0 vs 10.0 (Gy)]. The tolerance level may be varied to control the tradeoff between plan robustness and quality.
Conclusion: The chance-constrained optimization generates superior IMPT plans compared to the PTV-based optimization, with explicit control of plan robustness. NIH/NCI K25CA168984, Eagles Cancer Research Career Development, The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, Mayo ASU Seed Grant, and The Kemper Marley Foundation.
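A counting analogue of the mixed-integer chance-constraint reformulation can be sketched in a few lines: among candidate plans, pick the cheapest one whose constraint holds in at least a (1 − tolerance) fraction of the uncertainty scenarios. The dose numbers and scenario shifts below are illustrative assumptions, not the clinical values above:

```python
def chance_constrained_pick(candidates, scenarios, constraint, tolerance):
    """Lowest-cost candidate whose constraint holds in at least
    (1 - tolerance) of the scenarios -- the counting analogue of the
    binary indicator variables in the mixed-integer reformulation."""
    feasible = []
    for cost, x in candidates:
        ok = sum(constraint(x, sc) for sc in scenarios)
        if ok >= (1 - tolerance) * len(scenarios):
            feasible.append((cost, x))
    return min(feasible) if feasible else None

# toy: x is a delivered-dose margin; each scenario sc shifts the requirement,
# and the constraint is "x covers the shifted requirement 60 + sc"
scenarios = [-3, -2, -1, 0, 1, 2, 3]          # representative shifts
candidates = [(x, x) for x in range(58, 70)]  # cost grows with margin
cover = lambda x, sc: x >= 60 + sc
strict = chance_constrained_pick(candidates, scenarios, cover, 0.0)
relaxed = chance_constrained_pick(candidates, scenarios, cover, 0.3)
assert relaxed[1] < strict[1]   # relaxing the tolerance buys a cheaper plan
```

Raising the tolerance enlarges the feasible set and lowers the cost, which is the robustness-versus-quality tradeoff the abstract attributes to the tolerance level.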
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Constraining unparticle physics with cosmology and astrophysics.
Davoudiasl, Hooman
2007-10-05
It has recently been suggested that a scale-invariant "unparticle" sector with a nontrivial infrared fixed point may couple to the standard model (SM) via higher-dimensional operators. The weakness of such interactions hides the unparticle phenomena at low energies. We demonstrate how cosmology and astrophysics can place significant bounds on the strength of unparticle-SM interactions. We also discuss the possibility of having a non-negligible unparticle relic density today.
Understanding International Environmental Security: A Strategic Military Perspective
2000-11-01
remain one of the more fragile organisms on the planet, bound to a relatively constrained set of environmental conditions of landscape, temperature...series of interwoven phenomena including, but not limited to, deforestation, burning of fossil fuels, and industrial pollution. Assessing each of...burning of fossil fuels is the cause. Figure 3-7 shows the trend in carbon dioxide concentration over the past 300 years with an expanded view since
Cosmic microwave background constraints on secret interactions among sterile neutrinos
NASA Astrophysics Data System (ADS)
Forastieri, Francesco; Lattanzi, Massimiliano; Mangano, Gianpiero; Mirizzi, Alessandro; Natoli, Paolo; Saviano, Ninetta
2017-07-01
Secret contact interactions among eV sterile neutrinos, mediated by a massive gauge boson X (with MX ≪ MW), and characterized by a gauge coupling gX, have been proposed as a means to reconcile cosmological observations and short-baseline laboratory anomalies. We constrain this scenario using the latest Planck data on Cosmic Microwave Background anisotropies, and measurements of baryon acoustic oscillations (BAO). We consistently include the effect of secret interactions on cosmological perturbations, namely the increased density and pressure fluctuations in the neutrino fluid, and still find a severe tension between the secret interaction framework and cosmology. In fact, taking into account neutrino scattering via secret interactions, we derive our own mass bound on sterile neutrinos and find (at 95% CL) ms < 0.82 eV or ms < 0.29 eV from Planck alone or in combination with BAO, respectively. These limits confirm the discrepancy with the laboratory anomalies. Moreover, we constrain, in the limit of contact interaction, the effective strength GX to be < 2.8 (2.0) × 10^10 GF from Planck (Planck+BAO). This result, together with the mass bound, strongly disfavours the region with MX ~ 0.1 MeV and relatively large coupling gX ~ 10^-1, previously indicated as a possible solution to the small-scale dark matter problem.
ON THE RARITY OF X-RAY BINARIES WITH NAKED HELIUM DONORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linden, T.; Valsecchi, F.; Kalogera, V.
The paucity of known high-mass X-ray binaries (HMXBs) with naked He donor stars (hereafter He stars) in the Galaxy has been noted over the years as a surprising fact, given the significant number of Galactic HMXBs containing H-rich donors, which are expected to be their progenitors. This contrast has further sharpened in light of recent observations uncovering a preponderance of HMXBs hosting loosely bound Be donors orbiting neutron stars (NSs), which would be expected to naturally evolve into He-HMXBs through dynamical mass transfer onto the NS and a common-envelope (CE) phase. Hence, reconciling the large population of Be-HMXBs with the observation of only one He-HMXB can help constrain the dynamics of CE physics. Here, we use detailed stellar structure and evolution models and show that binary mergers of HMXBs during CE events must be common in order to resolve the tension between these observed populations. We find that, quantitatively, this scenario remains consistent with the typically adopted energy parameterization of CE evolution, yielding expected populations which are not at odds with current observations. However, future observations which better constrain the underlying population of loosely bound O/B-NS binaries are likely to place significant constraints on the efficiency of CE ejection.
Sneutrinos as mixed inflaton and curvaton
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Takahashi, Tomo; Yamada, Toshifumi
2018-06-01
We investigate a scenario where the supersymmetric partners of two right-handed neutrinos (sneutrinos) work as mixed inflaton and curvaton, motivated by the fact that the curvaton contribution to scalar perturbations can reduce the tensor-to-scalar ratio r so that chaotic inflation models with a quadratic potential are made consistent with the experimental bound on r. After confirming that the scenario evades the current bounds on r and the scalar perturbation spectral index ns, we make predictions for the local non-Gaussianity in the bispectrum, fNL, and in the trispectrum, τNL. Remarkably, since the sneutrino decay widths are determined by the neutrino Dirac Yukawa coupling, which can be estimated from the measured active neutrino mass differences in the seesaw model, our scenario has strong predictive power for local non-Gaussianities, as they depend heavily on the inflaton and curvaton decay rates. Using this fact, we can constrain the sneutrino mass from the experimental bounds on ns, r and fNL.
A robust adaptive observer for a class of singular nonlinear uncertain systems
NASA Astrophysics Data System (ADS)
Arefinia, Elaheh; Talebi, Heidar Ali; Doustmohammadi, Ali
2017-05-01
This paper proposes a robust adaptive observer for a class of singular nonlinear non-autonomous uncertain systems with unstructured unknown system and derivative matrices, and unknown bounded nonlinearities. Unlike many existing observers, no strong assumption such as a Lipschitz condition is imposed on the considered system. An augmented system is constructed, and the unknown bounds are calculated online using an adaptive bounding technique. The use of a continuous nonlinear gain removes the chattering that may appear in practical applications such as the analysis of electrical circuits and the estimation of interaction forces in beating-heart robotic-assisted surgery. Moreover, a simple yet precise structure is attained which is easy to implement in many systems with significant uncertainties. The existence conditions of the standard-form observer are obtained in terms of a linear matrix inequality and constrained generalised Sylvester equations, and global stability is ensured. Finally, simulation results are obtained to evaluate the performance of the proposed estimator and demonstrate the effectiveness of the developed scheme.
Augury of darkness: the low-mass dark Z' portal
Alves, Alexandre; Arcadi, Giorgio; Mambrini, Yann; ...
2017-04-28
Dirac fermion dark matter models with heavy Z' mediators are subject to stringent constraints from spin-independent direct searches and from LHC bounds, cornering them to live near the Z' resonance. Such constraints can be relaxed, however, by turning off the vector coupling to Standard Model fermions, thus weakening direct detection bounds, or by resorting to light Z' masses, below the Z pole, to escape heavy resonance searches at the LHC. In this work we investigate both cases, as well as the applicability of our findings to Majorana dark matter. We derive collider bounds for light Z' gauge bosons using the CLs method, spin-dependent scattering limits, as well as the spin-independent scattering rate arising from the evolution of couplings between the energy scale of the mediator mass and the nuclear energy scale, and indirect detection limits. In conclusion, we show that such scenarios are still rather constrained by data, and that near resonance they could accommodate the GeV gamma-ray excess in the Galactic center.
NASA Astrophysics Data System (ADS)
Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.
2011-02-01
This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were the minimization of weld width and the maximization of weld penetration depth, resistance length and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.
Formal language constrained path problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, C.; Jacob, R.; Marathe, M.
1997-07-08
In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary, they obtain a polynomial-time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm, given in [SJB97], takes exponential time in the worst case.
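The polynomial-time result for regular-language constraints follows from running a shortest-path search on the product of the labeled graph and the language's automaton. A small sketch under assumed data structures (the toy graph, DFA, and labels below are invented for illustration):

```python
import heapq

def constrained_shortest_path(graph, dfa, start, goal, q0, accepting):
    """Dijkstra on the product of a labeled graph and a DFA.
    graph: {u: [(v, label, weight), ...]}
    dfa:   {(state, label): next_state}; a missing key rejects the move.
    Returns the cost of the cheapest start->goal walk whose label
    sequence is accepted by the DFA, or None if no such walk exists."""
    dist = {(start, q0): 0.0}
    pq = [(0.0, start, q0)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if d > dist.get((u, q), float("inf")):
            continue                      # stale queue entry
        if u == goal and q in accepting:
            return d
        for v, lab, w in graph.get(u, []):
            q2 = dfa.get((q, lab))
            if q2 is None:
                continue                  # label not allowed here
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(pq, (nd, v, q2))
    return None

# Toy intermodal network: labels 'r' (road) and 't' (train); the DFA
# q0 -r-> q0, q0 -t-> q1, q1 -t-> q1 accepts r*t+ ("drive, then ride").
graph = {
    "A": [("B", "r", 1.0), ("C", "t", 10.0)],
    "B": [("C", "t", 1.0), ("D", "r", 1.0)],
    "C": [("D", "t", 1.0)],
    "D": [],
}
dfa = {("q0", "r"): "q0", ("q0", "t"): "q1", ("q1", "t"): "q1"}
best = constrained_shortest_path(graph, dfa, "A", "D", "q0", {"q1"})
```

The accepted walk A-B-C-D (labels r, t, t) costs 3, while the cheaper unconstrained walk A-B-D is rejected because its label sequence rr is not in r*t+; the product construction encodes exactly that filtering.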
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
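The central role of Lagrange multipliers can be caricatured as a stochastic dual (sub)gradient recursion: a primal allocation is chosen greedily given the current multiplier, then the multiplier is nudged by the observed constraint violation. A toy sketch, not the paper's learn-and-adapt algorithm; the utility, budget, and step size are all invented:

```python
import numpy as np

def dual_resource_allocation(b=1.0, x_max=5.0, alpha=0.05, iters=4000, seed=0):
    """Online dual update: pick the allocation maximizing the current
    Lagrangian, then move the multiplier lam by the budget violation."""
    rng = np.random.default_rng(seed)
    lam, xs = 1.0, []
    for _ in range(iters):
        c = rng.uniform(0.5, 1.5)        # random per-slot state (e.g. channel gain)
        # primal step: argmax_x c*log(1 + x) - lam*x over [0, x_max]
        x = float(np.clip(c / max(lam, 1e-9) - 1.0, 0.0, x_max))
        xs.append(x)
        # dual step: lam rises when the budget b is exceeded, falls otherwise
        lam = max(lam + alpha * (x - b), 0.0)
    return lam, float(np.mean(xs))

lam, avg_x = dual_resource_allocation()
```

The long-run average allocation settles near the budget b, which is the mechanism behind the queue-stability guarantee the abstract refers to: the multiplier plays the role of a (scaled) queue length.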
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
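The scheme treats flow and design variables as one set of unknowns and tolerates infeasible intermediate iterates. A full-space Newton iteration on the KKT conditions of a toy equality-constrained problem shows the shared skeleton; the objective and constraint here are invented, and no reduced-space decomposition or cross-term handling is attempted:

```python
import numpy as np

def f(x): return x[0]**2 + x[1]**2
def g(x): return np.array([2.0*x[0], 2.0*x[1]])        # grad f
def c(x): return np.array([x[0]**2 + x[1] - 1.0])      # equality constraint
def J(x): return np.array([[2.0*x[0], 1.0]])           # constraint Jacobian

def sqp_equality(x0, lam0=0.0, tol=1e-10, max_iter=50):
    """Minimal full-space SQP (Newton on the KKT system) for
    min f(x) s.t. c(x) = 0; iterates may be infeasible until
    convergence, as in the abstract's all-at-once philosophy."""
    x, lam = np.asarray(x0, float), float(lam0)
    for _ in range(max_iter):
        H = np.diag([2.0 + 2.0*lam, 2.0])              # Hessian of the Lagrangian (this toy problem)
        A = J(x)
        kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        rhs = -np.concatenate([g(x) + lam * A[0], c(x)])
        if np.linalg.norm(rhs) < tol:
            break
        step = np.linalg.solve(kkt, rhs)
        x, lam = x + step[:2], lam + step[2]
    return x, lam

x_opt, lam_opt = sqp_equality([1.0, 1.0])
```

Starting from the infeasible point (1, 1), the iteration converges quadratically to the constrained minimizer (±sqrt(0.5), 0.5) with multiplier -1, mirroring the superlinear convergence reported for the nozzle design.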
Optimization of constrained density functional theory
NASA Astrophysics Data System (ADS)
O'Regan, David D.; Teobaldi, Gilberto
2016-07-01
Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.
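The result that stable stationary points of linear density constraints sit at energy maxima over the Lagrange multipliers can be illustrated on a one-dimensional toy functional; everything below is a hypothetical stand-in for the cDFT energy, not an electronic-structure calculation:

```python
import numpy as np

def W(lam, c0=0.3):
    """Toy 'constrained energy': E(x) = x**2 minimized over x with the
    linear constraint term lam * (x - c0); inner minimizer x* = -lam/2."""
    x = -lam / 2.0
    return x**2 + lam * (x - c0)

lams = np.linspace(-2.0, 2.0, 4001)
lam_star = lams[np.argmax(W(lams))]   # stationary point sits at the maximum
x_star = -lam_star / 2.0              # ...and there the constraint x = c0 holds
```

Scanning the multiplier, W peaks exactly where the inner minimizer satisfies the constraint, so the constrained solution is found by maximizing, not minimizing, over the multiplier; this is the property that automated multiplier optimization exploits.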
Climate change in fish: effects of respiratory constraints on optimal life history and behaviour.
Holt, Rebecca E; Jørgensen, Christian
2015-02-01
The difference between maximum metabolic rate and standard metabolic rate is referred to as aerobic scope, and because it constrains performance it is suggested to constitute a key limiting process prescribing how fish may cope with or adapt to climate warming. We use an evolutionary bioenergetics model for Atlantic cod (Gadus morhua) to predict optimal life histories and behaviours at different temperatures. The model assumes common trade-offs and predicts that optimal temperatures for growth and fitness lie below that for aerobic scope; aerobic scope is thus a poor predictor of fitness at high temperatures. Initially, warming expands aerobic scope, allowing for faster growth and increased reproduction. Beyond the optimal temperature for fitness, increased metabolic requirements intensify foraging and reduce survival; oxygen budgeting conflicts thus constrain successful completion of the life cycle. The model illustrates how physiological adaptations are part of a suite of traits that have coevolved. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1984-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. A target set is associated with each opponent in the event space, in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously, or in neither, a joint capture or a draw, respectively, occurs. Resolution of the encounter is formulated as a combat game: a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from the resulting constrained zero-sum differential game; otherwise the optimal strategies are computed from the resulting nonzero-sum game. Since optimal combat strategies may frequently not exist, approximate (delta) combat games are also formulated, leading to approximate (delta-optimal) strategies. The turret game is used to illustrate combat games; it is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for calculating Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
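The staged scheme is essentially what is now usually called the epsilon-constraint method: minimize one objective while the others are capped by thresholds of acceptability, one Pareto point per stage. A toy bi-objective sketch with invented objectives and thresholds:

```python
import numpy as np

def f1(x): return x**2                 # objective being minimized
def f2(x): return (x - 2.0)**2         # objective held under a threshold

def pareto_by_thresholds(taus, xs):
    """One Pareto point per stage: minimize f1 subject to the
    threshold-of-acceptability constraint f2(x) <= tau."""
    points = []
    for tau in taus:
        feas = xs[f2(xs) <= tau]           # feasible set for this stage
        if feas.size:
            x_best = feas[np.argmin(f1(feas))]
            points.append((float(f1(x_best)), float(f2(x_best))))
    return points

xs = np.linspace(0.0, 2.0, 2001)           # decision-variable grid
pts = pareto_by_thresholds([4.0, 1.0, 0.25], xs)
```

Tightening the threshold from 4.0 to 0.25 walks the solver along the Pareto front: each stage trades a worse f1 for a better f2, and no stage can improve one objective without degrading the other.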
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
In particular, density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has an attractive theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm, based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor with an associated vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
SU-E-I-23: A General KV Constrained Optimization of CNR for CT Abdominal Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, V; Zhang, J
Purpose: While tube current modulation has been well accepted for CT dose reduction, kV adjustment in clinical settings is still at an early stage, mainly due to the limited kV options of most current CT scanners. kV adjustment can potentially reduce radiation dose and optimize image quality. This study aims to optimize CT abdominal image acquisition under the assumption of a continuously adjustable kV, with the goal of providing the best contrast-to-noise ratio (CNR). Methods: For a given dose (CTDIvol) level, the CNRs at different kV and pitch settings were measured with an ACR GAMMEX phantom. The phantom was scanned in a Siemens Sensation 64 scanner and a GE VCT 64 scanner. A constrained mathematical optimization was used to find the kV which led to the highest CNR for the anatomy and pitch setting. Parametric equations were obtained by polynomial fitting of the plots of kV vs CNR. A suitable constraint region for the optimization was chosen. Subsequent optimization yielded a peak CNR at a particular kV for each collimation and pitch setting. Results: The constrained mathematical optimization approach yields kV values of 114.83 and 113.46, with CNRs of 1.27 and 1.11, at pitches of 1.2 and 1.4, respectively, for the Siemens Sensation 64 scanner with a collimation of 32 x 0.625 mm. An optimized kV of 134.25 and a CNR of 1.51 are obtained for the GE VCT 64-slice scanner with a collimation of 32 x 0.625 mm and a pitch of 0.969. At a pitch of 0.516 and 32 x 0.625 mm collimation, an optimized kV of 133.75 and a CNR of 1.14 were found for the GE VCT 64-slice scanner. Conclusion: CNR in CT image acquisition can be further optimized with a continuous kV option instead of the current discrete or fixed kV settings. A continuous kV option is key for individualized CT protocols.
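The fit-then-optimize step can be sketched as follows; the CNR readings below are invented placeholders, not the phantom measurements from the study:

```python
import numpy as np

# Illustrative (not measured) CNR readings at a scanner's discrete kV
# stations, for one pitch/collimation setting.
kv = np.array([80.0, 100.0, 120.0, 140.0])
cnr = np.array([0.82, 1.05, 1.26, 1.22])

coef = np.polyfit(kv, cnr, 2)               # parametric CNR(kV) model
kv_grid = np.linspace(100.0, 140.0, 4001)   # chosen constraint region
cnr_grid = np.polyval(coef, kv_grid)
kv_opt = float(kv_grid[np.argmax(cnr_grid)])
cnr_opt = float(cnr_grid.max())
```

The quadratic interpolant peaks between the discrete 120 and 140 kV stations, which is precisely the kind of in-between optimum a continuous-kV scanner could exploit but a fixed-station one cannot.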
Graviton propagation within the context of the D-material universe.
Elghozi, Thomas; Mavromatos, Nick E; Sakellariadou, Mairi
2017-01-01
Motivated by the recent breakthrough of the detection of Gravitational Waves (GW) from coalescent black holes by the aLIGO interferometers, we study the propagation of GW in the D-material universe, which we have recently shown to be compatible with large-scale structure and inflationary phenomenology. The medium of D-particles induces an effective mass for the graviton, as a consequence of the formation of recoil-velocity field condensates due to the underlying Born-Infeld dynamics. There is a competing effect, due to a super-luminal refractive index, as a result of the gravitational energy of D-particles acting as a dark-matter component, with which propagating gravitons interact. We examine conditions for the condensate under which the latter effect is sub-leading. We argue that if quantum fluctuations of the recoil velocity are relatively strong, which can happen in the current era of the universe, then the condensate, and hence the induced mass of the graviton, can be several orders of magnitude larger than the magnitude of the cosmological constant today. Hence, we constrain the graviton mass using aLIGO and pulsar-timing observations (which give the most stringent bounds at present). In such a sub-luminal graviton case, there is also a gravitational Cherenkov effect for ordinary high-energy cosmic matter, which is further constrained by means of ultra-high-energy cosmic ray observations. Assuming cosmic rays of extragalactic origin, the bounds on the quantum condensate strength, based on the gravitational Cherenkov effect, are of the same order as those from aLIGO measurements, in contrast to the case where a galactic origin of the cosmic rays is assumed, in which case the corresponding bounds are much weaker.
Discovery of wide low and very low-mass binary systems using Virtual Observatory tools
NASA Astrophysics Data System (ADS)
Gálvez-Ortiz, M. C.; Solano, E.; Lodieu, N.; Aberasturi, M.
2017-04-01
The frequency of multiple systems and their properties are key constraints on stellar formation and evolution. Formation mechanisms of very low-mass (VLM) objects are still under considerable debate, and an accurate assessment of their multiplicity and orbital properties is essential for constraining current theoretical models. Taking advantage of virtual observatory capabilities, we looked for comoving low-mass and VLM binary (or multiple) systems using the Large Area Survey of UKIDSS LAS DR10, SDSS DR9 and the 2MASS catalogues. Other catalogues (WISE, GLIMPSE, SuperCosmos, etc.) were used to derive the physical parameters of the systems. We report the identification of 36 low-mass and VLM (~M0-L0 spectral types) candidates for binary/multiple systems (separations between 200 and 92 000 au), whose physical association is confirmed through common proper motion, distance and a low probability of chance alignment. This new list of systems notably increases the previous sampling of the mass-separation parameter space (~100 systems). We have also found 50 low-mass objects that we can classify as ~L0-T2 according to their photometric information. Only one of these objects presents a common-proper-motion high-mass companion. Although we could not constrain the age of the majority of the candidates, most of them are probably still bound, except for four that may be undergoing disruption. We suggest that our sample could be divided into two populations: one of tightly bound wide VLM systems that are expected to last more than 10 Gyr, and another of weakly bound wide VLM systems that will dissipate within a few Gyr.
Chakravorty, Dhruva K.; Wang, Bing; Lee, Chul Won; Guerra, Alfredo J.; Giedroc, David P.; Merz, Kenneth M.
2013-01-01
Correctly calculating the structure of metal coordination sites in a protein during the process of nuclear magnetic resonance (NMR) structure determination and refinement continues to be a challenging task. In this study, we present an accurate and convenient means by which to include metal ions in the NMR structure determination process using molecular dynamics (MD) constrained by NMR-derived data to obtain a realistic and physically viable description of the metal binding site(s). This method provides the framework to accurately portray the metal ion and its binding residues in a pseudo-bond or dummy-cation-like approach, and is validated by quantum mechanical/molecular mechanical (QM/MM) MD calculations constrained by NMR-derived data. To illustrate this approach, we refine the zinc coordination complex structure of the zinc-sensing transcriptional repressor protein Staphylococcus aureus CzrA, generating over 130 ns of MD and QM/MM MD NMR-data-compliant sampling. In addition to refining the first coordination shell structure of the Zn(II) ion, this protocol benefits from being performed in a periodically replicated solvation environment including long-range electrostatics. We determine that unrestrained (not based on NMR data) MD simulations correlated with the NMR data in a time-averaged ensemble. The accurate solution structure ensemble of the metal-bound protein describes the role of conformational dynamics in the allosteric regulation of DNA binding by zinc and serves to validate our previous unrestrained MD simulations of CzrA. This methodology has potentially broad applicability in the structure determination of metal-ion-bound proteins, protein folding, and metal-templated protein design studies. PMID:23609042
Rocha, Alexandre B; de Moura, Carlos E V
2011-12-14
Potential energy curves for inner-shell states of nitrogen and carbon dioxide molecules are calculated by the inner-shell complete active space self-consistent field (CASSCF) method, a recently proposed protocol to obtain specifically converged inner-shell states at the multiconfigurational level. This is possible since the collapse of the wave function to a low-lying state is avoided by a sequence of constrained optimizations in the orbital mixing step. The problem of localization of K-shell states is revisited by calculating their energies at the CASSCF level based on both localized and delocalized orbitals. The localized basis presents the best results at this level of calculation. Transition energies are also calculated by perturbation theory, taking the above-mentioned MCSCF function as the zeroth-order wave function. Values for the transition energies are in fairly good agreement with experimental ones. Bond dissociation energies for N(2) are considerably high, which means that these states are strongly bound. Potential curves along the ground-state normal modes of CO(2) indicate the occurrence of the Renner-Teller effect in inner-shell states. © 2011 American Institute of Physics.
RAZOR: A Compression and Classification Solution for the Internet of Things
Danieletto, Matteo; Bui, Nicola; Zorzi, Michele
2014-01-01
The Internet of Things is expected to increase the amount of data produced and exchanged in the network, due to the huge number of smart objects that will interact with one another. The related information management and transmission costs are increasing and becoming an almost unbearable burden, due to the unprecedented number of data sources and the intrinsic vastness and variety of the datasets. In this paper, we propose RAZOR, a novel lightweight algorithm for data compression and classification, which is expected to alleviate both aspects by leveraging the advantages offered by data mining methods for optimizing communications and by enhancing information transmission to simplify data classification. In particular, RAZOR leverages the concept of motifs, recurrent features used for signal categorization, in order to compress data streams: in such a way, it is possible to achieve compression levels of up to an order of magnitude, while maintaining the signal distortion within acceptable bounds and allowing for simple lightweight distributed classification. In addition, RAZOR is designed to keep the computational complexity low, in order to allow its implementation in the most constrained devices. The paper provides results about the algorithm configuration and a performance comparison against state-of-the-art signal processing techniques. PMID:24451454
NASA Astrophysics Data System (ADS)
Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso
2017-09-01
This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
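For orientation, the classical ancestor of these schemes is the one-dimensional FCT step: a monotone upwind update plus a Zalesak-limited antidiffusive correction toward Lax-Wendroff, constrained by local discrete maximum principles. The sketch below is that standard low-order construction, not the paper's Bernstein-Bézier finite element version:

```python
import numpy as np

def fct_advect(u, nu, steps):
    """1D flux-corrected transport for u_t + a*u_x = 0 on a periodic
    grid, nu = a*dt/dx in (0, 1]: upwind transport plus a
    Zalesak-limited antidiffusive flux toward Lax-Wendroff."""
    u = u.astype(float)
    eps = 1e-14
    for _ in range(steps):
        um1, up1 = np.roll(u, 1), np.roll(u, -1)
        utd = u - nu * (u - um1)                # low-order (upwind) solution
        A = 0.5 * nu * (1.0 - nu) * (up1 - u)   # antidiffusive flux A_{i+1/2}
        Am = np.roll(A, 1)                      # A_{i-1/2}
        # local discrete maximum principle bounds from the low-order field
        umax = np.maximum(np.maximum(np.roll(utd, 1), utd), np.roll(utd, -1))
        umin = np.minimum(np.minimum(np.roll(utd, 1), utd), np.roll(utd, -1))
        Pp = np.maximum(Am, 0.0) - np.minimum(A, 0.0)   # antidiffusion into node i
        Pm = np.maximum(A, 0.0) - np.minimum(Am, 0.0)   # antidiffusion out of node i
        Rp = np.where(Pp > eps, np.minimum(1.0, (umax - utd) / np.maximum(Pp, eps)), 0.0)
        Rm = np.where(Pm > eps, np.minimum(1.0, (utd - umin) / np.maximum(Pm, eps)), 0.0)
        # each flux is limited by the stricter of its two endpoint nodes
        C = np.where(A >= 0.0,
                     np.minimum(np.roll(Rp, -1), Rm),
                     np.minimum(Rp, np.roll(Rm, -1)))
        Alim = C * A
        u = utd - (Alim - np.roll(Alim, 1))     # conservative correction
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)  # step profile in [0, 1]
u1 = fct_advect(u0, nu=0.5, steps=50)
```

The limiter keeps the advected step inside its local low-order bounds (no new extrema) and the flux form preserves total mass, which are the two properties the paper's Bézier-net maximum principles generalize to high-order elements.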
King, Matthew D; Buchanan, William D; Korter, Timothy M
2011-03-14
The effects of applying an empirical dispersion correction to solid-state density functional theory methods were evaluated in the simulation of the crystal structure and low-frequency (10 to 90 cm(-1)) terahertz spectrum of the non-steroidal anti-inflammatory drug, naproxen. The naproxen molecular crystal is bound largely by weak London force interactions, as well as by more prominent interactions such as hydrogen bonding, and thus serves as a good model for the assessment of the pair-wise dispersion correction term in systems influenced by intermolecular interactions of various strengths. Modifications to the dispersion parameters were tested in both fully optimized unit cell dimensions and those determined by X-ray crystallography, with subsequent simulations of the THz spectrum being performed. Use of the unmodified PBE density functional leads to an unrealistic expansion of the unit cell volume and the poor representation of the THz spectrum. Inclusion of a modified dispersion correction enabled a high-quality simulation of the THz spectrum and crystal structure of naproxen to be achieved without the need for artificially constraining the unit cell dimensions.
Reusable Launch Vehicle Control In Multiple Time Scale Sliding Modes
NASA Technical Reports Server (NTRS)
Shtessel, Yuri; Hall, Charles; Jackson, Mark
2000-01-01
A reusable launch vehicle control problem during ascent is addressed via multiple-time-scale continuous sliding mode control. The proposed sliding mode controller utilizes a two-loop structure and provides robust, de-coupled tracking of both orientation angle command profiles and angular rate command profiles in the presence of bounded external disturbances and plant uncertainties. Sliding mode control causes the angular rate and orientation angle tracking error dynamics to be constrained to linear, de-coupled, homogeneous, vector-valued differential equations with desired eigenvalue placement. Overall stability of the two-loop control system is addressed. An optimal control allocation algorithm is designed that allocates torque commands into end-effector deflection commands, which are executed by the actuators. The dual-time-scale sliding mode controller was designed for the X-33 technology demonstration sub-orbital launch vehicle in the launch mode. Simulation results show that the designed controller provides robust, accurate, de-coupled tracking of the orientation angle command profiles in the presence of external disturbances and vehicle inertia uncertainties. This is a significant advancement in performance over that achieved with the linear, gain-scheduled control systems currently used for launch vehicles.
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Optimality conditions for the numerical solution of optimization problems with PDE constraints :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Ridzal, Denis
2014-03-01
A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several alternative normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
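The branch-and-bound principle used here can be sketched on a one-parameter L1 fit. This sketch substitutes a simple Lipschitz lower bound for the paper's interval-analysis bound, and the exponential model, synthetic data, and Lipschitz constant are illustrative assumptions:

```python
import heapq
import math

def branch_and_bound(f, lo, hi, lip, tol=1e-4):
    """Globally minimize f on [lo, hi] by branch and bound.

    A Lipschitz constant `lip` gives the lower bound f(mid) - lip*width/2
    on each subinterval (the paper derives its bounds from interval
    arithmetic instead). Nodes that cannot beat the incumbent are pruned.
    """
    mid = 0.5 * (lo + hi)
    best_x, best = mid, f(mid)
    heap = [(best - lip * 0.5 * (hi - lo), lo, hi, mid, best)]
    while heap:
        lb, a, b, m, fm = heapq.heappop(heap)
        if lb >= best - tol:          # node cannot improve the incumbent
            continue
        for aa, bb in ((a, m), (m, b)):
            mm = 0.5 * (aa + bb)
            fmm = f(mm)
            if fmm < best:
                best, best_x = fmm, mm
            child_lb = fmm - lip * 0.5 * (bb - aa)
            if child_lb < best - tol:
                heapq.heappush(heap, (child_lb, aa, bb, mm, fmm))
    return best_x, best

# Illustrative L1 fit of a decay rate (true rate 2.0); since
# |d(error)/dp| <= sum(x * exp(-p*x)) <= sum(x) for p >= 0, lip = sum(xs).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(-2.0 * x) for x in xs]

def l1_error(p):
    return sum(abs(math.exp(-p * x) - y) for x, y in zip(xs, ys))

p_hat, err = branch_and_bound(l1_error, 0.0, 5.0, lip=sum(xs))
```

Because every surviving node carries a valid lower bound, the returned minimum is global up to the tolerance, which is the property that makes the approach insensitive to initial guesses.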
Grussu, Francesco; Ianus, Andrada; Schneider, Torben; Prados, Ferran; Fairney, James; Ourselin, Sebastien; Alexander, Daniel C.; Cercignani, Mara; Gandini Wheeler‐Kingshott, Claudia A.M.; Samson, Rebecca S.
2017-01-01
Purpose To develop a framework to fully characterize quantitative magnetization transfer indices in the human cervical cord in vivo within a clinically feasible time. Methods A dedicated spinal cord imaging protocol for quantitative magnetization transfer was developed using a reduced field‐of‐view approach with echo planar imaging (EPI) readout. Sequence parameters were optimized based on the Cramér-Rao lower bound. Quantitative model parameters (i.e., bound pool fraction, free and bound pool transverse relaxation times [T2F, T2B], and forward exchange rate [kFB]) were estimated by implementing a numerical model capable of dealing with the novelties of the sequence adopted. The framework was tested on five healthy subjects. Results Cramér-Rao lower bound minimization produces optimal sampling schemes without requiring the establishment of a steady-state MT effect. The proposed framework allows quantitative voxel-wise estimation of model parameters at the resolution typically used for spinal cord imaging (i.e., 0.75 × 0.75 × 5 mm³), with a protocol duration of ∼35 min. Quantitative magnetization transfer parametric maps agree with literature values. Whole-cord mean values are: bound pool fraction = 0.11(±0.01), T2F = 46.5(±1.6) ms, T2B = 11.0(±0.2) µs, and kFB = 1.95(±0.06) Hz. Protocol optimization has a beneficial effect on reproducibility, especially for T2B and kFB. Conclusion The framework developed enables robust characterization of spinal cord microstructure in vivo using qMT. Magn Reson Med 79:2576–2588, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28921614
Precision of proportion estimation with binary compressed Raman spectrum.
Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric
2018-01-01
The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramér-Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramér-Rao bound). The evolution of the Cramér-Rao bound is analyzed when the measuring times are optimized or when the proportion assumed for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this assumed proportion are also analyzed for the binary filter synthesis.
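The Cramér-Rao bound for this kind of Poisson-noise measurement is straightforward to evaluate numerically. In the sketch below, the two Gaussian spectra, the threshold-based orthogonal binary filters, the measuring times, and the photon budget are all invented for illustration (they are not the paper's data); for independent Poisson counts the Fisher information is the usual sum of (dμ/da)²/μ:

```python
import numpy as np

wl = np.linspace(0.0, 1.0, 200)
s1 = np.exp(-((wl - 0.3) / 0.05) ** 2)   # hypothetical spectrum of species 1
s2 = np.exp(-((wl - 0.6) / 0.08) ** 2)   # hypothetical spectrum of species 2
F1 = (s1 > s2).astype(float)             # orthogonal binary filters
F2 = 1.0 - F1
times = np.array([1.0, 1.0])             # measuring time behind each filter
scale = 1e4                              # overall photon budget

def mean_counts(a):
    """Expected Poisson counts behind each filter for mixture proportion a."""
    mix = a * s1 + (1.0 - a) * s2
    return scale * times * np.array([F1 @ mix, F2 @ mix])

def crb(a):
    """Cramér-Rao bound on the proportion: 1 / sum((dmu/da)^2 / mu)."""
    mu = mean_counts(a)
    dmu = scale * times * np.array([F1 @ (s1 - s2), F2 @ (s1 - s2)])
    return 1.0 / np.sum(dmu ** 2 / mu)
```

Since the expected counts are linear in the proportion, the derivative term is constant; doubling both measuring times doubles μ and dμ/da alike and therefore halves the bound, matching the intuition that longer acquisitions improve precision.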
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin
2018-05-01
Event-triggered control is a promising approach for cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, sufficient conditions for ensuring feasibility and closed-loop robust stability are developed, respectively. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
Robust model predictive control for constrained continuous-time nonlinear systems
NASA Astrophysics Data System (ADS)
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
2016-09-09
We present Human-Machine Collaborative Optimization via Apprenticeship Scheduling (COVAS), which performs machine learning using human expert demonstration, in conjunction with optimization, to automatically and efficiently produce optimal solutions to challenging real-world scheduling problems. COVAS first learns a policy from human scheduling demonstration via apprenticeship learning, then uses this initial solution to provide a tight bound on the value of the optimal solution, thereby substantially…
A tight Cramér-Rao bound for joint parameter estimation with a pure two-mode squeezed probe
NASA Astrophysics Data System (ADS)
Bradshaw, Mark; Assad, Syed M.; Lam, Ping Koy
2017-08-01
We calculate the Holevo Cramér-Rao bound for estimation of the displacement experienced by one mode of a two-mode squeezed vacuum state with squeezing r and find that it is equal to 4 exp(-2r). This equals the sum of the mean squared errors obtained from a dual homodyne measurement, indicating that the bound is tight and that the dual homodyne measurement is optimal.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method, R package version 1) with some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a popular FORTRAN implementation of the SQP method; Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2), Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation, available as part of the NLOPT collection; Johnson, 2014, The NLopt nonlinear-optimization package, retrieved from http://ab-initio.mit.edu/nlopt) are the three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
Unitarity and the three flavor neutrino mixing matrix
Parke, Stephen; Ross-Lonergan, Mark
2016-06-14
Unitarity is a fundamental property of any theory required to ensure we work in a theoretically consistent framework. In comparison with the quark sector, experimental tests of unitarity for the 3×3 neutrino mixing matrix are considerably weaker. It must be remembered that the vast majority of our information on the neutrino mixing angles originates from ν_e and ν_μ disappearance experiments, with the assumption of unitarity being invoked to constrain the remaining elements. New physics can invalidate this assumption for the 3×3 subset and thus modify our precision measurements. We also perform a reanalysis to see how global knowledge is altered when one refits oscillation results without assuming unitarity, and present 3σ ranges for allowed U_PMNS elements consistent with all observed phenomena. We calculate the bounds on the closure of the six neutrino unitarity triangles, with the closure of the ν_e and ν_μ triangle constrained to be ≤0.03, while the remaining triangles are significantly less constrained, at ≤0.1-0.2. Similarly for the row and column normalizations, we find their deviation from unity is constrained to be ≤0.2-0.4 for four out of six such normalizations, while for the ν_μ and ν_e row normalizations the deviations are constrained to be ≤0.07, all at the 3σ CL. Additionally, we emphasize that there is significant room for new low energy physics, especially in the ν_τ sector, which very few current experiments constrain directly.
Benchmarking optimization software with COPS 3.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, E. D.; More, J. J.; Munson, T. S.
2004-05-24
The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models, a chance-constrained ground-water management model and an integer-programming sampling network design model, to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) the optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information, i.e., the projected reduction in management costs, with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
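The economics of this kind of data-worth comparison can be illustrated with a toy chance-constrained example. All numbers below, including the drawdown response coefficient and the budget-versus-uncertainty schedule, are invented for illustration: with an uncertain response r ~ N(μ, σ) and a 95% reliability requirement, the chance constraint Pr(r·q ≤ h_max) ≥ 0.95 has the deterministic equivalent q ≤ h_max/(μ + 1.645σ), so sampling that reduces σ permits more pumping, and the value of that extra pumping is weighed against the sampling cost:

```python
Z95 = 1.645  # standard normal 95% quantile

def max_pumping(mu_r, sigma_r, h_max, z=Z95):
    """Deterministic equivalent of the chance constraint Pr(r*q <= h_max) >= 95%."""
    return h_max / (mu_r + z * sigma_r)

MU_R, H_MAX = 2.0, 10.0
VALUE = 100.0  # hypothetical economic value per unit of pumping

# hypothetical schedule: sampling budget -> projected post-sampling sigma
schedule = {0.0: 0.50, 10.0: 0.35, 20.0: 0.28}

base_benefit = VALUE * max_pumping(MU_R, schedule[0.0], H_MAX)
net_worth = {}
for budget, sigma in schedule.items():
    benefit = VALUE * max_pumping(MU_R, sigma, H_MAX)
    # value of the sample information minus the cost of data collection
    net_worth[budget] = (benefit - base_benefit) - budget

best_budget = max(net_worth, key=net_worth.get)
```

Repeating the calculation over a series of budgets, as in steps 2-4, yields the suite of alternatives from which the highest-net-benefit monitoring strategy is selected.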
An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.
Zhang, Yushan; Hu, Guiwu
2015-01-01
Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate runtime upper bounds for special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions whereby the average runtime of the considered EP can be no more than a polynomial of n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation of an exponential and the given polynomial of n.
Chang, Y K; Lim, H C
1989-08-20
A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed, good adaptability, and reoptimization capability. The algorithm was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization, the ethanol concentration was used as an index for the baking quality of yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state, the dilution rate was 0.310 h⁻¹, the temperature 32.8 °C, and the cellular productivity 1.50 g/L/h.
A subgradient approach for constrained binary optimization via quantum adiabatic evolution
NASA Astrophysics Data System (ADS)
Karimi, Sahar; Ronagh, Pooya
2017-08-01
The outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared to the usual penalty-term approaches.
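The subgradient iteration on a Lagrangian dual can be sketched on a tiny binary problem. In this sketch, exhaustive enumeration stands in for the quantum annealer, and the random instance and the cardinality constraint sum(x) = k are illustrative assumptions, not the paper's QSS formulation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2.0  # random symmetric binary-quadratic objective

def lagrangian_min(lam):
    """Minimize x'Qx + lam*(sum(x) - k) over binary x.
    (Exhaustive search stands in for the quantum annealer.)"""
    best_val, best_x = None, None
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        val = x @ Q @ x + lam * (x.sum() - k)
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

lam = 0.0
dual = -np.inf
for t in range(1, 80):
    val, x = lagrangian_min(lam)
    dual = max(dual, val)      # best dual lower bound found so far
    g = x.sum() - k            # subgradient of the (concave) dual function
    lam += (1.0 / t) * g       # diminishing-step subgradient ascent
```

By weak duality, the dual value never exceeds the constrained optimum, which is what makes the iteration a usable bounding scheme even when the inner solver is noisy.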
Rapid Slewing of Flexible Space Structures
2015-09-01
…axis gimbal with elastic joints. The performance of the system can be enhanced by designing antenna maneuvers in which the flexible effects are properly constrained, thus accounting for the effects of the nonlinearities so that the vibrational motion can be constrained for a time-optimal slew. It is shown that by constructing an…
Backes, Bradley J; Longenecker, Kenton; Hamilton, Gregory L; Stewart, Kent; Lai, Chunqiu; Kopecka, Hana; von Geldern, Thomas W; Madar, David J; Pei, Zhonghua; Lubben, Thomas H; Zinker, Bradley A; Tian, Zhenping; Ballaron, Stephen J; Stashko, Michael A; Mika, Amanda K; Beno, David W A; Kempf-Grote, Anita J; Black-Schaefer, Candace; Sham, Hing L; Trevillyan, James M
2007-04-01
A novel series of pyrrolidine-constrained phenethylamines were developed as dipeptidyl peptidase IV (DPP4) inhibitors for the treatment of type 2 diabetes. The cyclohexene ring of lead-like screening hit 5 was replaced with a pyrrolidine to enable parallel chemistry, and protein co-crystal structural data guided the optimization of N-substituents. Employing this strategy, a >400x improvement in potency over the initial hit was realized in rapid fashion. Optimized compounds are potent and selective inhibitors with excellent pharmacokinetic profiles. Compound 30 was efficacious in vivo, lowering blood glucose in ZDF rats that were allowed to feed freely on a mixed meal.
Statistical mechanics of budget-constrained auctions
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.
2009-07-01
Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
NASA Astrophysics Data System (ADS)
Bie, Lidong; Ryder, Isabelle; Nippress, Stuart E. J.; Bürgmann, Roland
2014-02-01
The 2008 Mw 6.3 Damxung earthquake on the Tibetan Plateau is investigated to (i) derive a coseismic slip model in a layered elastic Earth; (ii) reveal the relationship between coseismic slip, afterslip and aftershocks; and (iii) place a lower bound on mid/lower crustal viscosity. The fault parameters and coseismic slip model were derived by inversion of Envisat InSAR data. We developed an improved non-linear inversion scheme to find an optimal rupture geometry and slip distribution on a fault in a layered elastic crust. Although the InSAR data for this event cannot distinguish between homogeneous and layered crustal models, the maximum slip of the latter model is smaller and deeper, while the moment release calculated from both models is similar. A ~1.6 yr post-seismic deformation time-series starting 20 d after the main shock reveals localized deformation at the southern part of the fault. Inversions for afterslip indicate three localized slip patches, and the cumulative afterslip moment after 615 d is at least ~11 per cent of the coseismic moment. The afterslip patches are distributed at different depths along the fault, showing no obvious systematic depth-dependence. The deeper of the three patches, however, shows a slight tendency to migrate to greater depth over time. No linear correlation is found for the temporal evolution of afterslip and aftershocks. Finally, modelling of viscoelastic relaxation in a Maxwell half-space yields a lower bound of 1 × 10^18 Pa s on the viscosity of the mid/lower crust. This is consistent with viscosity estimates in other studies of post-seismic deformation across the Tibetan Plateau.
Optimization in modeling the ribs-bounded contour from computer tomography scan
NASA Astrophysics Data System (ADS)
Bilinskas, M. J.; Dzemyda, G.
2016-10-01
In this paper a method for analyzing transversal plane images from computer tomography scans is presented. A mathematical model that describes the ribs-bounded contour was created, and the problem of approximation is solved by finding the optimal parameters of the model in the least-squares sense. Such a model would be useful in the registration of images independently of the patient's position on the bed and of radio-contrast agent injection. We consider the slices where ribs are visible, because many important internal organs are located there: the liver, heart, stomach, pancreas, lungs, etc.
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of the stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong
2015-11-01
The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
2013-08-14
Keywords: Connectivity Graph; Graph Search; Bounded Disturbances; Linear Time-Varying (LTV); Clohessy-Wiltshire-Hill (CWH). …the linearization of the relative motion model given by the Hill-Clohessy-Wiltshire (HCW) equations is used [14]. A. Nonlinear equations of motion: …equations can be used to describe the motion of the debris. B. Linearized HCW equations in discrete time: For δr ≪ R, the linearized Hill-Clohessy…
A 750 GeV portal: LHC phenomenology and dark matter candidates
D’Eramo, Francesco; de Vries, Jordy; Panci, Paolo
2016-05-16
We study the effective field theory obtained by extending the Standard Model field content with two singlets: a 750 GeV (pseudo-)scalar and a stable fermion. Accounting for collider productions initiated by both gluon and photon fusion, we investigate where the theory is consistent with both the LHC diphoton excess and bounds from Run 1. We analyze dark matter phenomenology in such regions, including relic density constraints as well as collider, direct, and indirect bounds. Scalar portal dark matter models are very close to limits from direct detection and mono-jet searches if gluon fusion dominates, and not constrained at all otherwise. In conclusion, pseudo-scalar models are challenged by photon line limits and mono-jet searches in most of the parameter space.
Truncated Gaussians as tolerance sets
NASA Technical Reports Server (NTRS)
Cozman, Fabio; Krotkov, Eric
1994-01-01
This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in Statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements as fundamentally in terms of truncated Gaussians.
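The moment computations the authors derive have well-known closed forms in the scalar case; a minimal sketch using the standard formulas for a univariate truncated Gaussian (generic helper names; not the authors' code):

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_gaussian_moments(mu, sigma, lo, hi):
    """Mean and variance of N(mu, sigma^2) truncated to [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    Z = Phi(b) - Phi(a)                       # probability mass kept
    m = (phi(a) - phi(b)) / Z                 # standardized mean shift
    v = 1.0 + (a * phi(a) - b * phi(b)) / Z - m * m
    return mu + sigma * m, sigma * sigma * v

mean, var = truncated_gaussian_moments(0.0, 1.0, -1.0, 1.0)
```

Symmetric truncation leaves the mean at μ while shrinking the variance, consistent with truncation concentrating the mass.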
Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.
1987-06-01
approximation procedures for (1.1) generally rely on discretizations of E (Huang, Ziemba, and Ben-Tal (1977), Kall and Stoyan (1982), Birge and Wets…) …Wright, Practical Optimization (Academic Press, London and New York, 1981). C.C. Huang, W. Ziemba, and A. Ben-Tal, "Bounds on the expectation of a con…
TTSA: An Effective Scheduling Approach for Delay Bounded Tasks in Hybrid Clouds.
Yuan, Haitao; Bi, Jing; Tan, Wei; Zhou, MengChu; Li, Bo Hu; Li, Jianqiang
2017-11-01
The economy of scale provided by the cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for a private CDC to cost-effectively schedule delay-bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper takes into account the cost minimization problem for a private CDC in hybrid clouds, where the energy price of the private CDC and the execution price of public clouds both show temporal diversity. This paper then proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to the private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm optimization algorithm. The experimental results demonstrate that, compared with existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of the private CDC while meeting the delay bounds of all the tasks.
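TTSA itself solves a mixed-integer linear program per iteration; the role of temporal price diversity can be conveyed with a far cruder per-slot greedy split (all prices, capacities, and task counts below are invented, and this simplification ignores queueing across slots):

```python
def greedy_dispatch(tasks, private_cap, energy_price, public_price):
    """Per-slot greedy split of arriving tasks between a private CDC
    (capacity-limited, time-varying energy price per task) and a public
    cloud (pay-per-task, time-varying execution price)."""
    plan, total_cost = [], 0.0
    for t, n in enumerate(tasks):
        # use the private CDC only when it is the cheaper option this slot
        priv = min(n, private_cap) if energy_price[t] <= public_price[t] else 0
        pub = n - priv
        plan.append((priv, pub))
        total_cost += priv * energy_price[t] + pub * public_price[t]
    return plan, total_cost

# two slots: private power is cheap in slot 0, expensive in slot 1
plan, cost = greedy_dispatch([3, 2], 2, [1.0, 5.0], [2.0, 3.0])
```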
A Polyhedral Outer-approximation, Dynamic-discretization optimization solver, 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Rusell; Nagarajan, Harsha; Sundar, Kaarthik
2017-09-25
In this software, we implement an adaptive, multivariate partitioning algorithm for solving mixed-integer nonlinear programs (MINLPs) to global optimality. The algorithm combines ideas that exploit the structure of convex relaxations of MINLPs with bound tightening procedures.
Quantum key distribution with finite resources: Secret key rates via Renyi entropies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abruzzo, Silvestre; Kampermann, Hermann; Mertz, Markus
A realistic quantum key distribution (QKD) protocol necessarily deals with finite resources, such as the number of signals exchanged by the two parties. We derive a bound on the secret key rate which is expressed as an optimization problem over Renyi entropies. Under the assumption of collective attacks by an eavesdropper, a computable estimate of our bound for the six-state protocol is provided. This bound leads to improved key rates in comparison to previous results.
Application’s Method of Quadratic Programming for Optimization of Portfolio Selection
NASA Astrophysics Data System (ADS)
Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro
Investors and fund managers face the optimization of portfolio selection, that is, determining the kind and quantity of investments among several brands. We have developed a method that obtains an optimal stock portfolio two to three times faster than the conventional method with efficient universal optimization. The method is characterized by the quadratic matrix of the utility function and constraint matrices divided into several sub-matrices, obtained by focusing on the structure of these matrices.
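A mean-variance selection problem of this kind is a small quadratic program over the budget simplex; a generic projected-gradient sketch shows the structure (the covariance matrix, return vector, risk weight, and step size are invented for illustration, and this is not the authors' sub-matrix method):

```python
def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def solve_portfolio(cov, mean, lam=0.0, eta=0.1, iters=500):
    """Minimize w' Cov w - lam * mean' w over the simplex
    by projected gradient descent."""
    n = len(mean)
    w = [1.0 / n] * n
    for _ in range(iters):
        grad = [2.0 * sum(cov[i][j] * w[j] for j in range(n)) - lam * mean[i]
                for i in range(n)]
        w = project_simplex([w[i] - eta * grad[i] for i in range(n)])
    return w

# two uncorrelated assets with equal expected return -> equal weights
w = solve_portfolio([[1.0, 0.0], [0.0, 1.0]], [0.1, 0.1])
```

With unequal returns and a nonzero risk-aversion weight, the solution tilts toward the higher-return asset while staying on the simplex.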
Robust fuel- and time-optimal control of uncertain flexible space structures
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken
1993-01-01
The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method yields closer correlation with data than traditional method. Involves no assumptions regarding the γ′ᵢ and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
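When the relaxation times are fixed in advance, fitting a Prony series G(t) ≈ G∞ + Σᵢ Gᵢ·exp(−t/τᵢ) reduces to linear least squares on the coefficients; a self-contained sketch on synthetic data (the time grid, τ values, and data are invented; the NASA method additionally constrains the optimization):

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_prony(times, data, taus):
    """Least-squares Prony coefficients [G_inf, G_1, G_2] for fixed taus."""
    basis = [[1.0] + [math.exp(-t / tau) for tau in taus] for t in times]
    # normal equations  (B^T B) c = B^T y
    BtB = [[sum(row[i] * row[j] for row in basis) for j in range(3)]
           for i in range(3)]
    Bty = [sum(row[i] * y for row, y in zip(basis, data)) for i in range(3)]
    return solve3(BtB, Bty)

times = [0.5 * k for k in range(41)]                       # t = 0 .. 20
data = [0.5 + 0.3 * math.exp(-t / 1.0) + 0.2 * math.exp(-t / 10.0)
        for t in times]
coeffs = fit_prony(times, data, taus=[1.0, 10.0])
```

On noise-free synthetic data generated from the same basis, the known coefficients are recovered essentially exactly.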
Stabilizing photoassociated Cs2 molecules by optimal control
NASA Astrophysics Data System (ADS)
Zhang, Wei; Xie, Ting; Huang, Yin; Wang, Gao-Ren; Cong, Shu-Lin
2013-01-01
We demonstrate theoretically that photoassociated molecules can be stabilized to deeply bound states. This process is achieved by transferring the population from the outer well to the inner well using optimal control theory; the Cs2 molecule is taken as an example. Numerical calculations show that weakly bound molecules formed in the outer well by a pump pulse can be compressed to the inner well, via a vibrational level of the ground electronic state as an intermediary, by an additionally optimized laser pulse. A positively chirped pulse can enhance the population of the target state. With a transform-limited dump pulse, nearly all the photoassociated molecules in the inner well of the excited electronic state can be transferred to the deep vibrational level of the ground electronic state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σᵣ. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σᵣ), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_j…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies, the enabling phenomenology in video compression and encoding techniques, are inherent in the dynamic properties of diaphragm motion; together with the geometric shape of the diaphragm boundary, they provide an algebraic constraint that significantly reduces the search space of viable parabolic parameters, which can then be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraints stipulating the kinetic range of the motion, and the spatial constraint preventing unphysical deviations, made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. Automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with SD of 0.64 mm for all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images.
Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
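The geometric core, fitting a parabola to candidate boundary points by least squares, can be sketched generically (Cramer's rule on the 3×3 normal equations; the sample points are synthetic, and the paper's kinetic and spatial constraints are omitted):

```python
def fit_parabola(pts):
    """Least-squares fit of y = a*x^2 + b*x + c to (x, y) points."""
    S = [sum(x ** k for x, _ in pts) for k in range(5)]    # sums of x^0..x^4
    T = [sum(y * x ** k for x, y in pts) for k in range(3)]
    # normal equations for the quadratic model
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    rhs = [T[2], T[1], T[0]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(A)
    sol = []
    for col in range(3):                      # Cramer's rule, column by column
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = rhs[r]
        sol.append(det3(Ai) / D)
    return sol                                # [a, b, c]

a, b, c = fit_parabola([(x, 2 * x * x + 3 * x + 1) for x in range(-3, 4)])
```

In a tracking setting, the parameters fitted on one projection would additionally be constrained to lie near those of the previous frame, which is the role of the paper's algebraic constraints.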
SU-E-T-551: Monitor Unit Optimization in Stereotactic Body Radiation Therapy for Stage I Lung Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, B-T; Lu, J-Y
2015-06-15
Purpose: The study aims to reduce the monitor units (MUs) in stereotactic body radiation therapy (SBRT) treatment for lung cancer by adjusting the optimization parameters. Methods: Fourteen patients with stage I non-small cell lung cancer (NSCLC) were enrolled. Three groups of parameters were adjusted to investigate their effects on MU numbers and organs-at-risk (OAR) sparing: (1) the upper objective of the planning target volume (UOPTV); (2) the strength setting in the MU constraining objective; (3) the max MU setting in the MU constraining objective. Results: We found that the parameters in the optimizer influenced the MU numbers in a priority-, strength-, and max-MU-dependent manner. MU numbers showed a decreasing trend as the UOPTV increased. MU numbers with low, medium, and high priority for the UOPTV were 428±54, 312±48, and 258±31 MU/Gy, respectively. High priority for the UOPTV also spared the heart, cord, and lung while maintaining PTV coverage comparable to the low- and medium-priority groups. MU numbers tended to decrease as the strength increased and the max MU setting decreased. With maximum strength, the MU count reached its minimum while maintaining comparable or improved dose to the normal tissues. MU numbers continued to decline at 85% and 75% max MU settings but no longer decreased at 50% and 25%. Combined with high priority for the UOPTV and the MU constraining objectives, the MU count can be decreased to as low as 223±26 MU/Gy. Conclusion: The priority of the UOPTV and the MU constraining objective in the optimizer affect the MU numbers in SBRT treatment for lung cancer. Giving high priority to the UOPTV, setting the strength to the maximum value, and setting the max MU to 50% in the MU objective achieves the lowest MU numbers while maintaining comparable or improved OAR sparing.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
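The branching and bounding that the simulator replaces with a stochastic process can be seen concretely in a textbook serial B&B for the 0/1 knapsack problem, where the fractional relaxation supplies the pruning bound (instance data invented):

```python
def knapsack_bb(values, weights, capacity):
    """Depth-first branch-and-bound for the 0/1 knapsack problem."""
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, cap, val):
        # fractional (LP-relaxation) upper bound from item idx onward
        for i in items[idx:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    best = 0

    def dfs(idx, cap, val):
        nonlocal best
        best = max(best, val)
        if idx == len(items) or bound(idx, cap, val) <= best:
            return  # prune: the relaxation cannot beat the incumbent
        i = items[idx]
        if weights[i] <= cap:                       # branch: take item i
            dfs(idx + 1, cap - weights[i], val + values[i])
        dfs(idx + 1, cap, val)                      # branch: skip item i

    dfs(0, capacity, 0)
    return best

best = knapsack_bb([60, 100, 120], [10, 20, 30], 50)
```

A parallel version would distribute the open subproblems (the `dfs` calls) across workers, which is exactly the load-redistribution problem the simulator is built to study.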
A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.
Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas
2014-01-01
The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
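The feasibility check the article advocates is straightforward to implement; a generic sketch of testing a candidate against box constraints and repairing it by clamping (helper names are invented, not CEC'05 tooling):

```python
def within_bounds(x, lower, upper):
    """True iff every coordinate of x lies inside its [lower, upper] box."""
    return all(lo <= xi <= hi for xi, lo, hi in zip(x, lower, upper))

def clamp_to_bounds(x, lower, upper):
    """Repair a candidate by projecting each coordinate onto its interval."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

x = [6.2, -3.0, 1.5]                 # candidate with one infeasible coordinate
lo, hi = [-5.0] * 3, [5.0] * 3
repaired = clamp_to_bounds(x, lo, hi)
```

Clamping is only one repair strategy; the point of the article is that whatever strategy is used, the final reported solution must pass the feasibility check.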
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
NASA Technical Reports Server (NTRS)
Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel
2005-01-01
The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver, B.C., Canada, entitled Constrained Optimization Models For Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, at its ACADEMY DAY meeting in Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is in Ref. [2], this Paper presents the application saga of that effort.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient, recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to constant, and thus are suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550
Robert G. Haight; J. Douglas Brodie; Darius M. Adams
1985-01-01
The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper Transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
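The parametric form mentioned in the abstract is the Dinkelbach-style reduction: maximizing f(x)/g(x) is replaced by a sequence of subproblems max f(x) − λₖ·g(x), with λ updated to the current ratio. A scalar toy version (log-throughput over transmit-plus-circuit power; the circuit power constant and the grid-search inner solver are invented stand-ins for the paper's water-filling solution):

```python
import math

def dinkelbach(f, g, grid, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite grid via the parametric method."""
    lam = 0.0
    for _ in range(max_iter):
        x = max(grid, key=lambda p: f(p) - lam * g(p))  # inner subproblem
        F = f(x) - lam * g(x)
        lam = f(x) / g(x)                               # parameter update
        if F < tol:                                     # F -> 0 at the optimum
            return x, lam
    return x, lam

f = lambda p: math.log(1.0 + p)        # throughput (nats) at power p
g = lambda p: p + 0.5                  # transmit + assumed circuit power
grid = [i / 1000.0 for i in range(1, 5001)]   # p in (0, 5]
p_star, ee_star = dinkelbach(f, g, grid)
```

The sequence of λ values is non-decreasing and converges to the optimal ratio; on a finite grid the iteration terminates after a handful of updates.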
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: first, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
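Stripped of the multi-level FGA fidelity hierarchy, the particle-swarm core is compact; a minimal global-best PSO minimizing a 2-D sphere function (the hyperparameters are conventional defaults, not the paper's settings):

```python
import random

def pso(obj, dim, lo, hi, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimization (minimization)."""
    rng = random.Random(0)  # fixed seed for reproducibility
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # keep particles inside the box constraints
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso(sphere, dim=2, lo=-5.0, hi=5.0)
```

A multi-level variant would evaluate `obj` with cheap low-fidelity solvers for most particles and reserve the expensive high-fidelity solver for the most promising candidates.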
Fan, Quan-Yong; Yang, Guang-Hong
2017-01-01
The state inequality constraints have hardly been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without the requirement for the probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell Henry
2014-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling Engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2015-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
Limitations of the background field method applied to Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Nobili, Camilla; Otto, Felix
2017-09-01
We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl number and with no-slip boundary condition. There is a broad interest in bounds on the upward heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number, in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^{1/3}(ln Ra)^{1/15}; it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method one does obtain the tighter upper bound Nu ≲ Ra^{1/3}(ln ln Ra)^{1/3}, so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is first established; the parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed with the aim of solving problems that exist in ASTFA. The Gauss-Newton type method, which is applied to solve the optimization problem in ASTFA, is irreplaceable there and very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made among ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cellular function and predicting cellular response. Logic-based mathematical formalisms are relatively simple but can describe how signals propagate from one protein to the next, and they have led to models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem, owing to the lack of data relative to the size of large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre- and post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed nonlinear programming (NLP) formulation allows fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
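Constrained fuzzy logic passes signals through normalized Hill transfer functions. As a toy illustration of the kind of continuous fitting problem the NLP reformulation solves, the sketch below fits one edge's (k, n) parameters to data by least-squares grid search; the function names and grid are hypothetical, and the actual formulation optimizes all edges of a network simultaneously with a nonlinear solver.

```python
def hill(x, k, n):
    """Normalized Hill transfer function used in constrained fuzzy logic:
    maps an input activity x in [0, 1] to [0, 1], with hill(1, k, n) = 1."""
    if x == 0.0:
        return 0.0
    return (x ** n / (k ** n + x ** n)) * (k ** n + 1.0)

def fit_hill(xs, ys):
    """Coarse grid search for the (k, n) pair minimizing squared error;
    a stand-in for the nonlinear-programming step described above."""
    best = None
    for n in range(1, 9):
        for i in range(1, 11):
            k = i / 10
            sse = sum((hill(x, k, n) - y) ** 2 for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, k, n)
    return best[1], best[2]
```

For data generated from hill(x, 0.5, 3), the grid search recovers k = 0.5 and n = 3 exactly, since those values lie on the grid.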
Transcription Factors Bind Thousands of Active and Inactive Regions in the Drosophila Blastoderm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xiao-Yong; MacArthur, Stewart; Bourgon, Richard
2008-01-10
Identifying the genomic regions bound by sequence-specific regulatory factors is central both to deciphering the complex DNA cis-regulatory code that controls transcription in metazoans and to determining the range of genes that shape animal morphogenesis. Here, we use whole-genome tiling arrays to map sequences bound in Drosophila melanogaster embryos by the six maternal and gap transcription factors that initiate anterior-posterior patterning. We find that these sequence-specific DNA binding proteins bind with quantitatively different specificities to highly overlapping sets of several thousand genomic regions in blastoderm embryos. Specific high- and moderate-affinity in vitro recognition sequences for each factor are enriched in bound regions. This enrichment, however, is not sufficient to explain the pattern of binding in vivo and varies in a context-dependent manner, demonstrating that higher-order rules must govern targeting of transcription factors. The more highly bound regions include all of the over forty well-characterized enhancers known to respond to these factors as well as several hundred putative new cis-regulatory modules clustered near developmental regulators and other genes with patterned expression at this stage of embryogenesis. The new targets include most of the microRNAs (miRNAs) transcribed in the blastoderm, as well as all major zygotically transcribed dorsal-ventral patterning genes, whose expression we show to be quantitatively modulated by anterior-posterior factors. In addition to these highly bound regions, there are several thousand regions that are reproducibly bound at lower levels. However, these poorly bound regions are, collectively, far more distant from genes transcribed in the blastoderm than highly bound regions; are preferentially found in protein-coding sequences; and are less conserved than highly bound regions.
Together these observations suggest that many of these poorly bound regions are not involved in early-embryonic transcriptional regulation, and a significant proportion may be nonfunctional. Surprisingly, for five of the six factors, their recognition sites are not unambiguously more constrained evolutionarily than the immediate flanking DNA, even in more highly bound and presumably functional regions, indicating that comparative DNA sequence analysis is limited in its ability to identify functional transcription factor targets.
A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: strategy, optimizer, and one-dimensional search. Several options are available at each level, so that nearly 100 possible combinations can be created. One example of an available combination is the Augmented Lagrange Multiplier strategy combined with the BFGS variable-metric method for unconstrained minimization and polynomial interpolation for the one-dimensional search.
Viète's Formula and an Error Bound without Taylor's Theorem
ERIC Educational Resources Information Center
Boucher, Chris
2018-01-01
This note presents a derivation of Viète's classic product approximation of pi that relies only on the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
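As a numerical companion (not part of the note), Viète's product can be evaluated with nothing more than repeated square roots, using the recursion a_1 = √2, a_{k+1} = √(2 + a_k):

```python
import math

def viete_pi(n_terms):
    """Approximate pi by the first n_terms factors of Viète's product:
    2/pi = (sqrt(2)/2) * (sqrt(2 + sqrt(2))/2) * ..."""
    a = math.sqrt(2.0)
    product = a / 2.0
    for _ in range(n_terms - 1):
        a = math.sqrt(2.0 + a)
        product *= a / 2.0
    return 2.0 / product
```

With 20 terms the error is already below 10^-10, consistent with the exponential (roughly factor-of-4 per term) convergence the note's bound reveals.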
ERIC Educational Resources Information Center
Brusco, Michael J.
2002-01-01
Developed a branch-and-bound algorithm that can be used to seriate a symmetric dissimilarity matrix by identifying a reordering of the rows and columns of the matrix that optimizes an anti-Robinson criterion. Computational results suggest that, with respect to computational efficiency, the approach is generally competitive with dynamic programming. (SLD)
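The abstract names the method but not its details, so the following is only a hypothetical miniature of the idea: count anti-Robinson violations of a candidate ordering (in a perfect anti-Robinson form, dissimilarity never decreases moving away from the main diagonal), and prune partial orderings whose violation count already reaches the best complete ordering found. The bound is valid because violations among already placed objects can never be undone by later placements.

```python
def ar_violations(D, order):
    """Count anti-Robinson violations: for placed positions i < j < k, the
    'outer' dissimilarity D[order[i]][order[k]] should be at least as large
    as each 'inner' one."""
    v, m = 0, len(order)
    for i in range(m):
        for j in range(i + 1, m):
            for k in range(j + 1, m):
                outer = D[order[i]][order[k]]
                if outer < D[order[i]][order[j]]:
                    v += 1
                if outer < D[order[j]][order[k]]:
                    v += 1
    return v

def seriate(D):
    """Branch-and-bound over partial orderings of the n objects."""
    n = len(D)
    best_order = list(range(n))
    best_v = ar_violations(D, best_order)
    def branch(prefix, remaining):
        nonlocal best_order, best_v
        if not remaining:
            v = ar_violations(D, prefix)
            if v < best_v:
                best_order, best_v = prefix[:], v
            return
        for obj in sorted(remaining):
            prefix.append(obj)
            # prune: the prefix's violations are a lower bound for any completion
            if ar_violations(D, prefix) < best_v:
                branch(prefix, remaining - {obj})
            prefix.pop()
    branch([], set(range(n)))
    return best_order, best_v
```

For distances between points on a line, any monotone ordering of the points is perfectly anti-Robinson, so the algorithm should return zero violations.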
KOI-3278: a self-lensing binary star system.
Kruse, Ethan; Agol, Eric
2014-04-18
Over 40% of Sun-like stars are bound in binary or multistar systems. Stellar remnants in edge-on binary systems can gravitationally magnify their companions, as predicted 40 years ago. By using data from the Kepler spacecraft, we report the detection of such a "self-lensing" system, in which a 5-hour pulse of 0.1% amplitude occurs every orbital period. The white dwarf stellar remnant and its Sun-like companion orbit one another every 88.18 days, a long period for a white dwarf-eclipsing binary. By modeling the pulse as gravitational magnification (microlensing) along with Kepler's laws and stellar models, we constrain the mass of the white dwarf to be ~63% of the mass of our Sun. Further study of this system, and any others discovered like it, will help to constrain the physics of white dwarfs and binary star evolution.
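As a rough consistency check (not from the paper), the quoted numbers hang together under the standard self-lensing approximation that the fractional brightening is about 2 (R_E / R_★)², where R_E is the white dwarf's Einstein radius at the orbital separation. The companion is taken here to be one solar mass and one solar radius; that is an assumption, since the paper fits stellar models.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
C = 2.998e8          # speed of light, m/s

def pulse_amplitude(m_wd, m_star, r_star, period):
    """Fractional brightening when the white dwarf passes in front of its
    companion, approximating the magnification excess as 2 (R_E / R_star)^2."""
    # Kepler's third law for the orbital separation
    a = (G * (m_wd + m_star) * period ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    # Einstein radius of the white dwarf lens at that separation
    r_e = math.sqrt(4.0 * G * m_wd * a / C ** 2)
    return 2.0 * (r_e / r_star) ** 2
```

Plugging in a white dwarf of ~0.63 solar masses and the 88.18-day period gives an amplitude of about 10^-3, in line with the observed 0.1% pulse.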
Sub-TeV quintuplet minimal dark matter with left-right symmetry
NASA Astrophysics Data System (ADS)
Agarwalla, Sanjib Kumar; Ghosh, Kirtiman; Patra, Ayon
2018-05-01
A detailed study of a fermionic quintuplet dark matter candidate in a left-right symmetric scenario is performed in this article. The minimal quintuplet dark matter model is highly constrained by the WMAP dark matter relic density (RD) data. To relax this constraint, an extra singlet scalar is introduced, which opens up a host of new annihilation and co-annihilation channels for the dark matter, allowing even sub-TeV masses. The phenomenology of this singlet scalar is studied in detail in the context of the Large Hadron Collider (LHC) experiment. The production and decay of the singlet scalar at the LHC give rise to interesting resonant di-Higgs or diphoton final states. We also constrain the RD-allowed parameter space of this model in light of the ATLAS bounds on the resonant di-Higgs and diphoton cross sections.
Sensitivity to neutrino decay with atmospheric neutrinos at the INO-ICAL detector
NASA Astrophysics Data System (ADS)
Choubey, Sandhya; Goswami, Srubabati; Gupta, Chandan; Lakshmi, S. M.; Thakore, Tarak
2018-02-01
Sensitivity of the magnetized Iron Calorimeter (ICAL) detector at the proposed India-based Neutrino Observatory (INO) to invisible decay of the mass eigenstate ν3 using atmospheric neutrinos is explored. A full three-generation analysis including Earth matter effects is performed in a framework with both decay and oscillations. The wide energy range and baselines offered by atmospheric neutrinos are shown to be excellent for constraining the ν3 lifetime. We find that with an exposure of 500 kton-yr the ICAL atmospheric experiment could constrain the ν3 lifetime to τ3/m3 > 1.51 × 10^-10 s/eV at the 90% C.L. This is 2 orders of magnitude tighter than the bound from MINOS. The effect of invisible decay on the precision measurement of θ23 and |Δm^2_32| is also studied.