Science.gov

Sample records for convex optimization problem

  1. ε-optimality conditions for weakly convex problems

    SciTech Connect

    Pappalardo, M.

    1994-12-31

    There are several generalizations of the concept of convexity, both for sets and for functions. Weak convexity, among these, has shown many possibilities of application and many theoretical properties. It has, in fact, been applied in several fields of mathematics: see, for example, geometry and optimization. We analyze this generalization of the concept of convexity via the image-space approach. This kind of approach has shown its utility in many fields of optimization. In particular, we introduce a new concept of "image" based on a suitable relaxation or reduction (lower and upper) of the image itself. Moreover, we analyze the main properties of this concept and show how to use it in the study of weakly convex constrained extremum problems in order to obtain ε-optimality conditions. The paper is divided into three parts: in the first we introduce the concept of perturbed image and investigate its main theoretical properties; in the second we state ε-optimality conditions for weakly convex constrained extremum problems; in the third we study relationships between this type of image and the augmented Lagrangian.

  2. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm.

    PubMed

    Sidky, Emil Y; Jørgensen, Jakob H; Pan, Xiaochuan

    2012-05-21

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1-26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity x-ray illumination is presented.
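
    As an illustration of the kind of prototyping described above, the sketch below applies the generic Chambolle-Pock primal-dual iteration to a small l1-regularized least-squares problem; the operator, data, and step sizes are illustrative assumptions, not taken from the paper.

```python
# Sketch: generic Chambolle-Pock (CP) primal-dual iteration applied to
# min_x 0.5*||A x - b||^2 + lam*||x||_1, written as min_x F(K x) + G(x)
# with K = A, F(y) = 0.5*||y - b||^2, G(x) = lam*||x||_1.
# The problem instance below is illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 60, 120, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2)            # operator norm of K = A
tau = sigma = 0.9 / L               # step sizes with tau*sigma*L^2 < 1
theta = 1.0

def prox_Fstar(y, sigma):
    # prox of sigma*F* for F(y) = 0.5*||y - b||^2
    return (y - sigma * b) / (1.0 + sigma)

def prox_G(x, tau):
    # soft-thresholding: prox of tau*lam*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau * lam, 0.0)

x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
for _ in range(500):
    y = prox_Fstar(y + sigma * (A @ x_bar), sigma)
    x_new = prox_G(x - tau * (A.T @ y), tau)
    x_bar = x_new + theta * (x_new - x)
    x = x_new

print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum())
```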

  3. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474

  4. Gradient vs. approximation design optimization techniques in low-dimensional convex problems

    NASA Astrophysics Data System (ADS)

    Fedorik, Filip

    2013-10-01

    The application of design optimization methods in structural design is an effective way to obtain efficient solutions to practical problems. The implementation of optimization techniques in multi-physics software allows designers to use them in a wide range of engineering problems. These methods are usually based on modified mathematical programming techniques and/or their combinations, which improves their universality and robustness across various human and technical problems. The presented paper analyzes the optimization methods and tools provided by the Design Optimization module of the Ansys program for one- to three-dimensional strictly convex optimization problems. The First Order method, based on a combination of the steepest descent and conjugate gradient methods, and the Subproblem Approximation method, which uses approximations of the dependent variables' functions, are analyzed together with the supporting Random, Sweep, Factorial and Gradient tools, and different characteristics of the methods are observed.

  5. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    PubMed

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
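
    The record above describes a specific two-layer network; as a rough illustration of how such neurodynamic optimizers are typically simulated, the sketch below integrates a textbook projection-type primal-dual flow (not the authors' model) for a convex quadratic program with equality and nonnegativity constraints. The problem data and step size are assumptions.

```python
# Sketch of a generic projection-type primal-dual flow (not the two-layer
# model of the paper) for: min f(x)  s.t.  A x = b,  x >= 0,
#   dx/dt = P_+( x - (grad f(x) + A^T y) ) - x,   dy/dt = A x - b,
# integrated with explicit Euler steps. Problem data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 4
Q = rng.standard_normal((n, n)); Q = Q.T @ Q + np.eye(n)   # convex quadratic f(x) = 0.5 x'Qx + c'x
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
x_feas = np.abs(rng.standard_normal(n))                    # guarantees a feasible point exists
b = A @ x_feas

grad_f = lambda x: Q @ x + c
proj_pos = lambda z: np.maximum(z, 0.0)                    # projection onto x >= 0

x = np.zeros(n); y = np.zeros(m)
dt = 1e-3
for _ in range(200_000):
    dx = proj_pos(x - (grad_f(x) + A.T @ y)) - x
    dy = A @ x - b
    x += dt * dx
    y += dt * dy

print("equality residual:", np.linalg.norm(A @ x - b))
print("min(x):", x.min(), " objective:", 0.5 * x @ Q @ x + c @ x)
```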

  6. The optimal solution of a non-convex state-dependent LQR problem and its applications.

    PubMed

    Xu, Xudan; Zhu, J Jim; Zhang, Ping

    2014-01-01

    This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR) problem, in which the control penalty weighting matrix [Formula: see text] in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established with a rigorous proof based on the Euler-Lagrange equation. It is found that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE) simultaneously with the closed-loop system equation. A Comparison Theorem for the PDRE is given to facilitate solution methods for the PDRE. A linear time-variant system is employed as an example in simulation to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as a NSLQR problem and two typical goal pursuit behaviors found in humans and animals are reproduced using different control weighting [Formula: see text]. It is found that these two behaviors save control energy and cause less stress than the Conventional Control Behavior typified by the LQR control with a constant control weighting [Formula: see text], in situations where only the goal discrepancy at the terminal time is of concern, such as in marathon races and target-hitting missions.

  7. The Optimal Solution of a Non-Convex State-Dependent LQR Problem and Its Applications

    PubMed Central

    Xu, Xudan; Zhu, J. Jim; Zhang, Ping

    2014-01-01

    This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR) problem, in which the control penalty weighting matrix in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established with a rigorous proof based on the Euler-Lagrange equation. It is found that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE) simultaneously with the closed-loop system equation. A Comparison Theorem for the PDRE is given to facilitate solution methods for the PDRE. A linear time-variant system is employed as an example in simulation to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as a NSLQR problem and two typical goal pursuit behaviors found in humans and animals are reproduced using different control weightings. It is found that these two behaviors save control energy and cause less stress than the Conventional Control Behavior typified by the LQR control with a constant control weighting, in situations where only the goal discrepancy at the terminal time is of concern, such as in marathon races and target-hitting missions. PMID:24747417

  8. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    PubMed

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.

  9. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint that allows performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
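
    As a rough sketch of the general idea (not the authors' algorithm, which adds null-space transformations and slice sampling), the code below generates near-optimal alternatives for a toy non-convex problem by hit-and-run with interval shrinking along each random direction; the test problem, tolerance, and shrinkage rule are assumptions.

```python
# Sketch of hit-and-run sampling of a near-optimal region (illustrative only;
# the paper's method additionally uses null-space transformations for linear
# equality constraints and slice sampling for non-linear constraints).
import numpy as np

rng = np.random.default_rng(2)

def objective(x):                       # toy non-convex objective
    return np.sum(x**2) + np.sin(3.0 * x).sum()

def feasible(x, f_best, tol=0.10):
    # near-optimal region: bound constraints plus objective within tolerance of f_best
    in_bounds = np.all(x >= -2.0) and np.all(x <= 2.0)
    return in_bounds and objective(x) <= f_best + tol * abs(f_best)

f_best = objective(np.full(3, -0.5))    # stand-in for a known (near-)optimal value

def hit_and_run(x0, n_samples=1000, width=4.0):
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)          # random direction on the unit sphere
        lo, hi = -width, width          # initial step interval, shrunk on rejection
        while True:
            t = rng.uniform(lo, hi)
            if feasible(x + t * d, f_best):
                x = x + t * d
                samples.append(x.copy())
                break
            if t < 0.0:
                lo = t                   # shrink the interval toward the current point
            else:
                hi = t
            if hi - lo < 1e-9:           # give up on this direction
                samples.append(x.copy())
                break
    return np.array(samples)

alts = hit_and_run(np.full(3, -0.5))
print(alts.shape, "alternatives; worst objective:", max(objective(a) for a in alts))
```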

  10. Convexity.

    ERIC Educational Resources Information Center

    Berger, Marcel

    1990-01-01

    Discussed are the idea, examples, problems, and applications of convexity. Topics include historical examples, definitions, the John-Loewner ellipsoid, convex functions, polytopes, the algebraic operation of duality and addition, and topology of convex bodies. (KR)

  12. CVXPY: A Python-Embedded Modeling Language for Convex Optimization.

    PubMed

    Diamond, Steven; Boyd, Stephen

    2016-04-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
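
    A minimal usage example in the natural syntax the abstract refers to; the toy problem below is an assumption, not taken from the paper.

```python
# Minimal CVXPY example: non-negative least squares with a sum constraint.
# The problem instance is a toy example, not taken from the paper.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

x = cp.Variable(10, nonneg=True)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [cp.sum(x) <= 5]
problem = cp.Problem(objective, constraints)
problem.solve()

print(problem.status, problem.value)
print(x.value)
```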

  13. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369

  14. Sparse recovery via convex optimization

    NASA Astrophysics Data System (ADS)

    Randall, Paige Alicia

    This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from MRI to genomics to compressed sensing. We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message. We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of √s, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence, where the coherence can only be as large as O(1/√m) to allow sparsities as large as O(√m). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition. We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property. Finally, we introduce the method of l1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfies a uniform uncertainty principle and we compare our results with the more standard l1 synthesis approach. All our methods involve solving an l1 minimization
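
    A minimal sketch of the standard noise-aware l1 recovery program that this line of work builds on (basis pursuit denoising); the dimensions, sparsity level, and noise level are illustrative assumptions.

```python
# Sketch of the standard l1 recovery program used in sparse recovery:
#   minimize ||x||_1  subject to  ||A x - y||_2 <= eps
# Dimensions, sparsity and noise level below are illustrative.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n, s = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
noise = 0.01 * rng.standard_normal(m)
y = A @ x_true + noise
eps = 1.1 * np.linalg.norm(noise)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(A @ x - y, 2) <= eps])
prob.solve()

print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```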

  15. Directional Convexity and Finite Optimality Conditions.

    DTIC Science & Technology

    1984-03-01

    ...system, necessary conditions for optimality. Work Unit Number 5 (Optimization and Large Scale Systems). ...that R(T) is convex would then imply x(u,T) ∈ int R(T). (Istituto di Matematica Applicata, Università di Padova, 35100 Italy.)

  16. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. o How to adapt boosting to regression problems

  17. First and second order convex approximation strategies in structural optimization

    NASA Technical Reports Server (NTRS)

    Fleury, C.

    1989-01-01

    In this paper, various methods based on convex approximation schemes are discussed that have demonstrated strong potential for efficient solution of structural optimization problems. First, the convex linearization method (Conlin) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both Conlin and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semiempirical rules. Attention is next directed toward methods that use diagonal second derivatives in order to provide a sound basis for building up high-quality explicit approximations of the behavior constraints. In particular, it is shown how second-order information can be effectively used without demanding a prohibitive computational cost. Various first-order and second-order approaches are compared by applying them to simple problems that have a closed form solution.
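
    For reference, the Conlin-type convex linearization mentioned above is commonly written in the following textbook form (notation assumed, not quoted from the paper): variables with positive derivatives are linearized directly, while those with negative derivatives are linearized in the reciprocal variables, giving a separable convex approximation.

```latex
% Standard Conlin-type convex linearization about a design point x^0
% (textbook form; notation is assumed, not quoted from the paper).
\tilde{f}(x) = f(x^0)
  + \sum_{\partial f/\partial x_i > 0} \left.\frac{\partial f}{\partial x_i}\right|_{x^0} \left( x_i - x_i^0 \right)
  - \sum_{\partial f/\partial x_i < 0} \left.\frac{\partial f}{\partial x_i}\right|_{x^0} (x_i^0)^2 \left( \frac{1}{x_i} - \frac{1}{x_i^0} \right)
```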

  18. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  19. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
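
    A heavily simplified sketch of the convex formulation (constant gravity, fixed mass, fixed final time, discretized dynamics), intended only to show the second-order cone structure; the paper's method additionally handles the irregular asteroid gravity field through relaxation and a successive solution process. All numbers below are illustrative.

```python
# Sketch: a heavily simplified fuel-optimal powered-descent problem as a
# second-order cone program (constant gravity, fixed mass, fixed final time).
# The asteroid gravity model and relaxation steps of the paper are not shown.
import cvxpy as cp
import numpy as np

N, dt = 40, 1.0                       # discretization (illustrative)
g = np.array([0.0, 0.0, -0.001])      # constant-gravity stand-in (km/s^2)
u_max = 0.01                          # max thrust acceleration (km/s^2)
r0 = np.array([1.0, 0.5, 2.0]); v0 = np.array([-0.01, 0.0, -0.02])
rf = np.zeros(3); vf = np.zeros(3)    # soft landing at the origin

r = cp.Variable((N + 1, 3)); v = cp.Variable((N + 1, 3)); u = cp.Variable((N, 3))

cons = [r[0] == r0, v[0] == v0, r[N] == rf, v[N] == vf]
for k in range(N):
    a_k = u[k] + g
    cons += [v[k + 1] == v[k] + dt * a_k,
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * a_k,
             cp.norm(u[k]) <= u_max,          # thrust magnitude bound (SOC)
             r[k, 2] >= 0.0]                  # stay above the surface plane

fuel = dt * sum(cp.norm(u[k]) for k in range(N))   # proxy for propellant use
prob = cp.Problem(cp.Minimize(fuel), cons)
prob.solve()
print(prob.status, "propellant proxy:", prob.value)
```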

  20. Approximate proximal point methods for convex programming problems

    SciTech Connect

    Eggermont, P.

    1994-12-31

    We study proximal point methods for the finite-dimensional convex programming problem: minimize f(x) such that x ∈ C, where f : dom f ⊂ Rⁿ → R is a proper convex function and C ⊂ Rⁿ is a closed convex set.
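
    A minimal sketch of the proximal point iteration for such a problem, with a box set C so that each regularized subproblem can be solved by a bounded quasi-Newton method; the objective, the set C, and the parameters are illustrative assumptions.

```python
# Sketch of the proximal point iteration
#   x_{k+1} = argmin_{x in C} f(x) + (1/(2*lam)) * ||x - x_k||^2
# for a smooth convex f and a box C, with each subproblem solved by L-BFGS-B.
# The objective, the box and the parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 20
A = rng.standard_normal((30, n)); b = rng.standard_normal(30)
bounds = [(-1.0, 1.0)] * n                     # C = box [-1, 1]^n
lam = 1.0

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad_f(x):
    return A.T @ (A @ x - b)

x = np.zeros(n)
for k in range(30):
    xk = x.copy()
    sub = lambda z: f(z) + 0.5 / lam * np.sum((z - xk) ** 2)
    sub_grad = lambda z: grad_f(z) + (z - xk) / lam
    res = minimize(sub, xk, jac=sub_grad, method="L-BFGS-B", bounds=bounds)
    x = res.x

print("final objective:", f(x))
```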

  1. Hausdorff methods for approximating the convex Edgeworth-Pareto hull in integer problems with monotone objectives

    NASA Astrophysics Data System (ADS)

    Pospelov, A. I.

    2016-08-01

    Adaptive methods for the polyhedral approximation of the convex Edgeworth-Pareto hull in multiobjective monotone integer optimization problems are proposed and studied. For these methods, theoretical convergence rate estimates with respect to the number of vertices are obtained. The estimates coincide in order with those for filling and augmentation H-methods intended for the approximation of nonsmooth convex compact bodies.

  2. Nonlinear Rescaling and Proximal-Like Methods in Convex Optimization

    NASA Technical Reports Server (NTRS)

    Polyak, Roman; Teboulle, Marc

    1997-01-01

    The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their optimal set of solutions coincides. A nonlinear transformation parameterized by a positive scalar parameter and based on a smooth scaling function is used to transform the constraints. The methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular a new class of exponential penalty-modified barrier functions methods is introduced.

  4. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which has not been observed in previous work. Our theoretical results are backed by thorough numerical studies.

  5. From a Nonlinear, Nonconvex Variational Problem to a Linear, Convex Formulation

    SciTech Connect

    Egozcue, J.; Meziat, R.; Pedregal, P.

    2002-12-19

    We propose a general approach to deal with nonlinear, nonconvex variational problems based on a reformulation of the problem resulting in an optimization problem with a linear cost functional and convex constraints. As a first step, we explicitly apply these ideas to some one-dimensional variational problems and obtain specific conclusions of an analytical and numerical nature.

  6. Optimality Certificates for Convex Minimization and Helly Numbers

    DTIC Science & Technology

    2016-10-20

    ...duality theory for general convex mixed-integer problems. The approach taken by Moran et al. was essentially algebraic, drawing on the theory of... Mathematical Programming, 124:143-174, 2010. [8] Cor A. J. Hurkens. Blowing up convex sets in the plane. Linear Algebra and its Applications, 134:121-128.

  7. A Cutting Plane Algorithm for Problems Containing Convex and Reverse Convex Constraints,

    DTIC Science & Technology

    The method of cut generation used in this paper was initially described by Tui for minimizing a concave function subject to linear constraints. Balas... Glover, and Young have recognized the applicability of such 'convexity cuts' to integer problems. This paper shows that these cuts can be used in the solution of an even larger class of nonconvex problems.

  8. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), modeled by a nonautonomous differential inclusion. Unlike the existing neural network for CQBPPs, the model has the smallest number of state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like methods, the sequence of limit equilibrium points of the proposed neural network can approximately converge to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Statistical Mechanics of Optimal Convex Inference in High Dimensions

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Ganguli, Surya

    2016-07-01

    A fundamental problem in modern high-dimensional data analysis involves efficiently inferring a set of P unknown model parameters governing the relationship between the inputs and outputs of N noisy measurements. Various methods have been proposed to regress the outputs against the inputs to recover the P parameters. What are fundamental limits on the accuracy of regression, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we optimally combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density α = (N/P) → ∞. However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite α. We employ replica theory to answer these questions for a class of inference algorithms, known in the statistics literature as M-estimators. These algorithms attempt to recover the P model parameters by solving an optimization problem involving minimizing the sum of a loss function that penalizes deviations between the data and model predictions, and a regularizer that leverages prior information about model parameters. Widely cherished algorithms like maximum likelihood (ML) and maximum a posteriori (MAP) inference arise as special cases of M-estimators. Our analysis uncovers fundamental limits on the inference accuracy of a subclass of M-estimators corresponding to computationally tractable convex optimization problems. These limits generalize classical statistical theorems like the Cramer-Rao bound to the high-dimensional setting with prior information. We further discover the optimal M-estimator for log-concave signal and noise distributions; we demonstrate that it can achieve our high-dimensional limits on inference accuracy, while ML and MAP cannot. Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than
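
    A concrete member of the M-estimator family analyzed here (a loss on the residuals plus a regularizer encoding prior information), posed as a convex program; the data, the Huber loss, and the penalty weight are illustrative assumptions.

```python
# A small convex M-estimator: Huber loss on the residuals plus an l1
# regularizer on the parameters (data and penalty weight are illustrative).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
N, P = 100, 40
X = rng.standard_normal((N, P))
w_true = np.zeros(P); w_true[:5] = 2.0
y = X @ w_true + 0.5 * rng.standard_normal(N)

w = cp.Variable(P)
loss = cp.sum(cp.huber(X @ w - y, M=1.0))      # robust loss on residuals
reg = cp.norm(w, 1)                            # prior: sparsity-promoting penalty
prob = cp.Problem(cp.Minimize(loss + 0.5 * reg))
prob.solve()

print("estimation error:", np.linalg.norm(w.value - w_true))
```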

  10. A tractable approximation of non-convex chance constrained optimization with non-Gaussian uncertainties

    NASA Astrophysics Data System (ADS)

    Geletu, Abebe; Klöppel, Michael; Hoffmann, Armin; Li, Pu

    2015-04-01

    Chance constrained optimization problems in engineering applications possess highly nonlinear process models and non-convex structures. As a result, solving a nonlinear non-convex chance constrained optimization (CCOPT) problem remains a challenging task. The major difficulty lies in the evaluation of probability values and gradients of inequality constraints which are nonlinear functions of stochastic variables. This article proposes a novel analytic approximation to improve the tractability of smooth non-convex chance constraints. The approximation uses a smooth parametric function to define a sequence of smooth nonlinear programs (NLPs). The sequence of optimal solutions of these NLPs remains always feasible and converges to the solution set of the CCOPT problem. Furthermore, Karush-Kuhn-Tucker (KKT) points of the approximating problems converge to a subset of KKT points of the CCOPT problem. Another feature of this approach is that it can handle uncertainties with both Gaussian and/or non-Gaussian distributions.

  11. A high-performance feedback neural network for solving convex nonlinear programming problems.

    PubMed

    Leung, Yee; Chen, Kai-Zhou; Gao, Xing-Bao

    2003-01-01

    Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.

  12. Optimization-based mesh correction with volume and convexity constraints

    SciTech Connect

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-02-24

    In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.

  13. Estimation of Saxophone Control Parameters by Convex Optimization

    PubMed Central

    Wang, Cheng-i; Smyth, Tamara; Lipton, Zachary C.

    2015-01-01

    In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not merely one of estimating pitch, as one applied fingering can be used to produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely by the spectral envelope of the produced sound (as it might for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurement of the saxophone configured with all possible fingerings and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering), and a quasi-static reed model generating input pressure at the mouthpiece, with control parameters being blowing pressure and reed stiffness. Applied fingering and reed parameters are estimated for a given recording by formalizing a minimization problem, where the cost function is the error between the recording and the synthesized sound produced by the model having incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and not differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values. PMID:27754493

  14. The problem of convexity of Chebyshev sets

    NASA Astrophysics Data System (ADS)

    Balaganskii, V. S.; Vlasov, L. P.

    1996-12-01

    Contents Introduction §1. Definitions and notation §2. Reference theorems §3. Some results Chapter I. Characterization of Banach spaces by means of the relations between approximation properties of sets §1. Existence, uniqueness §2. From approximate compactness to 'sun'-property §3. From 'sun'-property to approximate compactness §4. Differentiability in the direction of the gradient is sufficient for Fréchet and Gâteaux differentiability §5. Sets with convex complement Chapter II. The structure of Chebyshev and related sets §1. The isolated point method §2. Restrictions of the type |W̄| < |X| §3. The case where M is locally compact §4. The case where W lies in a hyperplane §5. Other cases Chapter III. Selected results §1. Some applications of the theory of monotone operators §2. A non-convex Chebyshev set in pre-Hilbert space §3. The example of Klee (discrete Chebyshev set) §4. A survey of some other results Conclusion Bibliography

  15. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.

  16. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed

  17. Convergence of the gradient projection method and Newton's method as applied to optimization problems constrained by intersection of a spherical surface and a convex closed set

    NASA Astrophysics Data System (ADS)

    Chernyaev, Yu. A.

    2016-10-01

    The gradient projection method and Newton's method are generalized to the case of nonconvex constraint sets representing the set-theoretic intersection of a spherical surface with a convex closed set. Necessary extremum conditions are examined, and the convergence of the methods is analyzed.

  18. The Existence Problem for Steiner Networks in Strictly Convex Domains

    NASA Astrophysics Data System (ADS)

    Freire, Alexandre

    2011-05-01

    We consider the existence problem for `Steiner networks' (trivalent graphs with 2π/3 angles at each junction) in strictly convex domains, with `Neumann' boundary conditions. For each of the three combinatorial possibilities, sufficient conditions on the domain are derived for existence. In addition, in each case explicit examples of nonexistence are given.

  19. Lagrange Duality Theory for Convex Control Problems,

    DTIC Science & Technology

    to be optimal is also given. The dual variables p and v corresponding to the system dynamics and state constraints are proved to be of bounded variation, while the multiplier corresponding to the control constraints is proved to lie in L1. Finally, a control and state minimum principle is proved. If

  20. A partially inexact bundle method for convex semi-infinite minmax problems

    NASA Astrophysics Data System (ADS)

    Fuduli, Antonio; Gaudioso, Manlio; Giallombardo, Giovanni; Miglionico, Giovanna

    2015-04-01

    We present a bundle method for solving convex semi-infinite minmax problems which allows inexact solution of the inner maximization. The method is of the partially inexact oracle type, and it is aimed at reducing the occurrence of null steps and at improving bundle handling with respect to existing methods. Termination of the algorithm is proved at a point satisfying an approximate optimality criterion, and the results of some numerical experiments are also reported.

  1. Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization

    DTIC Science & Technology

    2014-12-01

    It has been shown in [14] that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method...introduce two new variants of level methods, i.e., the fast APL (FAPL) method and the fast USL (FUSL) method, for solving large scale black-box and...structured convex programming problems respectively. Both FAPL and FUSL enjoy the same optimal iteration complexity as APL and USL, while the number of

  2. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.

  3. Numerical optimization method for packing regular convex polygons

    NASA Astrophysics Data System (ADS)

    Galiev, Sh. I.; Lisafina, M. S.

    2016-08-01

    An algorithm is presented for the approximate solution of the problem of packing regular convex polygons in a given closed bounded domain G so as to maximize the total area of the packed figures. On G a grid is constructed whose nodes generate a finite set W on G, and the centers of the figures to be packed can be placed only at some points of W. The problem of packing these figures with centers in W is reduced to a 0-1 linear programming problem. A two-stage algorithm for solving the resulting problems is proposed. The algorithm finds packings of the indicated figures in an arbitrary closed bounded domain on the plane. Numerical results are presented that demonstrate the effectiveness of the method.
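
    A sketch of the 0-1 linear programming formulation described above: candidate centers on a grid, a binary variable per candidate, pairwise non-overlap constraints, and total packed area as the objective. The overlap test below conservatively uses circumscribed circles, and the domain, grid, and polygon are illustrative assumptions.

```python
# Sketch of the 0-1 linear programming formulation: place regular polygons
# with centers restricted to grid nodes, forbid overlapping pairs, and
# maximize the total packed area. The overlap test conservatively uses
# circumscribed circles; the domain, grid and polygon are illustrative.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds
from itertools import combinations

R = 1.0                                   # circumradius of the regular polygon
k_sides = 6                               # regular hexagons
area = 0.5 * k_sides * R**2 * np.sin(2 * np.pi / k_sides)

# candidate centers: grid nodes inside an 8 x 8 square, at least R from the border
xs = np.arange(R, 8.0 - R + 1e-9, 1.0)
centers = np.array([(x, y) for x in xs for y in xs])
n = len(centers)

# conflict pairs: centers closer than 2R (circumscribed circles would overlap)
rows, cols, vals, row = [], [], [], 0
for i, j in combinations(range(n), 2):
    if np.linalg.norm(centers[i] - centers[j]) < 2 * R:
        rows += [row, row]; cols += [i, j]; vals += [1.0, 1.0]
        row += 1

A = np.zeros((row, n))
A[rows, cols] = vals
con = LinearConstraint(A, ub=np.ones(row))     # x_i + x_j <= 1 for each conflict

res = milp(c=-area * np.ones(n),               # maximize total packed area
           constraints=[con],
           integrality=np.ones(n),
           bounds=Bounds(0, 1))
chosen = np.flatnonzero(res.x > 0.5)
print("polygons packed:", len(chosen), " total area:", len(chosen) * area)
```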

  4. Multiband RF pulses with improved performance via convex optimization.

    PubMed

    Shang, Hong; Larson, Peder E Z; Kerr, Adam; Reed, Galen; Sukumar, Subramaniam; Elkhaled, Adam; Gordon, Jeremy W; Ohliger, Michael A; Pauly, John M; Lustig, Michael; Vigneron, Daniel B

    2016-01-01

    Selective RF pulses are commonly designed with the desired profile as a low pass filter frequency response. However, for many MRI and NMR applications, the spectrum is sparse with signals existing at a few discrete resonant frequencies. By specifying a multiband profile and releasing the constraint on "don't-care" regions, the RF pulse performance can be improved to enable a shorter duration, sharper transition, or lower peak B1 amplitude. In this project, a framework for designing multiband RF pulses with improved performance was developed based on the Shinnar-Le Roux (SLR) algorithm and convex optimization. It can create several types of RF pulses with multiband magnitude profiles, arbitrary phase profiles and generalized flip angles. The advantage of this framework with a convex optimization approach is the flexible trade-off of different pulse characteristics. Designs for specialized selective RF pulses for balanced SSFP hyperpolarized (HP) (13)C MRI, a dualband saturation RF pulse for (1)H MR spectroscopy, and a pre-saturation pulse for HP (13)C study were developed and tested.

  5. Multiband RF Pulses with Improved Performance via Convex Optimization

    PubMed Central

    Shang, Hong; Larson, Peder E. Z.; Kerr, Adam; Reed, Galen; Sukumar, Subramaniam; Elkhaled, Adam; Gordon, Jeremy W.; Ohliger, Michael A.; Pauly, John M.; Lustig, Michael; Vigneron, Daniel B.

    2016-01-01

    Selective RF pulses are commonly designed with the desired profile as a low pass filter frequency response. However, for many MRI and NMR applications, the spectrum is sparse with signals existing at a few discrete resonant frequencies. By specifying a multiband profile and releasing the constraint on “don’t-care” regions, the RF pulse performance can be improved to enable a shorter duration, sharper transition, or lower peak B1 amplitude. In this project, a framework for designing multiband RF pulses with improved performance was developed based on the Shinnar-Le Roux (SLR) algorithm and convex optimization. It can create several types of RF pulses with multiband magnitude profiles, arbitrary phase profiles and generalized flip angles. The advantage of this framework with a convex optimization approach is the flexible trade-off of different pulse characteristics. Designs for specialized selective RF pulses for balanced SSFP hyperpolarized (HP) 13C MRI, a dualband saturation RF pulse for 1H MR spectroscopy, and a pre-saturation pulse for HP 13C study were developed and tested. PMID:26754063

  6. Convex hull or crossing avoidance? Solution heuristics in the traveling salesperson problem.

    PubMed

    MacGregor, James N; Chronicle, Edward P; Ormerod, Thomas C

    2004-03-01

    Untrained adults appear to have access to cognitive processes that allow them to perform well in the Euclidean version of the traveling salesperson problem (E-TSP). They do so despite the famous computational intractability of the problem, which stems from its combinatorial complexity. A current hypothesis is that humans' good performance is based on following a strategy of connecting boundary points in order (the convex hull hypothesis). Recently, an alternative has been proposed, that performance is governed by a strategy of avoiding crossings. We examined the crossing avoidance hypothesis from the perspectives of its capacity to explain existing data, its theoretical adequacy, and its ability to explain the results of three new experiments. In Experiment 1, effects on solution quality of the number of points versus the number of interior points were compared. In Experiment 2, the distributions of observed paths were compared with those predicted from the two hypotheses. In Experiment 3, figural effects were varied to induce crossings. The results of the experiments were more consistent with the convex hull than with the crossing avoidance hypothesis. Despite its simplicity and intuitive appeal, crossing avoidance does not provide a complete alternative to the convex hull hypothesis. Further elucidation of human strategies and heuristics for optimization problems such as the E-TSP will aid our understanding of how cognitive processes have adapted to the demands of combinatorial difficulty.
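
    For readers unfamiliar with the convex hull hypothesis, a standard convex-hull insertion heuristic (a common algorithmic analogue of the boundary-following strategy, not the authors' experimental procedure) can be sketched as follows.

```python
# A convex-hull insertion heuristic for the Euclidean TSP: start from the
# convex hull boundary tour, then insert interior points one at a time where
# they increase tour length the least. Illustrative analogue of the
# "connect boundary points in order" strategy, not the authors' procedure.
import numpy as np
from scipy.spatial import ConvexHull

def tour_length(points, tour):
    p = points[tour]
    return np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1).sum()

def convex_hull_insertion(points):
    hull = ConvexHull(points)
    tour = list(hull.vertices)                       # boundary points in order
    remaining = [i for i in range(len(points)) if i not in set(tour)]
    while remaining:
        best = None
        for city in remaining:
            for pos in range(len(tour)):
                a, b = tour[pos], tour[(pos + 1) % len(tour)]
                extra = (np.linalg.norm(points[a] - points[city])
                         + np.linalg.norm(points[city] - points[b])
                         - np.linalg.norm(points[a] - points[b]))
                if best is None or extra < best[0]:
                    best = (extra, city, pos + 1)
        _, city, pos = best
        tour.insert(pos, city)                       # cheapest insertion
        remaining.remove(city)
    return tour

rng = np.random.default_rng(6)
pts = rng.random((30, 2))
t = convex_hull_insertion(pts)
print("tour length:", tour_length(pts, t))
```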

  7. Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process

    PubMed Central

    Yin, Chuancun; Yuen, Kam Chuen; Shen, Ying

    2015-01-01

    We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655

  8. Stable iterative Lagrange principle in convex programming as a tool for solving unstable problems

    NASA Astrophysics Data System (ADS)

    Kuterin, F. A.; Sumin, M. I.

    2017-01-01

    A convex programming problem in a Hilbert space with an operator equality constraint and a finite number of functional inequality constraints is considered. All constraints involve parameters. The close relation of the instability of this problem and, hence, the instability of the classical Lagrange principle for it to its regularity properties and the subdifferentiability of the value function in the problem is discussed. An iterative nondifferential Lagrange principle with a stopping rule is proved for the indicated problem. The principle is stable with respect to errors in the initial data and covers the normal, regular, and abnormal cases of the problem and the case where the classical Lagrange principle does not hold. The possibility of using the stable sequential Lagrange principle for directly solving unstable optimization problems is discussed. The capabilities of this principle are illustrated by numerically solving the classical ill-posed problem of finding the normal solution of a Fredholm integral equation of the first kind.

  9. Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Girard, J. N.; Garsden, H.; Starck, J. L.; Corbel, S.; Woiselle, A.; Tasse, C.; McKean, J. P.; Bobin, J.

    2015-08-01

    Compressed sensing theory is slowly making its way into solving more and more astronomical inverse problems. We address here the application of sparse representations, convex optimization and proximal theory to radio interferometric imaging. First, we expose the theory behind interferometric imaging, sparse representations and convex optimization, and second, we illustrate their application with numerical tests with SASIR, an implementation of FISTA, a forward-backward splitting algorithm, hosted in a LOFAR imager. Various tests have been conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution of a factor ≈ 2) with point sources as compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five times fewer residuals) of the extended emission as compared to CLEAN. With the advent of large radiotelescopes, there is scope for improving classical imaging methods with convex optimization methods combined with sparse representations.
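
    A minimal sketch of FISTA, the forward-backward algorithm mentioned above, applied to an l1-regularized least-squares stand-in for sparse-synthesis deconvolution; the measurement matrix, data, and regularization weight are toy assumptions, not the LOFAR operator or SASIR itself.

```python
# Sketch of FISTA (a forward-backward splitting algorithm) for
#   min_x 0.5*||A x - y||^2 + lam*||x||_1,
# a stand-in for sparse-synthesis deconvolution; A, y and lam are toy choices,
# not the LOFAR measurement operator or SASIR itself.
import numpy as np

rng = np.random.default_rng(7)
m, n, lam = 100, 400, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n); z = x.copy(); t = 1.0
for _ in range(300):
    grad = A.T @ (A @ z - y)
    x_new = soft(z - grad / L, lam / L)            # proximal (shrinkage) step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov-style momentum
    x, t = x_new, t_new

print("support recovered:", np.flatnonzero(np.abs(x) > 0.1))
```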

  10. Interactive breast mass segmentation using a convex active contour model with optimal threshold values.

    PubMed

    Acho, Sussan Nkwenti; Rae, William Ian Duncombe

    2016-10-01

    A convex active contour model requires a predefined threshold value to determine the global solution for the best contour to use when doing mass segmentation. Fixed thresholds or manual tuning of threshold values for optimum mass boundary delineation are impracticable. A method is proposed to determine an optimized mass-specific threshold value for the convex active contour, derived from the probability matrix of the mass with the particle swarm optimization method. We compared our results with the Chan-Vese segmentation and a published global segmentation model on masses detected on direct digital mammograms. The regional term of the convex active contour model maximizes the posterior partitioning probability for binary segmentation. If the probability matrix is binary-thresholded using particle swarm optimization to obtain a value T1, we define the optimal threshold value for the global minimizer of the convex active contour as the mean intensity of all pixels whose probabilities are greater than T1. The mean Jaccard similarity indices were 0.89±0.07 for the proposed/Chan-Vese method and 0.88±0.06 for the proposed/published segmentation model. The mean Euclidean distance between Fourier descriptors of the segmented areas was 0.05±0.03 for the proposed/Chan-Vese method and 0.06±0.04 for the proposed/published segmentation model. This efficient method avoids problems of initial level set contour placement and contour re-initialization. Moreover, optimum segmentation results are realized for all masses, improving on the fixed threshold value of 0.5 proposed elsewhere. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
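
    A tiny sketch of the reported threshold rule; the probability map and the PSO-derived value T1 are stand-ins, and "intensity" is interpreted here as the probability value itself.

```python
# Tiny sketch of the reported threshold rule: given a probability map and a
# PSO-derived binary threshold T1, the contour threshold is the mean of all
# probabilities above T1. The map and T1 below are stand-ins, and "intensity"
# is interpreted here as the probability value itself.
import numpy as np

rng = np.random.default_rng(8)
prob_map = rng.beta(2.0, 5.0, size=(128, 128))   # stand-in for the mass probability matrix
T1 = 0.45                                        # stand-in for the PSO-optimized value

T_opt = prob_map[prob_map > T1].mean()           # mass-specific contour threshold
segmentation = prob_map >= T_opt                 # binary segmentation of the contour output
print("T_opt =", float(T_opt), " segmented fraction =", segmentation.mean())
```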

  11. Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.

    PubMed

    Al-Mulhem, M; Al-Maghrabi, T

    1998-01-01

    This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two classes of E-TSP instances: randomly generated instances and well-known problems from the literature. Experimental results are given to show that the proposed algorithm can find near-optimal solutions to the E-TSP that outperform those of many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
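
    The convex-hull initial tour followed by local rearrangement can be illustrated with a short sketch. The code below is a generic, hedged approximation of that idea (cheapest insertion starting from the convex hull, then 2-opt), not the CEN/NII algorithms from the paper; the instance is randomly generated for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def tour_length(pts, tour):
    return sum(np.linalg.norm(pts[tour[i]] - pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def convex_hull_insertion(pts):
    """Start from the convex hull and insert the remaining cities at the cheapest position."""
    tour = list(ConvexHull(pts).vertices)
    remaining = [i for i in range(len(pts)) if i not in tour]
    while remaining:
        best = None
        for c in remaining:
            for k in range(len(tour)):
                a, b = tour[k], tour[(k + 1) % len(tour)]
                cost = (np.linalg.norm(pts[a] - pts[c]) + np.linalg.norm(pts[c] - pts[b])
                        - np.linalg.norm(pts[a] - pts[b]))
                if best is None or cost < best[0]:
                    best = (cost, c, k + 1)
        _, c, pos = best
        tour.insert(pos, c)
        remaining.remove(c)
    return tour

def two_opt(pts, tour):
    """Simple 2-opt improvement: reverse segments while the tour length decreases."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(pts, new) < tour_length(pts, tour) - 1e-12:
                    tour, improved = new, True
    return tour

pts = np.random.default_rng(2).random((30, 2))     # random E-TSP instance (assumed)
tour = two_opt(pts, convex_hull_insertion(pts))
print("tour length:", round(tour_length(pts, tour), 3))
```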

  12. An uncertain multidisciplinary design optimization method using interval convex models

    NASA Astrophysics Data System (ADS)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem is constructed in association with the penalty function method. The design is finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, the sequential quadratic programming method and the Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.

  13. Reentry trajectory optimization with waypoint and no-fly zone constraints using multiphase convex programming

    NASA Astrophysics Data System (ADS)

    Zhao, Dang-Jun; Song, Zheng-Yu

    2017-08-01

    This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints on Common Aerial Vehicles (CAVs). Because the time when the vehicle reaches the waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Due to the requirement of rapidity, the minimum flight time of each phase is preferred as the performance index over other indices in this research. Sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on the heat rate, dynamic pressure, and normal load; meanwhile, convexification techniques are proposed to relax the concave constraints on control variables. Next, the original multiphase optimization problem is reformulated as a standard second-order cone programming problem. Theoretical analysis is conducted to show that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.

  14. Ultrafast Quantum Process Tomography via Continuous Measurement and Convex Optimization

    NASA Astrophysics Data System (ADS)

    Baldwin, Charles; Riofrio, Carlos; Deutsch, Ivan

    2013-03-01

    Quantum process tomography (QPT) is an essential tool to diagnose the implementation of a dynamical map. However, the standard protocol is extremely resource intensive. For a Hilbert space of dimension d, it requires d2 different input preparations followed by state tomography via the estimation of the expectation values of d2 - 1 orthogonal observables. We show that when the process is nearly unitary, we can dramatically improve the efficiency and robustness of QPT through a collective continuous measurement protocol on an ensemble of identically prepared systems. Given the measurement history we obtain the process matrix via a convex program that optimizes a desired cost function. We study two estimators: least-squares and compressive sensing. Both allow rapid QPT due to the condition of complete positivity of the map; this is a powerful constraint to force the process to be physical and consistent with the data. We apply the method to a real experimental implementation, where optimal control is used to perform a unitary map on a d = 8 dimensional system of hyperfine levels in cesium atoms, and obtain the measurement record via Faraday spectroscopy of a laser probe. Supported by the NSF

  15. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function attains its maximum on the ellipse that is the iso-range contour for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031

  16. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function attains its maximum on the ellipse that is the iso-range contour for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments.
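
    To make the idea of localization from bistatic range sums concrete, here is a hedged sketch that estimates a target position by minimizing the squared mismatch between measured and predicted transmitter-target-receiver range sums with gradient descent; the geometry, noise level, step size, and initial guess are illustrative assumptions, and this is not the authors' model-function formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
tx = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0]])                # transmitters (assumed)
rx = np.array([[2500.0, -1000.0], [6000.0, 4000.0], [-1000.0, 6000.0]])  # receivers (assumed)
target = np.array([3200.0, 2700.0])

def range_sums(p):
    return np.linalg.norm(tx - p, axis=1) + np.linalg.norm(rx - p, axis=1)

brs = range_sums(target) + rng.normal(0, 2.0, len(tx))   # noisy bistatic range sums

# Minimize f(p) = 0.5 * sum_i (range_sum_i(p) - brs_i)^2 by gradient descent.
p = np.array([1000.0, 1000.0])                           # rough initial guess (assumed)
for _ in range(3000):
    r = range_sums(p) - brs
    grad = np.zeros(2)
    for i in range(len(tx)):
        grad += r[i] * ((p - tx[i]) / np.linalg.norm(p - tx[i])
                        + (p - rx[i]) / np.linalg.norm(p - rx[i]))
    p -= 0.02 * grad                                     # small fixed step size (assumed)
print("estimated position:", np.round(p, 1), "true:", target)
```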

  17. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  18. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    SciTech Connect

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  19. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    PubMed

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2016-12-22

    In this paper, based on CR calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization problems. It is proved that, for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has lower model complexity and better convergence. Numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.

  20. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low thrust vehicles. There are two previous studies that form the background to the current investigation. The first set looked in-depth at applying convex optimization to a powered descent trajectory on Mars with promising results [1, 2]. This showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem. This analysis used a constant gravity field. The second area applied a successive solution process to formulate a second order cone program that designs rendezvous and proximity operations trajectories [3, 4]. These trajectories included a Newtonian gravity model. The equivalence of the solutions between the relaxed and the original problem is theoretically established.
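
    The relaxation idea referenced above can be sketched with a small discretized example. The following is a hedged, simplified illustration (constant gravity, constant mass, double-integrator dynamics) of posing a minimum-fuel powered descent as a second-order cone program with CVXPY; it is not the asteroid-specific formulation of the paper, and all numbers are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Discretized 2-D powered descent with double-integrator dynamics (assumed toy model).
N, dt = 60, 1.0
g = np.array([0.0, -1.62])              # constant gravity (lunar-like, assumed)
r0, v0 = np.array([400.0, 500.0]), np.array([-10.0, -15.0])
rho1, rho2 = 0.5, 6.0                   # min/max thrust acceleration (assumed)

r = cp.Variable((N + 1, 2))             # position
v = cp.Variable((N + 1, 2))             # velocity
u = cp.Variable((N, 2))                 # thrust acceleration
s = cp.Variable(N)                      # slack for the relaxed thrust magnitude

cons = [r[0] == r0, v[0] == v0, r[N] == 0, v[N] == 0]
for k in range(N):
    cons += [r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * (u[k] + g),
             v[k + 1] == v[k] + dt * (u[k] + g),
             cp.norm(u[k]) <= s[k],     # relaxation of ||u_k|| = s_k (lossless under conditions)
             s[k] >= rho1, s[k] <= rho2,
             r[k][1] >= 0]              # stay above the surface

prob = cp.Problem(cp.Minimize(cp.sum(s) * dt), cons)   # fuel proxy: integral of thrust magnitude
prob.solve()
print("status:", prob.status, " fuel proxy:", round(prob.value, 2))
```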

  1. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  2. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
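
    The key step the authors exploit is that, once the data parameterization and the knots are fixed, spline fitting becomes a linear least-squares (hence convex) problem. The sketch below is a hedged, generic illustration using a cubic truncated-power basis and an SVD-based solver, not the firefly-optimized pipeline of the paper; the data and knots are assumed.

```python
import numpy as np

# Noisy samples of a curve, with an assumed (already optimized) parameterization t in [0, 1].
rng = np.random.default_rng(4)
t = np.sort(rng.random(120))
y = np.sin(2 * np.pi * t) + 0.3 * t + 0.05 * rng.standard_normal(t.size)

# Cubic spline via a truncated-power basis with fixed interior knots (assumed here).
knots = np.linspace(0.1, 0.9, 7)
B = np.column_stack([np.ones_like(t), t, t**2, t**3] +
                    [np.clip(t - k, 0, None) ** 3 for k in knots])

# With t and the knots fixed, fitting is the linear least-squares problem min_c ||B c - y||^2,
# solved here with the SVD-based solver np.linalg.lstsq.
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print("residual RMS:", round(float(np.sqrt(np.mean((B @ coef - y) ** 2))), 4))
```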

  3. Convex-Optimization-Based Compartmental Pharmacokinetic Analysis for Prostate Tumor Characterization Using DCE-MRI.

    PubMed

    Ambikapathi, ArulMurugan; Chan, Tsung-Han; Lin, Chia-Hsiang; Yang, Fei-Shih; Chi, Chong-Yung; Wang, Yue

    2016-04-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a powerful imaging modality to study the pharmacokinetics in a suspected cancer/tumor tissue. The pharmacokinetic (PK) analysis of prostate cancer includes the estimation of time activity curves (TACs), and thereby, the corresponding kinetic parameters (KPs), and plays a pivotal role in diagnosis and prognosis of prostate cancer. In this paper, we endeavor to develop a blind source separation algorithm, namely convex-optimization-based KPs estimation (COKE) algorithm for PK analysis based on compartmental modeling of DCE-MRI data, for effective prostate tumor detection and its quantification. The COKE algorithm first identifies the best three representative pixels in the DCE-MRI data, corresponding to the plasma, fast-flow, and slow-flow TACs, respectively. The estimation accuracy of the flux rate constants (FRCs) of the fast-flow and slow-flow TACs directly affects the estimation accuracy of the KPs that provide the cancer and normal tissue distribution maps in the prostate region. The COKE algorithm wisely exploits the matrix structure (Toeplitz, lower triangular, and exponential decay) of the original nonconvex FRCs estimation problem, and reformulates it into two convex optimization problems that can reliably estimate the FRCs. After estimation of the FRCs, the KPs can be effectively estimated by solving a pixel-wise constrained curve-fitting (convex) problem. Simulation results demonstrate the efficacy of the proposed COKE algorithm. The COKE algorithm is also evaluated with DCE-MRI data of four different patients with prostate cancer and the obtained results are consistent with clinical observations.

  4. Lateral ventricle segmentation of 3D pre-term neonates US using convex optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; Ukwatta, Eranga; Fenster, Aaron

    2013-01-01

    Intraventricular hemorrhage (IVH) is a common disease among preterm infants, with an occurrence of 12-20% in those born at less than 35 weeks gestational age. Neonates at risk of IVH are monitored by conventional 2D ultrasound (US) for hemorrhage and potential ventricular dilation. Compared with 2D US, which relies on linear measurements from a single slice and visual estimates to determine ventricular dilation, 3D US can provide volumetric ventricle measurements that are more sensitive to longitudinal changes in ventricular volume. In this work, we propose a global optimization-based surface evolution approach to the segmentation of the lateral ventricles in preterm neonates with IVH. The proposed segmentation approach makes use of a convex optimization technique in combination with a subject-specific shape model. We show that the resulting challenging combinatorial optimization problem can be solved globally by means of convex relaxation. In this regard, we propose a coupled continuous max-flow model, which gives rise to a new and efficient dual-based algorithm that can be implemented on GPUs for high numerical performance. Experiments demonstrate the advantages of our approach in both accuracy and efficiency. To the best of our knowledge, this paper reports the first study on semi-automatic segmentation of lateral ventricles in neonates with IVH from 3D US images.

  5. A Cutting Surface Algorithm for Semi-Infinite Convex Programming with an Application to Moment Robust Optimization

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2014-01-01

    We present and analyze a central cutting surface algorithm for general semi-infinite convex optimization problems and use it to develop a novel algorithm for distributionally robust optimization problems in which the uncertainty set consists of probability distributions with given bounds on their moments. Moments of arbitrary order, as well as nonpolynomial moments, can be included in the formulation. We show that this gives rise to a hierarchy of optimization problems with decreasing levels of risk-aversion, with classic robust optimization at one end of the spectrum and stochastic programming at the other. Although our primary motivation is to solve distributionally robust optimization problems with moment uncertainty, the cutting surface method for general semi-infinite convex programs is also of independent interest. The proposed method is applicable to problems with nondifferentiable semi-infinite constraints indexed by an infinite dimensional index set. Examples comparing the cutting surface algorithm to the central cutting plane algorithm of Kortanek and No demonstrate the potential of our algorithm even in the solution of traditional semi-infinite convex programming problems, whose constraints are differentiable, and are indexed by an index set of low dimension. After the rate of convergence analysis of the cutting surface algorithm, we extend the authors' moment matching scenario generation algorithm to a probabilistic algorithm that finds optimal probability distributions subject to moment constraints. The combination of this distribution optimization method and the central cutting surface algorithm yields a solution to a family of distributionally robust optimization problems that are considerably more general than the ones proposed to date.

  6. A Cutting Surface Algorithm for Semi-Infinite Convex Programming with an Application to Moment Robust Optimization

    SciTech Connect

    Mehrotra, Sanjay; Papp, Dávid

    2014-01-01

    We present and analyze a central cutting surface algorithm for general semi-infinite convex optimization problems and use it to develop a novel algorithm for distributionally robust optimization problems in which the uncertainty set consists of probability distributions with given bounds on their moments. Moments of arbitrary order, as well as nonpolynomial moments, can be included in the formulation. We show that this gives rise to a hierarchy of optimization problems with decreasing levels of risk-aversion, with classic robust optimization at one end of the spectrum and stochastic programming at the other. Although our primary motivation is to solve distributionally robust optimization problems with moment uncertainty, the cutting surface method for general semi-infinite convex programs is also of independent interest. The proposed method is applicable to problems with nondifferentiable semi-infinite constraints indexed by an infinite dimensional index set. Examples comparing the cutting surface algorithm to the central cutting plane algorithm of Kortanek and No demonstrate the potential of our algorithm even in the solution of traditional semi-infinite convex programming problems, whose constraints are differentiable, and are indexed by an index set of low dimension. After the rate of convergence analysis of the cutting surface algorithm, we extend the authors' moment matching scenario generation algorithm to a probabilistic algorithm that finds optimal probability distributions subject to moment constraints. The combination of this distribution optimization method and the central cutting surface algorithm yields a solution to a family of distributionally robust optimization problems that are considerably more general than the ones proposed to date.

  7. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  8. Scalable analysis of nonlinear systems using convex optimization

    NASA Astrophysics Data System (ADS)

    Papachristodoulou, Antonis

    In this thesis, we investigate how convex optimization can be used to analyze different classes of nonlinear systems at various scales algorithmically. The methodology is based on the construction of appropriate Lyapunov-type certificates using sum of squares techniques. After a brief introduction on the mathematical tools that we will be using, we turn our attention to robust stability and performance analysis of systems described by Ordinary Differential Equations. A general framework for constrained systems analysis is developed, under which stability of systems with polynomial, non-polynomial vector fields and switching systems, as well as estimating the region of attraction and the L2 gain, can be treated in a unified manner. We apply our results to examples from biology and aerospace. We then consider systems described by Functional Differential Equations (FDEs), i.e., time-delay systems. Their main characteristic is that they are infinite dimensional, which complicates their analysis. We first show how the complete Lyapunov-Krasovskii functional can be constructed algorithmically for linear time-delay systems. Then, we concentrate on delay-independent and delay-dependent stability analysis of nonlinear FDEs using sum of squares techniques. An example from ecology is given. The scalable stability analysis of congestion control algorithms for the Internet is investigated next. The models we use result in an arbitrary interconnection of FDE subsystems, for which we require that stability holds for arbitrary delays, network topologies and link capacities. Through a constructive proof, we develop a Lyapunov functional for FAST---a recently developed network congestion control scheme---so that the Lyapunov stability properties scale with the system size. We also show how other network congestion control schemes can be analyzed in the same way. Finally, we concentrate on systems described by Partial Differential Equations. We show that axially constant perturbations of
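
    The simplest instance of the Lyapunov-certificate search described above, a quadratic certificate for a linear system (the degree-2 special case of the sum-of-squares approach), can be written as a small semidefinite program. The sketch below is a hedged illustration with an assumed stable system matrix, not the thesis's SOS machinery for nonlinear or time-delay systems.

```python
import numpy as np
import cvxpy as cp

# Assumed stable linear system dx/dt = A x (illustrative matrix).
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])

# Search for P > 0 with A^T P + P A < 0, i.e., V(x) = x^T P x is a Lyapunov function.
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility problem
prob.solve(solver=cp.SCS)                            # any SDP-capable solver works
print("status:", prob.status)
print("P =\n", np.round(P.value, 3))
```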

  9. On the existence of convex classical solutions to a generalized Prandtl-Batchelor free boundary problem

    NASA Astrophysics Data System (ADS)

    Acker, A.

    Under reasonably general assumptions, we prove the existence of convex classical solutions for the Prandtl-Batchelor free boundary problem in fluid dynamics, in which a flow of constant vorticity density is embedded in a potential flow, with a vortex sheet of constant vorticity density as the flow interface. These results apply to Batchelor flows which are confined to a bounded, convex vessel, and for which the limiting interior flow-speed exceeds the limiting exterior flow-speed along the interface.

  10. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  11. Exact Convex Relaxation of Optimal Power Flow in Radial Networks

    SciTech Connect

    Gan, LW; Li, N; Topcu, U; Low, SH

    2015-01-01

    The optimal power flow (OPF) problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. It is nonconvex. We prove that a global optimum of OPF can be obtained by solving a second-order cone program, under a mild condition after shrinking the OPF feasible set slightly, for radial power networks. The condition can be checked a priori, and holds for the IEEE 13, 34, 37, 123-bus networks and two real-world networks.
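
    To illustrate a second-order cone relaxation on the smallest possible radial network, the hedged sketch below solves a single-line branch-flow (DistFlow) model with CVXPY, relaxing the quadratic current equation to an inequality; the line data and loads are illustrative assumptions, and this is not the authors' full formulation or exactness proof.

```python
import cvxpy as cp

# One substation (bus 0) feeding one load bus over a single line (per-unit, assumed data).
r, x = 0.02, 0.06          # line resistance / reactance
p_load, q_load = 0.8, 0.3  # load at bus 1
v0 = 1.0                   # squared voltage magnitude at the substation (fixed)

P = cp.Variable()          # real power sent into the line
Q = cp.Variable()          # reactive power sent into the line
l = cp.Variable()          # squared current magnitude on the line
v1 = cp.Variable()         # squared voltage magnitude at bus 1

constraints = [
    P - r * l == p_load,                       # real power balance at bus 1
    Q - x * l == q_load,                       # reactive power balance at bus 1
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,   # voltage drop equation
    cp.square(P) + cp.square(Q) <= l * v0,     # SOC relaxation of P^2 + Q^2 = l * v0
    v1 >= 0.9**2, v1 <= 1.1**2, l >= 0,
]

prob = cp.Problem(cp.Minimize(r * l), constraints)   # minimize line loss
prob.solve()
print("status:", prob.status)
print("loss =", round(prob.value, 5), " tightness gap =",
      round(l.value * v0 - (P.value**2 + Q.value**2), 8))
```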

  12. Feature selection for linear SVMs under uncertain data: robust optimization based on difference of convex functions algorithms.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Pham Dinh, Tao

    2014-11-01

    In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, which is inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose robust schemes to handle data with an ellipsoidal model and a box model of uncertainty. The difficulty in treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations and Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbations of the data.

  13. Entropy based primal-dual algorithm for convex and linear cost transportation problems with serial and parallel implementations

    SciTech Connect

    Chabini, I.; Florian, M.

    1994-12-31

    In this paper we present a new class of sequential and parallel algorithms for transportation problems with linear and convex costs. First, we consider a capacitated transportation problem with an entropy-type objective function. We show that this problem has some interesting properties, namely that its optimal solution satisfies both the nonnegativity and capacity constraints. Then, we give a new solution method for this problem. The algorithm consists of a sequence of "balancing" iterations on the conservation-of-flow constraints, which may be viewed as a generalization of the well-known RAS algorithm for matrix balancing. We then prove the convergence of this method and extend it to strictly convex and linear cost transportation problems. For differentiable convex costs we develop an adaptation where each projection is an entropy-type capacitated transportation problem. For linear costs, we prove a triple equivalence between the entropy projection method, the proximal minimization approach (with our entropy-type function) and an entropy barrier method. We give a convergence rate analysis for strongly convex costs and linear objective functions. We show efficient implementations in both serial and parallel environments. Computational results indicate that this method yields very encouraging results. We solve large problems with several million variables on a network of transputers and Sun workstations. For the linear case, the serial implementation is compared to network simplex codes such as RELAX and RNET. Computational experiments indicate that this algorithm can outperform both RELAX and RNET. The parallel implementations are analysed using, in particular, a new measure of performance developed by the authors. The results demonstrate that this measure can give more information than the classical measure of speedup. Some unexpected behaviors are reported.

  14. Entropy based primal-dual algorithm for convex and linear cost transportation problems with serial and parallel implementations

    SciTech Connect

    Chabini, I.; Florian, M.

    1994-12-31

    In this paper we present a new class of sequential and parallel algorithms for transportation problems with linear and convex costs. First, we consider a capacitated transportation problem with an entropy-type objective function. We show that this problem has some interesting properties, namely that its optimal solution satisfies both the nonnegativity and capacity constraints. Then, we give a new solution method for this problem. The algorithm consists of a sequence of "balancing" iterations on the conservation-of-flow constraints, which may be viewed as a generalization of the well-known RAS algorithm for matrix balancing. We then prove the convergence of this method and extend it to strictly convex and linear cost transportation problems. For differentiable convex costs we develop an adaptation where each projection is an entropy-type capacitated transportation problem. For linear costs, we prove a triple equivalence between the entropy projection method, the proximal minimization approach (with our entropy-type function) and an entropy barrier method. We give a convergence rate analysis for strongly convex costs and linear objective functions. We show efficient implementations in both serial and parallel environments. Computational results indicate that this method yields very encouraging results. We solve large problems with several million variables on a network of transputers and Sun workstations. For the linear case, the serial implementation is compared to network simplex codes such as RELAX and RNET. Computational experiments indicate that this algorithm can outperform both RELAX and RNET. The parallel implementations are analysed using, in particular, a new measure of performance developed by the authors. The results demonstrate that this measure can give more information than the classical measure of speedup. Some unexpected behaviors are reported.
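
    The "balancing" iteration described above, in its simplest form, is the classical RAS (or Sinkhorn) scaling that alternately rescales the rows and columns of a positive matrix to match prescribed marginals; it solves an entropy-type transportation problem without capacity constraints. The sketch below is a hedged, minimal illustration of that special case, with an assumed cost matrix and marginals, not the authors' capacitated serial/parallel algorithm.

```python
import numpy as np

# Entropy-regularized (uncapacitated) transportation:
#   minimize  sum_ij c_ij x_ij + eps * sum_ij x_ij (log x_ij - 1)
#   subject to row sums = supply, column sums = demand.
# The optimal solution has the form x_ij = u_i * K_ij * v_j with K = exp(-c/eps),
# and RAS/Sinkhorn alternately rescales rows and columns to match the marginals.
rng = np.random.default_rng(5)
c = rng.random((4, 6))                              # assumed cost matrix
supply = np.array([3.0, 2.0, 4.0, 1.0])
demand = np.array([1.0, 2.0, 2.0, 1.5, 2.0, 1.5])   # totals match (10)

eps = 0.1
K = np.exp(-c / eps)
u = np.ones(4)
v = np.ones(6)
for _ in range(500):                                # balancing iterations
    u = supply / (K @ v)                            # match row sums
    v = demand / (K.T @ u)                          # match column sums

x = u[:, None] * K * v[None, :]
print("row-sum error:", np.abs(x.sum(axis=1) - supply).max())
print("col-sum error:", np.abs(x.sum(axis=0) - demand).max())
```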

  15. Normal Vector Projection Method used for Convex Optimization of Chan-Vese Model for Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wei, W. B.; Tan, L.; Jia, M. Q.; Pan, Z. K.

    2017-01-01

    The variational level set method is one of the main methods of image segmentation. Because signed distance functions used as level sets must be maintained during the evolution through numerical remedies or additional techniques, it is not very efficient. In this paper, a normal vector projection method for image segmentation using the Chan-Vese model is proposed. An equivalent formulation of the Chan-Vese model is used by taking advantage of the properties of binary level set functions and combining them with the concept of convex relaxation. A threshold method and a projection formula are applied in the implementation. This avoids the above problems and obtains a globally optimal solution. Experimental results on both synthetic and real images validate the effectiveness of the proposed normal vector projection method, and show advantages over traditional algorithms in terms of computational efficiency.
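
    The convex-relaxation-plus-thresholding idea behind this family of methods can be sketched directly. Assuming the two region means are fixed, the relaxed two-phase Chan-Vese energy is convex in the labeling function u ∈ [0, 1], and thresholding the minimizer gives a binary segmentation. The CVXPY sketch below is a hedged, generic illustration on a tiny synthetic image, not the paper's normal vector projection scheme; the image, means, and regularization weight are assumptions.

```python
import numpy as np
import cvxpy as cp

# Tiny synthetic image: a bright square on a dark background, plus noise (assumed).
rng = np.random.default_rng(6)
img = np.zeros((24, 24))
img[7:17, 7:17] = 1.0
img = img + 0.2 * rng.standard_normal(img.shape)

c0, c1 = 0.0, 1.0          # fixed region means (assumed known here)
lam = 0.4                  # total-variation weight (assumed)

# Relaxed two-phase Chan-Vese: min_{0<=u<=1}  sum u*((img-c1)^2 - (img-c0)^2) + lam*TV(u).
U = cp.Variable(img.shape)
data = (img - c1) ** 2 - (img - c0) ** 2
prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(data, U)) + lam * cp.tv(U)),
                  [U >= 0, U <= 1])
prob.solve()

segmentation = (U.value > 0.5)          # threshold the relaxed solution
print("status:", prob.status, " foreground pixels:", int(segmentation.sum()))
```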

  16. Rotationally resliced 3D prostate TRUS segmentation using convex optimization with shape priors.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Fenster, Aaron

    2015-02-01

    Efficient and accurate segmentations of 3D end-firing transrectal ultrasound (TRUS) images play an important role in planning of 3D TRUS guided prostate biopsy. However, the poor image quality of the input 3D TRUS images, such as strong imaging artifacts and speckle, often makes it a challenging task to extract the prostate boundaries accurately and efficiently. In this paper, the authors propose a novel convex optimization-based approach to delineate the prostate surface from a given 3D TRUS image, which reduces the original 3D segmentation problem to a sequence of simple 2D segmentation subproblems over the rotational reslices of the 3D TRUS volume. Essentially, the authors introduce a novel convex relaxation-based contour evolution approach for each 2D slicewise image segmentation with the joint optimization of shape information, where a learned 2D nonlinear statistical shape prior is incorporated to segment the initial slice, and its result is propagated as a shape constraint to the segmentation of the following slices. In practice, the proposed segmentation algorithm is implemented on a GPU to achieve high computational performance. Experimental results using 30 patient 3D TRUS images show that the proposed method can achieve a mean Dice similarity coefficient of 93.4% ± 2.2% in 20 s for one 3D image, outperforming the existing local-optimization-based methods, e.g., level-set and active-contour, in terms of accuracy and efficiency. In addition, inter- and intraobserver variability experiments show its good reproducibility. A semiautomatic segmentation approach is proposed and evaluated to extract the prostate boundary from 3D TRUS images acquired by a 3D end-firing TRUS guided prostate biopsy system. Experimental results suggest that it may be suitable for clinical use involving image guided prostate biopsy procedures.

  17. Maximizing protein translation rate in the non-homogeneous ribosome flow model: a convex optimization approach

    PubMed Central

    Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2014-01-01

    Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, to re-engineer heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation–elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics.

  18. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as clean. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. matlab code is available on GitHub.

  19. The role of convexity for solving some shortest path problems in plane without triangulation

    NASA Astrophysics Data System (ADS)

    An, Phan Thanh; Hai, Nguyen Ngoc; Hoai, Tran Van

    2013-09-01

    Solving shortest path problems inside simple polygons is a very classical problem in motion planning. To date, it has usually relied on triangulation of the polygons. The question: "Can one devise a simple O(n) time algorithm for computing the shortest path between two points in a simple polygon (with n vertices), without resorting to a (complicated) linear-time triangulation algorithm?" raised by J. S. B. Mitchell in Handbook of Computational Geometry (J. Sack and J. Urrutia, eds., Elsevier Science B.V., 2000), is still open. The aim of this paper is to show that convexity contributes to the design of efficient algorithms for solving some versions of shortest path problems (namely, computing the convex hull of a finite set of points and convex rope on rays in 2D, computing approximate shortest path between two points inside a simple polygon) without triangulation on the entire polygons. New algorithms are implemented in C and numerical examples are presented.
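
    One of the convexity tools mentioned above, computing the convex hull of a finite planar point set, can be sketched in a few lines with Andrew's monotone chain algorithm. This is a hedged, generic illustration in Python rather than the authors' C implementations or their convex-rope and approximate shortest-path routines.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):                      # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]           # drop duplicated endpoints

pts = np.random.default_rng(11).random((50, 2))   # random point set (assumed)
hull = convex_hull(pts)
print("hull has", len(hull), "of", len(pts), "points")
```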

  20. libCreme: An optimization library for evaluating convex-roof entanglement measures

    NASA Astrophysics Data System (ADS)

    Röthlisberger, Beat; Lehmann, Jörg; Loss, Daniel

    2012-01-01

    We present the software library libCreme which we have previously used to successfully calculate convex-roof entanglement measures of mixed quantum states appearing in realistic physical systems. Evaluating the amount of entanglement in such states is in general a non-trivial task requiring the solution of a highly non-linear complex optimization problem. The algorithms provided here are able to do this for a large and important class of entanglement measures. The library is mostly written in the MATLAB programming language, but is fully compatible with the free and open-source OCTAVE platform. Some inefficient subroutines are written in C/C++ for better performance. This manuscript discusses the most important theoretical concepts and workings of the algorithms, focusing on the actual implementation and usage within the library. Detailed examples at the end should make it easy for the user to apply libCreme to specific problems. Program summary: Program title: libCreme. Catalogue identifier: AEKD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL version 3. No. of lines in distributed program, including test data, etc.: 4323. No. of bytes in distributed program, including test data, etc.: 70 542. Distribution format: tar.gz. Programming language: Matlab/Octave and C/C++. Computer: All systems running Matlab or Octave. Operating system: All systems running Matlab or Octave. Classification: 4.9, 4.15. Nature of problem: Evaluate convex-roof entanglement measures, which involves solving a non-linear (unitary) optimization problem. Solution method: Two algorithms are provided: a conjugate-gradient method using a differential-geometric approach and a quasi-Newton method together with a mapping to Euclidean space. Running time: Typically seconds to minutes for a density matrix of a few low-dimensional systems.

  1. Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization

    NASA Astrophysics Data System (ADS)

    Adhikari, Sam

    2007-11-01

    Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical-coordinate-based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters to shock cell patterns, screech frequency and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods, such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming, are used for the quadratic optimization.
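
    A quadratic program of the kind described, with a convex quadratic objective and affine constraints, has the generic form min ½xᵀPx + qᵀx subject to Gx ≤ h and Ax = b. The CVXPY sketch below is a hedged, generic example on small random data, not the jet-noise model from the abstract.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
n, m, p = 10, 6, 3
M = rng.standard_normal((n, n))
P = M.T @ M + np.eye(n)
P = 0.5 * (P + P.T)                  # symmetric positive definite -> convex objective
q = rng.standard_normal(n)
G, h = rng.standard_normal((m, n)), rng.random(m)            # affine inequality constraints
A, b = rng.standard_normal((p, n)), rng.standard_normal(p)   # affine equality constraints

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
constraints = [G @ x <= h, A @ x == b]
prob = cp.Problem(objective, constraints)
prob.solve()
print("status:", prob.status, " optimal value:", round(prob.value, 4))
```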

  2. Study on feed forward neural network convex optimization for LiFePO4 battery parameters

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Motivated by the LiFePO4 batteries used in automatic walking equipment for modern facility agriculture, the parameter identification of LiFePO4 batteries is analyzed. An improved method for the process model of the lithium battery is proposed, and an on-line estimation algorithm is presented. The parameters of the battery are identified using a feed-forward neural network convex optimization algorithm.

  3. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
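
    A minimal, hedged sketch of the underlying idea, cyclic (sub)gradient projections for a convex feasibility problem, is given below for the simple case where the convex sets are half-spaces, so each projection has a closed form; the sets, relaxation parameter, and stopping rule are illustrative assumptions rather than the self-adapting control strategy analyzed in the paper.

```python
import numpy as np

# Convex feasibility problem: find x with a_i^T x <= b_i for all i (half-spaces, assumed data).
rng = np.random.default_rng(8)
m, n = 20, 5
A = rng.standard_normal((m, n))
x_feas = rng.standard_normal(n)
b = A @ x_feas + rng.random(m)          # constructed so the system is consistent

def project_halfspace(x, a, beta):
    """Orthogonal projection of x onto {y : a^T y <= beta}."""
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

x = np.zeros(n)
relax = 1.0                              # relaxation parameter in (0, 2), fixed here (assumed)
for sweep in range(200):                 # cyclic sweeps over the constraints
    for i in range(m):
        p = project_halfspace(x, A[i], b[i])
        x = x + relax * (p - x)
    if np.max(A @ x - b) <= 1e-9:        # simple stopping rule (assumed)
        break
print("sweeps:", sweep + 1, " max violation:", float(np.max(A @ x - b)))
```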

  4. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    SciTech Connect

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT images.

  5. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min_{b∈ℝ^p} (1/2)‖y − Xb‖²_{ℓ₂} + λ₁|b|_(1) + λ₂|b|_(2) + ⋯ + λ_p|b|_(p), where λ₁ ≥ λ₂ ≥ … ≥ λ_p ≥ 0 and |b|_(1) ≥ |b|_(2) ≥ ⋯ ≥ |b|_(p) are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λ_i} is given by the BH critical values λ_BH(i) = z(1 − i·q/2p), where q ∈ (0, 1) and z(α) is the α-th quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λ_BH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
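
    Because the λ sequence is nonincreasing, the sorted ℓ1 penalty can be rewritten as a nonnegative combination of sums of the k largest absolute coefficients, sum_i λ_i |b|_(i) = sum_k (λ_k − λ_{k+1}) × (sum of the k largest |b_j|) with λ_{p+1} = 0, which makes the whole problem expressible directly in a convex modeling tool. The sketch below is a hedged CVXPY illustration of solving SLOPE this way on small synthetic data; it is not the authors' fast proximal algorithm, and the λ sequence here is a simple assumed linear decay rather than the BH critical values.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
n, p = 100, 40
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = np.array([3.0, -2.5, 2.0, -1.5, 1.0])
y = X @ beta + 0.5 * rng.standard_normal(n)

# Nonincreasing penalty sequence (assumed simple linear decay, not the BH critical values).
lam = np.linspace(2.0, 0.2, p)

# Sorted-l1 penalty via  sum_i lam_i |b|_(i) = sum_k (lam_k - lam_{k+1}) * (sum of k largest |b_j|).
b = cp.Variable(p)
diffs = np.append(lam[:-1] - lam[1:], lam[-1])          # lam_k - lam_{k+1}, with lam_{p+1} = 0
sorted_l1 = sum(diffs[k] * cp.sum_largest(cp.abs(b), k + 1) for k in range(p))
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - X @ b) + sorted_l1))
prob.solve()
print("status:", prob.status)
print("largest fitted coefficients:", np.round(np.sort(np.abs(b.value))[::-1][:6], 2))
```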

  6. Convex Optimization of Coincidence Time Resolution for a High-Resolution PET System

    PubMed Central

    Reynolds, Paul D.; Olcott, Peter D.; Pratx, Guillem; Lau, Frances W. Y.

    2013-01-01

    We are developing a dual panel breast-dedicated positron emission tomography (PET) system using LSO scintillators coupled to position sensitive avalanche photodiodes (PSAPD). The charge output is amplified and read using NOVA RENA-3 ASICs. This paper shows that the coincidence timing resolution of the RENA-3 ASIC can be improved using certain list-mode calibrations. We treat the calibration problem as a convex optimization problem and use the RENA-3's analog-based timing system to correct the measured data for time dispersion effects from correlated noise, PSAPD signal delays and varying signal amplitudes. The direct solution to the optimization problem involves a matrix inversion whose cost grows as O(n³) in the number of parameters. An iterative method using single-coordinate descent to approximate the inversion grows as O(n). The inversion does not need to run to convergence, since any gains at high iteration number will be low compared to noise amplification. The system calibration method is demonstrated with measured pulser data as well as with two LSO-PSAPD detectors in electronic coincidence. After applying the algorithm, the 511 keV photopeak paired coincidence time resolution from the LSO-PSAPD detectors under study improved by 57%, from the raw value of 16.3 ± 0.07 ns full-width at half-maximum (FWHM) to 6.92 ± 0.02 ns FWHM (11.52 ± 0.05 ns to 4.89 ± 0.02 ns for unpaired photons). PMID:20876008
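
    To illustrate the single-coordinate-descent idea in the simplest setting, the hedged sketch below fits a linear least-squares calibration model by cyclically updating one parameter at a time, each update having a closed form; the data are synthetic and this is not the RENA-3 calibration model itself.

```python
import numpy as np

# Synthetic linear calibration problem: minimize ||A w - t||^2 over parameters w (assumed data).
rng = np.random.default_rng(10)
m, n = 500, 30
A = rng.standard_normal((m, n))
w_true = rng.standard_normal(n)
t = A @ w_true + 0.05 * rng.standard_normal(m)

# Cyclic single-coordinate descent: for coordinate j, the exact minimizer with all other
# coordinates fixed is  w_j <- w_j + a_j^T r / (a_j^T a_j),  where r is the current residual.
w = np.zeros(n)
r = t - A @ w                      # running residual
col_sq = (A ** 2).sum(axis=0)
for sweep in range(50):
    for j in range(n):
        step = A[:, j] @ r / col_sq[j]
        w[j] += step
        r -= step * A[:, j]        # keep the residual consistent after the update
print("parameter error:", float(np.linalg.norm(w - w_true)))
```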

  7. Maximizing protein translation rate in the non-homogeneous ribosome flow model: a convex optimization approach.

    PubMed

    Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2014-11-06

    Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, to re-engineer heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation-elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics.

  8. Convex Optimization Methods for Graphs and Statistical Modeling

    DTIC Science & Technology

    2011-06-01

    for Robust Linear Optimization. Oper. Res. 57 1483–1495. [15] Bickel, P. J. and Levina, E. (2008). Regularized estimation of large covariance matrices. Ann. Statistics. 36 199–227. [16] Bickel, P. J. and Levina, E. (2008). Covariance regularization by thresholding. Ann. Statistics. 36 2577–2604. … Conditional Value-at-Risk. Jour. of Risk. 2 21–41. [126] Rothman, A. J., Bickel, P. J., Levina, E., and Zhu, J. (2008). Sparse permutation invariant

  9. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.

  10. The Optimal Partial Transport Problem

    NASA Astrophysics Data System (ADS)

    Figalli, Alessio

    2010-02-01

    Given two densities f and g, we consider the problem of transporting a fraction $m \in [0, \min\{\|f\|_{L^1}, \|g\|_{L^1}\}]$ of the mass of f onto g minimizing a transportation cost. If the cost per unit of mass is given by $|x - y|^2$, we will see that uniqueness of solutions holds for $m \in [\|f \wedge g\|_{L^1}, \min\{\|f\|_{L^1}, \|g\|_{L^1}\}]$. This extends the result of Caffarelli and McCann in Ann Math (in print), where the authors consider two densities with disjoint supports. The free boundaries of the active regions are shown to be $(n-1)$-rectifiable (provided the supports of f and g have Lipschitz boundaries), and under some weak regularity assumptions on the geometry of the supports they are also locally semiconvex. Moreover, assuming f and g supported on two bounded strictly convex sets $\Omega, \Lambda \subset \mathbb{R}^n$, and bounded away from zero and infinity on their respective supports, $C^{0,\alpha}_{\rm loc}$ regularity of the optimal transport map and local $C^1$ regularity of the free boundaries away from $\Omega \cap \Lambda$ are shown. Finally, the optimal transport map extends to a global homeomorphism between the active regions.

  11. Convex Lattice Polygons

    ERIC Educational Resources Information Center

    Scott, Paul

    2006-01-01

    A "convex" polygon is one with no re-entrant angles. Alternatively one can use the standard convexity definition, asserting that for any two points of the convex polygon, the line segment joining them is contained completely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.

  12. General and mechanistic optimal relationships for tensile strength of doubly convex tablets under diametrical compression.

    PubMed

    Razavi, Sonia M; Gonzalez, Marcial; Cuitiño, Alberto M

    2015-04-30

    We propose a general framework for determining optimal relationships for tensile strength of doubly convex tablets under diametrical compression. This approach is based on the observation that tensile strength is directly proportional to the breaking force and inversely proportional to a non-linear function of geometric parameters and materials properties. This generalization reduces to the analytical expression commonly used for flat faced tablets, i.e., Hertz solution, and to the empirical relationship currently used in the pharmaceutical industry for convex-faced tablets, i.e., Pitt's equation. Under proper parametrization, optimal tensile strength relationship can be determined from experimental results by minimizing a figure of merit of choice. This optimization is performed under the first-order approximation that a flat faced tablet and a doubly curved tablet have the same tensile strength if they have the same relative density and are made of the same powder, under equivalent manufacturing conditions. Furthermore, we provide a set of recommendations and best practices for assessing the performance of optimal tensile strength relationships in general. Based on these guidelines, we identify two new models, namely the general and mechanistic models, which are effective and predictive alternatives to the tensile strength relationship currently used in the pharmaceutical industry. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang

    2016-12-01

    The bearing failure, generating harmful vibrations, is one of the most frequent reasons for machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most of the previous studies on rolling bearing fault diagnosis can be categorized into two main families: kurtosis-based filter methods and wavelet-based shrinkage methods. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands and results in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates fault information in the information band, especially in heavy-noise conditions; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints, and most of the feature information energy is thus eliminated unreasonably. Therefore, exploiting two pieces of prior information (namely, that the coefficient sequence of the fault information in the wavelet basis is sparse, and that the kurtosis of the envelope spectrum accurately evaluates the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis, coined KurWSD, is proposed in this paper. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is sequentially employed to solve this problem and the fault information is extracted through the estimated wavelet coefficients. Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and utilizes the advantages of both families of methods.
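
    A generic ADMM sketch for a weighted sparse (weighted ℓ1) least-squares model of this kind is given below. It is not the authors' KurWSD code: the dictionary A, the data y, and the kurtosis-derived weights w are treated as given placeholders here.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_weighted_l1(A, y, w, rho=1.0, n_iter=200):
    """ADMM for  min_x 0.5*||A x - y||^2 + sum_i w_i * |x_i|  (a weighted sparse model)."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factorize the x-update once
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        z = soft_threshold(x + u, w / rho)              # weighted shrinkage step
        u = u + x - z                                   # dual update
    return z

# toy usage with placeholder data; w stands in for kurtosis-derived weights
rng = np.random.default_rng(0)
A = rng.normal(size=(128, 256))
x_true = np.zeros(256); x_true[:5] = 3.0
y = A @ x_true + 0.01 * rng.normal(size=128)
x_hat = admm_weighted_l1(A, y, w=np.full(256, 0.5))
```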

  14. Convex hull and tour crossings in the Euclidean traveling salesperson problem: implications for human performance studies.

    PubMed

    Van Rooij, Iris; Stege, Ulrike; Schactman, Alissa

    2003-03-01

    Recently there has been growing interest among psychologists in human performance on the Euclidean traveling salesperson problem (E-TSP). A debate has been initiated on what strategy people use in solving visually presented E-TSP instances. The most prominent hypothesis is the convex-hull hypothesis, originally proposed by MacGregor and Ormerod (1996). We argue that, in the literature so far, there is no evidence for this hypothesis. Alternatively we propose and motivate the hypothesis that people aim at avoiding crossings.

  15. The roles of the convex hull and the number of potential intersections in performance on visually presented traveling salesperson problems.

    PubMed

    Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter

    2003-10-01

    The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.

  16. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
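
    The following scalar demo illustrates the idea of pairing a non-convex penalty with a convexity-preserving parameter range, using a minimax-concave (MC) type penalty as a stand-in (the parameterization and the condition a ≤ 1/λ below are stated for this stand-in, not taken from the thesis): the objective 0.5(y − x)² + λφ(x; a) stays strictly convex, yet the estimate is less biased than the soft threshold for large |y|.

```python
import numpy as np

def mc_penalty(x, a):
    """Minimax-concave (MC) type penalty; reduces the bias of |x| for large x."""
    x = np.abs(x)
    return np.where(x <= 1.0 / a, x - 0.5 * a * x**2, 0.5 / a)

def denoise_scalar(y, lam, a, grid=np.linspace(-10, 10, 200001)):
    """Scalar minimizer of F(x) = 0.5*(y-x)^2 + lam*phi(x; a) found by grid search."""
    F = 0.5 * (y - grid) ** 2 + lam * mc_penalty(grid, a)
    return grid[np.argmin(F)]

lam = 1.0
a = 0.9 / lam        # a <= 1/lam keeps F strictly convex (second derivative 1 - lam*a > 0)
for y in (0.5, 2.0, 5.0):
    soft = np.sign(y) * max(abs(y) - lam, 0.0)
    print(y, soft, denoise_scalar(y, lam, a))
# the MC-based estimate is closer to y than the soft-threshold estimate for large |y|
```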

  17. Electro-Fenton oxidation of coking wastewater: optimization using the combination of central composite design and convex optimization method.

    PubMed

    Zhang, Bo; Sun, Jiwei; Wang, Qin; Fan, Niansi; Ni, Jialing; Li, Weicheng; Gao, Yingxin; Li, Yu-You; Xu, Changyou

    2017-01-12

    The electro-Fenton treatment of coking wastewater was evaluated experimentally in a batch electrochemical reactor. Based on central composite design coupled with response surface methodology, a regression quadratic equation was developed to model the total organic carbon (TOC) removal efficiency. Analysis of variance confirmed that this model accurately predicts the effects of the process variables. With the aid of the convex optimization method, which is a global optimization method, the optimal parameters were determined as a current density of 30.9 mA/cm², an Fe²⁺ concentration of 0.35 mg/L, and a pH of 4.05. Under the optimized conditions, the corresponding TOC removal efficiency was up to 73.8%. The maximum TOC removal efficiency achieved was further confirmed by gas chromatography–mass spectrometry analysis.
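
    The workflow described (fit a quadratic response surface to designed experiments, then optimize it over the experimental region) can be sketched as follows; the design points, coefficients, and factor ranges are synthetic placeholders rather than the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(X):
    """Full quadratic feature map for a 3-factor response surface model."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

# hypothetical design points in coded units [-1, 1] (a real study would use a CCD)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3))
y = 70 - 5*(X[:, 0] - 0.3)**2 - 8*(X[:, 1] + 0.1)**2 - 6*(X[:, 2] - 0.2)**2 \
    + rng.normal(scale=0.5, size=20)                  # synthetic TOC-removal responses

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)   # regression quadratic model

# maximize the fitted response over the coded experimental region
neg_model = lambda v: -(quad_features(v[None, :])[0] @ beta)
res = minimize(neg_model, x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimal coded factors:", res.x, " predicted removal:", -res.fun)
```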

  18. On the existence of convex classical solutions to a generalized Prandtl-Batchelor free-boundary problem-II

    NASA Astrophysics Data System (ADS)

    Acker, A.

    We give an analytical proof of the existence of convex classical solutions for the (convex) Prandtl-Batchelor free boundary problem in fluid dynamics. In this problem, a convex vortex core of constant vorticity μ > 0 is embedded in a closed irrotational flow inside a closed, convex vessel in ℝ². The unknown boundary of the vortex core is a closed curve Γ along which $(v^+)^2 - (v^-)^2 = \Lambda$, where $v^+$ and $v^-$ denote, respectively, the exterior and interior flow-speeds along Γ and Λ is a given constant. Our existence results all apply to the natural multidimensional mathematical generalization of the above problem. The present existence theorems are the only ones available for the Prandtl-Batchelor problem for Λ > 0, because (a) the author's prior existence treatment was restricted to the case where Λ < 0, and because (b) there is no analytical existence theory available for this problem in the non-convex case, regardless of the sign of Λ.

  19. Bi-convex Optimization to Learn Classifiers from Multiple Biomedical Annotations

    PubMed Central

    Wang, Xin; Bi, Jinbo

    2016-01-01

    The problem of constructing classifiers from multiple annotators who provide inconsistent training labels is important and occurs in many application domains. Many existing methods focus on the understanding and learning of the crowd behaviors. Several probabilistic algorithms consider the construction of classifiers for specific tasks using consensus of multiple labelers annotations. These methods impose a prior on the consensus and develop an expectation-maximization algorithm based on logistic regression loss. We extend the discussion to the hinge loss commonly used by support vector machines. Our formulations form bi-convex programs that construct classifiers and estimate the reliability of each labeler simultaneously. Each labeler is associated with a reliability parameter, which can be a constant, or class-dependent, or varies for different examples. The hinge loss is modified by replacing the true labels by the weighted combination of labelers’ labels with reliabilities as weights. Statistical justification is discussed to motivate the use of linear combination of labels. In parallel to the expectation-maximization algorithm for logistic based methods, efficient alternating algorithms are developed to solve the proposed bi-convex programs. Experimental results on benchmark datasets and three real-world biomedical problems demonstrate that the proposed methods either outperform or are competitive to the state of the art. PMID:27295686
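
    A minimal sketch of the alternating scheme for such a bi-convex program is shown below, with each convex subproblem written in cvxpy. The constant-reliability variant is assumed, the simplex constraint on the reliabilities is an illustrative modeling choice, and the details differ from the authors' formulation.

```python
import numpy as np
import cvxpy as cp

def fit_from_crowds(X, Y, lam=0.1, n_rounds=10):
    """Alternating minimization for a bi-convex crowd-learning sketch.
    X: (n, d) features; Y: (n, K) labels in {-1, +1} from K annotators.
    Reliabilities r live on the simplex; the surrogate label is Y @ r."""
    n, d = X.shape
    K = Y.shape[1]
    r_val = np.full(K, 1.0 / K)                 # start from equal reliabilities
    w = cp.Variable(d)
    for _ in range(n_rounds):
        # step 1: fix reliabilities, fit the classifier (convex in w)
        yhat = Y @ r_val
        loss_w = cp.sum(cp.pos(1 - cp.multiply(yhat, X @ w))) + lam * cp.sum_squares(w)
        cp.Problem(cp.Minimize(loss_w)).solve()
        w_val = w.value
        # step 2: fix the classifier, re-estimate reliabilities (convex in r)
        r = cp.Variable(K, nonneg=True)
        loss_r = cp.sum(cp.pos(1 - cp.multiply(Y @ r, X @ w_val)))
        cp.Problem(cp.Minimize(loss_r), [cp.sum(r) == 1]).solve()
        r_val = r.value
    return w_val, r_val

# toy usage: three annotators with different label-noise levels on a linear concept
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 5))
y_true = np.sign(X @ rng.normal(size=5))
flip = lambda y, p: y * np.where(rng.random(len(y)) < p, -1, 1)
Y = np.column_stack([flip(y_true, p) for p in (0.05, 0.2, 0.45)])
w_hat, r_hat = fit_from_crowds(X, Y)   # r_hat should favor the least noisy annotator
```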

  20. Bi-convex Optimization to Learn Classifiers from Multiple Biomedical Annotations.

    PubMed

    Wang, Xin; Bi, Jinbo

    2016-06-07

    The problem of constructing classifiers from multiple annotators who provide inconsistent training labels is important and occurs in many application domains. Many existing methods focus on the understanding and learning of the crowd behaviors. Several probabilistic algorithms consider the construction of classifiers for specific tasks using consensus of multiple labelers annotations. These methods impose a prior on the consensus and develop an expectation-maximization algorithm based on logistic regression loss. We extend the discussion to the hinge loss commonly used by support vector machines. Our formulations form bi-convex programs that construct classifiers and estimate the reliability of each labeler simultaneously. Each labeler is associated with a reliability parameter, which can be a constant, or class-dependent, or varies for different examples. The hinge loss is modified by replacing the true labels by the weighted combination of labelers' labels with reliabilities as weights. Statistical justification is discussed to motivate the use of linear combination of labels. In parallel to the expectation-maximization algorithm for logistic based methods, efficient alternating algorithms are developed to solve the proposed bi-convex programs. Experimental results on benchmark datasets and three real-world biomedical problems demonstrate that the proposed methods either outperform or are competitive to the state of the art.

  1. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low thrust vehicles.

  2. Finding and proving the exact ground state of a generalized Ising model by convex optimization and MAX-SAT

    NASA Astrophysics Data System (ADS)

    Huang, Wenxuan; Kitchaev, Daniil A.; Dacek, Stephen T.; Rong, Ziqin; Urban, Alexander; Cao, Shan; Luo, Chuan; Ceder, Gerbrand

    2016-10-01

    Lattice models, also known as generalized Ising models or cluster expansions, are widely used in many areas of science and are routinely applied to the study of alloy thermodynamics, solid-solid phase transitions, magnetic and thermal properties of solids, fluid mechanics, and others. However, the problem of finding and proving the global ground state of a lattice model, which is essential for all of the aforementioned applications, has remained unresolved for relatively complex practical systems, with only a limited number of results for highly simplified systems known. In this paper, we present a practical and general algorithm that provides a provable periodically constrained ground state of a complex lattice model up to a given unit cell size and in many cases is able to prove global optimality over all other choices of unit cell. We transform the infinite-discrete-optimization problem into a pair of combinatorial optimization (MAX-SAT) and nonsmooth convex optimization (MAX-MIN) problems, which provide upper and lower bounds on the ground state energy, respectively. By systematically converging these bounds to each other, we may find and prove the exact ground state of realistic Hamiltonians whose exact solutions are difficult, if not impossible, to obtain via traditional methods. Considering that currently such practical Hamiltonians are solved using simulated annealing and genetic algorithms that are often unable to find the true global energy minimum and inherently cannot prove the optimality of their result, our paper opens the door to resolving longstanding uncertainties in lattice models of physical phenomena. An implementation of the algorithm is available at https://github.com/dkitch/maxsat-ising.

  3. Optimal structural design via optimality criteria as a nonsmooth mechanics problem

    NASA Astrophysics Data System (ADS)

    Tzaferopoulos, M. Ap.; Stravroulakis, G. E.

    1995-06-01

    In the theory of plastic structural design via optimality criteria (due to W. Prager), the optimal design problem is transformed to a nonlinear elastic structural analysis problem with appropriate stress-strain laws, which generally include complete vertical branches. In this context, the concept of structural universe (in the sense of G. Rozvany) permits the treatment of complicated optimal layout problems. Recent progress in the field of nonsmooth mechanics makes the solution of structural analysis problems with this kind of 'complete' law possible. Elements from the two fields are combined in this paper for the solution of optimal design and layout problems for structures. The optimal layout of plane trusses with various specific cost functions is studied here as a representative problem. The use of convex, continuous and piecewise linear specific cost functions for the structural members leads to problems of linear variational inequalities or equivalently piecewise linear, convex but nonsmooth optimization problems, which are solved by means of an iterative algorithm based on sequential linear programming techniques. Numerical examples illustrate the theory and its applicability to practical engineering structures. Following a parametric investigation of an optimal bridge design, certain aspects of the optimal truss layout problem are discussed, which can be extended to other types of structural systems as well.

  4. Convex hull based neuro-retinal optic cup ellipse optimization in glaucoma diagnosis.

    PubMed

    Zhang, Zhuo; Liu, Jiang; Cherian, Neetu Sara; Sun, Ying; Lim, Joo Hwee; Wong, Wing Kee; Tan, Ngan Meng; Lu, Shijian; Li, Huiqi; Wong, Tien Ying

    2009-01-01

    Glaucoma is the second leading cause of blindness. Glaucoma can be diagnosed through measurement of the neuro-retinal optic cup-to-disc ratio (CDR). Automatic calculation of the optic cup boundary is challenging due to the interweavement of blood vessels with the surrounding tissues around the cup. A Convex Hull based Neuro-Retinal Optic Cup Ellipse Optimization algorithm improves the accuracy of the boundary estimation. The algorithm's effectiveness is demonstrated on a data set of 70 clinical patients collected from the Singapore Eye Research Institute. The root mean squared error of the new algorithm is 43% lower than that of the state-of-the-art ARGALI system. This has led to a larger clinical evaluation of the algorithm involving 15,000 patients from Australia and Singapore.
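
    The basic convex-hull-then-ellipse-fit idea can be sketched generically as below (this is not the ARGALI pipeline or the paper's exact algorithm; the synthetic "vessel" outliers simply illustrate why taking the hull first helps).

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_ellipse_fit(points):
    """Take the convex hull of candidate cup-boundary points, then fit a conic
    a x^2 + b xy + c y^2 + d x + e y + f = 0 to the hull vertices by least squares
    (smallest right singular vector of the design matrix)."""
    hull_pts = points[ConvexHull(points).vertices]
    x, y = hull_pts[:, 0], hull_pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    conic = np.linalg.svd(D)[2][-1]          # coefficients, determined up to scale
    return hull_pts, conic

# toy usage: noisy samples of an ellipse plus a few points dragged inward,
# mimicking how blood vessels pull candidate boundary points into the cup
rng = np.random.default_rng(3)
t = rng.uniform(0, 2 * np.pi, 300)
pts = np.column_stack([30 * np.cos(t), 20 * np.sin(t)]) + rng.normal(scale=0.5, size=(300, 2))
pts[:20] *= 0.6                              # inward outliers; the hull step ignores them
hull_pts, conic = hull_ellipse_fit(pts)
```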

  5. Fast inference of ill-posed problems within a convex space

    NASA Astrophysics Data System (ADS)

    Fernandez-de-Cossio-Diaz, J.; Mulet, R.

    2016-07-01

    In multiple scientific and technological applications we face the problem of having low dimensional data to be justified by a linear model defined in a high dimensional parameter space. The difference in dimensionality makes the problem ill-defined: the model is consistent with the data for many values of its parameters. The objective is to find the probability distribution of parameter values consistent with the data, a problem that can be cast as the exploration of a high dimensional convex polytope. In this work we introduce a novel algorithm to solve this problem efficiently. It provides results that are statistically indistinguishable from currently used numerical techniques while its running time scales linearly with the system size. We show that the algorithm performs robustly in many abstract and practical applications. As working examples we simulate the effects of restricting reaction fluxes on the space of feasible phenotypes of a genome scale Escherichia coli metabolic network and infer the traffic flow between origin and destination nodes in a real communication network.
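
    To make the polytope-exploration task concrete, here is a generic hit-and-run sampler over {x : Ax ≤ b}. It is a standard baseline rather than the authors' faster algorithm, and the unit-box example is only a placeholder for a metabolic flux polytope.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples=1000, seed=0):
    """Generic hit-and-run sampler over the polytope {x : A x <= b}.
    x0 must be a strictly interior starting point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                  # uniform random direction
        Ad, slack = A @ d, b - A @ x            # feasible chord: (A d) t <= b - A x
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0]) if np.any(Ad > 0) else np.inf
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0]) if np.any(Ad < 0) else -np.inf
        x = x + rng.uniform(t_lo, t_hi) * d     # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)

# toy usage: the 3-D unit box written as A x <= b (a placeholder for a flux polytope)
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
S = hit_and_run(A, b, x0=np.full(3, 0.5))
```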

  6. A fast nonstationary iterative method with convex penalty for inverse problems in Hilbert spaces

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Lu, Xiliang

    2014-04-01

    In this paper we consider the computation of approximate solutions for inverse problems in Hilbert spaces. In order to capture the special features of solutions, non-smooth convex functions are introduced as penalty terms. By exploiting the Hilbert space structure of the underlying problems, we propose a fast iterative regularization method which reduces to the classical nonstationary iterated Tikhonov regularization when the penalty term is chosen to be the square of the norm. Each iteration of the method consists of two steps: the first step involves only the operator from the problem, while the second step involves only the penalty term. This splitting character has the advantage of making the computation efficient. When the data are corrupted by noise, a stopping rule is proposed to terminate the method and the corresponding regularization property is established. Finally, we test the performance of the method by reporting various numerical simulations, including image deblurring, the determination of the source term in a Poisson equation, and the de-autoconvolution problem.
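
    A schematic forward-backward style iteration with the same operator/penalty splitting is sketched below (an ℓ1 penalty and a simple discrepancy-type stopping rule are illustrative choices; this is not the authors' exact update).

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal map of tau*||.||_1 (soft-thresholding), a typical non-smooth penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def split_iteration(A, y_delta, alpha=0.05, n_iter=500, tol=None):
    """Schematic two-step iteration for A x ≈ y_delta with an l1 penalty:
    step 1 uses only the operator A (a gradient/Landweber step),
    step 2 uses only the penalty (its proximal map)."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # keeps the gradient step stable
    x = np.zeros(n)
    for _ in range(n_iter):
        g = A.T @ (A @ x - y_delta)             # operator-only step
        x = prox_l1(x - step * g, step * alpha) # penalty-only step
        if tol is not None and np.linalg.norm(A @ x - y_delta) <= tol:
            break                               # discrepancy-type stopping for noisy data
    return x

# toy usage with a sparse ground truth
rng = np.random.default_rng(5)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100); x_true[::20] = 1.0
y_delta = A @ x_true + 0.01 * rng.normal(size=60)
x_rec = split_iteration(A, y_delta, alpha=0.02)
```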

  7. Multi-Stage Convex Relaxation Methods for Machine Learning

    DTIC Science & Technology

    2013-03-01

    Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.

  8. Class and Home Problems: Optimization Problems

    ERIC Educational Resources Information Center

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  9. Class and Home Problems: Optimization Problems

    ERIC Educational Resources Information Center

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  10. Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-04-01

    We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2 weighted magnetic resonance (MR) images, enforcing inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility, independent of observers.

  11. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  12. Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2015-09-21

    Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient's anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant's RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B1+ field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient's anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.
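
    A schematic formulation in the spirit of such constrained strategies is sketched below in cvxpy: minimize the B1+ field error subject to a cap on an implant-heating proxy. The field matrix S, the implant-coupling matrix H, the target field, the cap value, and the use of real-valued stand-ins for the physically complex-valued quantities are all simplifying assumptions, not the paper's data or exact formulation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_ch, n_vox, n_tap = 8, 200, 4      # transmit channels, B1-map voxels, implant coupling modes

# Real-valued stand-ins for the (physically complex-valued) field and coupling maps.
S = rng.normal(size=(n_vox, n_ch))  # maps shim weights to B1+ at each voxel (placeholder)
H = rng.normal(size=(n_tap, n_ch))  # maps shim weights to implant-coupled power (placeholder)
b_target = np.ones(n_vox)           # desired uniform B1+ (arbitrary units)

w = cp.Variable(n_ch)                           # shim weights
field_error = cp.norm(S @ w - b_target, 2)      # MRI-quality term (uniformity)
heating_proxy = cp.norm(H @ w, 2)               # surrogate for RF power coupled to the implant

# Trade image quality against implant heating by capping the heating proxy.
prob = cp.Problem(cp.Minimize(field_error), [heating_proxy <= 0.1])
prob.solve()
print("field error:", field_error.value, " heating proxy:", heating_proxy.value)
```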

  13. Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2015-09-01

    Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient’s anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant’s RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B1+ field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient’s anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.

  14. Efficient convex optimization approach to 3D non-rigid MR-TRUS registration.

    PubMed

    Sun, Yue; Yuan, Jing; Rajchl, Martin; Qiu, Wu; Romagnoli, Cesare; Fenster, Aaron

    2013-01-01

    In this study, we propose an efficient non-rigid MR-TRUS deformable registration method to improve the accuracy of targeting suspicious locations during a 3D ultrasound (US) guided prostate biopsy. The proposed deformable registration approach employs the multi-channel modality independent neighbourhood descriptor (MIND) as the local similarity feature across the two modalities of MR and TRUS, and a novel and efficient duality-based convex optimization based algorithmic scheme is introduced to extract the deformations which align the two MIND descriptors. The registration accuracy was evaluated using 10 patient images by measuring the TRE of manually identified corresponding intrinsic fiducials in the whole gland and peripheral zone, and performance metrics (DSC, MAD and MAXD) for the apex, mid-gland and base of the prostate were also calculated by comparing two manually segmented prostate surfaces in the registered 3D MR and TRUS images. Experimental results show that the proposed method yielded an overall mean TRE of 1.74 mm, which is favorably comparable to a clinical requirement for an error of less than 2.5 mm.

  15. Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming.

    DTIC Science & Technology

    1982-12-21

    Operations Research, Vol. 18, No. 1, pp. 107–118. FENCHEL, W. (1953). Convex Cones, Sets and Functions. Lecture Notes, Princeton University Press. … [from the table of contents: function and convexity properties of the solution set map, 40; 5. Concluding remarks, 48; References] … is the set conv(A) = {λx₁ + (1−λ)x₂ : x₁, x₂ ∈ A, λ ∈ [0,1]}. The set K ⊆ Eʳ is a cone if x ∈ K implies λx ∈ K for all λ > 0, and K is a convex

  16. Automatic Treatment Planning with Convex Imputing

    NASA Astrophysics Data System (ADS)

    Sayre, G. A.; Ruan, D.

    2014-03-01

    Current inverse optimization-based treatment planning for radiotherapy requires a set of complex DVH objectives to be simultaneously minimized. This process, known as multi-objective optimization, is challenging due to non-convexity in individual objectives and insufficient knowledge of the tradeoffs among the objective set. As such, clinical practice involves numerous iterations of human intervention that are costly and often inconsistent. In this work, we propose to address treatment planning with convex imputing, a new data-mining technique that explores the existence of a latent convex objective whose optimizer reflects the DVH and dose-shaping properties of previously optimized cases. Using ten clinical prostate cases as the basis for comparison, we imputed a simple least-squares problem from the optimized solutions of the prostate cases, and show that the imputed plans are more consistent than their clinical counterparts in achieving planning goals.

  17. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the whole code in itself parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and optimization of the code. After difficulties with the interpretation of the data for speedup and efficiency are overcome, respectable parallelization speedups could be obtained.

  18. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods

    DTIC Science & Technology

    2016-11-16

    Information Matrices of Elliptically Contoured Distributions … robust in the face of many potentially similar but varying covariance matrices or array responses. … Fortunately … ln(det(X)) as the final objective function. It is proven below that this function is concave (convex up) for all positive semidefinite matrices X

  19. Lossless Convexification of Control Constraints for a Class of Nonlinear Optimal Control Problems

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars; Acikmese, Behcet; Carson, John M.,III

    2012-01-01

    In this paper we consider a class of optimal control problems that have continuous-time nonlinear dynamics and nonconvex control constraints. We propose a convex relaxation of the nonconvex control constraints, and prove that the optimal solution to the relaxed problem is the globally optimal solution to the original problem with nonconvex control constraints. This lossless convexification enables a computationally simpler problem to be solved instead of the original problem. We demonstrate the approach in simulation with a planetary soft landing problem involving a nonlinear gravity field.
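
    Schematically, the kind of relaxation at work here can be written as follows (this is the canonical form used in the powered-descent literature, shown for orientation rather than as the paper's exact statement):

```latex
% Original, nonconvex thrust-magnitude constraint (an annulus in control space):
\[ 0 < \rho_1 \le \|u(t)\| \le \rho_2 . \]
% Relaxation: introduce a slack variable \Gamma(t) and impose the convex constraints
\[ \|u(t)\| \le \Gamma(t), \qquad \rho_1 \le \Gamma(t) \le \rho_2 , \]
% with \Gamma replacing \|u\| wherever the magnitude enters the cost. ``Lossless''
% means that, under the stated conditions, an optimal solution of the relaxed
% problem satisfies \|u^*(t)\| = \Gamma^*(t), and is therefore feasible (hence
% optimal) for the original nonconvex problem.
```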

  20. A numerical approach to the non-convex dynamic problem of pipeline-soil interaction under environmental effects

    NASA Astrophysics Data System (ADS)

    Liolios, K.; Georgiev, I.; Liolios, A.

    2012-10-01

    A numerical approach for a problem arising in Civil and Environmental Engineering is presented. This problem concerns the dynamic soil-pipeline interaction, when unilateral contact conditions due to tensionless and elastoplastic softening/fracturing behaviour of the soil as well as due to gapping caused by earthquake excitations are taken into account. Moreover, soil-capacity degradation due to environmental effects is also considered. The mathematical formulation of this dynamic elastoplasticity problem leads to a system of partial differential equations with equality domain and inequality boundary conditions. The proposed numerical approach is based on a double discretization, in space and time, and on mathematical programming methods. First, in space the finite element method (FEM) is used for the simulation of the pipeline and the unilateral contact interface, in combination with the boundary element method (BEM) for the soil simulation. Concepts of non-convex analysis are used. Next, with the aid of the Laplace transform, the equality problem conditions are transformed into convolutional ones involving as unknowns the unilateral quantities only. So the number of unknowns is significantly reduced. Then a marching-time approach is applied and a non-convex linear complementarity problem is solved in each time-step.

  1. More on conditions of local and global minima coincidence in discrete optimization problems

    SciTech Connect

    Lebedeva, T.T.; Sergienko, I.V.; Soltan, V.P.

    1994-05-01

    In some areas of discrete optimization, it is necessary to isolate classes of problems whose target functions do not have local or strictly local minima that differ from the global minima. Examples include optimizations on discrete metric spaces and graphs, lattices and partially ordered sets, and linear combinatorial problems. A unified schema that to a certain extent generalizes the convexity models on which the above-cited works are based has been presented in articles. This article is a continuation of that research.

  2. Optimization and geophysical inverse problems

    SciTech Connect

    Barhen, J.; Berryman, J.G.; Borcea, L.; Dennis, J.; de Groot-Hedlin, C.; Gilbert, F.; Gill, P.; Heinkenschloss, M.; Johnson, L.; McEvilly, T.; More, J.; Newman, G.; Oldenburg, D.; Parker, P.; Porto, B.; Sen, M.; Torczon, V.; Vasco, D.; Woodward, N.B.

    2000-10-01

    A fundamental part of geophysics is to make inferences about the interior of the earth on the basis of data collected at or near the surface of the earth. In almost all cases these measured data are only indirectly related to the properties of the earth that are of interest, so an inverse problem must be solved in order to obtain estimates of the physical properties within the earth. In February of 1999 the U.S. Department of Energy sponsored a workshop that was intended to examine the methods currently being used to solve geophysical inverse problems and to consider what new approaches should be explored in the future. The interdisciplinary area between inverse problems in geophysics and optimization methods in mathematics was specifically targeted as one where an interchange of ideas was likely to be fruitful. Thus about half of the participants were actively involved in solving geophysical inverse problems and about half were actively involved in research on general optimization methods. This report presents some of the topics that were explored at the workshop and the conclusions that were reached. In general, the objective of a geophysical inverse problem is to find an earth model, described by a set of physical parameters, that is consistent with the observational data. It is usually assumed that the forward problem, that of calculating simulated data for an earth model, is well enough understood so that reasonably accurate synthetic data can be generated for an arbitrary model. The inverse problem is then posed as an optimization problem, where the function to be optimized is variously called the objective function, misfit function, or fitness function. The objective function is typically some measure of the difference between observational data and synthetic data calculated for a trial model. However, because of incomplete and inaccurate data, the objective function often incorporates some additional form of regularization, such as a measure of smoothness

  3. A novel neural network for nonlinear convex programming.

    PubMed

    Gao, Xing-Bao

    2004-05-01

    In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem. Then a dynamical system and a convex energy function are constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem. Compared with the existing neural networks for solving the nonlinear convex programming problem, the proposed neural network requires no Lipschitz condition, has no adjustable parameters, and has a simple structure. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.
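
    A schematic projection-type network of this flavor, specialized to a box-constrained convex quadratic program and integrated with a simple Euler scheme, is sketched below (the paper's network handles more general convex programs and differs in detail).

```python
import numpy as np

def projection_network(Q, c, lo, hi, alpha=0.1, dt=0.01, T=50.0):
    """Integrate the projection dynamics  dx/dt = P_box(x - alpha*(Q x + c)) - x,
    whose equilibria solve the box-constrained convex QP  min 0.5 x'Q x + c'x."""
    x = np.zeros_like(c)
    for _ in range(int(T / dt)):
        x = x + dt * (np.clip(x - alpha * (Q @ x + c), lo, hi) - x)   # Euler step
    return x

# toy usage: a 2-D strictly convex QP over the unit box
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
x_star = projection_network(Q, c, lo=np.zeros(2), hi=np.ones(2))
print("equilibrium (approximate QP solution):", x_star)
```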

  4. Robust Utility Maximization Under Convex Portfolio Constraints

    SciTech Connect

    Matoussi, Anis; Mezghani, Hanen Mnif, Mohamed

    2015-04-15

    We study a robust utility maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  5. Perceptual convexity

    NASA Astrophysics Data System (ADS)

    Kupeev, Konstantin Y.; Wolfson, Haim J.

    1995-08-01

    Often objects which are not convex in the mathematical sense are treated as `perceptually convex'. We present an algorithm for recognition of the perceptual convexity of a 2D contour. We start by reducing the notion of `a contour is perceptually convex' to the notion of `a contour is Y-convex'. The latter reflects an absence of large concavities in the OY direction of an XOY frame. Then we represent a contour by a G-graph and modify the slowest descent, the small-leaf-trimming procedure recently introduced for the estimation of shape similarity. We prove that executing the slowest descent down to a G-graph consisting of 3 vertices allows us to detect large concavities in the OY direction. This allows us to recognize the perceptual convexity of an input contour.

  6. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  7. Basic Studies in Combinatorial and Nondifferentiable Optimization.

    DTIC Science & Technology

    1978-03-01

    control problems with linear dynamics, convex cost, and convex inequality state and control constraints is analyzed … S.K. MITTER (with W.W. Hager), "Lagrange Duality Theory for Convex Control Problems," Journal of Control and Optimization, 14, August 1976, pp. 843–856.

  8. Hierarchical particle swarm optimizer for minimizing the non-convex potential energy of molecular structure.

    PubMed

    Cheung, Ngaam J; Shen, Hong-Bin

    2014-11-01

    The stable conformation of a molecule is of great importance for uncovering its properties and functions. Generally, the conformation of a molecule is most stable when its potential energy is minimal. Accordingly, determining the conformation can be cast as an optimization problem. It is, however, not easy to find the single lowest-energy conformation among all potential ones, because of the high complexity of the energy landscape and the exponential growth of computation with molecular size. In this paper, we develop a hierarchical and heterogeneous particle swarm optimizer (HHPSO) to deal with the minimization of the potential energy. The proposed method is evaluated over a scalable simplified molecular potential energy function with up to 200 degrees of freedom and a realistic energy function of the pseudo-ethane molecule. The experimental results are compared with six other PSO variants and four genetic algorithms. The results show HHPSO is significantly better than the compared PSOs, with a p-value less than 0.01277, on the molecular potential energy function. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. A combination of concave/convex surfaces for field-enhancement optimization: the indented nanocone.

    PubMed

    García-Etxarri, Aitzol; Apell, Peter; Käll, Mikael; Aizpurua, Javier

    2012-11-05

    We introduce a design strategy to maximize the Near Field (NF) enhancement near plasmonic antennas. We start by identifying and studying the basic electromagnetic effects that contribute to the electric near field enhancement. Next, we show how the concatenation of a convex and a concave surface allows all the effects to be merged in a single, continuous nanoantenna. As an example of this NF maximization strategy, we engineer a nanostructure, the indented nanocone. This structure combines all the studied NF maximization effects with a synergistic boost provided by a Fano-like interference effect activated by the presence of the concave surface. As a result, the antenna exhibits a NF amplitude enhancement of ~800, which increases to ~1600 when the antenna is coupled to a perfect metallic surface. This strong enhancement makes the proposed structure a robust candidate for field-enhancement-based technologies. Further elaborations of the concept may produce even larger and more effective enhancements.

  10. Convex Formulations of Learning from Crowds

    NASA Astrophysics Data System (ADS)

    Kajino, Hiroshi; Kashima, Hisashi

    Using crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since crowdsourcing services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning, that is, coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, the existing approaches have two serious drawbacks: (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only a single homogeneous task, while in realistic situations clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning based on the resemblance between the proposed formulation and that for an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.

  11. Statistical mechanics of the inverse Ising problem and the optimal objective function

    NASA Astrophysics Data System (ADS)

    Berg, Johannes

    2017-08-01

    The inverse Ising problem seeks to reconstruct the parameters of an Ising Hamiltonian on the basis of spin configurations sampled from the Boltzmann measure. Over the last decade, many applications of the inverse Ising problem have arisen, driven by the advent of large-scale data across different scientific disciplines. Recently, strategies to solve the inverse Ising problem based on convex optimisation have proven to be very successful. These approaches maximise particular objective functions with respect to the model parameters. Examples are the pseudolikelihood method and interaction screening. In this paper, we establish a link between approaches to the inverse Ising problem based on convex optimisation and the statistical physics of disordered systems. We characterise the performance of an arbitrary objective function and calculate the objective function which optimally reconstructs the model parameters. We evaluate the optimal objective function within a replica-symmetric ansatz and compare the results of the optimal objective function with other reconstruction methods. Apart from giving a theoretical underpinning to solving the inverse Ising problem by convex optimisation, the optimal objective function outperforms state-of-the-art methods, albeit by a small margin.
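
    The pseudolikelihood approach mentioned above reduces to one convex logistic-regression-type fit per spin; a compact sketch (written generically, not as the paper's replica calculation) is:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit     # logistic sigmoid

def fit_pseudolikelihood(S):
    """Pseudolikelihood reconstruction of an Ising model from samples S (M x N, entries +/-1).
    Each spin's conditional law is logistic in the other spins, so every sub-problem
    below is a convex fit that can be solved independently."""
    M, N = S.shape
    J = np.zeros((N, N)); h = np.zeros(N)
    for i in range(N):
        others = np.delete(np.arange(N), i)
        def neg_log_pl(theta, i=i, others=others):
            field = theta[0] + S[:, others] @ theta[1:]
            # -sum_m log sigma(2 s_i^(m) (h_i + sum_j J_ij s_j^(m)))
            return -np.sum(np.log(expit(2.0 * S[:, i] * field) + 1e-12))
        res = minimize(neg_log_pl, np.zeros(N), method="L-BFGS-B")
        h[i] = res.x[0]
        J[i, others] = res.x[1:]
    return h, 0.5 * (J + J.T)       # symmetrize the two independent estimates of each J_ij

# toy usage: independent spins, so reconstructed couplings should be near zero
rng = np.random.default_rng(6)
S = rng.choice([-1, 1], size=(500, 5))
h_hat, J_hat = fit_pseudolikelihood(S)
```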

  12. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189

  13. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  14. On the Dirichlet and Serrin Problems for the Inhomogeneous Infinity Laplacian in Convex Domains: Regularity and Geometric Results

    NASA Astrophysics Data System (ADS)

    Crasta, Graziano; Fragalà, Ilaria

    2015-12-01

    Given an open bounded subset Ω of $\mathbb{R}^n$, which is convex and satisfies an interior sphere condition, we consider the PDE $-\Delta_{\infty} u = 1$ in Ω, subject to the homogeneous boundary condition $u = 0$ on ∂Ω. We prove that the unique solution to this Dirichlet problem is power-concave (precisely, 3/4-concave) and of class $C^1(\Omega)$. We then investigate the overdetermined Serrin-type problem, formerly considered in Buttazzo and Kawohl (Int Math Res Not, pp 237–247, 2011), obtained by adding the extra boundary condition $|\nabla u| = a$ on ∂Ω; by using a suitable P-function we prove that, if Ω satisfies the same assumptions as above and in addition contains a ball which touches ∂Ω at two diametral points, then the existence of a solution to this Serrin-type problem implies that necessarily the cut locus and the high ridge of Ω coincide. In turn, in dimension n = 2, this entails that Ω must be a stadium-like domain, and in particular it must be a ball in case its boundary is of class $C^2$.

  15. Some Tours Are More Equal than Others: The Convex-Hull Model Revisited with Lessons for Testing Models of the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    Tak, Susanne; Plaisier, Marco; van Rooij, Iris

    2008-01-01

    To explain human performance on the "Traveling Salesperson" problem (TSP), MacGregor, Ormerod, and Chronicle (2000) proposed that humans construct solutions according to the steps described by their convex-hull algorithm. Focusing on tour length as the dependent variable, and using only random or semirandom point sets, the authors…

  16. Some Tours Are More Equal than Others: The Convex-Hull Model Revisited with Lessons for Testing Models of the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    Tak, Susanne; Plaisier, Marco; van Rooij, Iris

    2008-01-01

    To explain human performance on the "Traveling Salesperson" problem (TSP), MacGregor, Ormerod, and Chronicle (2000) proposed that humans construct solutions according to the steps described by their convex-hull algorithm. Focusing on tour length as the dependent variable, and using only random or semirandom point sets, the authors…

  17. Convex Relaxation For Hard Problem In Data Mining And Sensor Localization

    DTIC Science & Technology

    2017-04-13

    Drusvyatskiy, G. Pataki, and H. Wolkowicz. Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Optim., 25(2):1160–1178, 2015. [2] D... and H. Wolkowicz. Rank restricted semidefinite matrices and image closedness. Technical report, University of Waterloo, Waterloo, Ontario, 2016

  18. Applying optimization software libraries to engineering problems

    NASA Technical Reports Server (NTRS)

    Healy, M. J.

    1984-01-01

    Nonlinear programming, preliminary design problems, performance simulation problems, trajectory optimization, flight computer optimization, and linear least squares problems are among the topics covered. The nonlinear programming applications encountered in a large aerospace company are a real challenge to those who provide mathematical software libraries and consultation services. Typical applications include preliminary design studies, data fitting and filtering, jet engine simulations, control system analysis, and trajectory optimization and optimal control. Problem sizes range from single-variable unconstrained minimization to constrained problems with highly nonlinear functions and hundreds of variables. Most of the applications can be posed as nonlinearly constrained minimization problems. Highly complex optimization problems with many variables were formulated in the early days of computing. At the time, many problems had to be reformulated or bypassed entirely, and solution methods often relied on problem-specific strategies. Problems with more than ten variables usually went unsolved.

  19. A novel recurrent neural network for solving nonlinear optimization problems with inequality constraints.

    PubMed

    Xia, Youshen; Feng, Gang; Wang, Jun

    2008-08-01

    This paper presents a novel recurrent neural network for solving nonlinear optimization problems with inequality constraints. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semidefinite, it is shown that the proposed neural network is stable at a Karush-Kuhn-Tucker point in the sense of Lyapunov and its output trajectory is globally convergent to a minimum solution. Compared with a variety of existing projection neural networks, including their extensions and modifications, for solving such nonlinearly constrained optimization problems, it is shown that the proposed neural network can solve constrained convex optimization problems and a class of constrained nonconvex optimization problems, with no restriction on the initial point. Simulation results show the effectiveness of the proposed neural network in solving nonlinearly constrained optimization problems.
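
    The network proposed in the paper is not reproduced here; as a generic, hedged sketch of the recurrent-network idea for constrained optimization, the following Euler-integrates the standard projection dynamics dx/dt = -x + P(x - alpha*grad f(x)) for a convex quadratic with box constraints (the objective, gain, step size, and constraint set are all assumptions chosen for illustration):

      import numpy as np

      # minimize f(x) = 0.5 x^T Q x + c^T x  subject to the box  lo <= x <= hi
      Q = np.array([[3.0, 1.0], [1.0, 2.0]])
      c = np.array([-1.0, -4.0])
      lo, hi = np.zeros(2), np.full(2, 2.0)

      grad = lambda x: Q @ x + c            # gradient of the objective
      proj = lambda x: np.clip(x, lo, hi)   # projection onto the feasible box

      x, alpha, dt = np.zeros(2), 0.2, 0.1  # state, gradient gain, Euler step
      for _ in range(1000):                 # integrate dx/dt = -x + P(x - alpha*grad(x))
          x = x + dt * (-x + proj(x - alpha * grad(x)))

      print(x)  # approaches the constrained minimizer, here approximately [0, 2]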

  20. User-guided segmentation of preterm neonate ventricular system from 3-D ultrasound images using convex optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; McLeod, Jonathan; Chen, Yimin; de Ribaupierre, Sandrine; Fenster, Aaron

    2015-02-01

    A three-dimensional (3-D) ultrasound (US) system has been developed to monitor the intracranial ventricular system of preterm neonates with intraventricular hemorrhage (IVH) and the resultant dilation of the ventricles (ventriculomegaly). To measure ventricular volume from 3-D US images, a semi-automatic convex optimization-based approach is proposed for segmentation of the cerebral ventricular system in preterm neonates with IVH from 3-D US images. The proposed semi-automatic segmentation method makes use of the convex optimization technique supervised by user-initialized information. Experiments using 58 patient 3-D US images reveal that our proposed approach yielded a mean Dice similarity coefficient of 78.2% compared with the surfaces that were manually contoured, suggesting good agreement between these two segmentations. Additional metrics, the mean absolute distance of 0.65 mm and the maximum absolute distance of 3.2 mm, indicated small distance errors for a voxel spacing of 0.22 × 0.22 × 0.22 mm(3). The Pearson correlation coefficient (r = 0.97, p < 0.001) indicated a significant correlation of algorithm-generated ventricular system volume (VSV) with the manually generated VSV. The calculated minimal detectable difference in ventricular volume change indicated that the proposed segmentation approach with 3-D US images is capable of detecting a VSV difference of 6.5 cm(3) with 95% confidence, suggesting that this approach might be used for monitoring IVH patients' ventricular changes using 3-D US imaging. The mean segmentation times of the graphics processing unit (GPU)- and central processing unit-implemented algorithms were 50 ± 2 and 205 ± 5 s for one 3-D US image, respectively, in addition to 120 ± 10 s for initialization, less than the approximately 35 min required by manual segmentation. In addition, repeatability experiments indicated that the intra-observer variability ranges from 6.5% to 7.5%, and the inter-observer variability is 8.5% in terms
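
    The Dice similarity coefficient reported above is a standard overlap metric; a minimal sketch for binary 3-D masks (the array names are assumptions):

      import numpy as np

      def dice_coefficient(seg, ref):
          # Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
          seg, ref = seg.astype(bool), ref.astype(bool)
          return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())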

  1. Existence, uniqueness and construction of the solution of the energy transfer problem in a rigid and non-convex blackbody with temperature-dependent thermal conductivity

    NASA Astrophysics Data System (ADS)

    da Gama, Rogério Martins Saldanha

    2015-10-01

    In this paper, we study the steady-state (coupled) conduction-radiation heat transfer phenomenon in a non-convex opaque blackbody with temperature-dependent thermal conductivity. The mathematical description consists of a nonlinear partial differential equation subjected to a nonlinear boundary condition involving an integral operator that is inherently associated with the non-convexity of the body. The unknown is the absolute temperature distribution. The problem is rewritten with the aid of a Kirchhoff transformation, giving rise to linear partial differential equation and a new unknown. An iterative procedure is proposed for constructing the solution of the problem by means of a sequence of problems, each of them with an equivalent minimum principle. Proofs of convergence as well as existence and uniqueness of the solution are presented. An error estimate, for each element of the sequence, is presented too.
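
    The Kirchhoff transformation invoked above is the standard change of unknown for a temperature-dependent conductivity k(u) > 0:

      \theta(x) = \int_{u_0}^{u(x)} k(s)\, ds
      \quad\Longrightarrow\quad
      \nabla\theta = k(u)\,\nabla u ,
      \qquad
      \operatorname{div}\bigl(k(u)\,\nabla u\bigr) = \Delta\theta ,

    so the nonlinear conduction operator becomes a linear Laplacian in the new unknown, while the nonlinearity is transferred to the radiative boundary condition.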

  2. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    PubMed

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, which is also commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field in a pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method, which can segment brain MR images while correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a small neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstruct the energy function to be convex and minimize it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities, with promising results.

  3. First and Second Order Necessary Conditions for Stochastic Optimal Control Problems

    SciTech Connect

    Bonnans, J. Frederic; Silva, Francisco J.

    2012-06-15

    In this work we consider a stochastic optimal control problem with either convex control constraints or finitely many equality and inequality constraints over the final state. Using the variational approach, we are able to obtain first and second order expansions for the state and cost function, around a local minimum. This fact allows us to prove general first order necessary condition and, under a geometrical assumption over the constraint set, second order necessary conditions are also established. We end by giving second order optimality conditions for problems with constraints on expectations of the final state.

  4. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    The nonlinear programming problem is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  5. A convex max-flow segmentation of LV using subject-specific distributions on cardiac MRI.

    PubMed

    Nambakhsh, Mohammad Saleh; Yuan, Jing; Ben Ayed, Ismail; Punithakumar, Kumaradevan; Goela, Aashish; Islam, Ali; Peters, Terry; Li, Shuo

    2011-01-01

    This work studies the convex relaxation approach to left ventricle (LV) segmentation, which gives rise to a challenging multi-region separation with a geometrical constraint. For each region, we consider a global Bhattacharyya metric prior to evaluate matching of a gray-scale and a radial distance distribution. In this regard, the studied problem amounts to finding three regions that most closely match their respective input distribution models. It was previously addressed by curve evolution, which leads to sub-optimal and computationally intensive algorithms, or by graph cuts, which result in heavy metrication errors (grid bias). The proposed convex relaxation approach solves the LV segmentation through a sequence of convex sub-problems. Each sub-problem leads to a novel bound of the Bhattacharyya measure and yields a convex formulation which paves the way to an efficient and reliable solver. In this respect, we propose a novel flow configuration that accounts for labeling-function variations, in comparison to existing flow-maximization configurations. We show that it leads to a new convex max-flow formulation which is dual to the obtained convex relaxed sub-problem and gives the exact and global optimum of the original non-convex sub-problem. In addition, we show that this flow perspective gives a new and simple way to encode the geometrical constraint of the optimal regions. A comprehensive experimental evaluation on a sufficient number of patient subjects demonstrates that our approach yields improvements in optimality and accuracy over related recent methods.
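
    The Bhattacharyya measure used above as a distribution-matching prior is the sum of pointwise geometric means of two normalized histograms; a minimal sketch (the binning is an assumption):

      import numpy as np

      def bhattacharyya_coefficient(p, q):
          # B(p, q) = sum_i sqrt(p_i * q_i) for two discrete distributions.
          p = np.asarray(p, float) / np.sum(p)
          q = np.asarray(q, float) / np.sum(q)
          return np.sum(np.sqrt(p * q))

      # example: gray-level histograms of a candidate region and of a model
      region = np.histogram(np.random.rand(1000), bins=32, range=(0, 1))[0]
      model = np.histogram(np.random.rand(1000), bins=32, range=(0, 1))[0]
      print(bhattacharyya_coefficient(region, model))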

  6. Convex relaxations for gas expansion planning

    SciTech Connect

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; Hijazi, Hassan; Van Hentenryck, Pascal

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
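
    For reference, the exact McCormick relaxation mentioned here replaces a bilinear term w = xy, with x in [x^L, x^U] and y in [y^L, y^U], by its convex and concave envelopes:

      w \ge x^{L} y + x\, y^{L} - x^{L} y^{L}, \qquad
      w \ge x^{U} y + x\, y^{U} - x^{U} y^{U},

      w \le x^{U} y + x\, y^{L} - x^{U} y^{L}, \qquad
      w \le x^{L} y + x\, y^{U} - x^{L} y^{U}.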

  7. Convex relaxations for gas expansion planning

    DOE PAGES

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.

  8. Constrained Graph Optimization: Interdiction and Preservation Problems

    SciTech Connect

    Schild, Aaron V

    2012-07-30

    The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices in which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including planar graphs.

  9. A new recurrent neural network for solving convex quadratic programming problems with an application to the k-winners-take-all problem.

    PubMed

    Hu, Xiaolin; Zhang, Bo

    2009-04-01

    In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems. In this sense, it can be also viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all ( k-WTA) network with O(n) complexity is designed, which is characterized by simple structure, global convergence, and capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.

  10. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
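
    A compact dynamic-programming sketch of global sequence alignment in the spirit described above (the scoring values are assumptions, not those of the article):

      def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
          # Needleman-Wunsch dynamic program; F[i][j] is the best score
          # for aligning the prefixes a[:i] and b[:j].
          n, m = len(a), len(b)
          F = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              F[i][0] = i * gap
          for j in range(1, m + 1):
              F[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  F[i][j] = max(F[i - 1][j - 1] + s,  # align the two symbols
                                F[i - 1][j] + gap,    # gap in b
                                F[i][j - 1] + gap)    # gap in a
          return F[n][m]

      print(global_alignment_score("GATTACA", "GCATGCU"))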

  11. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…

  12. Particle swarm optimization for complex nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos

    2016-06-01

    This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
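
    A minimal PSO loop for a generic continuous objective (the inertia and acceleration coefficients below are common textbook values, not those of the cited study):

      import numpy as np

      def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
              w=0.7, c1=1.5, c2=1.5, seed=0):
          # Minimal particle swarm optimizer: track personal and global bests.
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))   # positions
          v = np.zeros_like(x)                          # velocities
          pbest = x.copy()
          pbest_f = np.apply_along_axis(objective, 1, x)
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.apply_along_axis(objective, 1, x)
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest, pbest_f.min()

      # usage: minimize the sphere function in three dimensions
      print(pso(lambda z: float(np.sum(z ** 2)), dim=3))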

  13. Problem Solving through an Optimization Problem in Geometry

    ERIC Educational Resources Information Center

    Poon, Kin Keung; Wong, Hang-Chi

    2011-01-01

    This article adapts the problem-solving model developed by Polya to investigate and give an innovative approach to discuss and solve an optimization problem in geometry: the Regiomontanus Problem and its application to football. Various mathematical tools, such as calculus, inequality and the properties of circles, are used to explore and reflect…
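
    For reference, the classical Regiomontanus optimization referred to here: a segment hanging between heights a and b (b > a > 0) above eye level subtends the angle

      \theta(x) = \arctan\frac{b}{x} - \arctan\frac{a}{x},
      \qquad
      \theta'(x) = \frac{a}{x^{2}+a^{2}} - \frac{b}{x^{2}+b^{2}} = 0
      \;\Longleftrightarrow\;
      x = \sqrt{ab},

    which is the viewing distance that maximizes the angle, obtainable either by elementary calculus or by the circle-tangency argument alluded to in the abstract.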

  14. Representations in Problem Solving: A Case Study with Optimization Problems

    ERIC Educational Resources Information Center

    Villegas, Jose L.; Castro, Enrique; Gutierrez, Jose

    2009-01-01

    Introduction: Representations play an essential role in mathematical thinking. They favor the understanding of mathematical concepts and stimulate the development of flexible and versatile thinking in problem solving. Here our focus is on their use in optimization problems, a type of problem considered important in mathematics teaching and…

  15. Problem Solving through an Optimization Problem in Geometry

    ERIC Educational Resources Information Center

    Poon, Kin Keung; Wong, Hang-Chi

    2011-01-01

    This article adapts the problem-solving model developed by Polya to investigate and give an innovative approach to discuss and solve an optimization problem in geometry: the Regiomontanus Problem and its application to football. Various mathematical tools, such as calculus, inequality and the properties of circles, are used to explore and reflect…

  16. Representations in Problem Solving: A Case Study with Optimization Problems

    ERIC Educational Resources Information Center

    Villegas, Jose L.; Castro, Enrique; Gutierrez, Jose

    2009-01-01

    Introduction: Representations play an essential role in mathematical thinking. They favor the understanding of mathematical concepts and stimulate the development of flexible and versatile thinking in problem solving. Here our focus is on their use in optimization problems, a type of problem considered important in mathematics teaching and…

  17. Optimal control problems with switching points

    NASA Astrophysics Data System (ADS)

    Seywald, Hans

    1991-09-01

    An overview is presented of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem, singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector-valued control functions.

  18. Combinatorial Algorithms for Optimization Problems

    DTIC Science & Technology

    1991-06-01

    since it is not known whether the problem has an SP algorithm. 1.3 Notation. We use boldface notation for vectors and matrices. Suppose U is a matrix... A_i^n, the set of all n-tuples of elements of A_i; A_i^{m x n}, the set of all m x n matrices whose entries are elements of A_i. We use the notation R, R+, Q, Z... for x, y ∈ F and α ≥ 0, β ≥ 0, αx + βy ∈ F. A cone is pointed iff it does not contain a linear subspace. The lineality space of a cone F is the largest

  19. Optimization problems in the Bulgarian electoral system

    NASA Astrophysics Data System (ADS)

    Konstantinov, Mihail; Yanev, Kostadin; Pelova, Galina; Boneva, Juliana

    2013-12-01

    In this paper we consider several optimization problems for the Bulgarian bi-proportional electoral systems. Experiments with data from real elections are presented. In this way a series of previous investigations of the authors is further developed.

  20. Convex Graph Invariants

    DTIC Science & Technology

    2010-12-02

    evaluating the function Θ_P(A) for any fixed A, P is equivalent to solving the so-called Quadratic Assignment Problem (QAP), and thus we can employ various... tractable linear programming, spectral, and SDP relaxations of QAP [40, 11, 33]. In particular we discuss recent work [14] on exploiting group... symmetry in SDP relaxations of QAP, which is useful for approximately computing elementary convex graph invariants in many interesting cases. Finally in

  1. GENERALIZED CONVEXITY CONES.

    DTIC Science & Technology

    Contents: Introduction; The dual cone of C(ψ_1, ..., ψ_n); Extreme rays; The cone dual to an intersection of generalized convexity cones; ... Generalized difference quotients and multivariate convexity; Miscellaneous applications of generalized convexity.

  2. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  3. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  4. Stochastic Linear Quadratic Optimal Control Problems

    SciTech Connect

    Chen, S.; Yong, J.

    2001-07-01

    This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving Riccati equation are discussed as well.

  5. On convex relaxation of graph isomorphism

    PubMed Central

    Aflalo, Yonathan; Bronstein, Alexander; Kimmel, Ron

    2015-01-01

    We consider the problem of exact and inexact matching of weighted undirected graphs, in which a bijective correspondence is sought to minimize a quadratic weight disagreement. This computationally challenging problem is often relaxed as a convex quadratic program, in which the space of permutations is replaced by the space of doubly stochastic matrices. However, the applicability of such a relaxation is poorly understood. We define a broad class of friendly graphs characterized by an easily verifiable spectral property. We prove that for friendly graphs, the convex relaxation is guaranteed to find the exact isomorphism or certify its inexistence. This result is further extended to approximately isomorphic graphs, for which we develop an explicit bound on the amount of weight disagreement under which the relaxation is guaranteed to find the globally optimal approximate isomorphism. We also show that in many cases, the graph matching problem can be further harmlessly relaxed to a convex quadratic program with only n separable linear equality constraints, which is substantially more efficient than the standard relaxation involving 2n equality and n^2 inequality constraints. Finally, we show that our results are still valid for unfriendly graphs if additional information in the form of seeds or attributes is allowed, with the latter satisfying an easy to verify spectral characteristic. PMID:25713342
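
    In the notation commonly used for this relaxation (A and B the weighted adjacency matrices of the two graphs, P ranging over doubly stochastic matrices), the convex quadratic program reads

      \min_{P}\; \|AP - PB\|_{F}^{2}
      \quad\text{s.t.}\quad
      P\mathbf{1} = \mathbf{1},\qquad P^{\top}\mathbf{1} = \mathbf{1},\qquad P \ge 0,

    whose feasible set (the Birkhoff polytope) is described by 2n equality and n^2 inequality constraints, matching the counts quoted in the abstract.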

  6. Convex hull: a new method to determine the separation space used and to optimize operating conditions for comprehensive two-dimensional gas chromatography.

    PubMed

    Semard, Gaëlle; Peulon-Agasse, Valerie; Bruchet, Auguste; Bouillon, Jean-Philippe; Cardinaël, Pascal

    2010-08-13

    It is important to develop methods of optimizing the selection of column sets and operating conditions for comprehensive two-dimensional gas chromatography. A new method for the calculation of the percentage of separation space used was developed using Delaunay's triangulation algorithms (convex hull). This approach was compared with an existing method and showed better precision and accuracy. It was successfully applied to the selection of the most convenient column set and the geometrical parameters of the second column for the analysis of 49 target compounds in wastewater.
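
    A minimal sketch of the convex-hull idea for measuring used separation space (here via SciPy's Qhull wrapper in two dimensions; the retention-time ranges and peak coordinates below are made-up values):

      import numpy as np
      from scipy.spatial import ConvexHull

      def used_space_fraction(peaks, t1_range, t2_range):
          # Fraction of the 2-D separation plane covered by the convex hull
          # of the detected peak coordinates.
          hull = ConvexHull(peaks)
          total_area = (t1_range[1] - t1_range[0]) * (t2_range[1] - t2_range[0])
          return hull.volume / total_area   # in 2-D, `volume` is the hull area

      peaks = np.random.default_rng(1).uniform([0, 0], [60, 6], size=(49, 2))
      print(used_space_fraction(peaks, t1_range=(0, 60), t2_range=(0, 6)))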

  7. The importance of the convex hull for human performance on the traveling salesman problem: a comment on MacGregor and Ormerod (1996)

    PubMed

    Lee, M D; Vickers, D

    2000-01-01

    MacGregor and Ormerod (1996) have presented results purporting to show that human performance on visually presented traveling salesman problems, as indexed by a measure of response uncertainty, is strongly determined by the number of points in the stimulus array falling inside the convex hull, as distinct from the total number of points. It is argued that this conclusion is artifactually determined by their constrained procedure for stimulus construction, and, even if true, would be limited to arrays with fewer than around 50 points.

  8. Convex Modeling of Interactions with Strong Heredity

    PubMed Central

    Haris, Asad; Witten, Daniela; Simon, Noah

    2015-01-01

    We consider the task of fitting a regression model involving interactions among a potentially large set of covariates, in which we wish to enforce strong heredity. We propose FAMILY, a very general framework for this task. Our proposal is a generalization of several existing methods, such as VANISH [Radchenko and James, 2010], hierNet [Bien et al., 2013], the all-pairs lasso, and the lasso using only main effects. It can be formulated as the solution to a convex optimization problem, which we solve using an efficient alternating directions method of multipliers (ADMM) algorithm. This algorithm has guaranteed convergence to the global optimum, can be easily specialized to any convex penalty function of interest, and allows for a straightforward extension to the setting of generalized linear models. We derive an unbiased estimator of the degrees of freedom of FAMILY, and explore its performance in a simulation study and on an HIV sequence data set.

  9. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called "TVDS") is proposed. It can extract displacement data without targets, which then serve as feature points in the image of the structure. The TVDS can extract and track the feature points without the target in the image through image convex hull optimization, which is done to adjust the threshold values and to optimize them so that they can have the same convex hull in every image frame and so that the center of the convex hull is the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated based on the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data, and were compared with the numerical analysis results. TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract the displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.

  10. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures

    PubMed Central

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-01-01

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called “TVDS”) is proposed. It can extract displacement data without targets, which then serve as feature points in the image of the structure. The TVDS can extract and track the feature points without the target in the image through image convex hull optimization, which is done to adjust the threshold values and to optimize them so that they can have the same convex hull in every image frame and so that the center of the convex hull is the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated based on the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data, and were compared with the numerical analysis results. TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract the displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image. PMID:27941635

  11. Optimization Problems in Multisensor and Multitarget Tracking

    DTIC Science & Technology

    2008-02-25

    optimize the mixed-integer nonlinear programming problem: minimize over (x, d) the cost c_r(d) + Σ_{(i,j)∈A} c_ij(d) x_ij, subject to Σ_{j∈A(i)} x_ij ≤ 1 (i = 1, ..., m).  (7)

  12. Quadratic optimization in ill-posed problems

    NASA Astrophysics Data System (ADS)

    Ben Belgacem, F.; Kaber, S.-M.

    2008-10-01

    Ill-posed quadratic optimization frequently occurs in control and inverse problems and is not covered by the Lax-Milgram-Riesz theory. Typically, small changes in the input data can produce very large oscillations in the output. We investigate the conditions under which the minimum value of the cost function is finite and explore the 'hidden connection' between the optimization problem and the least-squares method. Finally, we address some examples from optimal control and data completion, showing the relevance of our contribution to understanding what happens for various ill-posed problems. The results we state bring a substantial improvement to the analysis of regularization methods applied to ill-posed quadratic optimization problems. Indeed, for quadratic cost functions bounded from below, the Lavrentiev method is just Tikhonov regularization for the 'hidden least-squares' problem. As a straightforward consequence, Lavrentiev regularization exhibits better regularization and convergence results than expected at first glance.
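
    As a concrete, hedged illustration of the kind of regularization being discussed (a generic Tikhonov/ridge sketch, not the authors' Lavrentiev analysis), the ill-posed least-squares normal equations are damped with a small parameter alpha:

      import numpy as np

      def tikhonov_solve(A, b, alpha):
          # Solve min_x ||Ax - b||^2 + alpha*||x||^2 via (A^T A + alpha I) x = A^T b.
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

      # ill-conditioned toy problem: a small alpha stabilizes the solution
      rng = np.random.default_rng(0)
      A = np.vander(np.linspace(0, 1, 20), 8)   # nearly rank-deficient design
      x_true = rng.standard_normal(8)
      b = A @ x_true + 1e-3 * rng.standard_normal(20)
      print(tikhonov_solve(A, b, alpha=1e-6))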

  13. Problem size, parallel architecture and optimal speedup

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Willard, Frank H.

    1987-01-01

    The communication and synchronization overhead inherent in parallel processing can lead to situations where adding processors to the solution method actually increases execution time. Problem type, problem size, and architecture type all affect the optimal number of processors to employ. The numerical solution of an elliptic partial differential equation is examined in order to study the relationship between problem size and architecture. The equation's domain is discretized into n^2 grid points, which are divided into partitions and mapped onto the individual processor memories. The relationships between grid size, stencil type, partitioning strategy, processor execution time, and communication network type are analytically quantified. In so doing, the optimal number of processors to assign to the solution is determined, and the analysis identifies (1) the smallest grid size which fully benefits from using all available processors, (2) the leverage on performance given by increasing processor speed or communication network speed, and (3) the suitability of various architectures for large numerical problems.

  14. Solving global optimization problems on GPU cluster

    SciTech Connect

    Barkalov, Konstantin; Gergel, Victor; Lebedev, Ilya

    2016-06-08

    The paper contains the results of an investigation of a parallel global optimization algorithm combined with a dimension reduction scheme. This allows multidimensional problems to be solved by reduction to data-independent subproblems of smaller dimension that are solved in parallel. The new element implemented in this research consists of using several graphics accelerators at different computing nodes. The paper also includes results of solving problems of the well-known multiextremal test class GKLS on the Lobachevsky supercomputer using tens of thousands of GPU cores.

  15. Linear stochastic optimal control and estimation problem

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.; Lehtinen, F. K. B.

    1980-01-01

    Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
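
    The central computation named above, solving a matrix Riccati equation for the regulator gains, has an off-the-shelf counterpart today; a hedged sketch of the steady-state (algebraic) case using SciPy, with toy system matrices, is:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # LQR for dx/dt = A x + B u with cost integral of x^T Q x + u^T R u
      A = np.array([[0.0, 1.0], [0.0, -0.5]])
      B = np.array([[0.0], [1.0]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
      K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x
      print(K)

    Note that LSOCE itself solves the matrix Riccati differential equation; the algebraic equation above is only its steady-state special case.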

  16. Optimization Problems: Duality and Multiplier Methods.

    DTIC Science & Technology

    1982-02-19

    provides a new computational handle on many problems in partial differential equations that can be represented as variational inequalities. Much remains... Nonlinear programming, stochastic programming, subgradient... following headings: (1) nonlinear programming algorithms, (2) multistage gradient analysis and nonsmooth optimization, ... (5) marginal values and sensitivity

  17. Optimal birth control of population dynamics. II. Problems with free final time, phase constraints, and mini-max costs.

    PubMed

    Chan, W L; Guo, B Z

    1990-03-01

    A previous analysis of optimal birth control of population systems of the McKendrick type (a distributed parameter system involving first-order partial differential equations with nonlocal bilinear boundary control) raised three additional issues: the free final time problem, systems with phase constraints, and the mini-max control problem of a population. The free final time problem includes the minimum time problem as a special case, but relaxes many convexity assumptions. Theorems (maximum principles) and corollaries are developed that flow from the terminology and mathematical notation set forth in the earlier article.

  18. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging techniques (CSI) permit capturing a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions. Thus, exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis. For this purpose, an optimization problem that seeks to minimize a joint l2 - l1 norm is solved to obtain the original scene. However, HSI have an important feature which has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that seeks to minimize the l2-norm, penalized by the l1-norm, to force the solution to be sparse, and penalized by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of the peak signal-to-noise ratio (PSNR).
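
    The two penalties described above have simple proximal operators, which is what makes such recovery problems tractable in practice; a hedged sketch (not the authors' full reconstruction algorithm) of soft-thresholding for the l1 term and singular-value thresholding for the nuclear norm:

      import numpy as np

      def soft_threshold(X, tau):
          # Proximal operator of tau*||X||_1 (promotes sparsity).
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

      def singular_value_threshold(X, tau):
          # Proximal operator of tau*||X||_* (promotes low rank).
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      # toy usage on a matricized hyperspectral cube (pixels x bands)
      X = np.random.default_rng(0).standard_normal((100, 16))
      X_sparse_step = soft_threshold(X, tau=0.1)
      X_lowrank_step = singular_value_threshold(X, tau=1.0)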

  19. Statistical physics of hard optimization problems

    NASA Astrophysics Data System (ADS)

    Zdeborová, Lenka

    2009-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  20. Statistical Physics of Hard Optimization Problems

    NASA Astrophysics Data System (ADS)

    Zdeborová, Lenka

    2008-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this thesis is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  1. Solving optimization problems on computational grids.

    SciTech Connect

    Wright, S. J.; Mathematics and Computer Science

    2001-05-01

    Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, was among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software

  2. Multi-class DTI Segmentation: A Convex Approach.

    PubMed

    Xie, Yuchen; Chen, Ting; Ho, Jeffrey; Vemuri, Baba C

    2012-10-01

    In this paper, we propose a novel variational framework for multi-class DTI segmentation based on global convex optimization. The existing variational approaches to the DTI segmentation problem have mainly used gradient-descent type optimization techniques which are slow in convergence and sensitive to the initialization. This paper, on the other hand, provides a new perspective on the often difficult optimization problem in DTI segmentation by providing a reasonably tight convex approximation (relaxation) of the original problem, and the relaxed convex problem can then be efficiently solved using various methods such as primal-dual type algorithms. To the best of our knowledge, such a DTI segmentation technique has never been reported in the literature. We also show that a variety of tensor metrics (similarity measures) can be easily incorporated in the proposed framework. Experimental results on both synthetic and real diffusion tensor images clearly demonstrate the advantages of our method in terms of segmentation accuracy and robustness. In particular, when compared with existing state-of-the-art methods, our results demonstrate convincingly the importance as well as the benefit of using a more refined and elaborate optimization method in diffusion tensor MR image segmentation.

  3. Optimal solutions of unobservable orbit determination problems

    NASA Astrophysics Data System (ADS)

    Cicci, David A.; Tapley, Byron D.

    1988-12-01

    The method of data augmentation, in the form of a priori covariance information on the reference solution, has been investigated as a means of overcoming the effects of ill-conditioning in orbit determination problems. Specifically, for the case when ill-conditioning results from parameter non-observability and an appropriate a priori covariance is unknown, methods by which the a priori covariance is optimally chosen are presented. In problems where an inaccurate a priori covariance is provided, the optimal weighting of this data set is obtained. The feasibility of these 'ridge-type' solution methods is demonstrated by their application to a non-observable gravity field recovery simulation. In the simulation, both 'ridge-type' and conventional solutions are compared. Substantial improvement in the accuracy of the conventional solution is realized by the use of these ridge-type solution methods. The solution techniques presented in this study are applicable to observable but ill-conditioned problems as well as to the unobservable problems directly addressed. For the case of observable problems, the ridge-type solutions provide an improvement in the accuracy of the ordinary least squares solutions.

  4. Interaction prediction optimization in multidisciplinary design optimization problems.

    PubMed

    Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei

    2014-01-01

    The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, it is hard for CO to converge when the coupled dimension is high. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, a large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used for controlling subsystems and coordinating the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, IPO has a system level and a subsystem level. The interaction design variables (including shared design variables and linking design variables) are handled at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of the design variables are transferred between the system level and the subsystem level. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of the design variables in the compatibility constraints. Two examples are presented to show the potential application of IPO to MDO.

  5. Hierarchical optimization for neutron scattering problems

    SciTech Connect

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu; Delaire, Olivier

    2016-06-15

    We present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force-constants) computed with density functional theory simulations.

  6. Hierarchical optimization for neutron scattering problems

    SciTech Connect

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu; Delaire, Olivier

    2016-03-14

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force-constants) computed with density functional theory simulations.

  7. Finite Optimal Stopping Problems: The Seller's Perspective

    ERIC Educational Resources Information Center

    Hemmati, Mehdi; Smith, J. Cole

    2011-01-01

    We consider a version of an optimal stopping problem, in which a customer is presented with a finite set of items, one by one. The customer is aware of the number of items in the finite set and the minimum and maximum possible value of each item, and must purchase exactly one item. When an item is presented to the customer, she or he observes its…

  9. Hierarchical optimization for neutron scattering problems

    DOE PAGES

    Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...

    2016-03-14

    In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

  10. Maximum margin classification based on flexible convex hulls for fault diagnosis of roller bearings

    NASA Astrophysics Data System (ADS)

    Zeng, Ming; Yang, Yu; Zheng, Jinde; Cheng, Junsheng

    2016-01-01

    A maximum margin classification based on flexible convex hulls (MMC-FCH) is proposed and applied to fault diagnosis of roller bearings. In this method, the class region of each sample set is approximated by a flexible convex hull of its training samples, and then an optimal separating hyper-plane that maximizes the geometric margin between flexible convex hulls is constructed by solving a closest-pair-of-points problem. By using the kernel trick, MMC-FCH can be extended to nonlinear cases. In addition, multi-class classification problems can be processed by constructing binary pairwise classifiers as in the support vector machine (SVM). In fact, the classical SVM can also be regarded as a maximum margin classification based on convex hulls (MMC-CH), which approximates each class region with a convex hull; the convex hull is a special case of the flexible convex hull. To train an MMC-FCH classifier, time-domain and frequency-domain statistical parameters are extracted not only from the raw vibration signals but also from the intrinsic mode functions (IMFs) obtained by performing empirical mode decomposition (EMD) on the raw signals, and the distance evaluation technique (DET) is then used to select salient features from the full set of statistical features. Experiments on bearing datasets show that the proposed method can reliably recognize different bearing faults.
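
    The geometric core of such maximum-margin classifiers is the closest pair of points between two class regions. A minimal sketch of that subproblem for ordinary convex hulls is given below as a small quadratic program over convex-combination weights; the flexible-convex-hull relaxation, kernel trick, and EMD/DET feature pipeline of the paper are not reproduced, and the sample data are synthetic.

      # Closest pair of points between the convex hulls of two point sets, solved
      # as a small QP over convex-combination weights (a sketch of the geometric
      # subproblem behind maximum-margin classification over convex hulls).
      import numpy as np
      from scipy.optimize import minimize

      def closest_pair_convex_hulls(A, B):
          """A: (n1, d) and B: (n2, d) training points of the two classes."""
          n1, n2 = len(A), len(B)

          def objective(w):
              diff = A.T @ w[:n1] - B.T @ w[n1:]      # p - q
              return float(diff @ diff)

          cons = [{"type": "eq", "fun": lambda w: np.sum(w[:n1]) - 1.0},
                  {"type": "eq", "fun": lambda w: np.sum(w[n1:]) - 1.0}]
          w0 = np.concatenate([np.full(n1, 1.0 / n1), np.full(n2, 1.0 / n2)])
          res = minimize(objective, w0, bounds=[(0.0, 1.0)] * (n1 + n2),
                         constraints=cons, method="SLSQP")
          # The returned points p, q define the maximum-margin separating
          # hyperplane through their midpoint, with normal p - q.
          return A.T @ res.x[:n1], B.T @ res.x[n1:]

      rng = np.random.default_rng(0)
      A = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
      B = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))
      p, q = closest_pair_convex_hulls(A, B)
      print("margin width:", np.linalg.norm(p - q))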

  11. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue

    2016-01-01

    A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.

  12. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways for actual speed up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
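
    A minimal sketch of the sector-based idea is shown below: in a preprocessing step the convex polygon is fanned into angular sectors around an interior point, and a query only needs to locate its sector and test one edge. The sector lookup here uses binary search (O(log N)); replacing it with a precomputed uniform table of angle bins, as the paper's space subdivision does, makes the lookup O(1). The class below is an illustration, not the paper's exact data structure.

      # Sector-based point-in-convex-polygon test: preprocess vertex angles about
      # an interior point, then answer queries with one sector lookup plus one
      # edge-side test.
      import numpy as np

      class ConvexPolygonLocator:
          def __init__(self, vertices):
              """vertices: (N, 2) convex polygon in counter-clockwise order."""
              self.v = np.asarray(vertices, dtype=float)
              self.c = self.v.mean(axis=0)                   # interior reference point
              ang = np.arctan2(*(self.v - self.c).T[::-1])   # vertex angles about c
              self.order = np.argsort(ang)                   # sectors sorted by angle
              self.ang = ang[self.order]

          def contains(self, p):
              p = np.asarray(p, dtype=float)
              a = np.arctan2(*(p - self.c)[::-1])
              i = np.searchsorted(self.ang, a) % len(self.ang)   # sector [v_prev, v_next]
              v1 = self.v[self.order[i - 1]]
              v2 = self.v[self.order[i]]
              # inside iff p lies on the left of the directed edge v1 -> v2
              cross = (v2[0] - v1[0]) * (p[1] - v1[1]) - (v2[1] - v1[1]) * (p[0] - v1[0])
              return cross >= 0.0

      square = ConvexPolygonLocator([(0, 0), (2, 0), (2, 2), (0, 2)])
      print(square.contains((1.0, 1.0)), square.contains((3.0, 1.0)))   # True False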

  13. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    SciTech Connect

    Skala, Vaclav

    2016-06-08

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways for actual speed up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.

  14. On domains of convergence in optimization problems

    NASA Technical Reports Server (NTRS)

    Diaz, Alejandro R.; Shaw, Steven S.; Pan, Jian

    1990-01-01

    Numerical optimization algorithms require the knowledge of an initial set of design variables. Starting from an initial design x^0, improved solutions are obtained by updating the design iteratively in a way prescribed by the particular algorithm used. If the algorithm is successful, convergence is achieved to a local optimal solution. Let A denote the iterative procedure that characterizes a typical optimization algorithm, applied to the problem: Find x in R^n that maximizes f(x) subject to x in Omega contained in R^n. We are interested in problems with several local maxima x_j^*, j = 1, ..., m, in the feasible design space Omega. In general, convergence of the algorithm A to a specific solution x_j^* is determined by the choice of initial design x^0. The domain of convergence D_j of A associated with a local maximum x_j^* is the subset of initial designs x^0 in Omega such that the sequence x^k, k = 0, 1, 2, ..., defined by x^(k+1) = A(x^k), k = 0, 1, ..., converges to x_j^*. The set D_j is also called the basin of attraction of x_j^*. Cayley first proposed the problem of finding the basin of attraction for Newton's method in 1879. It has been shown that the basin of attraction for Newton's method exhibits chaotic behavior in problems with polynomial objectives. This implies that there may be regions in the feasible design space where arbitrarily close starting points will converge to different local optimal solutions. Furthermore, the boundaries of the domains of convergence may have a very complex, even fractal, structure. In this paper we show that even simple structural optimization problems solved using standard gradient-based (first-order) algorithms exhibit similar features.
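
    The fractal basin boundaries mentioned above are easy to reproduce for the classical example behind Cayley's problem, Newton's method on f(z) = z^3 - 1. The short sketch below labels a grid of starting points by the root each one is driven toward; the grid and iteration count are arbitrary illustration choices.

      # Basins of attraction (domains of convergence) of Newton's method for
      # f(z) = z^3 - 1: label each starting point by the nearest root after a
      # fixed number of Newton updates.
      import numpy as np

      roots = np.exp(2j * np.pi * np.arange(3) / 3)         # the three cube roots of 1

      def newton_basin(z0, iters=40):
          z = z0
          for _ in range(iters):
              z = z - (z**3 - 1) / (3 * z**2)               # Newton update
          return int(np.argmin(np.abs(roots - z)))          # index of the nearest root

      # Label a small grid of starting points (chosen to avoid the origin);
      # arbitrarily close starts near the basin boundaries end at different roots.
      xs = np.linspace(-1.5, 1.5, 8)
      for y in xs:
          print([newton_basin(complex(x, y)) for x in xs])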

  15. First-order convex feasibility algorithms for x-ray CT.

    PubMed

    Sidky, Emil Y; Jørgensen, Jakob S; Pan, Xiaochuan

    2013-03-01

    Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application.
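
    The feasibility viewpoint can be illustrated with a much simpler method than the accelerated Chambolle-Pock instances developed in the paper: plain projection onto convex sets (POCS) for a toy system with slab (data-tolerance) constraints and a nonnegativity constraint. The system matrix, tolerance, and sweep count below are arbitrary illustration choices.

      # Plain POCS sketch for a toy convex feasibility problem:
      # find x >= 0 with |a_i . x - b_i| <= eps for every measurement row a_i.
      import numpy as np

      def project_slab(x, a, b, eps):
          """Project x onto {x : |a.x - b| <= eps} (intersection of two halfspaces)."""
          r = a @ x - b
          if abs(r) <= eps:
              return x
          return x - ((r - np.sign(r) * eps) / (a @ a)) * a

      def pocs(A, b, eps=1e-3, sweeps=200):
          x = np.zeros(A.shape[1])
          for _ in range(sweeps):
              for a_i, b_i in zip(A, b):
                  x = project_slab(x, a_i, b_i, eps)   # data (slab) constraints
              x = np.maximum(x, 0.0)                   # nonnegativity (box) constraint
          return x

      rng = np.random.default_rng(1)
      x_true = np.abs(rng.normal(size=50))
      A = rng.normal(size=(30, 50))                    # underdetermined toy system matrix
      b = A @ x_true
      x = pocs(A, b)
      print("max data residual:", np.max(np.abs(A @ x - b)))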

  16. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295

  17. First-order convex feasibility algorithms for x-ray CT

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Jorgensen, Jakob S.

    2013-03-15

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application.

  18. Uniformly convex and strictly convex Orlicz spaces

    NASA Astrophysics Data System (ADS)

    Masta, Al Azhary

    2016-02-01

    In this paper we define a new norm on Orlicz spaces on ℝ^n through a multiplication operator on an existing Orlicz space. We obtain some necessary and sufficient conditions for the new norm to yield uniformly convex and strictly convex spaces.

  19. MOMMOP: multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems.

    PubMed

    Wang, Yong; Li, Han-Xiong; Yen, Gary G; Song, Wu

    2015-04-01

    In the field of evolutionary computation, there has been a growing interest in applying evolutionary algorithms to solve multimodal optimization problems (MMOPs). Due to the fact that an MMOP involves multiple optimal solutions, many niching methods have been suggested and incorporated into evolutionary algorithms for locating such optimal solutions in a single run. In this paper, we propose a novel transformation technique based on multiobjective optimization for MMOPs, called MOMMOP. MOMMOP transforms an MMOP into a multiobjective optimization problem with two conflicting objectives. After the above transformation, all the optimal solutions of an MMOP become the Pareto optimal solutions of the transformed problem. Thus, multiobjective evolutionary algorithms can be readily applied to find a set of representative Pareto optimal solutions of the transformed problem, and as a result, multiple optimal solutions of the original MMOP could also be simultaneously located in a single run. In principle, MOMMOP is an implicit niching method. In this paper, we also discuss two issues in MOMMOP and introduce two new comparison criteria. MOMMOP has been used to solve 20 multimodal benchmark test functions, after combining with nondominated sorting and differential evolution. Systematic experiments have indicated that MOMMOP outperforms a number of methods for multimodal optimization, including four recent methods at the 2013 IEEE Congress on Evolutionary Computation, four state-of-the-art single-objective optimization based methods, and two well-known multiobjective optimization based approaches.

  20. Magnetic resonance image reconstruction using trained geometric directions in 2D redundant wavelets domain and non-convex optimization.

    PubMed

    Ning, Bende; Qu, Xiaobo; Guo, Di; Hu, Changwei; Chen, Zhong

    2013-11-01

    Reducing scanning time is significantly important for MRI. Compressed sensing has shown promising results by undersampling the k-space data to speed up imaging. Sparsity of an image plays an important role in compressed sensing MRI to reduce image artifacts. Recently, the method of patch-based directional wavelets (PBDW), which trains geometric directions from undersampled data, has been proposed. It has better performance in preserving image edges than conventional sparsifying transforms. However, obvious artifacts are present in the smooth regions when the data are highly undersampled. In addition, the original PBDW-based method does not show obvious improvement for radial and fully 2D random sampling patterns. In this paper, the PBDW-based MRI reconstruction is improved in two aspects: 1) an efficient non-convex minimization algorithm is modified to enhance image quality; 2) PBDW is extended into the shift-invariant discrete wavelet domain to enhance the ability of the transform to sparsify piecewise smooth image features. Numerical simulation results on in vivo magnetic resonance images demonstrate that the proposed method outperforms the original PBDW in terms of removing artifacts and preserving edges.
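
    The per-coefficient building block that distinguishes convex from non-convex sparsity penalties can be sketched in a few lines: soft thresholding is the proximal operator of the convex l1 penalty, while hard thresholding is the proximal operator of the non-convex l0 penalty. This is only the elementary shrinkage step, not the patch-based directional wavelet (PBDW) algorithm of the paper, and the coefficients and threshold are arbitrary.

      # Soft vs. hard thresholding of transform coefficients: the convex and a
      # simple non-convex shrinkage operator used inside sparsity-driven
      # reconstruction loops.
      import numpy as np

      def soft_threshold(c, lam):
          """prox of lam*|c|: shrink every coefficient toward zero by lam."""
          return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

      def hard_threshold(c, lam):
          """prox of lam*||c||_0: keep a coefficient only if |c| > sqrt(2*lam)."""
          return np.where(np.abs(c) > np.sqrt(2.0 * lam), c, 0.0)

      c = np.array([-2.0, -0.6, 0.1, 0.9, 3.0])
      print(soft_threshold(c, 0.5))   # biased but convex: large values are shrunk
      print(hard_threshold(c, 0.5))   # unbiased but non-convex: survivors kept as-is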

  1. Optimal pre-scheduling of problem remappings

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it may be possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the effect of a linear-time scheduling heuristic on one of the model problems is studied, and it is shown to be effective and nearly optimal.
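
    The dynamic-programming idea can be sketched generically: choose one mapping parameter per step so that the sum of per-step costs plus remapping (parameter-change) costs is minimal. The step costs and remap cost below are hypothetical stand-ins for measured or predicted run-time behavior, not data from the paper.

      # DP over steps and mapping parameters: best[p] is the optimal cost of a
      # schedule ending in parameter p; switching parameters costs REMAP_COST.
      import numpy as np

      step_cost = np.array([[3, 1, 4],     # step_cost[t][p] = cost of running step t
                            [3, 1, 4],     # under mapping parameter p (hypothetical)
                            [1, 5, 2],
                            [1, 5, 2],
                            [4, 1, 1]])
      REMAP_COST = 2.0                     # cost of changing parameters between steps

      T, P = step_cost.shape
      best = step_cost[0].astype(float)
      choice = np.zeros((T, P), dtype=int)
      for t in range(1, T):
          switch = best[:, None] + REMAP_COST * (np.arange(P)[:, None] != np.arange(P))
          choice[t] = np.argmin(switch, axis=0)        # cheapest predecessor for each p
          best = switch[choice[t], np.arange(P)] + step_cost[t]

      # Backtrack to recover the optimal parameter schedule.
      schedule = [int(np.argmin(best))]
      for t in range(T - 1, 0, -1):
          schedule.append(int(choice[t][schedule[-1]]))
      schedule.reverse()
      print("optimal schedule:", schedule, "total cost:", float(best.min()))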

  2. Optimal Planning and Problem-Solving

    NASA Technical Reports Server (NTRS)

    Clemet, Bradley; Schaffer, Steven; Rabideau, Gregg

    2008-01-01

    CTAEMS MDP Optimal Planner is problem-solving software designed to command a single spacecraft/rover, or a team of spacecraft/rovers, to perform the best action possible at all times according to an abstract model of the spacecraft/rover and its environment. It may also be useful in solving logistical problems encountered in commercial applications such as shipping and manufacturing. The planner reasons around uncertainty according to specified probabilities of outcomes, using a plan hierarchy to avoid exploring certain kinds of suboptimal actions. Also, planned actions are calculated as the state-action space is expanded, rather than afterward, to reduce the processing time and memory used by an order of magnitude. The software solves planning problems with actions that can execute concurrently, that have uncertain duration and quality, and that have functional dependencies on others that affect quality. These problems are modeled in a hierarchical planning language called C_TAEMS, a derivative of the TAEMS language for specifying domains for the DARPA Coordinators program. In realistic environments, actions often have uncertain outcomes and can have complex relationships with other tasks. The planner approaches problems by considering all possible actions that may be taken from any state reachable from a given initial state, and from within the constraints of a given task hierarchy that specifies what tasks may be performed by which team member.

  3. Extremal Optimization for Quadratic Unconstrained Binary Problems

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean presentation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring smaller differences in couplings effectively while sufficiently aligning with those strong external fields. However, we also find a simple solution to that problem, which indicates that those external fields apparently tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses without (or with very small) fields. We explore the impact of the weight distributions of QUBO formulations in the operations research literature and analyze their meaning in a spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as D-Wave.
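
    The Boolean-to-spin rewriting mentioned above is standard and easy to sketch: a QUBO energy x^T Q x with x in {0,1}^n becomes an Ising spin glass with local fields h, pair couplings J, and a constant offset under the substitution x = (1 + s)/2. The random test matrix below is only for a numerical check; the EO heuristic itself is not shown.

      # QUBO -> Ising conversion: x^T Q x = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j + offset
      # with x in {0,1}^n, s in {-1,+1}^n and x = (1 + s)/2.
      import numpy as np

      def qubo_to_ising(Q):
          Qs = 0.5 * (Q + Q.T)                         # symmetrize
          h = 0.5 * Qs.sum(axis=1)                     # local (external) fields
          J = 0.5 * np.triu(Qs, k=1)                   # pair couplings, i < j
          offset = 0.25 * (Qs.sum() + np.trace(Qs))
          return h, J, offset

      # Quick numerical check on a random instance.
      rng = np.random.default_rng(0)
      Q = rng.normal(size=(6, 6))
      h, J, offset = qubo_to_ising(Q)
      x = rng.integers(0, 2, size=6)
      s = 2 * x - 1                                    # spins in {-1, +1}
      print(np.isclose(x @ Q @ x, h @ s + s @ J @ s + offset))   # True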

  4. Convex Models of Malfunction Diagnosis in High Performance Aircraft

    DTIC Science & Technology

    1989-05-01

    [Fragmentary OCR excerpt.] The recoverable content indicates that the malfunction diagnosis is exercised in an open-loop mode with one fixed non-zero control function and with a time-dependent controller actuated from the state measurements, that the diagnosis algorithm is designed by solving a sequence of linear optimization problems, and that the report covers the automatic controller, a numerical demonstration of the normal dynamics, and the convex-model representation of control-actuator failure.

  5. An Optimal Design Problem for Submerged Bodies,

    DTIC Science & Technology

    1984-01-01

    [Fragmentary OCR excerpt; the equations are not recoverable.] The recoverable content states that the potential is represented by boundary integrals involving a Green's function Y(p, q) and jump relations, that the admissible class U_ad imposes the Laplace equation in the fluid domain together with free-surface, bottom, body-boundary, and radiation conditions, and that each admissible surface f in U_ad gives rise, according to Theorem 1.1, to a potential φ(p; f); the optimization problems discussed are posed over this class of admissible surfaces.

  6. Hybrid intelligent optimization methods for engineering problems

    NASA Astrophysics Data System (ADS)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct general patterns. Mathematical modeling of a natural phenomenon is also usually based on differentials: differential equations are constructed from relative increments among the factors related to the yield, so the gradients of these increments are essential to search the yield space. However, the landscape of the yield is not a simple one and is mostly multi-modal. Another issue is differentiability: engineering design problems are usually nonlinear, and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodology; for example, randomness is essential for the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, lower accuracy, or large computational times. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators to provide the necessary variety within a population. After having a close scrutiny of the diversity concept based on qualification and

  7. Asymptotic solution of the optimal control problem for standard systems with delay

    SciTech Connect

    Zheltikov, V.P.; Efendiev, V.V.

    1995-05-01

    The authors consider the construction of an asymptotic solution of the terminal optimal control problem using the averaging method. The optimal process is described by the equation ż = εZ(t, z, z(t−1, ε, u), u), z|_{t∈[−1,0]} = φ(t), where the delay is constant and of unit magnitude, z ∈ G is an n-dimensional vector, G ⊂ R^n, ε > 0 is a small parameter, t ∈ T ≡ [0, ε^{-1}], Z and φ are n-dimensional vector functions, Z is strictly convex in u for any (t, z) ∈ T × G, u ∈ U is the r-dimensional control vector, and U is a compact set.

  8. LDRD Final Report: Global Optimization for Engineering Science Problems

    SciTech Connect

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.

  9. Approximating stationary points of stochastic optimization problems in Banach space

    NASA Astrophysics Data System (ADS)

    Balaji, Ramamurthy; Xu, Huifu

    2008-11-01

    In this paper, we present a uniform strong law of large numbers for random set-valued mappings in a separable Banach space and apply it to analyze the sample average approximation of Clarke stationary points of a nonsmooth one-stage stochastic minimization problem in a separable Banach space. Moreover, under Hausdorff continuity, we show that, with probability approaching one exponentially fast as the sample size increases, the sample average of a convex compact set-valued mapping converges to its expected value uniformly. The result is used to establish exponential convergence of the stationary sequence under some metric regularity conditions.
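
    The finite-dimensional essence of sample average approximation is easy to sketch: the expectation in min_x E[F(x, xi)] is replaced by an average over N drawn scenarios and the deterministic surrogate is minimized. The quadratic integrand and Gaussian scenarios below are hypothetical toys, far from the Banach-space, set-valued setting of the paper.

      # Sample average approximation (SAA) sketch on a toy stochastic program.
      import numpy as np
      from scipy.optimize import minimize

      def F(x, xi):
          """Hypothetical smooth integrand: a random quadratic tracking cost."""
          return 0.5 * np.sum((x - xi) ** 2)

      def saa_solve(n_dim=3, n_samples=100, seed=0):
          rng = np.random.default_rng(seed)
          scenarios = rng.normal(loc=1.0, scale=0.5, size=(n_samples, n_dim))
          sample_average = lambda x: np.mean([F(x, xi) for xi in scenarios])
          return minimize(sample_average, x0=np.zeros(n_dim)).x

      # As the sample size grows, the SAA minimizer stabilizes (here, near the mean 1.0).
      for n in (10, 100, 1000):
          print(n, saa_solve(n_samples=n).round(3))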

  10. Tunneling and speedup in quantum optimization for permutation-symmetric problems

    DOE PAGES

    Muthukrishnan, Siddharth; Albash, Tameem; Lidar, Daniel A.

    2016-07-21

    Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Lastly, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.

  11. Tunneling and Speedup in Quantum Optimization for Permutation-Symmetric Problems

    NASA Astrophysics Data System (ADS)

    Muthukrishnan, Siddharth; Albash, Tameem; Lidar, Daniel A.

    2016-07-01

    Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Finally, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.

  12. Optimization methods for activities selection problems

    NASA Astrophysics Data System (ADS)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, students can learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the 3 chosen criteria, which are soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. Then, the ZOGP model was analyzed using LINGO software version 15.0. There are two priorities to be considered. The first priority, which is to minimize the budget for the activities, is achieved since the total budget can be reduced by RM233.00; therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, which is to select the co-curricular activities, is also achieved: the results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for the activity selection problem.
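
    The AHP weighting step can be sketched in a few lines: the criterion weights are the normalized principal eigenvector of a pairwise-comparison matrix. The comparison values below are illustrative placeholders, not the questionnaire data of the study.

      # AHP priority weights for three criteria (soft skills, interesting
      # activities, performance) from a Saaty-scale pairwise-comparison matrix.
      import numpy as np

      M = np.array([[1.0, 3.0, 5.0],       # M[i, j]: how much criterion i is
                    [1/3, 1.0, 2.0],       # preferred over criterion j (illustrative)
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(M)
      k = np.argmax(eigvals.real)                    # principal eigenvalue
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                   # normalized priority weights

      CI = (eigvals[k].real - len(M)) / (len(M) - 1) # consistency index
      print("weights:", w.round(3), "CR:", round(CI / 0.58, 3))   # RI = 0.58 for 3x3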

  13. An adaptive multi-swarm optimizer for dynamic optimization problems.

    PubMed

    Li, Changhe; Yang, Shengxiang; Yang, Ming

    2014-01-01

    The multipopulation method has been widely used to solve dynamic optimization problems (DOPs) with the aim of maintaining multiple populations on different peaks to locate and track multiple changing optima simultaneously. However, to make this approach effective for solving DOPs, two challenging issues need to be addressed. They are how to adapt the number of populations to changes and how to adaptively maintain the population diversity in a situation where changes are complicated or hard to detect or predict. Tracking the changing global optimum in dynamic environments is difficult because we cannot know when and where changes occur and what the characteristics of changes would be. Therefore, it is necessary to take these challenging issues into account in designing such adaptive algorithms. To address the issues when multipopulation methods are applied for solving DOPs, this paper proposes an adaptive multi-swarm algorithm, where the populations are enabled to be adaptive in dynamic environments without change detection. An experimental study is conducted based on the moving peaks problem to investigate the behavior of the proposed method. The performance of the proposed algorithm is also compared with a set of algorithms that are based on multipopulation methods from different research areas in the literature of evolutionary computation.

  14. Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications

    DTIC Science & Technology

    2015-06-24

    [Fragmentary report-form excerpt.] Report AFRL-AFOSR-VA-TR-2015-0281, "Nonlinear Multidimensional Assignment Problems: Efficient Conic Optimization Methods and Applications", Hans Mittelmann, covering 2012 - March 2015. The recoverable abstract content states that the size-16 three-dimensional quadratic assignment problem (Q3AP) from wireless communications was solved using a sophisticated approach.

  15. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement: the mesh nodes have non-equidistant spacing, which allows non-uniform node collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time compared to using time meshes with equidistant spacing.
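
    The refinement loop itself is simple to sketch: every subinterval whose local error estimate exceeds the user-specified threshold is bisected, and the process repeats. The local_error function below is a hypothetical stand-in for the estimate obtained from a coarse-mesh optimal-control solution.

      # Generic adaptive time-mesh refinement: bisect intervals whose local error
      # exceeds the tolerance, repeat until all intervals pass.
      import numpy as np

      def local_error(t0, t1):
          """Hypothetical local error estimate on [t0, t1] (peaks near t = 0.3)."""
          mid = 0.5 * (t0 + t1)
          return (t1 - t0) ** 2 * (1.0 + 50.0 * np.exp(-200.0 * (mid - 0.3) ** 2))

      def refine_mesh(mesh, tol=1e-3, max_passes=20):
          for _ in range(max_passes):
              errors = [local_error(a, b) for a, b in zip(mesh[:-1], mesh[1:])]
              if max(errors) <= tol:
                  break
              new_mesh = []
              for a, b, e in zip(mesh[:-1], mesh[1:], errors):
                  new_mesh.append(a)
                  if e > tol:                      # bisect only where the error is large
                      new_mesh.append(0.5 * (a + b))
              new_mesh.append(mesh[-1])
              mesh = new_mesh
          return np.asarray(mesh)

      mesh = refine_mesh(list(np.linspace(0.0, 1.0, 11)))
      print(len(mesh), "nodes; finest spacing:", np.diff(mesh).min())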

  16. CONVEX mini manual

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.

  17. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  18. Group Search Optimizer for the Mobile Location Management Problem

    PubMed Central

    Wang, Dan; Xiong, Congcong; Huang, Wei

    2014-01-01

    We propose a diversity-guided group search optimizer-based approach for solving the location management problem in mobile computing. The location management problem, which is to find the optimal network configurations of management under the mobile computing environment, is considered here as an optimization problem. The proposed diversity-guided group search optimizer algorithm is realized with the aid of diversity operator, which helps alleviate the premature convergence problem of group search optimizer algorithm, a successful optimization algorithm inspired by the animal behavior. To address the location management problem, diversity-guided group search optimizer algorithm is exploited to optimize network configurations of management by minimizing the sum of location update cost and location paging cost. Experimental results illustrate the effectiveness of the proposed approach. PMID:25180199

  19. Shape optimization for contact problems based on isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Horn, Benjamin; Ulbrich, Stefan

    2016-08-01

    We consider the shape optimization for mechanical connectors. To avoid the gap between the representation in CAD systems and the finite element simulation used by mathematical optimization, we choose an isogeometric approach for the solution of the contact problem within the optimization method. This leads to a shape optimization problem governed by an elastic contact problem. We handle the contact conditions using the mortar method and solve the resulting contact problem with a semismooth Newton method. The optimization problem is nonconvex and nonsmooth due to the contact conditions. To reduce the number of simulations, we use a derivative based optimization method. With the adjoint approach the design derivatives can be calculated efficiently. The resulting optimization problem is solved with a modified Bundle Trust Region algorithm.

  20. On a Highly Nonlinear Self-Obstacle Optimal Control Problem

    SciTech Connect

    Di Donato, Daniela; Mugnai, Dimitri

    2015-10-15

    We consider a non-quadratic optimal control problem associated to a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, fixed a desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.

  1. Firefly Mating Algorithm for Continuous Optimization Problems

    PubMed Central

    Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of the proposed algorithm on these functions were higher than those of the other algorithms and that the proposed algorithm also required fewer iterations to reach the global optima. PMID:28808442

  2. FRANOPP: Framework for analysis and optimization problems user's guide

    NASA Technical Reports Server (NTRS)

    Riley, K. M.

    1981-01-01

    The Framework for Analysis and Optimization Problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems which provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.

  3. Gerrymandering and Convexity

    ERIC Educational Resources Information Center

    Hodge, Jonathan K.; Marshall, Emily; Patterson, Geoff

    2010-01-01

    Convexity-based measures of shape compactness provide an effective way to identify irregularities in congressional district boundaries. A low convexity coefficient may suggest that a district has been gerrymandered, or it may simply reflect irregularities in the corresponding state boundary. Furthermore, the distribution of population within a…
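
    One common convexity-based compactness measure is the ratio of a district's area to the area of its convex hull (equal to 1 for convex shapes and smaller for irregular ones). The sketch below computes it for an illustrative cross-shaped polygon; it is one plausible coefficient of this kind, not necessarily the exact measure used in the article.

      # Convexity coefficient = polygon area / convex-hull area.
      import numpy as np
      from scipy.spatial import ConvexHull

      def polygon_area(pts):
          """Shoelace formula for a simple polygon given as an (N, 2) vertex array."""
          x, y = np.asarray(pts, dtype=float).T
          return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

      def convexity_coefficient(pts):
          hull = ConvexHull(pts)                 # in 2D, .volume is the hull's area
          return polygon_area(pts) / hull.volume

      cross = [(1, 0), (2, 0), (2, 1), (3, 1), (3, 2), (2, 2),
               (2, 3), (1, 3), (1, 2), (0, 2), (0, 1), (1, 1)]
      print(round(convexity_coefficient(cross), 3))   # well below 1: non-convex shape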

  4. Enhanced ant colony optimization for multiscale problems

    NASA Astrophysics Data System (ADS)

    Hu, Nan; Fish, Jacob

    2016-03-01

    The present manuscript addresses the issue of computational complexity of optimizing nonlinear composite materials and structures at multiple scales. Several solutions are detailed to meet the enormous computational challenge of optimizing nonlinear structures at multiple scales including: (i) enhanced sampling procedure that provides superior performance of the well-known ant colony optimization algorithm, (ii) a mapping-based meshing of a representative volume element that unlike unstructured meshing permits sensitivity analysis on coarse meshes, and (iii) a multilevel optimization procedure that takes advantage of possible weak coupling of certain scales. We demonstrate the proposed optimization procedure on elastic and inelastic laminated plates involving three scales.

  5. Accurate quantification of local changes for carotid arteries in 3D ultrasound images using convex optimization-based deformable registration

    NASA Astrophysics Data System (ADS)

    Cheng, Jieyu; Qiu, Wu; Yuan, Jing; Fenster, Aaron; Chiu, Bernard

    2016-03-01

    Registration of longitudinally acquired 3D ultrasound (US) images plays an important role in monitoring and quantifying progression/regression of carotid atherosclerosis. We introduce an image-based non-rigid registration algorithm to align the baseline 3D carotid US with longitudinal images acquired over several follow-up time points. This algorithm minimizes the sum of absolute intensity differences (SAD) under a variational optical-flow perspective within a multi-scale optimization framework to capture local and global deformations. Outer wall and lumen were segmented manually on each image, and the performance of the registration algorithm was quantified by the Dice similarity coefficient (DSC) and mean absolute distance (MAD) of the outer wall and lumen surfaces after registration. In this study, images for 5 subjects were registered initially by rigid registration, followed by the proposed algorithm. The mean DSC generated by the proposed algorithm was 79.3+/-3.8% for lumen and 85.9+/-4.0% for outer wall, compared to 73.9+/-3.4% and 84.7+/-3.2% generated by rigid registration. Mean MADs of 0.46+/-0.08 mm and 0.52+/-0.13 mm were generated for lumen and outer wall respectively by the proposed algorithm, compared to 0.55+/-0.08 mm and 0.54+/-0.11 mm generated by rigid registration. The mean registration time of our method per image pair was 143+/-23 s.
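
    The overlap metric quoted above is straightforward to compute on binary masks: DSC = 2|A ∩ B| / (|A| + |B|). The sketch below evaluates it for two toy discs standing in for a segmented structure before and after registration; the mean absolute distance (MAD) would additionally require surface extraction and point-to-surface distances, which are omitted here.

      # Dice similarity coefficient between two binary segmentation masks.
      import numpy as np

      def dice(mask_a, mask_b):
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # Two overlapping discs as stand-ins for a structure before/after registration.
      yy, xx = np.mgrid[0:100, 0:100]
      disc    = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2
      shifted = (xx - 55) ** 2 + (yy - 50) ** 2 < 30 ** 2
      print("DSC:", round(dice(disc, shifted), 3))    # ~0.9 for a 5-pixel misalignment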

  6. Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron

    SciTech Connect

    Margot, F.; Fukuda, K.; Liebling, T.

    1994-12-31

    We investigate the applicability of the backtrack technique for solving the vertex enumeration problem and the face enumeration problem for a convex polyhedron given by a system of linear inequalities. We show that there is a linear-time backtrack algorithm for the face enumeration problem whose space complexity is polynomial in the input size, but that the vertex enumeration problem requires a backtrack algorithm to solve a decision problem, called the restricted vertex problem, for each output, which is shown to be NP-complete. Some related NP-complete problems associated with a system of linear inequalities are also discussed, including the optimal vertex problems for polyhedra and arrangements of hyperplanes.

  7. Advances in dual algorithms and convex approximation methods

    NASA Technical Reports Server (NTRS)

    Smaoui, H.; Fleury, C.; Schmit, L. A.

    1988-01-01

    A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.

  8. Splitting Methods for Convex Clustering

    PubMed Central

    Chi, Eric C.; Lange, Kenneth

    2016-01-01

    Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplemental materials available online. PMID:27087770

  9. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  10. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  11. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  12. Convex accelerated maximum entropy reconstruction.

    PubMed

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.

  13. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.

  14. A Block Coordinate Descent Method for Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

    DTIC Science & Technology

    2012-08-01

    [Fragmentary table excerpt.] The recoverable content is a comparison of the proposed APG-TC method (with rank settings r = q and r = floor(1.25 q)) against FaLRTC on tensor-completion test problems of various sizes (N1, N2, N3, q, sampling ratio SR), reporting relative error and run time; bold entries mark bad or slow results.

  15. A unified approach via convexity for optimal energy decay rates of finite and infinite dimensional vibrating damped systems with applications to semi-discretized vibrating damped systems

    NASA Astrophysics Data System (ADS)

    Alabau-Boussouira, Fatiha

    The Liapunov method is celebrated for its power to establish strong decay of solutions of damped equations. Extensions to infinite dimensional settings have been studied by several authors (see e.g. Haraux, 1991 [11], and Komornik and Zuazua, 1990 [17] and references therein). Results on optimal energy decay rates under general conditions on the feedback are far from complete. The purpose of this paper is to show that general dissipative vibrating systems have structural properties due to dissipation. We present a general approach based on convexity arguments to establish sharp optimal or quasi-optimal upper energy decay rates for these systems, and on comparison principles based on the dissipation property and on interpolation inequalities (in the infinite dimensional case) to obtain lower bounds for the energy. We stress that this method works for finite as well as infinite dimensional vibrating systems, and also for applications to semi-discretized nonlinear damped vibrating PDEs. A part of this approach was introduced in Alabau-Boussouira (2004, 2005) [1,2]. In the present paper, we identify a new, simple and explicit criterion to select a class of nonlinear feedbacks, for which we prove a simplified explicit energy decay formula compared to the more general but also more complex formula given in Alabau-Boussouira (2004, 2005) [1,2]. Moreover, we prove optimality of the decay rates for this class in the finite dimensional case. This class includes a wide range of feedbacks, ranging from very weak nonlinear dissipation (exponentially decaying in a neighborhood of zero) to polynomial or polynomial-logarithmic decaying feedbacks at the origin. In the infinite dimensional case, we establish a comparison principle on the energy of sufficiently smooth solutions through the dissipation relation. This principle relies on suitable interpolation inequalities. It allows us to give lower bounds for the energy of smooth initial data for the one…

  16. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize transportation cost subject to the balance between total supply and total demand. Exact methods such as the northwest-corner, Vogel, Russell and minimum-cost rules have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions obtained by PSO alone.
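
    The abstract does not give the update rules, but a PSOGA-style scheme can be sketched as standard PSO velocity/position updates followed by a GA-style Gaussian mutation, applied here to a penalized transportation objective. Everything below (the 2x3 cost matrix, the penalty weight, the mutation rate, the swarm parameters) is invented for illustration; this is not the authors' formulation.

```python
# Sketch: PSO with a GA-style mutation step on a penalized transportation problem.
import numpy as np

rng = np.random.default_rng(2)
cost = np.array([[4.0, 6.0, 8.0], [5.0, 3.0, 7.0]])   # hypothetical unit costs
supply = np.array([30.0, 40.0])
demand = np.array([20.0, 25.0, 25.0])
penalty = 100.0                                       # weight on balance violations

def fitness(x):
    x = x.reshape(cost.shape)
    bal = (np.abs(x.sum(axis=1) - supply).sum()
           + np.abs(x.sum(axis=0) - demand).sum())
    return float((cost * x).sum() + penalty * bal)

n_part, dim = 30, cost.size
pos = rng.random((n_part, dim)) * 30.0
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for it in range(300):
    r1, r2 = rng.random((2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.maximum(pos + vel, 0.0)                  # keep shipments nonnegative
    # GA-style mutation: perturb a few coordinates with Gaussian noise.
    mask = rng.random(pos.shape) < 0.05
    pos = np.maximum(pos + mask * rng.normal(0.0, 2.0, pos.shape), 0.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best penalized cost:", fitness(gbest))
```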

  17. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints.

    PubMed

    Liu, Ryan Wen; Shi, Lin; Yu, Simon Chun Ho; Xiong, Naixue; Wang, Defeng

    2017-03-03

    Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to solve directly with commonly used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or can be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrate that the proposed method achieves superior imaging performance in terms of quantitative and visual image quality assessments.

  18. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints

    PubMed Central

    Liu, Ryan Wen; Shi, Lin; Yu, Simon Chun Ho; Xiong, Naixue; Wang, Defeng

    2017-01-01

    Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to solve directly with commonly used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or can be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrate that the proposed method achieves superior imaging performance in terms of quantitative and visual image quality assessments. PMID:28273827
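
    The paper's non-convex formulation is not reproduced here, but the ADMM splitting strategy it relies on can be illustrated on the closely related convex problem min ||L||_* + lam*||S||_1 subject to L + S = M, where the low-rank and sparse sub-problems have closed-form solutions (singular-value thresholding and soft thresholding). The sketch below is a generic illustration under these assumptions, with synthetic data, not the authors' dynamic-MRI algorithm.

```python
# Sketch: ADMM for the convex low-rank + sparse decomposition
#   min ||L||_* + lam*||S||_1   s.t.  L + S = M
# Each sub-problem has a closed-form solution.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Soft thresholding: prox of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

rng = np.random.default_rng(3)
m, n, r = 60, 50, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank part
M[rng.random((m, n)) < 0.05] += 10.0                           # sparse outliers

lam = 1.0 / np.sqrt(max(m, n))
rho = 1.0
L = np.zeros((m, n)); S = np.zeros((m, n)); Y = np.zeros((m, n))
for _ in range(200):
    L = svt(M - S + Y / rho, 1.0 / rho)        # low-rank update
    S = soft(M - L + Y / rho, lam / rho)       # sparse update
    Y = Y + rho * (M - L - S)                  # dual (multiplier) update

print("constraint residual:", np.linalg.norm(M - L - S))
print("rank(L) ~", int(np.sum(np.linalg.svd(L, compute_uv=False) > 1e-6)))
```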

  19. Approximating convex Pareto surfaces in multiobjective radiotherapy planning

    SciTech Connect

    Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.

    2006-09-15

    Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-by-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well-distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing.
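
    The authors' algorithm places points on the Pareto surface adaptively; as a simpler baseline, a convex bi-objective problem can be scanned with the weighted-sum method, which recovers points on a convex Pareto front (though not necessarily well distributed). The toy problem below, two quadratic objectives in 2-D, is invented for illustration and has nothing to do with radiotherapy planning.

```python
# Sketch: weighted-sum scan of a convex bi-objective problem.
# For convex fronts, minimizing w*f1 + (1-w)*f2 over a grid of weights
# traces out Pareto-optimal points.
import numpy as np
from scipy.optimize import minimize

a = np.array([0.0, 0.0])
b = np.array([2.0, 1.0])
f1 = lambda x: np.sum((x - a) ** 2)     # hypothetical objective 1
f2 = lambda x: np.sum((x - b) ** 2)     # hypothetical objective 2

front = []
for w in np.linspace(0.05, 0.95, 10):
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=np.zeros(2))
    front.append((f1(res.x), f2(res.x)))

for p in front:
    print(f"f1 = {p[0]:.3f}, f2 = {p[1]:.3f}")
```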

  20. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of the optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
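
    As background for these recombination results, the makespan of a permutation flowshop schedule is computed by a simple dynamic-programming recurrence over jobs and machines. The sketch below evaluates a given job permutation on invented processing times; it does not implement the paper's optimal recombination procedure.

```python
# Sketch: makespan of a permutation flowshop schedule.
# C[i][j] = completion time of the i-th scheduled job on machine j:
#   C[i][j] = max(C[i-1][j], C[i][j-1]) + p[job_i][j]
import numpy as np

def makespan(perm, p):
    n_mach = p.shape[1]
    C = np.zeros((len(perm), n_mach))
    for i, job in enumerate(perm):
        for j in range(n_mach):
            prev_job = C[i - 1, j] if i > 0 else 0.0
            prev_mach = C[i, j - 1] if j > 0 else 0.0
            C[i, j] = max(prev_job, prev_mach) + p[job, j]
    return C[-1, -1]

p = np.array([[3, 2, 4],   # processing times: rows = jobs, columns = machines
              [2, 5, 1],
              [4, 1, 3],
              [1, 3, 2]], dtype=float)
print("makespan of (0,1,2,3):", makespan([0, 1, 2, 3], p))
print("makespan of (3,1,0,2):", makespan([3, 1, 0, 2], p))
```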

  1. Eddy current-nulled convex optimized diffusion encoding (EN-CODE) for distortion-free diffusion tensor imaging with short echo times.

    PubMed

    Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B

    2017-04-25

    To design and evaluate eddy current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy current-compensated. The EN-CODE DTI waveform was compared with the existing eddy current-nulled twice refocused spin echo (TRSE) sequence as well as monopolar (MONO) and non-eddy current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuroimaging in 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10^-5) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Exact optimal solution for a class of dual control problems

    NASA Astrophysics Data System (ADS)

    Cao, Suping; Qian, Fucai; Wang, Xiaomei

    2016-07-01

    This paper considers a discrete-time stochastic optimal control problem in which only the measurement equation is partially observed, with unknown constant parameters taking values in a finite set of stochastic systems. Because the cost-to-go function at each stage contains a variance term, and the non-separability of the variance prevents dynamic programming from being applied successfully, the optimal solution had not previously been found. In this paper, a new approach to the optimal solution is proposed by embedding the original non-separable problem into a separable auxiliary problem. The theoretical condition under which the optimal solution of the original problem can be obtained from a set of solutions of the auxiliary problem is established. In addition, the optimality of the interchanging algorithm is proved and the analytical solution of the optimal control is obtained. The performance of this controller is illustrated with a simple example.

  3. A Modified BFGS Formula Using a Trust Region Model for Nonsmooth Convex Minimizations

    PubMed Central

    Cui, Zengru; Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie; Wang, Xiaoliang; Duan, Xiabin

    2015-01-01

    This paper proposes a modified BFGS formula using a trust region model for solving nonsmooth convex minimizations by using the Moreau-Yosida regularization (smoothing) approach and a new secant equation with a BFGS update formula. Our algorithm uses the function value information and gradient value information to compute the Hessian. The Hessian matrix is updated by the BFGS formula rather than using second-order information of the function, thus decreasing the workload and time involved in the computation. Under suitable conditions, the algorithm converges globally to an optimal solution. Numerical results show that this algorithm can successfully solve nonsmooth unconstrained convex problems. PMID:26501775
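
    The modified BFGS formula and trust-region model themselves are not reproduced here, but the underlying Moreau-Yosida idea, replacing a nonsmooth convex term by its smooth Moreau envelope and then applying a quasi-Newton method, can be sketched with SciPy's standard BFGS. For the absolute value, the Moreau envelope is the Huber function, so the example below smooths an l1 penalty and minimizes the result; the problem data and the smoothing parameter are made up, and SciPy's stock BFGS stands in for the authors' modified update.

```python
# Sketch: Moreau-Yosida smoothing of an l1 term, minimized with standard BFGS.
# The Moreau envelope of |x| with parameter mu is the Huber function:
#   h_mu(x) = x^2/(2*mu)   if |x| <= mu,   |x| - mu/2   otherwise.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, mu = 0.5, 1e-2

def huber(x):
    return np.where(np.abs(x) <= mu, x * x / (2 * mu), np.abs(x) - mu / 2)

def huber_grad(x):
    return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(huber(x))

def g(x):
    return A.T @ (A @ x - b) + lam * huber_grad(x)

res = minimize(f, np.zeros(10), jac=g, method="BFGS")
print("smoothed objective:", res.fun, "converged:", res.success)
```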

  4. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  5. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  6. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  7. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  8. Comparison of Optimal Design Methods in Inverse Problems.

    PubMed

    Banks, H T; Holm, Kathleen; Kappel, Franz

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].

  9. Alternative Solutions for Optimization Problems in Generalizability Theory.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    1992-01-01

    Presents solutions for the problem of maximizing the generalizability coefficient under a budget constraint. Shows that the Cauchy-Schwarz inequality can be applied to derive optimal continuous solutions for the number of conditions of each facet. Illustrates the formal similarity between optimization problems in survey sampling and…

  10. The Role of Intuition in the Solving of Optimization Problems

    ERIC Educational Resources Information Center

    Malaspina, Uldarico; Font, Vicenc

    2010-01-01

    This article presents the partial results obtained in the first stage of the research, which sought to answer the following questions: (a) What is the role of intuition in university students' solutions to optimization problems? (b) What is the role of rigor in university students' solutions to optimization problems? (c) How is the combination of…

  11. Alternative Solutions for Optimization Problems in Generalizability Theory.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    1992-01-01

    Presents solutions for the problem of maximizing the generalizability coefficient under a budget constraint. Shows that the Cauchy-Schwarz inequality can be applied to derive optimal continuous solutions for the number of conditions of each facet. Illustrates the formal similarity between optimization problems in survey sampling and…

  12. The Role of Intuition in the Solving of Optimization Problems

    ERIC Educational Resources Information Center

    Malaspina, Uldarico; Font, Vicenc

    2010-01-01

    This article presents the partial results obtained in the first stage of the research, which sought to answer the following questions: (a) What is the role of intuition in university students' solutions to optimization problems? (b) What is the role of rigor in university students' solutions to optimization problems? (c) How is the combination of…

  13. Stereotype locally convex spaces

    NASA Astrophysics Data System (ADS)

    Akbarov, S. S.

    2000-08-01

    We give complete proofs of some previously announced results in the theory of stereotype (that is, reflexive in the sense of Pontryagin duality) locally convex spaces. These spaces have important applications in topological algebra and functional analysis.

  14. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  15. Neural networks for convex hull computation.

    PubMed

    Leung, Y; Zhang, J S; Xu, Z B

    1997-01-01

    Computing convex hull is one of the central problems in various applications of computational geometry. In this paper, a convex hull computing neural network (CHCNN) is developed to solve the related problems in the N-dimensional spaces. The algorithm is based on a two-layered neural network, topologically similar to ART, with a newly developed adaptive training strategy called excited learning. The CHCNN provides a parallel online and real-time processing of data which, after training, yields two closely related approximations, one from within and one from outside, of the desired convex hull. It is shown that accuracy of the approximate convex hulls obtained is around O(K^(-1/(N-1))), where K is the number of neurons in the output layer of the CHCNN. When K is taken to be sufficiently large, the CHCNN can generate any accurate approximate convex hull. We also show that an upper bound exists such that the CHCNN will yield the precise convex hull when K is larger than or equal to this bound. A series of simulations and applications is provided to demonstrate the feasibility, effectiveness, and high efficiency of the proposed algorithm.
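
    The CHCNN is a neural approximation scheme; for reference, exact convex hulls of point sets in low dimensions are routinely computed with standard computational-geometry libraries. The short example below uses the conventional Quickhull-based routine in SciPy, shown only as a point of comparison; it is not the network described in the paper.

```python
# Reference: exact convex hull of a 2-D point cloud with SciPy's Quickhull wrapper
# (conventional computation, shown only for comparison with the neural approach).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
points = rng.standard_normal((200, 2))
hull = ConvexHull(points)

print("hull vertices (indices):", hull.vertices)
print("hull area (2-D 'volume'):", hull.volume)
```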

  16. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  17. Solving Optimization Problems with Dynamic Geometry Software: The Airport Problem

    ERIC Educational Resources Information Center

    Contreras, José

    2014-01-01

    This paper describes how the author's students (in-service and pre-service secondary mathematics teachers) enrolled in college geometry courses use the Geometers' Sketchpad (GSP) to gain insight to formulate, confirm, test, and refine conjectures to solve the classical airport problem for triangles. The students are then provided with strategic…

  18. Solving Optimization Problems with Dynamic Geometry Software: The Airport Problem

    ERIC Educational Resources Information Center

    Contreras, José

    2014-01-01

    This paper describes how the author's students (in-service and pre-service secondary mathematics teachers) enrolled in college geometry courses use the Geometers' Sketchpad (GSP) to gain insight to formulate, confirm, test, and refine conjectures to solve the classical airport problem for triangles. The students are then provided with strategic…

  19. Exact and Approximate Sizes of Convex Datacubes

    NASA Astrophysics Data System (ADS)

    Nedjar, Sébastien

    In various approaches, data cubes are pre-computed in order to efficiently answer OLAP queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes or emerging cubes. Previously, we introduced the concept of the convex cube, which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage and data warehouse administration can be improved. The main interest of this size knowledge, however, is to choose the best constraints to apply in order to get a workable result. To aid the calibration of constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of the convex cube, as well as an upper bound that can be obtained very quickly. Moreover, we adapt the nearly optimal HyperLogLog algorithm in order to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is very close to the exact size and can be computed almost immediately.

  20. Solving bi-objective optimal control problems with rectangular framing

    NASA Astrophysics Data System (ADS)

    Wijaya, Karunia Putra; Götz, Thomas

    2016-06-01

    Optimization problems, e.g. arising from epidemiology models, often ask for solutions minimizing multi-criteria objective functions. In this paper we discuss a novel approach for solving bi-objective optimal control problems. The set of non-dominated points is constructed via a decreasing sequence of rectangles. Particular attention is paid to a problem with disconnected set of non-dominated points. Several examples from epidemiology are investigated and show the applicability of the method.

  1. Neighboring extremals of dynamic optimization problems with path equality constraints

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.

  2. Conditions on optimal support recovery in unmixing problems by means of multi-penalty regularization

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus; Naumova, Valeriya

    2016-10-01

    Inspired by several real-life applications in audio processing and medical image analysis, where the quantity of interest is generated by several sources to be accurately modeled and separated, as well as by recent advances in regularization theory and optimization, we study the conditions on optimal support recovery in inverse problems of unmixing type by means of multi-penalty regularization. We consider and analyze a regularization functional composed of a data-fidelity term, where signal and noise are additively mixed, a non-smooth, convex, sparsity promoting term, and a quadratic penalty term to model the noise. We prove not only that the well-established theory for sparse recovery in the single parameter case can be translated to the multi-penalty settings, but we also demonstrate the enhanced properties of multi-penalty regularization in terms of support identification compared to sole ℓ1-minimization. We additionally confirm and support the theoretical results by extensive numerical simulations, which provide statistics on the robustness of the multi-penalty regularization scheme with respect to its single-parameter counterpart. Finally, we confirm a significant improvement in performance compared to standard ℓ1-regularization for the compressive sensing problems considered in our experiments.
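
    A minimal numerical sketch of the multi-penalty idea: model the data as y = x + v + noise and minimize ||x + v - y||^2 + alpha*||x||_1 + beta*||v||^2 by alternating between the two components, the x-update being a soft-thresholding step and the v-update a closed-form shrinkage. The data and parameter values below are invented; this is not the authors' analysis or their parameter-choice rule.

```python
# Sketch: alternating minimization for a multi-penalty unmixing functional
#   min_{x,v} ||x + v - y||^2 + alpha*||x||_1 + beta*||v||^2
# x captures the sparse signal, v the (dense) noise component.
import numpy as np

rng = np.random.default_rng(6)
n = 200
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(0, 3, 10)
y = x_true + 0.1 * rng.standard_normal(n)

alpha, beta = 0.3, 5.0
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, v = np.zeros(n), np.zeros(n)
for _ in range(100):
    x = soft(y - v, alpha / 2.0)      # prox of alpha*||.||_1 around y - v
    v = (y - x) / (1.0 + beta)        # closed-form ridge update

support = np.flatnonzero(np.abs(x) > 1e-8)
print("recovered support size:", support.size)
print("true support contained:", set(np.flatnonzero(x_true)) <= set(support))
```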

  3. A Convex Geometry-Based Blind Source Separation Method for Separating Nonnegative Sources.

    PubMed

    Yang, Zuyuan; Xiang, Yong; Rong, Yue; Xie, Kan

    2015-08-01

    This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Considering these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. Based on this, an algorithm is presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelation assumption. Compared with BSS methods that are specifically designed to distinguish between nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results are provided to illustrate the performance of our method.

  4. Sensitivity analysis for electromagnetic topology optimization problems

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Li, Wei; Li, Qing

    2010-06-01

    This paper presents a level set based method to design the metal shape in an electromagnetic field such that the induced current flow on the metal surface can be minimized or maximized. We represent the interface between the free space and the conducting material (solid phase) by the zero-level contour of a higher dimensional level set function. Only the electrical component of the incident wave is considered in the current study, and the distribution of the induced current flow on the metallic surface is governed by the electric field integral equation (EFIE). By minimizing or maximizing a cost function related to the current flow, its distribution can be controlled to some extent. This method paves a new avenue to many electromagnetic applications such as antennas and metamaterials whose performance or properties are dominated by their surface current flow. The sensitivity of the objective function to shape changes, an integral formulation involving the solutions of both the electric field integral equation and its adjoint equation, is obtained using a variational method and the shape derivative. The advantages of the level set model lie in its flexibility in handling complex topological changes and in facilitating the mathematical expression of the electromagnetic configuration. Moreover, the level set model makes the optimization an elegant evolution process during which the volume of the metallic component remains constant while the free space/metal interface gradually approaches its optimal position. The effectiveness of this method is demonstrated through a self-adjoint 2D topology optimization example.

  5. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality

  6. Optimality conditions for the numerical solution of optimization problems with PDE constraints :

    SciTech Connect

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to applications.
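
    One of the optimality conditions discussed in such reports is the reduced-gradient (adjoint) equation. The toy example below illustrates it for a discretized linear-quadratic control problem min_f 0.5*||u - u_d||^2 + (alpha/2)*||f||^2 subject to A u = f: the adjoint solve A^T p = u - u_d gives the reduced gradient p + alpha*f. The 1-D Laplacian, target state, and plain gradient descent are an illustrative construction, not the framework of the report.

```python
# Sketch: adjoint-based reduced gradient for a discretized control problem
#   min_f  0.5*||u - u_d||^2 + (alpha/2)*||f||^2   s.t.  A u = f
# with A a 1-D finite-difference Laplacian (Dirichlet boundary conditions).
import numpy as np

n, alpha = 50, 1e-4
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
xs = np.linspace(h, 1.0 - h, n)
u_d = np.sin(np.pi * xs)                        # desired state

def reduced_gradient(f):
    u = np.linalg.solve(A, f)                   # state solve
    p = np.linalg.solve(A.T, u - u_d)           # adjoint solve
    return p + alpha * f

f = np.zeros(n)
for _ in range(500):                            # plain gradient descent on f
    f -= 50.0 * reduced_gradient(f)

u = np.linalg.solve(A, f)
print("state misfit ||u - u_d||:", np.linalg.norm(u - u_d))
```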

  7. A Planning Problem Combining Calculus of Variations and Optimal Transport

    SciTech Connect

    Carlier, G.; Lachapelle, A.

    2011-02-15

    We consider some variants of classical optimal transport in which one optimizes not only over couplings between some variables x and y but also over some control variables governing the evolution of these variables with time. Such a situation is motivated by the problem of assigning tasks to workers whose characteristics can evolve with time (and be controlled). We distinguish between the coupled and the decoupled case. The coupled case is a standard optimal transport problem with the value of some optimal control problem as cost. The decoupled case is more involved since it is nonlinear in the transport plan.

  8. Spectral finite-element methods for parametric constrained optimization problems.

    SciTech Connect

    Anitescu, M.; Mathematics and Computer Science

    2009-01-01

    We present a method to approximate the solution mapping of parametric constrained optimization problems. The approximation, which is of the spectral finite element type, is represented as a linear combination of orthogonal polynomials. Its coefficients are determined by solving an appropriate finite-dimensional constrained optimization problem. We show that, under certain conditions, the latter problem is solvable because it is feasible for a sufficiently large degree of the polynomial approximation and has an objective function with bounded level sets. In addition, the solutions of the finite-dimensional problems converge for an increasing degree of the polynomials considered, provided that the solutions exhibit a sufficiently large and uniform degree of smoothness. Our approach solves, in the case of optimization problems with uncertain parameters, the most computationally intensive part of stochastic finite-element approaches. We demonstrate that our framework is applicable to parametric eigenvalue problems.

  9. Applications of parallel global optimization to mechanics problems

    NASA Astrophysics Data System (ADS)

    Schutte, Jaco Francois

    Global optimization of complex engineering problems, with a high number of variables and local minima, requires sophisticated algorithms with global search capabilities and high computational efficiency. With the growing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes three methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO algorithm features such as insensitivity to design variable scaling and modest sensitivity to algorithm parameters are demonstrated. A second approach to parallelism and improving algorithm efficiency is by utilizing multiple optimizations. With this method a budget of fitness evaluations is distributed among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obtains a higher combined probability of converging to the global optimum than a single optimization which utilizes the full budget of fitness evaluations. The third and final method of parallelism addressed in this study is the use of quasiseparable decomposition, which is applied to decompose loosely coupled problems. This yields several sub-problems of lesser dimensionality which may be concurrently optimized with reduced effort.

  10. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).

  11. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.

  12. Group Testing: Four Student Solutions to a Classic Optimization Problem

    ERIC Educational Resources Information Center

    Teague, Daniel

    2006-01-01

    This article describes several creative solutions developed by calculus and modeling students to the classic optimization problem of testing in groups to find a small number of individuals who test positive in a large population.
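
    The classic (Dorfman) analysis behind this problem is short enough to state: with prevalence p and group size k, the expected number of tests per person is 1/k + 1 - (1-p)^k, and the optimal k minimizes this expression. The snippet below evaluates the formula numerically for an assumed prevalence; it is a standard textbook calculation, not one of the students' solutions described in the article.

```python
# Dorfman group testing: expected tests per person for group size k is
#   E(k) = 1/k + 1 - (1 - p)**k
# (one pooled test per group, plus k individual tests if the pool is positive).
p = 0.02                                     # assumed prevalence

def expected_tests_per_person(k, p):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

best_k = min(range(2, 101), key=lambda k: expected_tests_per_person(k, p))
print("optimal group size:", best_k)
print("expected tests per person:", round(expected_tests_per_person(best_k, p), 4))
print("vs. individual testing: 1.0")
```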

  13. Optimal control problem for impulsive systems with integral boundary conditions

    NASA Astrophysics Data System (ADS)

    Ashyralyev, Allaberen; Sharifov, Y. A.

    2012-08-01

    In the present work the optimal control problem is considered for systems whose state is described by impulsive differential equations with integral boundary conditions. Applying the Banach contraction principle, the existence and uniqueness of the solution of the corresponding boundary problem is proved for a fixed admissible control. The first and second variations of the functional are calculated. Various first- and second-order necessary conditions of optimality are obtained with the help of variations of the controls.

  14. A Decision Support System for Solving Multiple Criteria Optimization Problems

    ERIC Educational Resources Information Center

    Filatovas, Ernestas; Kurasova, Olga

    2011-01-01

    In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…

  15. Modelling, Transformations, and Scaling Decisions in Constrained Optimization Problems

    DTIC Science & Technology

    1976-03-01

    [Keyword and report-form excerpt; keywords: goal programming, linear fractional programming, mathematical programming, nonlinear programming, nonlinear optimization, transformations, scaling.] A discussion is given of separable programming, goal programming, and linear fractional programming. […] The sensitivity of the GRG code to scaling, rotation of coordinates, and translation of variables is examined.

  16. Test problem construction for single-objective bilevel optimization.

    PubMed

    Sinha, Ankur; Malo, Pekka; Deb, Kalyanmoy

    2014-01-01

    In this paper, we propose a procedure for designing controlled test problems for single-objective bilevel optimization. The construction procedure is flexible and allows its user to control the different complexities that are to be included in the test problems independently of each other. In addition to properties that control the difficulty in convergence, the procedure also allows the user to introduce difficulties caused by interaction of the two levels. As a companion to the test problem construction framework, the paper presents a standard test suite of 12 problems, which includes eight unconstrained and four constrained problems. Most of the problems are scalable in terms of variables and constraints. To provide baseline results, we have solved the proposed test problems using a nested bilevel evolutionary algorithm. The results can be used for comparison, while evaluating the performance of any other bilevel optimization algorithm. The code related to the paper may be accessed from the website http://bilevel.org .

  17. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. Vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.

  18. Optimization based inversion method for the inverse heat conduction problems

    NASA Astrophysics Data System (ADS)

    Mu, Huaiping; Li, Jingtao; Wang, Xueyao; Liu, Shi

    2017-05-01

    Precise estimation of the thermal physical properties of materials, boundary conditions, heat flux distributions, heat sources and initial conditions is highly desired for real-world applications. The inverse heat conduction problem (IHCP) analysis method provides an alternative approach for acquiring such parameters. The effectiveness of the inversion algorithm plays an important role in practical applications of the IHCP method. Different from traditional inversion models, in this paper a new inversion model that simultaneously highlights the measurement errors and the inaccurate properties of the forward problem is proposed to improve the inversion accuracy and robustness. A generalized cost function is constructed to convert the original IHCP into an optimization problem. An iterative scheme that splits a complicated optimization problem into several simpler sub-problems and integrates the superiorities of the alternative optimization method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is developed for solving the proposed cost function. Numerical experiment results validate the effectiveness of the proposed inversion method.

  19. On reductibility of degenerate optimization problems to regular operator equations

    NASA Astrophysics Data System (ADS)

    Bednarczuk, E. M.; Tretyakov, A. A.

    2016-12-01

    We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, was developed during the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate the reducibility of a non-regular optimization problem to a regular system of equations which does not depend on the objective function. As an illustration we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.

  20. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. Vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.

  1. The synthesis of optimal controls for linear, time-optimal problems with retarded controls.

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Jacobs, M. Q.; Latina, M. R.

    1971-01-01

    Optimization problems involving linear systems with retardations in the controls are studied in a systematic way. Some physical motivation for the problems is discussed. The topics covered are: controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming. A number of solved examples are presented.

  2. Left ventricle segmentation in MRI via convex relaxed distribution matching.

    PubMed

    Nambakhsh, Cyrus M S; Yuan, Jing; Punithakumar, Kumaradevan; Goela, Aashish; Rajchl, Martin; Peters, Terry M; Ayed, Ismail Ben

    2013-12-01

    A fundamental step in the diagnosis of cardiovascular diseases, automatic left ventricle (LV) segmentation in cardiac magnetic resonance images (MRIs) is still acknowledged to be a difficult problem. Most of the existing algorithms require either extensive training or intensive user inputs. This study investigates fast detection of the left ventricle (LV) endo- and epicardium surfaces in cardiac MRI via convex relaxation and distribution matching. The algorithm requires a single subject for training and a very simple user input, which amounts to a single point (mouse click) per target region (cavity or myocardium). It seeks cavity and myocardium regions within each 3D phase by optimizing two functionals, each containing two distribution-matching constraints: (1) a distance-based shape prior and (2) an intensity prior. Based on a global measure of similarity between distributions, the shape prior is intrinsically invariant with respect to translation and rotation. We further introduce a scale variable from which we derive a fixed-point equation (FPE), thereby achieving scale-invariance with only few fast computations. The proposed algorithm relaxes the need for costly pose estimation (or registration) procedures and large training sets, and can tolerate shape deformations, unlike template (or atlas) based priors. Our formulation leads to a challenging problem, which is not directly amenable to convex-optimization techniques. For each functional, we split the problem into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Unlike related graph-cut approaches, the proposed convex-relaxation solution can be parallelized to reduce substantially the computational time for 3D domains (or higher), extends directly to high dimensions, and does not have the grid-bias problem. Our parallelized implementation on a graphics processing unit (GPU) demonstrates that the proposed algorithm

  3. The expanded invasive weed optimization metaheuristic for solving continuous and discrete optimization problems.

    PubMed

    Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam

    2014-01-01

    This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO) distinguished by the hybrid strategy of the search space exploration proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature.

  4. Examining the Bernstein global optimization approach to optimal power flow problem

    NASA Astrophysics Data System (ADS)

    Patil, Bhagyesh V.; Sampath, L. P. M. I.; Krishnan, Ashok; Ling, K. V.; Gooi, H. B.

    2016-10-01

    This work addresses a nonconvex optimal power flow (OPF) problem. We introduce a `new approach' to the OPF problem based on Bernstein polynomials. The applicability of the approach is studied on a real-world 3-bus power system. The numerical results obtained with this new approach for the 3-bus system reveal a satisfactory improvement in terms of optimality. The results are found to be competitive with the generic global optimization solvers BARON and COUENNE.
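
    The key primitive in Bernstein-based global optimization is the range enclosure: the Bernstein coefficients of a polynomial on a box bound its range, with equality at the endpoints. The univariate sketch below converts power-basis coefficients on [0, 1] to Bernstein coefficients and reports the resulting bounds; it only illustrates this primitive on an invented polynomial and is not the authors' OPF solver.

```python
# Sketch: Bernstein range enclosure of a univariate polynomial on [0, 1].
# Power-basis coefficients a_j (p(x) = sum_j a_j x^j) convert to Bernstein
# coefficients via  b_i = sum_{j<=i} C(i,j)/C(n,j) * a_j,  and then
#   min_i b_i <= p(x) <= max_i b_i  for all x in [0, 1].
from math import comb
import numpy as np

def bernstein_coeffs(a):
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

a = [1.0, -4.0, 3.0, 2.0]            # hypothetical p(x) = 1 - 4x + 3x^2 + 2x^3
b = bernstein_coeffs(a)
xs = np.linspace(0.0, 1.0, 1001)
vals = sum(c * xs**j for j, c in enumerate(a))

print("Bernstein bounds:", min(b), max(b))
print("sampled range   :", vals.min(), vals.max())
```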

  5. The Expanded Invasive Weed Optimization Metaheuristic for Solving Continuous and Discrete Optimization Problems

    PubMed Central

    Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam

    2014-01-01

    This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO) distinguished by the hybrid strategy of the search space exploration proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature. PMID:24955420

  6. Solving inverse problems of identification type by optimal control methods

    SciTech Connect

    Lenhart, S.; Protopopescu, V.; Jiongmin Yong

    1997-06-01

    Inverse problems of identification type for nonlinear equations are considered within the framework of optimal control theory. The rigorous solution of any particular problem depends on the functional setting, type of equation, and unknown quantity (or quantities) to be determined. Here the authors present only the general articulations of the formalism. Compared to classical regularization methods (e.g. Tikhonov coupled with optimization schemes), their approach presents several advantages, namely: (i) a systematic procedure to solve inverse problems of identification type; (ii) an explicit expression for the approximations of the solution; and (iii) a convenient numerical solution of these approximations.

  7. Numerical methods for solving terminal optimal control problems

    NASA Astrophysics Data System (ADS)

    Gornov, A. Yu.; Tyatyushkin, A. I.; Finkelstein, E. A.

    2016-02-01

    Numerical methods for solving optimal control problems with equality constraints at the right end of the trajectory are discussed. Algorithms for optimal control search are proposed that are based on the multimethod technique for finding an approximate solution of prescribed accuracy that satisfies terminal conditions. High accuracy is achieved by applying a second-order method analogous to Newton's method or Bellman's quasilinearization method. In the solution of problems with direct control constraints, the variation of the control is computed using a finite-dimensional approximation of an auxiliary problem, which is solved by applying linear programming methods.

  8. Chemical reaction optimization for solving shortest common supersequence problem.

    PubMed

    Khaled Saifullah, C M; Rafiqul Islam, Md

    2016-10-01

    Shortest common supersequence (SCS) is a classical NP-hard problem in which a string is to be constructed that is a supersequence of a given set of strings. The SCS problem has numerous applications in data compression, query optimization in databases, and various bioinformatics activities. Due to its NP-hardness, exact algorithms fail to compute the SCS for larger instances, and many heuristic and metaheuristic approaches have been proposed to solve this problem. In this paper, we propose a metaheuristic approach based on chemical reaction optimization, CRO_SCS, designed by taking inspiration from the nature of chemical reactions. For optimization problems such as 0-1 knapsack, quadratic assignment, and global numerical optimization, the CRO algorithm shows very good performance. We have redesigned the reaction operators and introduced a new reform function to solve the SCS problem. The outcomes of the proposed CRO_SCS algorithm are compared with those of the enhanced beam search (IBS_SCS), deposition and reduction (DR), ant colony optimization (ACO) and artificial bee colony (ABC) algorithms. The supersequence length, execution time and standard deviation of all related algorithms show that CRO_SCS gives better results on average than all the other algorithms.

  9. One algorithm for branch and bound method for solving concave optimization problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.; Korepanova, A. A.; Halilova, I. F.

    2016-11-01

    The article describes a branch and bound algorithm for solving the concave programming problem, based on the idea that the necessary and sufficient optimality conditions for the original problem are similar to those of a convex programming problem with a different feasible set and the sign of the objective function reversed. To find the feasible set of the equivalent convex programming problem we construct an algorithm using the idea of the branch and bound method. We formulate various branching techniques and discuss the construction of lower bounds on the objective function for the nodes of the decision tree. The article discusses the results of experiments with this algorithm on some well-known test problems of a particular form.

  10. Aristos Optimization Package

    SciTech Connect

    Ridzal, Danis

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  11. Optimal vehicle planning and the search tour problem

    NASA Astrophysics Data System (ADS)

    Wettergren, Thomas A.; Bays, Matthew J.

    2016-05-01

    We describe a problem of optimal planning for unmanned vehicles and illustrate two distinct procedures for its solution. The problem under consideration, which we refer to as the search tour problem, involves the determination of multi-stage plans for unmanned vehicles conducting search operations. These types of problems are important in situations where the searcher has varying performance in different regions throughout the domain due to environmental complexity. The ability to provide robust planning for unmanned systems under difficult environmental conditions is critical for their use in search operations. The problem we consider consists of searches with variable times for each of the stages, as well as an additional degree of freedom for each stage to select from one of a finite set of operational configurations. As each combination of configuration and stage time leads to a different performance level, there is a need to determine the optimal configuration of these stages. When constraints on total time, as well as on the resources expended at each stage for a given configuration, are added, the problem becomes one of non-trivial search effort allocation and numerical methods of optimization are required. We show two solution approaches for this numerical optimization problem. The first solution technique is to use a mixed-integer linear programming formulation, for which commercially available solvers can find optimal solutions in a reasonable amount of time. We use this solution as a baseline and compare against a new inner/outer optimization formulation. This inner/outer optimization compares favorably to the baseline solution, but is also amenable to adaptation as the search operation progresses. Numerical examples illustrate the utility of the approach for unmanned vehicle search planning.
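
    A minimal sketch of the kind of mixed-integer linear programming formulation mentioned above, using scipy.optimize.milp (SciPy 1.9+): one operational configuration is chosen per search stage under a total-time budget. The rewards, times, and budget are made-up placeholders, and this toy model is not the paper's formulation.

    ```python
    # Toy configuration-selection MILP: maximize detection gain subject to a
    # total-time budget, choosing exactly one configuration per stage.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    n_stages, n_configs = 3, 2
    reward = np.array([[4.0, 7.0],      # detection gain of config k at stage s
                       [5.0, 6.0],
                       [3.0, 8.0]])
    time_cost = np.array([[2.0, 5.0],   # time consumed by config k at stage s
                          [3.0, 4.0],
                          [2.0, 6.0]])
    budget = 10.0

    c = -reward.ravel()                 # milp minimizes, so negate the reward

    # Exactly one configuration per stage.
    A_choice = np.kron(np.eye(n_stages), np.ones(n_configs))
    one_per_stage = LinearConstraint(A_choice, 1, 1)
    # Total time within the budget.
    time_budget = LinearConstraint(time_cost.ravel()[None, :], 0, budget)

    res = milp(c=c,
               constraints=[one_per_stage, time_budget],
               integrality=np.ones(n_stages * n_configs),  # integer variables
               bounds=Bounds(0, 1))                        # ... and binary
    x = res.x.reshape(n_stages, n_configs).round().astype(int)
    print("chosen configuration per stage:", x.argmax(axis=1), "gain:", -res.fun)
    ```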

  12. TSP based Evolutionary optimization approach for the Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Kouki, Zoulel; Chaar, Besma Fayech; Ksouri, Mekki

    2009-03-01

    Vehicle Routing and Flexible Job Shop Scheduling Problems (VRP and FJSSP) are two common hard combinatorial optimization problems that show many similarities in their conceptual level [2, 4]. It was proved for both problems that exact solving techniques fail to provide good quality solutions in a reasonable amount of time when dealing with large scale instances [1, 5, 14]. In order to overcome this weakness, we decide in favour of metaheuristics and focus on evolutionary algorithms that have been successfully used in scheduling problems [1, 5, 9]. In this paper we investigate the common properties of the VRP and the FJSSP in order to provide a new controlled evolutionary approach for the CVRP optimization inspired by the FJSSP evolutionary optimization algorithms introduced in [10].

  13. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  14. A theorem for piecewise convex-concave data approximation

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2004-03-01

    We are given univariate data that include random errors. We consider the problem of calculating a best approximation to the data by minimizing a strictly convex function of the errors subject to the condition that there are at most q sign changes in the sequence of the second divided differences of the approximation, where q is a prescribed integer. There are about O(n^q) combinations of positions of sign changes, which make an exhaustive approach prohibitively expensive. However, Demetriou and Powell (Approximation Theory and Optimization, Cambridge University Press, Cambridge, 1997, pp. 109-132) have proved the remarkable property that there exists a partitioning of the data into (q+1) disjoint subsets such that the approximation may be calculated by a separate convex programming calculation on each subset. Based on this result, we provide a characterization theorem that reduces the problem to an equivalent one, where the unknowns are the positions of the sign changes subject to feasibility restrictions at the sign changes. Furthermore, we present counterexamples to two conjectures that investigate whether the search for optimal sign changes may be restricted to certain subranges of the data.

  15. Sub-problem Optimization With Regression and Neural Network Approximators

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Hopkins, Dale A.; Patnaik, Surya N.

    2003-01-01

    Design optimization of large systems can be attempted through a sub-problem strategy. In this strategy, the original problem is divided into a number of smaller problems that are clustered together to obtain a sequence of sub-problems. Solution to the large problem is attempted iteratively through repeated solutions to the modest sub-problems. This strategy is applicable to structures and to multidisciplinary systems. For structures, clustering the substructures generates the sequence of sub-problems. For a multidisciplinary system, individual disciplines, accounting for coupling, can be considered as sub-problems. A sub-problem, if required, can be further broken down to accommodate sub-disciplines. The sub-problem strategy is being implemented into the NASA design optimization test bed, referred to as "CometBoards." Neural network and regression approximators are employed for reanalysis and sensitivity analysis calculations at the sub-problem level. The strategy has been implemented in sequential as well as parallel computational environments. This strategy, which attempts to alleviate algorithmic and reanalysis deficiencies, has the potential to become a powerful design tool. However, several issues have to be addressed before its full potential can be harnessed. This paper illustrates the strategy and addresses some issues.

  16. A Convex Approach to Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants imply performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.

  17. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; (vi) initial results on developing an onboard table lookup method to obtain almost fuel optimal solutions in real time.
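
    A minimal sketch of how a thrust-magnitude (second-order cone) constraint enters a discretized minimum-fuel descent problem, written with the CVXPY modeling package. The toy double-integrator dynamics, gravity vector, bounds, and numbers below are illustrative placeholders and not the paper's formulation.

    ```python
    # Toy discretized minimum-fuel descent with a second-order-cone thrust bound.
    import cvxpy as cp
    import numpy as np

    N, dt = 30, 1.0
    g = np.array([0.0, 0.0, -3.71])           # Mars gravity, m/s^2
    T_max = 12.0                              # max thrust acceleration, m/s^2

    r = cp.Variable((N + 1, 3))               # position
    v = cp.Variable((N + 1, 3))               # velocity
    u = cp.Variable((N, 3))                   # thrust acceleration
    s = cp.Variable(N)                        # slack bounding ||u_k||

    cons = [r[0] == np.array([1500.0, 500.0, 2000.0]),
            v[0] == np.array([-30.0, 10.0, -60.0]),
            r[N] == 0, v[N] == 0]
    for k in range(N):
        cons += [r[k + 1] == r[k] + dt * v[k],          # simple Euler dynamics
                 v[k + 1] == v[k] + dt * (u[k] + g),
                 cp.norm(u[k], 2) <= s[k],              # second-order cone constraint
                 s[k] <= T_max]

    prob = cp.Problem(cp.Minimize(dt * cp.sum(s)), cons)  # fuel ~ integral of |u|
    prob.solve()
    print("status:", prob.status, "  fuel proxy:", prob.value)
    ```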

  18. Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adapted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.

  19. Optimization problems with quasi-variational inequality constraints

    SciTech Connect

    Outrata, J.

    1994-12-31

    The main aim of the contribution is to propose a numerical method for the optimization problems with parameter-dependent Quasi-Variational Inequalities (QVI) or Implicit Complementarity Problems (ICP) as side constraints. Thereby we confine ourselves to the simpler case in which the solutions of QVI (ICP) are unique (or at least locally unique) and depend on the parameter in a Lipschitzian way. In the first part we state the problem and give some motivating examples coming from mechanics. The second part deals with the numerical solution of QVI (ICP) for fixed values of the parameter by a nonsmooth variant of the Newton method, which has shown a surprising effectiveness in the applications being considered. In particular, we show that the appropriate operators are semismooth and discuss the nonsingularity condition. The third part is devoted to our optimization problems which are cast in such a way that the bundle techniques from nonsmooth optimization can be applied. To compute the needed "subgradient" information, we characterize the maps, assigning to the single admissible values of the parameter the corresponding solution of the QVI, by generalized Jacobians. As a test example, the optimal covering problem from shape optimization is taken, in which the rigid obstacle is replaced by an elastic one.

  20. Russian Doll Search for solving Constraint Optimization problems

    SciTech Connect

    Verfaillie, G.; Lemaitre, M.

    1996-12-31

    While the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization appears to be far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, like Depth First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when using Forward Checking techniques. In this paper, we introduce the Russian Doll Search algorithm which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search and uses them later, when solving larger subproblems, in order to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which greatly improve as the problems get more constrained and the bandwidth of the used variable ordering diminishes.

  1. A coherent Ising machine for 2000-node optimization problems

    NASA Astrophysics Data System (ADS)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-01

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
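
    A sketch of the standard MAX-CUT to Ising mapping that such machines rely on, with plain simulated annealing as a classical baseline. This is not the optical hardware or schedule of the paper; the random graph and cooling parameters are illustrative.

    ```python
    # MAX-CUT via the Ising Hamiltonian H = sum_{i<j} J_ij s_i s_j (maximizing
    # the cut = minimizing H), solved approximately by simulated annealing.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60
    J = np.triu(rng.integers(0, 2, (n, n)), 1)   # random unweighted graph
    J = J + J.T                                   # symmetric couplings

    def cut_value(spins, J):
        # Edges with opposite spins are cut: (1 - s_i s_j) / 2 per edge.
        return np.sum(np.triu(J * (1 - np.outer(spins, spins)), 1)) / 2

    spins = rng.choice([-1, 1], n)
    T, cooling, steps = 2.0, 0.999, 20000
    for _ in range(steps):
        i = rng.integers(n)
        # Energy change of flipping spin i.
        dE = -2 * spins[i] * (J[i] @ spins)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] = -spins[i]
        T *= cooling

    print("cut value found:", cut_value(spins, J))
    ```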

  2. Reliability optimization problems with multiple constraints under fuzziness

    NASA Astrophysics Data System (ADS)

    Gupta, Neha; Haseen, Sanam; Bari, Abdul

    2016-06-01

    In reliability optimization problems, diverse situations occur in which it is not always possible to obtain the required precision in system reliability. The imprecision in data can often be represented by triangular fuzzy numbers. In this manuscript, we have considered different fuzzy environments for the redundancy reliability optimization problem. We formulate a redundancy allocation problem for a hypothetical series-parallel system in which the parameters of the system are fuzzy. Two different cases are then formulated as non-linear programming problems, and the fuzzy quantities are defuzzified into crisp problems using three different defuzzification methods, viz. ranking function, graded mean integration value and α-cut. The results of the methods are compared at the end of the manuscript using a numerical example.
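
    A small sketch of the three defuzzification rules named in the abstract, applied to a triangular fuzzy number A = (a, m, b). The formulas are the commonly used textbook versions; the paper's exact conventions (in particular for the ranking function) may differ.

    ```python
    # Defuzzification of a triangular fuzzy number (a, m, b).

    def graded_mean_integration(a, m, b):
        # Commonly used graded mean integration value: (a + 4m + b) / 6.
        return (a + 4 * m + b) / 6.0

    def ranking_function(a, m, b):
        # A common centroid-style ranking index: (a + m + b) / 3.
        return (a + m + b) / 3.0

    def alpha_cut(a, m, b, alpha):
        # The alpha-cut is the crisp interval [a + alpha*(m-a), b - alpha*(b-m)].
        return a + alpha * (m - a), b - alpha * (b - m)

    # Example: fuzzy component reliability "about 0.90".
    a, m, b = 0.85, 0.90, 0.93
    print(graded_mean_integration(a, m, b))   # ~0.8967
    print(ranking_function(a, m, b))          # ~0.8933
    print(alpha_cut(a, m, b, alpha=0.5))      # (0.875, 0.915)
    ```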

  3. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
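
    A sketch of the quadratic-placement idea behind this abstract: with wiring cost sum_ij C_ij (x_i - x_j)^2 and simple zero-mean, unit-norm normalization constraints, the optimal one-dimensional layout is an eigenvector of the graph Laplacian. The random connectivity matrix is only a placeholder, not neuronal data, and the constraints here are a generic stand-in for the paper's "biologically plausible" ones.

    ```python
    # Quadratic placement: minimize x^T L x s.t. sum(x)=0, ||x||=1, where
    # L is the graph Laplacian of the connectivity; the optimum is the
    # eigenvector with the smallest nonzero eigenvalue (Fiedler vector).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    C = rng.random((n, n)) < 0.1          # random symmetric 0/1 connectivity
    C = np.triu(C, 1).astype(float)
    C = C + C.T

    L = np.diag(C.sum(axis=1)) - C        # graph Laplacian

    eigvals, eigvecs = np.linalg.eigh(L)
    x_opt = eigvecs[:, 1]                 # column 0 is the constant vector

    cost = x_opt @ L @ x_opt              # = sum_{i<j} C_ij (x_i - x_j)^2
    print("optimal quadratic wiring cost:", cost, "= second eigenvalue:", eigvals[1])
    ```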

  4. Convex weighting criteria for speaking rate estimation

    PubMed Central

    Jiao, Yishan; Berisha, Visar; Tu, Ming; Liss, Julie

    2015-01-01

    Speaking rate estimation directly from the speech waveform is a long-standing problem in speech signal processing. In this paper, we pose the speaking rate estimation problem as that of estimating a temporal density function whose integral over a given interval yields the speaking rate within that interval. In contrast to many existing methods, we avoid the more difficult task of detecting individual phonemes within the speech signal and we avoid heuristics such as thresholding the temporal envelope to estimate the number of vowels. Rather, the proposed method aims to learn an optimal weighting function that can be directly applied to time-frequency features in a speech signal to yield a temporal density function. We propose two convex cost functions for learning the weighting functions and an adaptation strategy to customize the approach to a particular speaker using minimal training. The algorithms are evaluated on the TIMIT corpus, on a dysarthric speech corpus, and on the ICSI Switchboard spontaneous speech corpus. Results show that the proposed methods outperform three competing methods on both healthy and dysarthric speech. In addition, for spontaneous speech rate estimation, the results show a high correlation between the estimated speaking rate and ground truth values. PMID:26167516

  5. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.

  6. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  7. Generalized vector calculus on convex domain

    NASA Astrophysics Data System (ADS)

    Agrawal, Om P.; Xu, Yufeng

    2015-06-01

    In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.

  8. Parallel algorithms for separation of two sets of points and recognition of digital convex polygons

    SciTech Connect

    Sarkar, D. ); Stojmenovic, I. )

    1992-04-01

    Given two finite sets of points in a plane, the polygon separation problem is to construct a separating convex k-gon with smallest k. In this paper, we present a parallel algorithm for the polygon separation problem. The algorithm runs in O(log n) time on a CREW PRAM with n processors, where n is the number of points in the two given sets. The algorithm is cost-optimal, since Ω(n log n) is a lower bound on the time needed by any sequential algorithm. We apply this algorithm to the problem of finding a convex region from its digital image. The algorithm in this paper constructs one such polygon with possibly two more edges than the minimal one.
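
    A sketch of the simplest special case of the separation problem: deciding whether two planar point sets are separable by a single line, posed as a feasibility linear program. This is not the parallel CREW PRAM k-gon algorithm of the paper; the points are illustrative.

    ```python
    # Linear separability as an LP: find (w, c) with w.x + c <= -1 on set A
    # and w.x + c >= +1 on set B; feasibility of the LP decides separability.
    import numpy as np
    from scipy.optimize import linprog

    A_pts = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])   # set A
    B_pts = np.array([[3.0, 3.0], [4.0, 2.5], [3.5, 4.0]])   # set B

    # Variables z = [w1, w2, c]; linprog solves min 0 subject to A_ub z <= b_ub.
    A_ub = np.vstack([np.hstack([A_pts, np.ones((len(A_pts), 1))]),    # w.x + c <= -1
                      -np.hstack([B_pts, np.ones((len(B_pts), 1))])])  # -(w.x + c) <= -1
    b_ub = -np.ones(len(A_pts) + len(B_pts))
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
    print("separable by a line:", res.success, " (w, c) =", res.x)
    ```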

  9. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    Quadratic Assignment Problems (QAP) are classified as NP-hard problems. QAP has been used to model many problems in several areas such as operational research, combinatorial data analysis, parallel and distributed computing, and optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristic algorithms and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use QAP to model a university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we have modeled a QAP problem with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized. Flow is the movement from one facility to another, whereas distance is the distance between the location of one facility and the locations of the other facilities. The objective of the QAP here is to obtain the minimum total walking (flow) of lecturers from one destination to another (distance).
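
    A sketch of the QAP objective described above, sum of flow times distance under an assignment, with brute-force enumeration, which is still feasible at n = 8 (8! = 40320 permutations). The flow and distance matrices are random placeholders, and this is not the paper's ACO algorithm.

    ```python
    # QAP cost of an assignment perm: sum_ij flow[i][j] * dist[perm[i]][perm[j]].
    from itertools import permutations
    import numpy as np

    rng = np.random.default_rng(7)
    n = 8
    flow = rng.integers(0, 10, (n, n)); np.fill_diagonal(flow, 0)
    dist = rng.integers(1, 20, (n, n)); np.fill_diagonal(dist, 0)

    def qap_cost(perm):
        p = np.asarray(perm)
        return int(np.sum(flow * dist[np.ix_(p, p)]))

    # Exhaustive search over all assignments (takes a few seconds at n = 8).
    best = min(permutations(range(n)), key=qap_cost)
    print("best assignment:", best, "cost:", qap_cost(best))
    ```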

  10. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is

  11. Adjoint optimal control problems for the RANS system

    NASA Astrophysics Data System (ADS)

    Attavino, A.; Cerroni, D.; Da Vià, R.; Manservisi, S.; Menghini, F.

    2017-01-01

    Adjoint optimal control in computational fluid dynamics has become increasingly popular recently because of its use in several engineering and research studies. However the optimal control of turbulent flows without the use of Direct Numerical Simulation is still an open problem and various methods have been proposed based on different approaches. In this work we study optimal control problems for a turbulent flow modeled with a Reynolds-Averaged Navier-Stokes system. The adjoint system is obtained through the use of a Lagrangian multiplier method by setting as objective of the control a velocity matching profile or an increase or decrease in the turbulent kinetic energy. The optimality system is solved with an in-house finite element code and numerical results are reported in order to show the validity of this approach.

  12. On the use of consistent approximations in the solution of semi-infinite optimization, optimal control, and shape optimization problems

    SciTech Connect

    Polak, E.

    1994-12-31

    Unlike the situation with most other problems, the concept of a solution to an optimization problem is not unique, since it includes global solutions, local solutions, and stationary points. Earlier definitions of a consistent approximation to an optimization problem were in terms of properties that ensured that the global minimizers of the approximating problems (as well as uniformly strict local minimizers) converge only to global minimizers (local minimizers) of the original problems. Our definition of a consistent approximation addresses the properties not only of global and local solutions of the approximating problems, but also of their stationary points. Hence we always consider a pair, consisting of an optimization problem and its optimality function, (P, θ), with the zeros of the optimality function being the stationary points of P. We define consistency of approximating problem-optimality function pairs, (P_N, θ_N) to (P, θ), in terms of the epigraphical convergence of the P_N to P, and the hypographical convergence of the optimality functions θ_N to θ. As a companion to the characterization of consistent approximations, we will present two types of "diagonalization" techniques for using consistent approximations and "hot starts" in obtaining an approximate solution of the original problems. The first is a "filter" type technique, similar to that used in conjunction with penalty functions, the second one is an adaptive discretization technique with nicer convergence properties. We will illustrate the use of our concept of consistent approximations with examples from semi-infinite optimization, optimal control, and shape optimization.

  13. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

    The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: The obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to a heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference of convex functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of binary values ensuring robust performance.

  14. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L_1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L_2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L_2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L_2 differ drastically from those obtained in the sense of L_1.
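
    A toy illustration of the unconstrained L_2 choice of control sources: if the field produced on the protected region depends linearly on the source strengths, the L_2-optimal controls are the minimum-norm least-squares solution. The transfer matrix and noise field below are random placeholders, not an acoustic model, and this is not the paper's constrained formulation.

    ```python
    # Minimum-norm (L2-optimal) control sources q cancelling a noise field:
    # best cancellation of G q = -p_noise in the least-squares sense, with
    # the smallest ||q||_2 among such minimizers (Moore-Penrose pseudoinverse).
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_src = 40, 12                     # observation points, control sources
    G = rng.standard_normal((n_obs, n_src)) + 1j * rng.standard_normal((n_obs, n_src))
    p_noise = rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs)

    q_l2 = np.linalg.pinv(G) @ (-p_noise)

    residual = np.linalg.norm(G @ q_l2 + p_noise) / np.linalg.norm(p_noise)
    print("relative residual field:", residual, " control norm:", np.linalg.norm(q_l2))
    ```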

  15. Hybrid optimization schemes for simulation-based problems.

    SciTech Connect

    Fowler, Katie; Gray, Genetha Anne; Griffin, Joshua D.

    2010-05-01

    The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, the incorporation of realistic objective functions can be accomplished given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which allows the combining of beneficial elements of multiple methods in order to more efficiently search the design space. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one which hybridizes pattern search optimization with Gaussian Process emulation and the other which hybridizes pattern search and a genetic algorithm. We describe the hybrid methods and give some numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional optimal search methods fail.

  16. State-Constrained Optimal Control Problems of Impulsive Differential Equations

    SciTech Connect

    Forcadel, Nicolas; Rao Zhiping Zidani, Hasnaa

    2013-08-01

    The present paper studies an optimal control problem governed by measure driven differential systems and in the presence of state constraints. The first result shows that using the graph completion of the measure, the optimal solutions can be obtained by solving a reparametrized control problem of absolutely continuous trajectories but with time-dependent state constraints. The second result shows that it is possible to characterize the epigraph of the reparametrized value function by a Hamilton-Jacobi equation without any controllability assumption.

  17. Comparing Extremal and Hysteretic Optimization on the Satisfiability Problem

    NASA Astrophysics Data System (ADS)

    Gonçalves, Bruno; Boettcher, Stefan

    2006-03-01

    We apply physically inspired optimization methods to the classical combinatorial Satisfiability problem. Treating the usual boolean variables as Ising spins and each clause as a p-spin interaction, we can use the pre-existing physical intuition about spin glasses and magnetic systems to find the optimal solution for this problem (the ground state energy). We compare the performance of Extremal Optimization (τEO) [PRL 23:5211, 2001] and Hysteretic Optimization (HO) [PRL 89:150201, 2002] and determine the parameter values that provide the best results. Comparisons with previously published results on well known benchmarks [DIMACS 35:393, 1997] are also made.

  18. Chance-Constrained Guidance With Non-Convex Constraints

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of
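
    A sketch of the standard deterministic "tightening" of a single linear chance constraint under Gaussian uncertainty, which is the basic building block such frameworks decompose joint constraints into. This is not the paper's bounding and branch-and-bound machinery for non-convex constraints; the numbers are illustrative.

    ```python
    # Require P(a^T x <= b) >= 1 - eps with x ~ N(mu, Sigma). Equivalent
    # deterministic condition: a^T mu + Phi^{-1}(1 - eps) * sqrt(a^T Sigma a) <= b.
    import numpy as np
    from scipy.stats import norm

    a = np.array([1.0, 2.0])
    b = 12.0
    eps = 0.05
    mu = np.array([2.0, 3.0])
    Sigma = np.array([[0.5, 0.1],
                      [0.1, 0.8]])

    margin = norm.ppf(1 - eps) * np.sqrt(a @ Sigma @ a)
    lhs = a @ mu + margin
    print("tightened constraint value:", lhs, "<= b?", lhs <= b)
    ```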

  19. A Discrete Lagrangian Algorithm for Optimal Routing Problems

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.

    2008-11-06

    The ideas of discrete Lagrangian methods for conservative systems are exploited for the construction of algorithms applicable to optimal ship routing problems. The algorithm presented here is based on the discretization of Hamilton's principle of stationary action, and specifically on the direct discretization of the Lagrange-Hamilton principle for a conservative system. Since, in contrast to the differential equations, the discrete Euler-Lagrange equations serve as constraints for the optimization of a given cost functional, in the present work we utilize this feature in order to minimize the cost function for optimal ship routing.

  20. Optimality problem of network topology in stocks market analysis

    NASA Astrophysics Data System (ADS)

    Djauhari, Maman Abdurachman; Gan, Siew Lee

    2015-02-01

    Since its introduction fifteen years ago, the minimal spanning tree has become an indispensable tool in econophysics. It is used to filter the important economic information contained in a complex system of financial market commodities. Here we show that, in general, that tool is not optimal in terms of topological properties. Consequently, the economic interpretation of the filtered information might be misleading. To overcome that non-optimality problem, a set of criteria and a selection procedure for an optimal minimal spanning tree are developed. Using New York Stock Exchange data, the advantages of the proposed method are illustrated in terms of the power-law degree distribution.
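
    A sketch of the standard econophysics pipeline this abstract builds on: correlations, the distance d_ij = sqrt(2(1 - rho_ij)), and a minimal spanning tree. Synthetic returns stand in for the NYSE data, and the paper's optimality criteria for selecting among spanning trees are not implemented here.

    ```python
    # Correlation-based MST of a set of assets (Mantegna-style construction).
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(42)
    n_stocks, n_days = 20, 250
    returns = rng.standard_normal((n_days, n_stocks))     # placeholder returns

    rho = np.corrcoef(returns, rowvar=False)              # correlation matrix
    dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None)) # distance metric
    np.fill_diagonal(dist, 0.0)

    mst = minimum_spanning_tree(dist)                     # sparse tree, n-1 edges
    rows, cols = mst.nonzero()
    print("MST edges:", list(zip(rows.tolist(), cols.tolist())))
    print("total tree length:", mst.sum())
    ```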

  1. Application of clustering global optimization to thin film design problems.

    PubMed

    Lemarchand, Fabien

    2014-03-10

    Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating.

  2. Curvature morphology of the mandibular dentition and the development of concave-convex vertical stripping instruments.

    PubMed

    Ihlow, Dankmar; Kubein-Meesenburg, Dietmar; Hunze, Justus; Dathe, Henning; Planert, Jens; Schwestka-Polly, Rainer; Nägerl, Hans

    2002-07-01

    Radii for concave-convex vertical stripping instruments can be derived from measurements of the natural curvature morphology in the horizontal contact area of the mandibular dentition. The concave-convex adjustment of contacts in the anterior dental arch with a newly developed set of concave-convex stripping instruments should enable orthodontic crowding problems to be alleviated biomechanically.

  3. The expanded LaGrangian system for constrained optimization problems

    NASA Technical Reports Server (NTRS)

    Poore, A. B.

    1986-01-01

    Smooth penalty functions can be combined with numerical continuation/bifurcation techniques to produce a class of robust and fast algorithms for constrained optimization problems. The key to the development of these algorithms is the Expanded Lagrangian System which is derived and analyzed in this work. This parameterized system of nonlinear equations contains the penalty path as a solution, provides a smooth homotopy into the first-order necessary conditions, and yields a global optimization technique. Furthermore, the inevitable ill-conditioning present in a sequential optimization algorithm is removed for three penalty methods: the quadratic penalty function for equality constraints, and the logarithmic barrier function (an interior method) and the quadratic loss function (an exterior method) for inequality constraints. Although these techniques apply to optimization in general and to linear and nonlinear programming, calculus of variations, optimal control and parameter identification in particular, the development is primarily within the context of nonlinear programming.
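
    A sketch of the quadratic-penalty path that the Expanded Lagrangian System is built around: solve min f(x) + mu*||c(x)||^2 for an increasing sequence of mu, warm-starting each solve from the previous one. A toy equality-constrained problem and SciPy's BFGS minimizer are used; the paper's continuation/bifurcation machinery is not reproduced.

    ```python
    # Quadratic penalty path for min f(x) s.t. c(x) = 0.
    import numpy as np
    from scipy.optimize import minimize

    def f(x):                      # objective
        return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

    def c(x):                      # equality constraint c(x) = 0
        return x[0] + x[1] - 3.0

    x = np.zeros(2)
    for mu in [1.0, 10.0, 100.0, 1000.0]:
        penalized = lambda x, mu=mu: f(x) + mu * c(x) ** 2
        x = minimize(penalized, x, method="BFGS").x    # warm start from previous x
        print(f"mu={mu:7.1f}  x={x}  constraint violation={c(x):+.2e}")
    # The iterates trace the penalty path toward the constrained optimum (3, 0),
    # which the Expanded Lagrangian System follows as a smooth homotopy.
    ```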

  4. Artificial bee colony algorithm for solving optimal power flow problem.

    PubMed

    Le Dinh, Luong; Vo Ngoc, Dieu; Vasant, Pandian

    2013-01-01

    This paper proposes an artificial bee colony (ABC) algorithm for solving the optimal power flow (OPF) problem. The objective of the OPF problem is to minimize the total cost of thermal units while satisfying the unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltage limits, and transformer tap setting limits. The ABC algorithm is an optimization method inspired by the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results indicate that the proposed algorithm can find high quality solutions for the problem in a fast manner, as shown by comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem.

  5. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.

  6. Optimizing Value and Avoiding Problems in Building Schools.

    ERIC Educational Resources Information Center

    Brevard County School Board, Cocoa, FL.

    This report describes school design and construction delivery processes used by the School Board of Brevard County (Cocoa, Florida) that help optimize value, avoid problems, and eliminate the cost of maintaining a large facility staff. The project phases are examined from project definition through design to construction. Project delivery…

  7. A Generalized Orienteering Problem for Optimal Search and Interdiction Planning

    DTIC Science & Technology

    2013-09-01

    This research is motivated by the ongoing efforts of the Joint Interagency Task Force South (JIATFS), which conducts search ... vehicle routing, sports, tourism, production, and scheduling. We present the Smuggler Search Problem (SSP), a novel path-constrained optimal search model

  8. To the optimization problem in minority game model

    SciTech Connect

    Yanishevsky, Vasyl

    2009-12-14

    The article presents the results of research on the optimization problem in the minority game model in a Gaussian approximation using one-step replica symmetry breaking (1RSB). A comparison with the replica symmetric (RS) approximation and with results from the literature obtained using other methods is carried out.

  9. To the optimization problem in minority game model

    NASA Astrophysics Data System (ADS)

    Yanishevsky, Vasyl

    2009-12-01

    The article presents the results of research on the optimization problem in the minority game model in a Gaussian approximation using one-step replica symmetry breaking (1RSB). A comparison with the replica symmetric (RS) approximation and with results from the literature obtained using other methods is carried out.

  10. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    PubMed

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. Firstly, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality feasible region in finite time, and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with the existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. In the end, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network.
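
    A toy illustration of the Tikhonov idea above: adding a small strongly convex term eps*||x||^2 to a convex problem with non-unique minimizers yields a unique solution that approaches the minimum-norm minimizer as eps tends to zero. An underdetermined least-squares problem stands in for the paper's nonsmooth constrained setting, and the neural-network dynamics are not modeled here.

    ```python
    # Tikhonov regularization of an underdetermined least-squares problem.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 6))       # underdetermined: many x satisfy Ax = b
    b = rng.standard_normal(3)

    x_min_norm = np.linalg.pinv(A) @ b    # minimum-norm least-squares solution

    for eps in [1.0, 1e-2, 1e-4, 1e-6]:
        # Unique minimizer of ||Ax - b||^2 + eps*||x||^2 (normal equations).
        x_eps = np.linalg.solve(A.T @ A + eps * np.eye(6), A.T @ b)
        print(f"eps={eps:.0e}  distance to min-norm solution = "
              f"{np.linalg.norm(x_eps - x_min_norm):.3e}")
    ```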

  11. Optimizing investment fund allocation using vehicle routing problem framework

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-07-01

    The objective of investment is to maximize total returns or minimize total risks. To determine the optimum order of investment, the vehicle routing problem method is used. The method, which is widely used in the field of resource distribution, shares almost similar characteristics with the problem of investment fund allocation. In this paper we describe and elucidate the concept of using the vehicle routing problem framework in optimizing the allocation of investment funds. To better illustrate these similarities, sectorial data from FTSE Bursa Malaysia is used. Results show that different values of utility for risk-averse investors generate the same investment routes.

  12. Rees algebras, Monomial Subrings and Linear Optimization Problems

    NASA Astrophysics Data System (ADS)

    Dupont, Luis A.

    2010-06-01

    In this thesis we are interested in studying algebraic properties of monomial algebras, that can be linked to combinatorial structures, such as graphs and clutters, and to optimization problems. A goal here is to establish bridges between commutative algebra, combinatorics and optimization. We study the normality and the Gorenstein property-as well as the canonical module and the a-invariant-of Rees algebras and subrings arising from linear optimization problems. In particular, we study algebraic properties of edge ideals and algebras associated to uniform clutters with the max-flow min-cut property or the packing property. We also study algebraic properties of symbolic Rees algebras of edge ideals of graphs, edge ideals of clique clutters of comparability graphs, and Stanley-Reisner rings.

  13. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki

    To make agile decisions in a rational manner, the role of optimization engineering has become increasingly important under diversified customer demand. With this point of view, in this paper, we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects of globally solving various complicated problems appearing in real-world applications. It evolves from the conventional method known as Nelder and Mead's Simplex method by virtue of ideas borrowed from recent metaheuristic methods such as PSO. After mentioning an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
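
    As a baseline, the conventional Nelder and Mead simplex method that the proposed evolutionary variant builds on is available in SciPy; a short run on the two-dimensional Rosenbrock function is shown below. The evolutionary, globally oriented extension described in the abstract is not implemented here.

    ```python
    # Classical Nelder-Mead simplex method on the Rosenbrock function.
    import numpy as np
    from scipy.optimize import minimize, rosen

    x0 = np.array([-1.2, 1.0])
    res = minimize(rosen, x0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    print("minimizer:", res.x)      # the global minimum of Rosenbrock is at (1, 1)
    print("objective:", res.fun)
    ```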

  14. People efficiently explore the solution space of the computationally intractable traveling salesman problem to find near-optimal tours.

    PubMed

    Acuña, Daniel E; Parada, Víctor

    2010-07-29

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current better solutions in such a way that edges belonging to the optimal solution ("good" edges) were significantly more likely to stay than other edges ("bad" edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants "ran out of ideas." In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics.

  15. People Efficiently Explore the Solution Space of the Computationally Intractable Traveling Salesman Problem to Find Near-Optimal Tours

    PubMed Central

    Acuña, Daniel E.; Parada, Víctor

    2010-01-01

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current better solutions in such a way that edges belonging to the optimal solution (“good” edges) were significantly more likely to stay than other edges (“bad” edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants “ran out of ideas.” In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics. PMID:20686597

  16. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
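
    The core DCA iteration can be sketched on a toy one-dimensional DC decomposition; this is only the generic scheme, not the paper's block-clustering reformulation or its exact-penalty construction, and the decomposition g, h below is an illustrative assumption.

```python
# Generic DCA sketch: minimize f(x) = g(x) - h(x) with g(x) = x**4 and
# h(x) = x**2 (both convex). Each step linearizes h at x_k and minimizes the
# convex surrogate g(x) - y_k*x, which here has a closed-form solution.

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        y = 2.0 * x                     # y_k in the (sub)gradient of h at x_k
        # argmin_x x**4 - y*x  =>  4*x**3 = y
        x = (y / 4.0) ** (1.0 / 3.0) if y >= 0 else -((-y / 4.0) ** (1.0 / 3.0))
    return x

print(dca(1.0))   # ~0.7071, a stationary point (local minimizer) of x**4 - x**2
```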

  17. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvization process of music players. Two kinds of problems are considered in testing the proposed algorithm, with the first limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second problem for its part concerns the multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, and in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results of HSA showed that this algorithm could provide very good solutions when compared to those obtained through other approaches.
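
    A minimal continuous harmony search loop (harmony memory consideration, pitch adjustment, random improvisation) conveys the mechanism the article applies to redundancy allocation; the objective, bounds and parameter values below are illustrative assumptions, not the reliability model from the paper.

```python
# Minimal harmony search sketch on the sphere function (not the
# redundancy-allocation model): HMCR picks values from memory, PAR perturbs
# them, and the worst stored harmony is replaced when a better one is found.
import random

def harmony_search(obj, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [obj(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:              # pitch adjustment
                    value += random.uniform(-bw, bw) * (hi - lo)
            else:                                      # random improvisation
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        if obj(new) < scores[worst]:                   # replace worst harmony
            memory[worst], scores[worst] = new, obj(new)
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

sol, val = harmony_search(lambda x: sum(v * v for v in x), dim=5, lo=-5.0, hi=5.0)
print(round(val, 6))   # should be near 0
```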

  18. Boundary condition optimal control problem in lava flow modelling

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, Alik; Korotkii, Alexander; Tsepelev, Igor; Kovtunov, Dmitry; Melnik, Oleg

    2016-04-01

    We study a problem of steady-state fluid flow with known thermal conditions (e.g., measured temperature and the heat flux at the surface of lava flow) at one segment of the model boundary and unknown conditions at another of its segments. This problem belongs to a class of boundary condition optimal control problems and can be solved by data assimilation from one boundary to another using direct and adjoint models. We derive analytically the adjoint model and test the cost function and its gradient, which minimize the misfit between the known thermal condition and its model counterpart. Using optimization algorithms, we iterate between the direct and adjoint problems and determine the missing boundary condition as well as thermal and dynamic characteristics of the fluid flow. The efficiency of two optimization algorithms, the Polak-Ribiere conjugate gradient and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithms, has been tested with the aim of obtaining rapid convergence to the solution of this ill-posed inverse problem. Numerical results show that temperature and velocity can be determined with high accuracy in the case of smooth input data. Noise imposed on the input data results in a less accurate solution, which nevertheless remains acceptable below some noise level.
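
    The optimization layer of such a workflow can be sketched with SciPy's L-BFGS-B driving a misfit whose gradient is supplied externally (in the paper, by the adjoint model). The toy linear "forward model" below is an assumption standing in for the direct lava-flow solver.

```python
# Sketch of the outer optimization loop only: the direct/adjoint solvers are
# replaced by a toy linear forward model whose analytic gradient plays the
# role of the adjoint-computed gradient. L-BFGS is one of the tested methods.
import numpy as np
from scipy.optimize import minimize

A = np.array([[3.0, 1.0], [1.0, 2.0]])       # toy forward model
d_obs = np.array([1.0, 4.0])                 # "measured" boundary data

def misfit(q):
    r = A @ q - d_obs
    return 0.5 * r @ r

def gradient(q):                             # would come from the adjoint model
    return A.T @ (A @ q - d_obs)

res = minimize(misfit, x0=np.zeros(2), jac=gradient, method="L-BFGS-B")
print(res.x, res.fun)                        # recovered missing boundary values
```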

  19. Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

    Fuzzy optimization has been one of the most prominent topics within the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large-scale non-linear fuzzy programming problem is solved by a hybrid optimization technique combining Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and the level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large-scale non-linear fuzzy programming problems.
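
    Of the three hybridized components, the simulated annealing stage is the easiest to sketch in isolation; the loop below is a generic SA minimizer on a standard multimodal test function, not the fuzzy production-planning model, and all parameters are illustrative assumptions.

```python
# Generic simulated-annealing sketch (only the SA component of the LS-SA-PS
# hybrid), minimizing the Rastrigin test function.
import math
import random

def simulated_annealing(obj, x0, step=0.5, t0=1.0, cooling=0.995, iters=20000):
    x, fx = list(x0), obj(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = obj(cand)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

rastrigin = lambda x: 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
print(round(simulated_annealing(rastrigin, [3.0, -2.0])[1], 3))
```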

  20. A free boundary approach to shape optimization problems

    PubMed Central

    Bucur, D.; Velichkov, B.

    2015-01-01

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has known in recent years a series of interesting developments essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  1. A free boundary approach to shape optimization problems.

    PubMed

    Bucur, D; Velichkov, B

    2015-09-13

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has known in recent years a series of interesting developments essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint.

  2. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches out-perform it in terms of both solution quality and execution time.

  3. Convex recoloring as an evolutionary marker.

    PubMed

    Frenkel, Zeev; Kiat, Yosef; Izhaki, Ido; Snir, Sagi

    2017-02-01

    With the availability of enormous quantities of genetic data it has become common to construct very accurate trees describing the evolutionary history of the species under study, as well as every single gene of these species. These trees allow us to examine the evolutionary compliance of given markers (characters). A marker compliant with the history of the species investigated has undergone mutations along the species tree branches, such that every subtree of that tree exhibits a different state. Convex recoloring (CR) uses a combinatorial representation to measure the adequacy of a taxonomic classifier to a given tree. Despite its biological origins, research on CR has been almost exclusively dedicated to mathematical properties of the problem, or to variants of it with little, if any, relationship to taxonomy. In this work we return to the origins of CR. We put CR in a statistical framework and introduce and learn the notion of the statistical significance of a character. We apply this measure to two data sets, Passerine birds and prokaryotes, in four examples. These examples demonstrate various applications of CR, from evolutionary relatedness, through lateral evolution, to supertree construction. The above study was done with new software that we provide, containing algorithmic improvements and a graphical output of an (optimally) recolored tree.

  4. Personality, Preventive Health Behaviour and Comparative Optimism about Health Problems.

    PubMed

    Ingledew, D K; Brunning, S

    1999-03-01

    The aim was to test a model whereby personality influences preventive health behaviour which in turn influences comparative optimism about possible future health problems. Students (N = 150) completed measures of personality (five-factor), preventive health behaviour and comparative optimism. The model was tested using structural equation modelling with observed variables. In the final model, agreeableness and conscientiousness had positive main effects and an interactive effect upon preventive health behaviour. Preventive health behaviour had a positive effect upon comparative optimism. In addition, extraversion had a direct positive effect (not mediated by preventive health behaviour) upon comparative optimism. It is speculated that agreeableness and conscientiousness combine to produce a general regard for social convention that is conducive to healthier behaviour. The effect of extraversion is explicable in terms of positive affectivity.

  5. A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems.

    PubMed

    Duan, Hai-Bin; Xu, Chun-Fang; Xing, Zhi-Hui

    2010-02-01

    In this paper, a novel hybrid Artificial Bee Colony (ABC) and Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can jump out of the premature convergence and find the optimal value. To show the performance of our proposed hybrid QEA with ABC, a number of experiments are carried out on a set of well-known Benchmark continuous optimization problems and the related results are compared with two other QEAs: the QEA with classical crossover operation, and the QEA with 2-crossover strategy. The experimental comparison results demonstrate that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems.

  6. Performance investigation of multigrid optimization for DNS-based optimal control problems

    NASA Astrophysics Data System (ADS)

    Nita, Cornelia; Vandewalle, Stefan; Meyers, Johan

    2016-11-01

    Optimal control theory in Direct Numerical Simulation (DNS) or Large-Eddy Simulation (LES) of turbulent flow involves large computational cost and memory overhead for the optimization of the controls. In this context, the minimization of the cost functional is typically achieved by employing gradient-based iterative methods such as quasi-Newton, truncated Newton or non-linear conjugate gradient. In the current work, we investigate the multigrid optimization strategy (MGOpt) in order to speed up the convergence of the damped L-BFGS algorithm for DNS-based optimal control problems. The method consists in a hierarchy of optimization problems defined on different representation levels aiming to reduce the computational resources associated with the cost functional improvement on the finest level. We examine the MGOpt efficiency for the optimization of an internal volume force distribution with the goal of reducing the turbulent kinetic energy or increasing the energy extraction in a turbulent wall-bounded flow; problems that are respectively related to drag reduction in boundary layers, or energy extraction in large wind farms. Results indicate that in some cases the multigrid optimization method requires up to a factor two less DNS and adjoint DNS than single-grid damped L-BFGS. The authors acknowledge support from OPTEC (OPTimization in Engineering Center of Excellence, KU Leuven, Grant No PFV/10/002).

  7. Solving Optimal Control Problems by Exploiting Inherent Dynamical Systems Structures

    NASA Astrophysics Data System (ADS)

    Flaßkamp, Kathrin; Ober-Blöbaum, Sina; Kobilarov, Marin

    2012-08-01

    Computing globally efficient solutions is a major challenge in optimal control of nonlinear dynamical systems. This work proposes a method combining local optimization and motion planning techniques based on exploiting inherent dynamical systems structures, such as symmetries and invariant manifolds. Prior to the optimal control, the dynamical system is analyzed for structural properties that can be used to compute pieces of trajectories that are stored in a motion planning library. In the context of mechanical systems, these motion planning candidates, termed primitives, are given by relative equilibria induced by symmetries and motions on stable or unstable manifolds of e.g. fixed points in the natural dynamics. The existence of controlled relative equilibria is studied through Lagrangian mechanics and symmetry reduction techniques. The proposed framework can be used to solve boundary value problems by performing a search in the space of sequences of motion primitives connected using optimized maneuvers. The optimal sequence can be used as an admissible initial guess for a post-optimization. The approach is illustrated by two numerical examples, the single and the double spherical pendula, which demonstrates its benefit compared to standard local optimization techniques.

  8. Computational and statistical tradeoffs via convex relaxation

    PubMed Central

    Chandrasekaran, Venkat; Jordan, Michael I.

    2013-01-01

    Modern massive datasets create a fundamental problem at the intersection of the computational and statistical sciences: how to provide guarantees on the quality of statistical inference given bounds on computational resources, such as time or space. Our approach to this problem is to define a notion of “algorithmic weakening,” in which a hierarchy of algorithms is ordered by both computational efficiency and statistical efficiency, allowing the growing strength of the data at scale to be traded off against the need for sophisticated processing. We illustrate this approach in the setting of denoising problems, using convex relaxation as the core inferential tool. Hierarchies of convex relaxations have been widely used in theoretical computer science to yield tractable approximation algorithms to many computationally intractable tasks. In the current paper, we show how to endow such hierarchies with a statistical characterization and thereby obtain concrete tradeoffs relating algorithmic runtime to amount of data. PMID:23479655
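
    One concrete instance of a convex relaxation in the denoising setting is the l1-norm surrogate for sparsity, whose proximal operator is soft-thresholding; the snippet below illustrates only this basic tool, not the paper's algorithmic-weakening hierarchy, and the signal and threshold are illustrative assumptions.

```python
# Soft-thresholding: the proximal operator of lam*||x||_1, i.e. the solution
# of the l1-relaxed denoising problem argmin_x 0.5*||x - y||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = 5.0     # sparse ground truth
y = x_true + 0.5 * rng.standard_normal(100)              # noisy observation
x_hat = soft_threshold(y, lam=1.0)
# The denoised estimate should be closer to the truth than the raw data.
print(np.linalg.norm(x_hat - x_true), np.linalg.norm(y - x_true))
```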

  9. Block-oriented modeling of superstructure optimization problems

    SciTech Connect

    Friedman, Z; Ingalls, J; Siirola, JD; Watson, JP

    2013-10-15

    We present a novel software framework for modeling large-scale engineered systems as mathematical optimization problems. A key motivating feature in such systems is their hierarchical, highly structured topology. Existing mathematical optimization modeling environments do not facilitate the natural expression and manipulation of hierarchically structured systems. Rather, the modeler is forced to "flatten" the system description, hiding structure that may be exploited by solvers, and obfuscating the system that the modeling environment is attempting to represent. To correct this deficiency, we propose a Python-based "block-oriented" modeling approach for representing the discrete components within the system. Our approach is an extension of the Pyomo library for specifying mathematical optimization problems. Through the use of a modeling components library, the block-oriented approach facilitates a clean separation of system superstructure from the details of individual components. This approach also naturally lends itself to expressing design and operational decisions as disjunctive expressions over the component blocks. By expressing a mathematical optimization problem in a block-oriented manner, inherent structure (e.g., multiple scenarios) is preserved for potential exploitation by solvers. In particular, we show that block-structured mathematical optimization problems can be straightforwardly manipulated by decomposition-based multi-scenario algorithmic strategies, specifically in the context of the PySP stochastic programming library. We illustrate our block-oriented modeling approach using a case study drawn from the electricity grid operations domain: unit commitment with transmission switching and N - 1 reliability constraints. Finally, we demonstrate that the overhead associated with block-oriented modeling only minimally increases model instantiation times, and need not adversely impact solver behavior. (C) 2013 Elsevier Ltd. All rights reserved.
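
    The block-oriented idiom itself can be shown with a tiny Pyomo model in which each component is a Block and the top level only links the blocks; this is a generic sketch of Pyomo's Block facility under assumed data, not the unit-commitment case study or the authors' components library.

```python
# Minimal Pyomo Block sketch: each "unit" is its own block with local
# variables and constraints; the top-level model adds the linking demand
# constraint and the cost objective. Data values are illustrative.
from pyomo.environ import (ConcreteModel, Block, Var, Constraint, Objective,
                           NonNegativeReals, minimize)

UNITS = [1, 2, 3]
CAPACITY = {1: 40.0, 2: 60.0, 3: 80.0}
COST = {1: 3.0, 2: 2.0, 3: 5.0}
DEMAND = 100.0

model = ConcreteModel()

def unit_block(b, u):
    b.output = Var(within=NonNegativeReals)
    b.capacity = Constraint(expr=b.output <= CAPACITY[u])

model.unit = Block(UNITS, rule=unit_block)
model.demand = Constraint(expr=sum(model.unit[u].output for u in UNITS) >= DEMAND)
model.cost = Objective(expr=sum(COST[u] * model.unit[u].output for u in UNITS),
                       sense=minimize)

# Solving requires any locally installed LP solver, e.g.:
# from pyomo.environ import SolverFactory
# SolverFactory("glpk").solve(model)
```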

  10. Optimization Algorithms and Equilibrium Analysis for Dynamic Resource Allocation

    DTIC Science & Technology

    2012-01-31

    to derive necessary and sufficient conditions for many desirable properties of a prediction market mechanism such as proper scoring, truthful... set can be non-convex or non-connected. Our method is based on approximating a quadratic social utility optimization problem (QP) and showing that... In [2], we present a convex optimization framework that unifies these seemingly unrelated models for centrally organizing contingent claims

  11. A-optimal encoding weights for nonlinear inverse problems, with application to the Helmholtz inverse problem

    NASA Astrophysics Data System (ADS)

    Crestel, Benjamin; Alexanderian, Alen; Stadler, Georg; Ghattas, Omar

    2017-07-01

    The computational cost of solving an inverse problem governed by PDEs, using multiple experiments, increases linearly with the number of experiments. A recently proposed method to decrease this cost uses only a small number of random linear combinations of all experiments for solving the inverse problem. This approach applies to inverse problems where the PDE solution depends linearly on the right-hand side function that models the experiment. As this method is stochastic in essence, the quality of the obtained reconstructions can vary, in particular when only a small number of combinations are used. We develop a Bayesian formulation for the definition and computation of encoding weights that lead to a parameter reconstruction with the least uncertainty. We call these weights A-optimal encoding weights. Our framework applies to inverse problems where the governing PDE is nonlinear with respect to the inversion parameter field. We formulate the problem in infinite dimensions and follow the optimize-then-discretize approach, devoting special attention to the discretization and the choice of numerical methods in order to achieve a computational cost that is independent of the parameter discretization. We elaborate our method for a Helmholtz inverse problem, and derive the adjoint-based expressions for the gradient of the objective function of the optimization problem for finding the A-optimal encoding weights. The proposed method is potentially attractive for real-time monitoring applications, where one can invest the effort to compute optimal weights offline, to later solve an inverse problem repeatedly, over time, at a fraction of the initial cost.

  12. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems but less work has been done in the field of making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using a few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points in the grid compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage has been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  13. Combinatorial optimization problem solution based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Peng

    2017-08-01

    Traveling salesman problem (TSP) is a classic combinatorial optimization problem. It is a simplified form of many complex problems. In the process of study and research, it is understood that the parameters that affect the performance of genetic algorithm mainly include the quality of the initial population, the population size, and the crossover probability and mutation probability values. As a result, an improved genetic algorithm for solving TSP problems is put forward. The population is graded according to individual similarity, and different operations are performed on different levels of individuals. In addition, an elitist retention strategy is adopted at each level, and the crossover operator and mutation operator are improved. Several experiments are designed to verify the feasibility of the algorithm. Analysis of the experimental results shows that the improved algorithm can improve the accuracy and efficiency of the solution.

  14. Analysis of a turning point problem in flight trajectory optimization

    NASA Technical Reports Server (NTRS)

    Gracey, C.

    1989-01-01

    The optimal control policy for the aeroglide portion of the minimum fuel, orbital plane change problem for maneuvering entry vehicles is reduced to the solution of a turning point problem for the bank angle control. For this problem a turning point occurs at the minimum altitude of the flight, when the flight path angle equals zero. The turning point separates the bank angle control into two outer solutions that are valid away from the turning point. In a neighborhood of the turning point, where the bank angle changes rapidly, an inner solution is developed and matched with the two outer solutions. An asymptotic analysis of the turning point problem is given, and an analytic example is provided to illustrate the construction of the bank angle control.

  15. Leader selection problem for stochastically forced consensus networks based on matrix differentiation

    NASA Astrophysics Data System (ADS)

    Gao, Leitao; Zhao, Guangshe; Li, Guoqi; Yang, Zhaoxu

    2017-03-01

    The leader selection problem refers to determining a predefined number of agents as leaders in order to minimize the mean-square deviation from consensus in stochastically forced networks. The original leader selection problem is formulated as a non-convex optimization problem in which matrix variables are involved. By relaxing the constraints, a convex optimization model can be obtained. By introducing a chain rule of matrix differentiation, we can obtain the gradient of the cost function, which involves matrix variables. We develop a "revisited projected gradient method" (RPGM) and a "probabilistic projected gradient method" (PPGM) to solve the two formulated convex and non-convex optimization problems, respectively. The convergence property of both methods is established. For the convex optimization model, the global optimal solution can be achieved by RPGM, while for the original non-convex optimization model, a suboptimal solution is achieved by PPGM. Simulation results ranging from synthetic to real-life networks are provided to show the effectiveness of RPGM and PPGM. This work will deepen the understanding of leader selection problems and enable applications in various real-life distributed control problems.

  16. Convex bodies of states and maps

    NASA Astrophysics Data System (ADS)

    Grabowski, Janusz; Ibort, Alberto; Kuś, Marek; Marmo, Giuseppe

    2013-10-01

    We give a general solution to the question of when the convex hulls of orbits of quantum states on a finite-dimensional Hilbert space under unitary actions of a compact group have a non-empty interior in the surrounding space of all density operators. The same approach can be applied to study convex combinations of quantum channels. The importance of both problems stems from the fact that, usually, only sets with non-vanishing volumes in the embedding spaces of all states or channels are of practical importance. For the group of local transformations on a bipartite system we characterize maximally entangled states by the properties of a convex hull of orbits through them. We also compare two partial characteristics of convex bodies in terms of the largest balls and maximum volume ellipsoids contained in them and show that, in general, they do not coincide. Separable states, mixed-unitary channels and k-entangled states are also considered as examples of our techniques.

  17. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed to solve reliability problems in this paper. The IPSO designs two position updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which can not only prevent the IPSO from being trapped in a local optimum, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than the other four particle swarm optimization algorithms on solving reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature.
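
    For reference, a plain global-best PSO loop is sketched below on the sphere function; the paper's two-phase position-updating strategy and mutation operator are not reproduced, and all parameter values are illustrative assumptions.

```python
# Basic global-best PSO sketch (without the IPSO two-phase strategy or the
# mutation operator), minimizing the sphere function.
import random

def pso(obj, dim, lo, hi, swarm=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P, Pf = [x[:] for x in X], [obj(x) for x in X]        # personal bests
    g = min(range(swarm), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                                # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            f = obj(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
                if f < Gf:
                    G, Gf = X[i][:], f
    return G, Gf

print(round(pso(lambda x: sum(v * v for v in x), dim=5, lo=-10, hi=10)[1], 8))
```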

  18. An optimized spectral difference scheme for CAA problems

    NASA Astrophysics Data System (ADS)

    Gao, Junhui; Yang, Zhigang; Li, Xiaodong

    2012-05-01

    In the implementation of spectral difference (SD) method, the conserved variables at the flux points are calculated from the solution points using extrapolation or interpolation schemes. The errors incurred in using extrapolation and interpolation would result in instability. On the other hand, the difference between the left and right conserved variables at the edge interface will introduce dissipation to the SD method when applying a Riemann solver to compute the flux at the element interface. In this paper, an optimization of the extrapolation and interpolation schemes for the fourth order SD method on quadrilateral element is carried out in the wavenumber space through minimizing their dispersion error over a selected band of wavenumbers. The optimized coefficients of the extrapolation and interpolation are presented. And the dispersion error of the original and optimized schemes is plotted and compared. An improvement of the dispersion error over the resolvable wavenumber range of SD method is obtained. The stability of the optimized fourth order SD scheme is analyzed. It is found that the stability of the 4th order scheme with Chebyshev-Gauss-Lobatto flux points, which is originally weakly unstable, has been improved through the optimization. The weak instability is eliminated completely if an additional second order filter is applied on selected flux points. One and two dimensional linear wave propagation analyses are carried out for the optimized scheme. It is found that in the resolvable wavenumber range the new SD scheme is less dispersive and less dissipative than the original scheme, and the new scheme is less anisotropic for 2D wave propagation. The optimized SD solver is validated with four computational aeroacoustics (CAA) workshop benchmark problems. The numerical results with optimized schemes agree much better with the analytical data than those with the original schemes.

  19. Numerical Solution of Some Types of Fractional Optimal Control Problems

    PubMed Central

    Sweilam, Nasser Hassan; Al-Ajami, Tamer Mostafa; Hoppe, Ronald H. W.

    2013-01-01

    We present two different approaches for the numerical solution of fractional optimal control problems (FOCPs) based on a spectral method using Chebyshev polynomials. The fractional derivative is described in the Caputo sense. The first approach follows the paradigm “optimize first, then discretize” and relies on the approximation of the necessary optimality conditions in terms of the associated Hamiltonian. In the second approach, the state equation is discretized first using the Clenshaw and Curtis scheme for the numerical integration of nonsingular functions followed by the Rayleigh-Ritz method to evaluate both the state and control variables. Two illustrative examples are included to demonstrate the validity and applicability of the suggested approaches. PMID:24385874

  20. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least-squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least-squares finite element method. In the usual least-squares finite element method, the second-order equation -∇·(∇u) + u = f is recast as a first-order system -∇·p + u = f, ∇u - p = 0. The error analysis and numerical experiments show that, in this usual least-squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least-squares method, the irrotationality condition ∇×p = 0 should be included in the first-order system.

  1. A piecewise linear approximation scheme for hereditary optimal control problems

    NASA Technical Reports Server (NTRS)

    Cliff, E. M.; Burns, J. A.

    1977-01-01

    An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so-called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by a simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.

  3. Overview: Applications of numerical optimization methods to helicopter design problems

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques. Adequate implementation of this technology will provide high pay-offs. There are a number of numerical optimization programs available, and there are many excellent response/performance analysis programs developed or being developed. But integration of these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design-oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design. Very few published results were found in acoustics, aerodynamics and control system design. Currently major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system to integrate the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.

  4. Solving nonlinear equality constrained multiobjective optimization problems using neural networks.

    PubMed

    Mestari, Mohammed; Benzirar, Mohammed; Saber, Nadia; Khouil, Meryem

    2015-10-01

    This paper develops a neural network architecture and a new processing method for solving in real time the nonlinear equality constrained multiobjective optimization problem (NECMOP), where several nonlinear objective functions must be optimized in a conflicting situation. In this processing method, the NECMOP is converted to an equivalent scalar optimization problem (SOP). The SOP is then decomposed into several separable subproblems processable in parallel and in a reasonable time by multiplexing switched capacitor circuits. The approach which we propose makes use of a decomposition-coordination principle that allows nonlinearity to be treated at a local level and where coordination is achieved through the use of Lagrange multipliers. The modularity and the regularity of the neural network architecture herein proposed make it suitable for very large scale integration implementation. An application to the resolution of a physical problem is given to show that the approach used here possesses some advantages from the algorithmic point of view, and provides processes of resolution often simpler than the usual techniques.

  5. Issues and Strategies in Solving Multidisciplinary Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya

    2013-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adapted for their resolution are discussed. This is followed by a discussion on analytical methods that is limited to structural design application. An optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: Design of an engine component, Synthesis of a subsonic aircraft, Operation optimization of a supersonic engine, Design of a wave-rotor-topping device, Profile optimization of a cantilever beam, and Design of a cylindrical shell. This chapter provides a cursory account of the issues. Cited references provide detailed discussion on the topics. Design of a structure can also be generated by traditional method and the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and weight is back calculated. In design optimization, the weight of a structure becomes the

  6. Null testing convex optical surfaces.

    PubMed

    Szulc, A

    1997-09-01

    A new test for convex optical surfaces is presented. It makes use of an auxiliary ellipsoidal mirror that is of approximately the same diameter as the convex mirror tested. The test is a null test of excellent precision. The auxiliary ellipsoid used is also tested in a null fashion, permitting good precision to be obtained.

  7. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem

    PubMed Central

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

    The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, which is called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and the learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well. PMID:26421005

  8. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem.

    PubMed

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

    The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, which is called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and the learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well.

  9. Solving quadratic programming problems by delayed projection neural network.

    PubMed

    Yang, Yongqing; Cao, Jinde

    2006-11-01

    In this letter, the delayed projection neural network for solving convex quadratic programming problems is proposed. The neural network is proved to be globally exponentially stable and can converge to an optimal solution of the optimization problem. Three examples show the effectiveness of the proposed network.
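
    The idea behind such networks can be sketched by integrating a (non-delayed) projection dynamic for a small box-constrained convex QP with explicit Euler steps; the delay term and the stability analysis of the letter are not reproduced, and the problem data are illustrative assumptions.

```python
# Euler-discretized projection dynamic dx/dt = -x + P(x - a*(Qx + c)) for the
# convex QP: minimize 0.5*x'Qx + c'x subject to 0 <= x <= u. (No delay term.)
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite
c = np.array([-8.0, -6.0])
u = np.array([1.5, 1.5])

def project(x):                            # projection onto the box [0, u]
    return np.clip(x, 0.0, u)

x = np.zeros(2)
alpha, dt = 0.2, 0.5
for _ in range(500):
    x = x + dt * (project(x - alpha * (Q @ x + c)) - x)

print(x)   # converges to the constrained minimizer, here the corner [1.5, 1.5]
```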

  10. An artificial immune network for multiobjective optimization problems

    NASA Astrophysics Data System (ADS)

    Lanaridis, Aris; Stafylopatis, Andreas

    2014-08-01

    Multiobjective optimization is an important problem of great complexity and evolutionary algorithms have been established as a dominant approach in the field. This article suggests a method for approximating the Pareto front of a given function based on artificial immune networks. The proposed method uses cloning and mutation on a population of antibodies to create local subsets of the Pareto front. Elements of these local fronts are combined, in a way that maximizes diversity, to form the complete Pareto front of the function. The method is tested on a number of well-known benchmark problems, as well as an engineering problem. Its performance is compared against state-of-the-art algorithms, yielding promising results.

  11. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}^n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.
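
    To make the problem itself concrete, the snippet below brute-forces a tiny QUBO instance; the paper's flow-based lower bounds C(2) and C(3) and the persistency analysis are not implemented, and the coefficient matrix is an arbitrary example.

```python
# Brute-force minimization of x'Qx over x in {0,1}^n for a tiny symmetric Q.
# (Only feasible for small n; it serves to define the QUBO objective, not to
# reproduce the paper's maximum-flow or multicommodity-flow bounds.)
import itertools
import numpy as np

Q = np.array([[-3.0,  2.0,  0.0],
              [ 2.0, -1.0,  1.0],
              [ 0.0,  1.0, -2.0]])

best_val, best_x = min(
    (float(np.array(x) @ Q @ np.array(x)), x)
    for x in itertools.product((0, 1), repeat=Q.shape[0])
)
print(best_x, best_val)   # (1, 0, 1) with value -5.0 for this Q
```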

  12. Enforcing Convexity for Improved Alignment with Constrained Local Models

    PubMed Central

    Wang, Yang; Lucey, Simon; Cohn, Jeffrey F.

    2010-01-01

    Constrained local models (CLMs) have recently demonstrated good performance in non-rigid object alignment/tracking in comparison to leading holistic approaches (e.g., AAMs). A major problem hindering the development of CLMs further, for non-rigid object alignment/tracking, is how to jointly optimize the global warp update across all local search responses. Previous methods have either used general purpose optimizers (e.g., simplex methods) or graph based optimization techniques. Unfortunately, problems exist with both these approaches when applied to CLMs. In this paper, we propose a new approach for optimizing the global warp update in an efficient manner by enforcing convexity at each local patch response surface. Furthermore, we show that the classic Lucas-Kanade approach to gradient descent image alignment can be viewed as a special case of our proposed framework. Finally, we demonstrate that our approach receives improved performance for the task of non-rigid face alignment/tracking on the MultiPIE database and the UNBC-McMaster archive. PMID:20622926

  13. Optimal Parametric Discrete Event Control: Problem and Solution

    SciTech Connect

    Griffin, Christopher H

    2008-01-01

    We present a novel optimization problem for discrete event control, similar in spirit to the optimal parametric control problem common in statistical process control. In our problem, we assume a known finite state machine plant model $G$ defined over an event alphabet $\Sigma$ so that the plant model language $L = \mathcal{L}(G)$ is prefix closed. We further assume the existence of a base control structure $M_K$, which may be either a finite state machine or a deterministic pushdown machine. If $K = \mathcal{L}(M_K)$, we assume $K$ is prefix closed and that $K \subseteq L$. We associate each controllable transition of $M_K$ with a binary variable $X_1,\dots,X_n$ indicating whether the transition is enabled or not. This leads to a function $M_K(X_1,\dots,X_n)$ that returns a new control specification depending upon the values of $X_1,\dots,X_n$. We exhibit a branch-and-bound algorithm to solve the optimization problem $\min_{X_1,\dots,X_n}\max_{w \in K} C(w)$ such that $M_K(X_1,\dots,X_n) \models \Pi$ and $\mathcal{L}(M_K(X_1,\dots,X_n)) \in \mathrm{Con}(L)$. Here $\Pi$ is a set of logical assertions on the structure of $M_K(X_1,\dots,X_n)$, and $M_K(X_1,\dots,X_n) \models \Pi$ indicates that $M_K(X_1,\dots,X_n)$ satisfies the logical assertions; $\mathrm{Con}(L)$ is the set of controllable sublanguages of $L$.

  14. A self-learning particle swarm optimizer for global optimization problems.

    PubMed

    Li, Changhe; Yang, Shengxiang; Nguyen, Trung Thanh

    2012-06-01

    Particle swarm optimization (PSO) has been shown as an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause the lack of intelligence for a particular particle, which makes it unable to deal with different complex situations. This paper presents a novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which can enable a particle to choose the optimal strategy according to its own local fitness landscape. The experimental study on a set of 45 test functions and two real-world problems show that SLPSO has a superior performance in comparison with several other peer algorithms.

  15. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining

    PubMed Central

    Zhao, Mingbo; Chui, Kwok Tai

    2017-01-01

    Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear norm constraints. However, the obtained results by convex optimization are usually suboptimal to solutions of original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm has been proposed by integrating lp-norm and Schatten p-norm constraints. Our so-obtained affinity graph can better capture local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms. PMID:28714886

  16. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems.

    PubMed

    Shabanzadeh, Parvaneh; Yusof, Rubiyah

    2015-01-01

    Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity and is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, was adapted for data clustering problems by modifying the main operators of the BBO algorithm, which is inspired by the natural biogeographic distribution of different species. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the results were compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.

  17. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems

    PubMed Central

    Shabanzadeh, Parvaneh; Yusof, Rubiyah

    2015-01-01

    Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity and is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, was adapted for data clustering problems by modifying the main operators of the BBO algorithm, which is inspired by the natural biogeographic distribution of different species. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the results were compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification. PMID:26336509

  18. On the degeneracy of the IMRT optimization problem.

    PubMed

    Alber, M; Meedt, G; Nüsslin, F; Reemtsen, R

    2002-11-01

    One approach to the computation of photon IMRT treatment plans is the formulation of an optimization problem with an objective function that derives from an objective density. An investigation of the second-order properties of such an objective function in a neighborhood of the minimizer opens intuitive access to many traits of this approach. A general finding is that only a small subset of the parameter space has nonzero curvature, while the objective function is entirely flat in a neighborhood of the minimizer in most directions. The dimension of the subspace of vanishing curvature serves as a measure for the degeneracy of the solution. This finding is important both for algorithm design and for evaluation of the mathematical model of clinical intuition, expressed by the objective function. The structure of the subspace of great curvature is found to be imposed on the problem by conflicts between objectives of target and critical structures. These conflicts and their corresponding modes of resolution form a common trait between all reasonable treatment plans of a given case. The high degree of degeneracy makes the use of a conjugate gradient optimization algorithm particularly favorable, since the number of iterations to convergence is equivalent to the number of different eigenvalues of the curvature tensor and is hence independent of the number of optimization parameters. A high level of degeneracy of the fluence profiles implies that it should be possible to stipulate further delivery-related conditions without causing severe deterioration of the dose distribution.
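
    The stated link between conjugate-gradient iteration counts and the number of distinct curvature eigenvalues can be checked on a generic quadratic; the snippet below uses SciPy's CG on a diagonal operator with three distinct eigenvalues and is a linear-algebra illustration, not an IMRT objective.

```python
# CG on a quadratic whose Hessian has only 3 distinct eigenvalues converges in
# about 3 iterations, independent of the problem dimension.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

n = 300
eigs = np.repeat([1.0, 10.0, 100.0], n // 3)      # three distinct eigenvalues
A = LinearOperator((n, n), matvec=lambda v: eigs * v)
b = np.ones(n)

iteration_log = []
x, info = cg(A, b, callback=lambda xk: iteration_log.append(1))
print(info, len(iteration_log))                   # info == 0, ~3 iterations
```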

  19. An optimization spiking neural p system for approximately solving combinatorial optimization problems.

    PubMed

    Zhang, Gexiang; Rong, Haina; Neri, Ferrante; Pérez-Jiménez, Mario J

    2014-08-01

    Membrane systems (also called P systems) refer to the computing models abstracted from the structure and the functioning of the living cell as well as from the cooperation of cells in tissues, organs, and other populations of cells. Spiking neural P systems (SNPS) are a class of distributed and parallel computing models that incorporate the idea of spiking neurons into P systems. To attain the solution of optimization problems, P systems are used to properly organize evolutionary operators of heuristic approaches, which are named as membrane-inspired evolutionary algorithms (MIEAs). This paper proposes a novel way to design a P system for directly obtaining the approximate solutions of combinatorial optimization problems without the aid of evolutionary operators like in the case of MIEAs. To this aim, an extended spiking neural P system (ESNPS) has been proposed by introducing the probabilistic selection of evolution rules and multi-neurons output and a family of ESNPS, called optimization spiking neural P system (OSNPS), are further designed through introducing a guider to adaptively adjust rule probabilities to approximately solve combinatorial optimization problems. Extensive experiments on knapsack problems have been reported to experimentally prove the viability and effectiveness of the proposed neural system.

  20. Adjoint optimization of natural convection problems: differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.

    2016-06-01

    Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but a theoretical investigation of such flows has rarely been done. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity at three Prandtl numbers (Pr = 0.15-7) at super-critical conditions. All results and implementations were done with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizons and weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions. According to the authors' knowledge, this behavior is illustrated here

  1. Mathematical theory of a relaxed design problem in structural optimization

    NASA Technical Reports Server (NTRS)

    Kikuchi, Noboru; Suzuki, Katsuyuki

    1990-01-01

    Various attempts have been made to construct a rigorous mathematical theory of optimization for the size, shape, and topology (i.e. layout) of an elastic structure. If these are represented by a finite number of parametric functions, as Armand described, it is possible to construct an existence theory of the optimum design using a compactness argument in a finite dimensional design space or a closed admissible set of a finite dimensional design space. However, if the admissible design set is a subset of a non-reflexive Banach space such as L^infinity(Omega), construction of the existence theory of the optimum design suddenly becomes difficult and requires extending (i.e. generalizing) the design problem to a much wider class of designs that is compatible with the mechanics of structures in the sense of the variational principle. Starting from the study by Cheng and Olhoff, Lurie, Cherkaev, and Fedorov introduced a new concept of convergence of design variables in a generalized sense and constructed the 'G-Closure' theory of an extended (relaxed) optimum design problem. A similar, though largely independent, attempt can also be found in Kohn and Strang, in which the shape and topology optimization problem is relaxed to allow the use of perforated composites rather than restricting it to the usual solid structures. An identical idea is also stated in Murat and Tartar using the notion of the homogenization theory. That is, by introducing the possibility of micro-scale perforation together with the theory of homogenization, the optimum design problem is relaxed so that its mathematical theory can be constructed. It is also noted that this type of relaxed design problem is perfectly matched to the variational principle in structural mechanics.

  2. New attitude penalty functions for spacecraft optimal control problems

    SciTech Connect

    Schaub, H.; Junkins, J.L.; Robinett, R.D.

    1996-03-01

    A solution of a spacecraft optimal control problem, whose cost function relies on an attitude description, usually depends on the choice of attitude coordinates used. A problem could be solved using 3-2-1 Euler angles or using classical Rodriguez parameters and yield two different 'optimal' solutions, unless the performance index is invariant with respect to the attitude coordinate choice. Another problem arising with many attitude coordinates is that they have no sense of when a body has tumbled beyond 180° from the reference attitude. In many such cases it would be easier (i.e. cost less) to let the body complete the revolution than to force it to reverse the rotation and return to the desired attitude. This paper develops a universal attitude penalty function g() whose value is independent of the attitude coordinates chosen to represent it. Furthermore, this function achieves its maximum value only when a principal rotation of ±180° from the target state is performed. This implicitly permits the g() function to sense the shortest rotational distance back to the reference state. An attitude penalty function which depends on the Modified Rodrigues Parameters (MRP) is also presented. These recently discovered MRPs are a non-singular three-parameter set which can describe any three-dimensional attitude. This MRP penalty function is simpler than the attitude coordinate independent g() function, but retains the useful property of avoiding lengthy principal rotations of more than ±180°.
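
    The paper's specific g() function is not reproduced in this record, but the underlying idea, penalising the principal rotation angle itself so that the penalty is independent of the chosen attitude coordinates, can be sketched as follows. The 1 - cos(Phi) form is an illustrative stand-in rather than the paper's function, and the MRP-to-angle conversion uses the standard relation |sigma| = tan(Phi/4).

```python
import numpy as np

def principal_angle_from_mrp(sigma):
    """Principal rotation angle Phi recovered from Modified Rodrigues Parameters,
    using the standard relation |sigma| = tan(Phi / 4)."""
    return 4.0 * np.arctan(np.linalg.norm(sigma))

def attitude_penalty(sigma):
    """Coordinate-independent penalty: depends only on the principal angle and
    reaches its maximum exactly at Phi = +/-180 degrees (illustrative choice)."""
    phi = principal_angle_from_mrp(sigma)
    return 1.0 - np.cos(phi)

for deg in (0, 90, 179, 180):
    phi = np.radians(deg)
    sigma = np.tan(phi / 4.0) * np.array([0.0, 0.0, 1.0])  # rotation about the z-axis
    print(deg, attitude_penalty(sigma))
```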

  3. A new approach to the optimal target selection problem

    NASA Astrophysics Data System (ADS)

    Elson, E. C.; Bassett, B. A.; van der Heyden, K.; Vilakazi, Z. Z.

    2007-03-01

    Context: This paper addresses a common problem in astronomy and cosmology: to optimally select a subset of targets from a larger catalog. A specific example is the selection of targets from an imaging survey for multi-object spectrographic follow-up. Aims: We present a new heuristic optimisation algorithm, HYBRID, for this purpose and undertake detailed studies of its performance. Methods: HYBRID combines elements of the simulated annealing, MCMC and particle-swarm methods and is particularly successful in cases where the survey landscape has multiple curvature or clustering scales. Results: HYBRID consistently outperforms the other methods, especially in high-dimensionality spaces with many extrema. This means many fewer simulations must be run to reach a given performance confidence level and implies very significant advantages in solving complex or computationally expensive optimisation problems. Conclusions: HYBRID outperforms both MCMC and SA in all cases, including optimisation of high dimensional continuous surfaces, indicating that HYBRID is useful far beyond the specific problem of optimal target selection. Future work will apply HYBRID to target selection for the new 10 m Southern African Large Telescope in South Africa.

  4. On the robust optimization to the uncertain vaccination strategy problem

    SciTech Connect

    Chaerani, D. Anggriani, N. Firdaniza

    2014-02-21

    In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is maintained below 1. This means that, while keeping the vaccination coverage as small as possible, we still need to confine the epidemic to the small number of people who are already infected. In this paper, we discuss the case of a vaccination strategy, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starczak (see [2]). When parameter uncertainty is involved, Tanner et al. (see [9]) propose obtaining the optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the obtained result can be computed by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
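
    Neither the referenced vaccination model nor its exact robust counterpart appears in this record. The sketch below only illustrates the general mechanism the abstract points to: a linear constraint whose coefficients lie in an ellipsoidal uncertainty set is immunised by a second-order cone constraint. The cvxpy modelling layer, the variable names, and all numerical values are assumptions for illustration.

```python
import cvxpy as cp
import numpy as np

# Minimise total vaccination coverage c^T x subject to an uncertain linear
# constraint a^T x >= b, where a is only known to lie in the ellipsoid
# {a_bar + P u : ||u||_2 <= 1}.
n = 4                                   # number of population groups (illustrative)
c = np.ones(n)                          # coverage cost per group
a_bar = np.array([0.8, 1.2, 0.9, 1.1])  # nominal effect of vaccinating each group
P = 0.1 * np.eye(n)                     # shape of the ellipsoidal uncertainty
b = 1.0                                 # required reduction

x = cp.Variable(n, nonneg=True)
# Robust counterpart: the worst case over the ellipsoid tightens the constraint
# by the dual-norm term ||P.T @ x||_2, giving a second-order cone program.
constraints = [a_bar @ x - cp.norm(P.T @ x, 2) >= b, x <= 1]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print("robust coverage levels:", np.round(x.value, 3))
```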

  5. Resource efficient gadgets for compiling adiabatic quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; O'Gorman, Bryan; Aspuru-Guzik, Alán

    2013-11-01

    We develop a resource efficient method by which the ground-state of an arbitrary k-local, optimization Hamiltonian can be encoded as the ground-state of a (k-1)-local optimization Hamiltonian. This result is important because adiabatic quantum algorithms are often most easily formulated using many-body interactions but experimentally available interactions are generally 2-body. In this context, the efficiency of a reduction gadget is measured by the number of ancilla qubits required as well as the amount of control precision needed to implement the resulting Hamiltonian. First, we optimize methods of applying these gadgets to obtain 2-local Hamiltonians using the least possible number of ancilla qubits. Next, we show a novel reduction gadget which minimizes control precision and a heuristic which uses this gadget to compile 3-local problems with a significant reduction in control precision. Finally, we present numerics which indicate a substantial decrease in the resources required to implement randomly generated, 3-body optimization Hamiltonians when compared to other methods in the literature.

  6. On convex least squares estimation when the truth is linear.

    PubMed

    Chen, Yining; Wellner, Jon A

    2016-01-01

    We prove that the convex least squares estimator (LSE) attains an n^(-1/2) pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when one uses the derivative of the convex LSE to perform derivative estimation. These asymptotic results facilitate a new consistent testing procedure for linearity against a convex alternative. Moreover, we show that the convex LSE adapts to the optimal rate at the boundary points of the region where the truth is linear, up to a log-log factor. These conclusions are valid in the context of both density estimation and regression function estimation.
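
    On an equally spaced design grid, the convex LSE for regression can be computed as a finite-dimensional quadratic program by requiring the fitted values to have nonnegative second differences. The sketch below is a minimal illustration of that formulation, not the estimator code used in the paper; cvxpy and the synthetic data are assumptions.

```python
import cvxpy as cp
import numpy as np

# Synthetic regression data whose true mean function is linear on part of the range.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.maximum(0.0, x - 0.5) + 0.05 * rng.standard_normal(x.size)

# Convex least squares fit on an equally spaced grid: the fitted values must
# form a convex sequence, i.e. have nonnegative second differences.
theta = cp.Variable(x.size)
objective = cp.Minimize(cp.sum_squares(y - theta))
constraints = [cp.diff(theta, 2) >= 0]
cp.Problem(objective, constraints).solve()

fit = theta.value
print("max |residual| of the convex fit:", np.abs(y - fit).max().round(3))
```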

  7. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  8. Project impact analysis as an optimal control problem

    SciTech Connect

    Anandalingam, G.

    1981-01-01

    This paper analyzes the effects of a major investment project on a multi-sector less developed economy. Single investment projects with external effects reaching across the entire economy are frequently encountered in developing countries. This study concentrates on the Mahaweli Ganga Development Project in Sri Lanka, a multi-dam irrigation and hydroelectric power project. The Mahaweli Project calls for an annual investment level, in 1970 prices, of Rs 2200 million (US $150 million) over a period of six years, which is 50 percent of the annual expenditure of the government. The project would thus require a large fraction of total investment over a medium term planning period and would materially alter the existing supply and demand for major goods and services. The project is sufficiently large that its effect is economy-wide. The model we use is a dynamic input-output optimizing model having the mathematical structure of an Optimal Control problem.

  9. A Review on Medical Image Registration as an Optimization Problem.

    PubMed

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-08-01

    In the course of clinical treatment, several medical media are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review aims to provide a comprehensive reference source for researchers who treat image registration as an optimization problem. The essence of image registration is establishing the spatial association between two or more different images and obtaining the transformation that describes their spatial relationship. For medical image registration, the process is not fixed; its core purpose is finding the conversion relationship between different images. The major steps of image registration include geometric transformation, image combination, image similarity measurement, iterative optimization, and interpolation. The contribution of this review is to sort related image registration research methods and to provide a brief reference for researchers about image registration.
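
    As a concrete toy illustration of registration posed as optimization (not tied to any particular method surveyed in the review), the sketch below registers two 1-D signals by minimising a sum-of-squared-differences similarity measure over a single translation parameter; the signals and the optimiser choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Reference signal and a translated "moving" signal to be registered.
x = np.linspace(0, 10, 500)
reference = np.exp(-(x - 4.0) ** 2)
moving = np.exp(-(x - 5.3) ** 2)          # true shift: 1.3

def ssd(shift):
    """Similarity measure: sum of squared differences after resampling the
    moving signal at translated coordinates (linear interpolation)."""
    resampled = np.interp(x, x - shift, moving)
    return np.sum((reference - resampled) ** 2)

result = minimize_scalar(ssd, bounds=(-3, 3), method="bounded")
print("estimated shift:", round(result.x, 3))   # should be close to 1.3
```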

  10. Multigrid methods for parabolic distributed optimal control problems

    NASA Astrophysics Data System (ADS)

    Borzì, Alfio

    2003-08-01

    Multigrid schemes that solve parabolic distributed optimality systems discretized by finite differences are investigated. Accuracy properties of the finite difference approximation are discussed and validated. Two multigrid methods are considered which are based on a robust relaxation technique and use two different coarsening strategies: semicoarsening and standard coarsening. The resulting multigrid algorithms show robustness with respect to changes in the value of ν, the weight of the cost of the control, provided that ν is sufficiently small. Fourier mode analysis is used to investigate the dependence of the linear two-grid convergence factor on ν and on the discretization parameters. Results of numerical experiments are reported that demonstrate the sharpness of the Fourier analysis estimates. A multigrid algorithm that solves optimal control problems with box constraints on the control is also considered.

  11. Multiresolution subspace-based optimization method for inverse scattering problems.

    PubMed

    Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea

    2011-10-01

    This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.

  12. The Projection Neural Network for Solving Convex Nonlinear Programming

    NASA Astrophysics Data System (ADS)

    Yang, Yongqing; Xu, Xianyun

    In this paper, a projection neural network for solving convex optimization problems is investigated. Using Lyapunov stability theory and the LaSalle invariance principle, the proposed network is shown to be globally stable and to converge to the exact optimal solution. Two examples show the effectiveness of the proposed neural network model.
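
    A discretised version of such projection dynamics for a small box-constrained convex quadratic program can be sketched as follows. The specific network studied in the paper may differ; the problem data, step sizes, and the explicit Euler integration are assumptions.

```python
import numpy as np

# Convex QP: minimise 0.5 x^T Q x + c^T x over the box 0 <= x <= 1.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
lo, hi = 0.0, 1.0

def project(z):
    """Projection onto the feasible box (the constraint set Omega)."""
    return np.clip(z, lo, hi)

def grad(x):
    return Q @ x + c

# Projection network dynamics dx/dt = -x + P_Omega(x - alpha * grad f(x)),
# integrated here with a simple explicit Euler scheme.
x = np.array([0.9, 0.9])
alpha, h = 0.2, 0.1
for _ in range(2000):
    x = x + h * (project(x - alpha * grad(x)) - x)

print("equilibrium (approximate minimiser):", np.round(x, 4))
```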

  13. Exactly Solvable Hierarchical Optimization Problem Related to Percolation

    NASA Astrophysics Data System (ADS)

    Fink, Thomas M.; Ball, Robin C.

    1996-04-01

    We consider a sequence of elementary decisions which must be made in light of successive information learned. A key feature is that the decisions must balance the reduction of immediate cost against learning information and hence securing a wider range of future options, a conflict which motivates us to attach a value to information. We analytically derive an optimal decision policy; while each individual decision is elementary, the solution to the collective problem, which may be interpreted as a novel percolation model, exhibits a phase transition and finite size scaling.

  14. On representation formulas for long run averaging optimal control problem

    NASA Astrophysics Data System (ADS)

    Buckdahn, R.; Quincampoix, M.; Renault, J.

    2015-12-01

    We investigate an optimal control problem with an averaging cost. The asymptotic behaviour of the values is a classical problem in ergodic control. To study the long run averaging we consider both Cesàro and Abel means. A main result of the paper says that there is at most one possible accumulation point - in the uniform convergence topology - of the values, when the time horizon of the Cesàro means converges to infinity or the discount factor of the Abel means converges to zero. This unique accumulation point is explicitly described by representation formulas involving probability measures on the state and control spaces. As a byproduct we obtain the existence of a limit value whenever the Cesàro or Abel values are equicontinuous. Our approach allows us to generalise several results in ergodic control, and in particular it allows us to cope with cases where the limit value is not constant with respect to the initial condition.

  15. Finite element solution of optimal control problems with inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1990-01-01

    A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.

  16. Can linear superiorization be useful for linear optimization problems?

    NASA Astrophysics Data System (ADS)

    Censor, Yair

    2017-04-01

    Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
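
    A minimal sketch of the LinSup idea follows: interleave diminishing perturbations that reduce the target value c^T x with a projection-based feasibility-seeking sweep over linear inequalities Ax <= b. The toy problem, the cyclic half-space projections, and the perturbation schedule are assumptions, not the paper's experimental setup.

```python
import numpy as np

# Toy data: reduce c^T x while seeking feasibility of A x <= b (all values illustrative).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])
c = np.array([1.0, 2.0])

def project_halfspace(x, a, beta):
    """Orthogonal projection of x onto the half-space a^T x <= beta (if violated)."""
    viol = a @ x - beta
    return x - (viol / (a @ a)) * a if viol > 0 else x

x = np.array([3.0, 3.0])
step, decay = 0.5, 0.98            # summable perturbation sizes (perturbation resilience)
for k in range(500):
    # Superiorization step: nudge the iterate toward a smaller target value c^T x.
    x = x - step * (decay ** k) * c / np.linalg.norm(c)
    # Feasibility-seeking sweep: cyclic projections onto the violated half-spaces.
    for a_i, b_i in zip(A, b):
        x = project_halfspace(x, a_i, b_i)

print("feasible point:", np.round(x, 3), "target value:", round(c @ x, 3))
```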

  17. Finite element solution of optimal control problems with inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1990-01-01

    A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.

  18. Fast solvers for optimal control problems from pattern formation

    NASA Astrophysics Data System (ADS)

    Stoll, Martin; Pearson, John W.; Maini, Philip K.

    2016-01-01

    The modeling of pattern formation in biological systems using various models of reaction-diffusion type has been an active research topic for many years. We here look at a parameter identification (or PDE-constrained optimization) problem where the Schnakenberg and Gierer-Meinhardt equations, two well-known pattern formation models, form the constraints to an objective function. Our main focus is on the efficient solution of the associated nonlinear programming problems via a Lagrange-Newton scheme. In particular we focus on the fast and robust solution of the resulting large linear systems, which are of saddle point form. We illustrate this by considering several two- and three-dimensional setups for both models. Additionally, we discuss an image-driven formulation that allows us to identify parameters of the model to match an observed quantity obtained from an image.

  19. Evaluating the importance of the convex hull in solving the Euclidean version of the traveling salesperson problem: reply to Lee and Vickers (2000).

    PubMed

    MacGregor, J N; Ormerod, T C

    2000-10-01

    Lee and Vickers (2000) suggest that the results of MacGregor and Ormerod (1996), showing that the response uncertainty to traveling salesperson problems (TSPs) increases with increasing numbers of nonboundary points, may have resulted as an artifact of constraints imposed in the construction of stimuli. The fact that similar patterns of results have been obtained for our "constrained" stimuli, for a stimulus constructed under different constraints, for 13 randomly generated stimuli, and for random and patterned 48-point problems provides empirical evidence that the results are not artifactual. Lee and Vickers further suggest that, even if not artifactual, the results are in principle limited to arrays of fewer than 50 points and that, beyond this, the total number of points and number of nonboundary points are "diagnostically equivalent." This claim seems to us incorrect, since arrays of any size can be constructed that will permit experimental tests of whether problem difficulty is influenced by the number of nonboundary points, or the total number of points, or both. We present a reanalysis of our original data using hierarchical regression analysis which indicates that both factors may influence problem complexity.

  20. Analysis of optimal and near-optimal continuous-thrust transfer problems in general circular orbit

    NASA Astrophysics Data System (ADS)

    Kéchichian, Jean A.

    2009-09-01

    A pair of practical problems in optimal continuous-thrust transfer in general circular orbit is analyzed within the context of analytic averaging for rapid computations leading to near-optimal solutions. The first problem addresses the minimum-time transfer between inclined circular orbits by proposing an analytic solution based on a split-sequence strategy in which the equatorial inclination and node controls are done separately by optimally selecting the intermediate orbit size at the sequence switch point that results in the minimum-time transfer. The consideration of the equatorial inclination and node state variables besides the orbital velocity variable is needed to further account for the important J2 perturbation that precesses the orbit plane during the transfer, unlike the thrust-only case in which it is sufficient to consider the relative inclination and velocity variables thus reducing the dimensionality of the system equations. Further extensions of the split-sequence strategy with analytic J2 effect are thus possible for equal computational ease. The second problem addresses the maximization of the equatorial inclination in fixed time by adopting a particular thrust-averaging scheme that controls only the inclination and velocity variables, leaving the node at the mercy of the J2 precession, providing robust fast-converging codes that lead to efficient near-optimal solutions. Example transfers for both sets of problems are solved showing near-optimal features as far as transfer time is concerned, by directly comparing the solutions to "exact" purely numerical counterparts that rely on precision integration of the raw unaveraged system dynamics with continuously varying thrust vector orientation in three-dimensional space.

  1. Enhanced ant colony optimization for inventory routing problem

    NASA Astrophysics Data System (ADS)

    Wong, Lily; Moin, Noor Hasnah

    2015-10-01

    The inventory routing problem (IRP) integrates and coordinates two important components of supply chain management: transportation and inventory management. We consider a one-to-many IRP network for a finite planning horizon. The demand for each product is deterministic and time varying, and a fleet of capacitated homogeneous vehicles, housed at a depot/warehouse, delivers the products from the warehouse to meet the demand specified by the customers in each period. The inventory holding cost is product specific and is incurred at the customer sites. The objective is to determine the amount of inventory and to construct delivery routes that minimize both the total transportation and inventory holding cost while ensuring each customer's demand is met over the planning horizon. The problem is formulated as a mixed integer programming problem and is solved using CPLEX 12.4 to obtain the lower and upper bound (best integer) for each instance considered. We propose an enhanced ant colony optimization (ACO) to solve the problem, and the constructed routes are improved using local search. Computational experiments demonstrating the effectiveness of our approach are presented.

  2. Convex Hull Aided Registration Method (CHARM).

    PubMed

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2016-08-31

    Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. Firstly, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve nonrigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.

  3. Dynamic Grover search: applications in recommendation systems and optimization problems

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Indranil; Khan, Shahzor; Singh, Vanshdeep

    2017-06-01

    In recent years, we have seen that the Grover search algorithm (Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996), by using quantum parallelism, has revolutionized the solution of a huge class of NP problems in comparison to classical systems. In this work, we explore the idea of extending the Grover search algorithm to approximate algorithms. Here we try to analyze the applicability of Grover search to process an unstructured database with a dynamic selection function, in contrast to the static selection function used in the original work (Grover in Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996). We show that this alteration allows us to extend the application of Grover search to the field of randomized search algorithms. Further, we use the dynamic Grover search algorithm to define the goals for a recommendation system, based on which we propose a recommendation algorithm that uses a binomial similarity distribution space, giving us a quadratic speedup over traditional classical unstructured recommendation systems. Finally, we see how dynamic Grover search can be used to tackle a wide range of optimization problems where we improve complexity over existing optimization algorithms.

  4. Stochastic convex sparse principal component analysis.

    PubMed

    Baytas, Inci M; Lin, Kaixiang; Wang, Fei; Jain, Anil K; Zhou, Jiayu

    2016-12-01

    Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, there is an obvious disadvantage of traditional PCA when it is applied to analyze data where interpretability is important. In applications, where the features have some physical meanings, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. The sparse PCA can be formulated as an ℓ1 regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. Stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to the high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
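
    The proximal-gradient machinery mentioned above is easy to illustrate on a generic l1-regularised least-squares problem. The sketch below is the standard ISTA scheme, not the variance-reduced Cvx-SPCA algorithm of the paper; the data and step size are assumptions.

```python
import numpy as np

# Generic l1-regularised problem: minimise 0.5 * ||Ax - y||^2 + lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(80)
lam = 0.1

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the smooth gradient
x = np.zeros(200)
for _ in range(500):
    grad = A.T @ (A @ x - y)                # exact gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)

print("nonzeros recovered:", int((np.abs(x) > 1e-3).sum()))
```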

  5. Phylogenetic network analysis as a parsimony optimization problem.

    PubMed

    Wheeler, Ward C

    2015-09-17

    Many problems in comparative biology are, or are thought to be, best expressed as phylogenetic "networks" as opposed to trees. In trees, vertices may have only a single parent (ancestor), while networks allow for multiple parent vertices. There are two main interpretive types of networks, "softwired" and "hardwired." The parsimony cost of hardwired networks is based on all changes over all edges, hence must be greater than or equal to the best tree cost contained ("displayed") by the network. This is in contrast to softwired, where each character follows the lowest parsimony cost tree displayed by the network, resulting in costs which are less than or equal to the best display tree. Neither situation is ideal since hard-wired networks are not generally biologically attractive (since individual heritable characters can have more than one parent) and softwired networks can be trivially optimized (containing the best tree for each character). Furthermore, given the alternate cost scenarios of trees and these two flavors of networks, hypothesis testing among these explanatory scenarios is impossible. A network cost adjustment (penalty) is proposed to allow phylogenetic trees and soft-wired phylogenetic networks to compete equally on a parsimony optimality basis. This cost is demonstrated for several real and simulated datasets. In each case, the favored graph representation (tree or network) matched expectation or simulation scenario. The softwired network cost regime proposed here presents a quantitative criterion for an optimality-based search procedure where trees and networks can participate in hypothesis testing simultaneously.

  6. Solving the Traveling Salesman's Problem Using the African Buffalo Optimization

    PubMed Central

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam

    2016-01-01

    This paper proposes the African Buffalo Optimization (ABO) which is a new metaheuristic algorithm that is derived from careful observation of the African buffalos, a species of wild cows, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search of food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 benchmark symmetric Traveling Salesman's Problems and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits as well as by tapping from the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained by using other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive. PMID:26880872

  7. Solving the Traveling Salesman's Problem Using the African Buffalo Optimization.

    PubMed

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam

    2016-01-01

    This paper proposes the African Buffalo Optimization (ABO) which is a new metaheuristic algorithm that is derived from careful observation of the African buffalos, a species of wild cows, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search of food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 benchmark symmetric Traveling Salesman's Problems and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits as well as by tapping from the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained by using other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive.

  8. Radio interferometric gain calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Smirnov, O. M.; Tasse, C.

    2015-05-01

    Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as STEFCAL and peeling can be understood as special cases of this, and place them in the context of the general formalism. Finally, we present an implementation and some applied results of COHJONES, another specialized direction-dependent calibration algorithm derived from the formalism.

  9. A convex hull inclusion test.

    PubMed

    Bailey, T; Cowles, J

    1987-02-01

    A new characterization of the interior of the convex hull of a finite point set is given. An inclusion test based on this characterization is, on average, almost linear in the number of points times the dimensionality.
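
    The record does not spell out the new characterization, but a standard baseline inclusion test is easy to state: p lies in the convex hull of x_1, ..., x_m exactly when p is a convex combination of those points, which is a linear feasibility problem. The sketch below uses scipy's LP solver; the toy data and the helper name are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(p, points):
    """Feasibility test: is p a convex combination of the given points?
    Solve for weights w >= 0 with sum(w) = 1 and points^T w = p."""
    m, d = points.shape
    A_eq = np.vstack([points.T, np.ones((1, m))])     # (d + 1) x m system
    b_eq = np.append(p, 1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.success

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(in_convex_hull(np.array([0.5, 0.5]), square))   # True
print(in_convex_hull(np.array([1.5, 0.5]), square))   # False
```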

  10. Solving Globally-Optimal Threading Problems in ''Polynomial-Time''

    SciTech Connect

    Uberbacher, E.C.; Xu, D.; Xu, Y.

    1999-04-12

    Computational protein threading is a powerful technique for recognizing native-like folds of a protein sequence from a protein fold database. In this paper, we present an improved algorithm (over our previous work) for solving the globally-optimal threading problem, and illustrate how the computational complexity and the fold recognition accuracy of the algorithm change as the cutoff distance for pairwise interactions changes. For a given fold of m residues and M core secondary structures (or simply cores) and a protein sequence of n residues, the algorithm guarantees to find a sequence-fold alignment (threading) that is globally optimal, measured collectively by (1) the singleton match fitness, (2) pairwise interaction preference, and (3) alignment gap penalties, in O(mn + MnN^(1.5C-1)) time and O(mn + nN^(C-1)) space. C, which we term the topological complexity of the fold, is a value that characterizes the overall structure of the considered pairwise interactions in the fold, which are typically determined by a specified cutoff distance between the beta carbon atoms of a pair of amino acids in the fold. C is typically a small positive integer. N represents the maximum number of possible alignments between an individual core of the fold and the protein sequence when its neighboring cores are already aligned, and its value is significantly less than n. When interacting amino acids are required to see each other, C is bounded from above by a small integer no matter how large the cutoff distance is. This indicates that the protein threading problem is polynomial-time solvable if the condition of seeing each other between interacting amino acids is sufficient for accurate fold recognition. A number of extensions have been made to our basic threading algorithm to allow finding a globally-optimal threading under various constraints, which include consistency with (1) specified secondary structures (both cores and loops), (2) disulfide bonds, (3) active sites, etc.

  11. Optimality Functions in Stochastic Programming

    DTIC Science & Technology

    2009-12-02

    nonconvex. Non-convex stochastic optimization problems arise in such diverse applications as estimation of mixed logit models [2], engineering design... first-order necessary optimality conditions; see for example Propositions 3.3.1 and 3.3.5 in [7] or Theorem 2.2.4 in [25]. If the evaluation of f j... procedures for validation analysis of a candidate point x ∈ IR^n. Since P may be nonconvex, we focus on first-order necessary optimality conditions as

  12. Convex polytopes and quantum separability

    SciTech Connect

    Holik, F.; Plastino, A.

    2011-12-15

    We advance a perspective on the entanglement issue that appeals to the Schlienz-Mahler measure [Phys. Rev. A 52, 4396 (1995)]. Related to it, we propose a criterion based on the consideration of convex subsets of quantum states. This criterion generalizes a property of product states to convex subsets (of the set of quantum states) and is able to uncover an interesting geometrical property of the separability property.

  13. Difficulties in Evolutionary Multiobjective Optimization for Many-Objective Optimization Problems and Their Scalability Improvement Techniques

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Noritaka; Nojima, Yusuke; Ishibuchi, Hisao

    In this paper, we examine the behavior of evolutionary multiobjective optimization (EMO) algorithms to clarify the difficulties in their scalability to many-objective optimization problems. Whereas EMO algorithms usually work well on two-objective problems, it has also been reported that they do not work well on many-objective problems. First, we examine the behavior of the most well-known and frequently-used Pareto-based EMO algorithm (i.e., NSGA-II) on many-objective 0/1 knapsack problems. Experimental results show that the search ability of NSGA-II is severely deteriorated by the increase in the number of objectives. This is because the selection pressure toward the Pareto front is severely weakened by the increase in the number of non-dominated solutions. Next we briefly review some approaches to the scalability improvement of EMO algorithms to many-objective problems. Then we examine their effects on the search ability of NSGA-II. Experimental results show that the improvement in the convergence of solutions to the Pareto front often leads to the decrease in their diversity.

  14. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    SciTech Connect

    Kmet', Tibor; Kmet'ova, Maria

    2009-09-09

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed by [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with the adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal control with control and state constraints.

  15. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed by [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with the adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal control with control and state constraints.

  16. The Convex Coordinates of the Symmedian Point

    ERIC Educational Resources Information Center

    Boyd, J. N.; Raychowdhury, P. N.

    2006-01-01

    In this note, we recall the convex (or barycentric) coordinates of the points of a closed triangular region. We relate the convex and trilinear coordinates of the interior points of the triangular region. We use the relationship between convex and trilinear coordinates to calculate the convex coordinates of the symmedian point of the triangular…
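
    For a triangle with side lengths a, b, c (opposite vertices A, B, C), the symmedian point has convex (barycentric) coordinates proportional to a^2, b^2, c^2. A tiny numerical check is sketched below; the example triangle is an illustrative assumption.

```python
import numpy as np

def symmedian_convex_coords(a, b, c):
    """Convex (barycentric) coordinates of the symmedian point: proportional
    to the squared side lengths, normalised to sum to one."""
    w = np.array([a, b, c], dtype=float) ** 2
    return w / w.sum()

def point_from_convex_coords(vertices, coords):
    """Cartesian location of the point with the given convex coordinates."""
    return coords @ vertices

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
coords = symmedian_convex_coords(a, b, c)
print("convex coordinates:", np.round(coords, 4))
print("symmedian point:", np.round(point_from_convex_coords(np.vstack([A, B, C]), coords), 4))
```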

  17. An optimal iterative solver for the Stokes problem

    SciTech Connect

    Wathen, A.; Silvester, D.

    1994-12-31

    Discretisations of the classical Stokes problem for slow viscous incompressible flow give rise to systems of equations in matrix form for the velocity u and the pressure p, where the coefficient matrix is symmetric but necessarily indefinite. The square submatrix A is symmetric and positive definite and represents a discrete (vector) Laplacian, and the submatrix C may be the zero matrix or more generally will be symmetric positive semi-definite. For 'stabilised' discretisations (C ≠ 0) and discretisations which are inherently 'stable' (C = 0), and so do not admit spurious pressure components even as the mesh size h approaches zero, the Schur complement of the matrix has spectral condition number independent of h (given also that B is bounded). Here the authors show how this property, together with a multigrid preconditioner only for the Laplacian block A, yields an optimal solver for the Stokes problem through use of the Minimum Residual iteration. That is, combining Minimum Residual iteration for the matrix equation with a block preconditioner which comprises a small number of multigrid V-cycles for the Laplacian block A together with a simple diagonal scaling block provides an iterative solution procedure for which the computational work grows only linearly with the problem size.
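
    A small sketch of the idea follows: MINRES applied to a symmetric indefinite saddle-point system with a block-diagonal preconditioner. For simplicity the sketch replaces the multigrid V-cycles with an exact solve of the Laplacian block and uses identity scaling for the second block; the 1-D discretisation, the sizes, and the random coupling block are purely illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, splu, LinearOperator

# Small symmetric indefinite saddle-point system K [u; p] = rhs with
# K = [[A, B^T], [B, 0]], where A is a 1-D discrete Laplacian block.
n, m = 60, 20
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
rng = np.random.default_rng(0)
B = sp.csr_matrix(rng.standard_normal((m, n)) / np.sqrt(n))
K = sp.bmat([[A, B.T], [B, None]], format="csr")
rhs = rng.standard_normal(n + m)

# Block-diagonal preconditioner diag(A, I): an exact solve with A stands in for
# the multigrid V-cycles, and identity scaling is used for the pressure block.
A_solve = splu(A)
def apply_prec(v):
    out = np.empty_like(v)
    out[:n] = A_solve.solve(v[:n])
    out[n:] = v[n:]
    return out

M = LinearOperator((n + m, n + m), matvec=apply_prec, dtype=np.float64)
x, info = minres(K, rhs, M=M)
print("converged:", info == 0, "residual norm:", np.linalg.norm(K @ x - rhs))
```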

  18. Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis

    DTIC Science & Technology

    2016-10-11

    constrained quadratic programs, and the matrix completion problems with non-convex regularity. The project addresses a fundamental question: how to... efficiently solve these problems, such as to find a provably high quality approximate solution or to fast find a local solution with probable structure... applications in optimal and dynamic resource management, cardinality constrained quadratic programs, and the matrix completion problems with non

  19. Some Randomized Algorithms for Convex Quadratic Programming

    SciTech Connect

    Goldbach, R.

    1999-01-15

    We adapt some randomized algorithms of Clarkson [3] for linear programming to the framework of so-called LP-type problems, which was introduced by Sharir and Welzl [10]. This framework is quite general and allows a unified and elegant presentation and analysis. We also show that LP-type problems include minimization of a convex quadratic function subject to convex quadratic constraints as a special case, for which the algorithms can be implemented efficiently, if only linear constraints are present. We show that the expected running times depend only linearly on the number of constraints, and illustrate this by some numerical results. Even though the framework of LP-type problems may appear rather abstract at first, application of the methods considered in this paper to a given problem of that type is easy and efficient. Moreover, our proofs are in fact rather simple, since many technical details of more explicit problem representations are handled in a uniform manner by our approach. In particular, we do not assume boundedness of the feasible set as required in related methods.

  20. Solving Nonlinear Optimization Problems of Real Functions in Complex Variables by Complex-Valued Iterative Methods.

    PubMed

    Zhang, Songchuan; Xia, Youshen

    2016-12-28

    Much research has been devoted to complex-variable optimization problems due to their engineering applications. However, the complex-valued optimization method for solving complex-variable optimization problems is still an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables. One solves the complex-valued nonlinear programming problem with linear equality constraints. The other solves the complex-valued nonlinear programming problem with both linear equality constraints and an ℓ₁-norm constraint. Theoretically, we prove the global convergence of the proposed two complex-valued optimization algorithms under mild conditions. The proposed two algorithms can solve the complex-valued optimization problem completely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the proposed two algorithms are faster than several conventional real-valued optimization algorithms.

  1. Solving the optimal control problem of the parabolic PDEs in exploitation of oil

    NASA Astrophysics Data System (ADS)

    Effati, S.; Janfada, M.; Esmaeili, M.

    2008-04-01

    In this paper, the optimal control problem is governed by weakly coupled parabolic PDEs and involves pointwise state and control constraints. We use a measure theory method for solving this problem. In order to use the weak solution of the problem, the problem is first transformed into measure form. This problem is then reduced to a linear programming problem, from which we obtain an optimal measure that is approximated by a finite combination of atomic measures. We find piecewise-constant optimal control functions which provide an approximate control for the original optimal control problem.

  2. Applying Soft Arc Consistency to Distributed Constraint Optimization Problems

    NASA Astrophysics Data System (ADS)

    Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi

    The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements of efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom up manner. Using the directed soft AC, the global lower bound value of cost functions is passed up to the root node of the pseudo-tree. It also totally reduces values of binary cost functions. As a result, the original problem is converted to an equivalent problem. The equivalent problem is efficiently solved using common search algorithms. Therefore, no major modifications are necessary in search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.

  3. Center for Shape Optimization and Material Layout

    DTIC Science & Technology

    1992-01-01

    that eventually participate in the optimal layout for non-self-adjoint problems. Currently, these microstructures are worked out numerically [6... the fourth order problem arising in the theory of plates. 1.2 The Fourth Order Problems. Direct Approach in the Optimal Design of Plates. The state of... constraint set. In fact, the constraint set is not only nonlinear, it is also non-smooth, and even non-convex. Worst of all, we do not even have an analytic

  4. Human opinion dynamics: An inspiration to solve complex optimization problems

    PubMed Central

    Kaur, Rishemjit; Kumar, Ritesh; Bhondekar, Amol P.; Kapur, Pawan

    2013-01-01

    Human interactions give rise to the formation of different kinds of opinions in a society. The study of formations and dynamics of opinions has been one of the most important areas in social physics. The opinion dynamics and associated social structure leads to decision making or so called opinion consensus. Opinion formation is a process of collective intelligence evolving from the integrative tendencies of social influence with the disintegrative effects of individualisation, and therefore could be exploited for developing search strategies. Here, we demonstrate that human opinion dynamics can be utilised to solve complex mathematical optimization problems. The results have been compared with a standard algorithm inspired from bird flocking behaviour and the comparison proves the efficacy of the proposed approach in general. Our investigation may open new avenues towards understanding the collective decision making. PMID:24141795

  5. Optimization-based interactive segmentation interface for multiregion problems.

    PubMed

    Baxter, John S H; Rajchl, Martin; Peters, Terry M; Chen, Elvis C S

    2016-04-01

    Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality.

  6. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  7. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  8. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

    Objective: In the course of clinical treatment, several medical imaging modalities are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review aims to provide a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is to establish the spatial association between two or more images and to obtain the transformation that describes their spatial relationship. For medical image registration, the process is not fixed; its core purpose is finding the conversion relationship between different images. Result: The major steps of image registration include the change of geometrical dimensions, image combination, image similarity measurement, iterative optimization and interpolation. Conclusion: The contribution of this review is to sort related image registration research methods and to provide a brief reference for researchers about image registration. PMID:28845149

  9. The protein folding problem: global optimization of the force fields.

    PubMed

    Scheraga, H A; Liwo, A; Oldziej, S; Czaplewski, C; Pillardy, J; Ripoll, D R; Vila, J A; Kazmierkiewicz, R; Saunders, J A; Arnautova, Y A; Jagielska, A; Chinchio, M; Nanias, M

    2004-09-01

    The evolutionary development of a theoretical approach to the protein folding problem, in our laboratory, is traced. The theoretical foundations and the development of a suitable empirical all-atom potential energy function and a global optimization search are examined. Whereas the all-atom approach has thus far succeeded for relatively small molecules and for alpha-helical proteins containing up to 46 residues, it has been necessary to develop a hierarchical approach to treat larger proteins. In the hierarchical approach to single- and multiple-chain proteins, global optimization is carried out for a simplified united residue (UNRES) description of a polypeptide chain to locate the region in which the global minimum lies. Conversion of the UNRES structures in this region to all-atom structures is followed by a local search in this region. The performance of this approach in successive CASP blind tests for predicting protein structure by an ab initio physics-based method is described. Finally, a recent attempt to compute a folding pathway is discussed.

  10. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  11. Multiagent optimization system for solving the traveling salesman problem (TSP).

    PubMed

    Xie, Xiao-Feng; Liu, Jiming

    2009-04-01

    The multiagent optimization system (MAOS) is a nature-inspired method, which supports cooperative search by the self-organization of a group of compact agents situated in an environment that shares certain public knowledge. Moreover, each agent in MAOS is an autonomous entity with personal declarative memory and behavioral components. In this paper, MAOS is refined for solving the traveling salesman problem (TSP), which is a classic hard computational problem. Based on a simplified MAOS version, in which each agent operates on extremely limited declarative knowledge, some simple and efficient components for solving TSP, including two improving heuristics based on a generalized edge assembly recombination, are implemented. Compared with metaheuristics in adaptive memory programming, MAOS is particularly suitable for supporting cooperative search. The experimental results on two TSP benchmark data sets show that MAOS is competitive with some state-of-the-art algorithms, including Lin-Kernighan-Helsgaun, IBGLK, and PHGA, although MAOS does not use any explicit local search during the run. The contributions of MAOS components are investigated. The analysis indicates that certain clues can help make suitable selections before time-consuming computation. More importantly, it shows that the cooperative search of agents can achieve good overall performance with a macro rule in switch mode, which deploys alternate search rules whose offline performances are negatively correlated. Using simple alternate rules may avoid the difficulty of seeking an omnipotent rule that is efficient for a large data set.

  12. Multi-label Moves for MRFs with Truncated Convex Priors

    NASA Astrophysics Data System (ADS)

    Veksler, Olga

    Optimization with graph cuts became very popular in recent years. As more applications rely on graph cuts, different energy functions are being employed. Recent evaluation of optimization algorithms showed that the widely used swap and expansion graph cut algorithms have an excellent performance for energies where the underlying MRF has Potts prior. Potts prior corresponds to assuming that the true labeling is piecewise constant. While surprisingly useful in practice, Potts prior is clearly not appropriate in many circumstances. However for more general priors, the swap and expansion algorithms do not perform as well. Both algorithms are based on moves that give each pixel a choice of only two labels. Therefore such moves can be referred to as binary moves. Recently, range moves that act on multiple labels simultaneously were introduced. As opposed to swap and expansion, each pixel has a choice of more than two labels in a range move. Therefore we call them multi-label moves. Range moves were shown to work better for problems with truncated convex priors, which imply a piecewise smooth labeling. Inspired by range moves, we develop several different variants of multi-label moves. We evaluate them on the problem of stereo correspondence and discuss their relative merits.
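
    For reference, one common form of the pairwise MRF energy with a truncated convex prior (a standard formulation from the literature, not necessarily this paper's exact notation) is

      \[
      E(f) \;=\; \sum_{p} D_p(f_p) \;+\; \sum_{(p,q)\in\mathcal{N}} w_{pq}\,\min\!\big(|f_p - f_q|^c,\; T\big),
      \]

    with c = 1 giving the truncated linear prior and c = 2 the truncated quadratic prior; for integer labels, truncation at T = 1 with c = 1 recovers the Potts prior discussed above.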

  13. Multiswarm comprehensive learning particle swarm optimization for solving multiobjective optimization problems

    PubMed Central

    Yu, Xiang; Zhang, Xueqing

    2017-01-01

    Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle’s personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run. PMID:28192508

  14. Robust semidefinite programming approach to the separability problem

    SciTech Connect

    Brandao, Fernando G.S.L.; Vianna, Reinaldo O.

    2004-12-01

    We express the optimization of entanglement witnesses for arbitrary bipartite states in terms of a class of convex optimization problems known as robust semidefinite programs (RSDPs). We propose, using well known properties of RSDPs, several sufficient tests for separability of mixed states. Our results are then generalized to multipartite density operators.

  15. A proof of convergence of the concave-convex procedure using Zangwill's theory.

    PubMed

    Sriperumbudur, Bharath K; Lanckriet, Gert R G

    2012-06-01

    The concave-convex procedure (CCCP) is an iterative algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, including sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though CCCP is widely used in many applications, its convergence behavior has not gotten a lot of specific attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe the analysis is not complete. The convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), proposed in the global optimization literature to solve general d.c. programs, whose proof relies on d.c. duality. In this note, we follow a different reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP. This underlines Zangwill's theory as a powerful and general framework to deal with the convergence issues of iterative algorithms, after also being used to prove the convergence of algorithms like expectation-maximization and generalized alternating minimization. In this note, we provide a rigorous analysis of the convergence of CCCP by addressing two questions: When does CCCP find a local minimum or a stationary point of the d.c. program under consideration? and when does the sequence generated by CCCP converge? We also present an open problem on the issue of local convergence of CCCP.
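
    A toy sketch of the CCCP iteration (not taken from the paper): for the d.c. function f(x) = x^4 - 2x^2 with the decomposition u(x) = x^4 and v(x) = 2x^2, each CCCP step minimizes u minus the linearization of v at the current iterate, which here reduces to the closed-form update x_{k+1} = cbrt(x_k). The objective is non-increasing along the iterates, which converge to the stationary point x = 1.

      # CCCP on f(x) = x^4 - 2x^2 = u(x) - v(x) with u(x) = x^4, v(x) = 2x^2.
      # Step: x_{k+1} = argmin_x u(x) - v'(x_k)*x  =>  4x^3 = 4x_k  =>  x = cbrt(x_k).
      import numpy as np

      f = lambda x: x**4 - 2.0 * x**2
      x = 2.0
      for k in range(15):
          x = np.cbrt(x)          # closed-form CCCP update for this toy problem
          print(k, x, f(x))       # f is non-increasing along the iterates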

  16. High-order entropy-based closures for linear transport in slab geometry II: A computational study of the optimization problem

    SciTech Connect

    Hauck, Cory D; Alldredge, Graham; Tits, Andre

    2012-01-01

    We present a numerical algorithm to implement entropy-based (M{sub N}) moment models in the context of a simple, linear kinetic equation for particles moving through a material slab. The closure for these models - as is the case for all entropy-based models - is derived through the solution of a constrained, convex optimization problem. The algorithm has two components. The first component is a discretization of the moment equations which preserves the set of realizable moments, thereby ensuring that the optimization problem has a solution (in exact arithmetic). The discretization is a second-order kinetic scheme which uses MUSCL-type limiting in space and a strong-stability-preserving, Runge-Kutta time integrator. The second component of the algorithm is a Newton-based solver for the dual optimization problem, which uses an adaptive quadrature to evaluate integrals in the dual objective and its derivatives. The accuracy of the numerical solution to the dual problem plays a key role in the time step restriction for the kinetic scheme. We study in detail the difficulties in the dual problem that arise near the boundary of realizable moments, where quadrature formulas are less reliable and the Hessian of the dual objective function is highly ill-conditioned. Extensive numerical experiments are performed to illustrate these difficulties. In cases where the dual problem becomes 'too difficult' to solve numerically, we propose a regularization technique to artificially move moments away from the realizable boundary in a way that still preserves local particle concentrations. We present results of numerical simulations for two challenging test problems in order to quantify the characteristics of the optimization solver and to investigate when and how frequently the regularization is needed.
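
    A schematic sketch of the dual problem's structure, under simplifying assumptions (two moments m(mu) = (1, mu) on mu in [-1, 1], an exponential entropy ansatz, fixed Gauss-Legendre quadrature in place of the adaptive quadrature described in the paper, and a plainly damped Newton iteration without the paper's safeguards):

      # Dual of a two-moment maximum-entropy closure on mu in [-1, 1]:
      #   minimize  phi(alpha) = int exp(alpha . m(mu)) dmu - alpha . rho
      # Gradient: int m exp(alpha . m) dmu - rho;  Hessian: int m m^T exp(alpha . m) dmu.
      import numpy as np

      nodes, weights = np.polynomial.legendre.leggauss(40)   # quadrature on [-1, 1]
      M = np.vstack([np.ones_like(nodes), nodes])            # basis m(mu), shape (2, n)

      def dual(alpha):
          return weights @ np.exp(alpha @ M) - alpha @ rho

      def moments(alpha):
          return M @ (weights * np.exp(alpha @ M))            # int m exp(alpha . m) dmu

      def hessian(alpha):
          g = weights * np.exp(alpha @ M)
          return (M * g) @ M.T                                # int m m^T exp(alpha . m) dmu

      alpha_true = np.array([0.3, 0.8])
      rho = moments(alpha_true)               # realizable target moments

      alpha = np.zeros(2)
      for _ in range(50):                     # damped Newton iteration on the dual
          grad = moments(alpha) - rho
          if np.linalg.norm(grad) < 1e-12:
              break
          step = np.linalg.solve(hessian(alpha), grad)
          t = 1.0
          while dual(alpha - t * step) > dual(alpha):         # simple backtracking
              t *= 0.5
          alpha -= t * step

      print("recovered multipliers:", alpha, " target:", alpha_true)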

  17. Pseudospectral Collocation Methods for the Direct Transcription of Optimal Control Problems

    DTIC Science & Technology

    2003-04-01

    solving optimal control problems for trajectory optimization, spacecraft attitude control, jet thruster control, missile guidance and many other... optimal control problems using a pseudospectral direct transcription method. These problems are stated here so that they may be referred to elsewhere... e.g., [7]. 2.3 Prototypical Examples Throughout this thesis two example problems are used to demonstrate various properties associated with solving

  18. Revisiting the method of characteristics via a convex hull algorithm

    NASA Astrophysics Data System (ADS)

    LeFloch, Philippe G.; Mercier, Jean-Marc

    2015-10-01

    We revisit the method of characteristics for shock wave solutions to nonlinear hyperbolic problems and propose a novel numerical algorithm, the convex hull algorithm (CHA), which allows us to compute both entropy dissipative solutions (satisfying all entropy inequalities) and entropy conservative (or multi-valued) solutions. From the multi-valued solutions determined by the method of characteristics, our algorithm "extracts" the entropy dissipative solutions, even after the formation of shocks. It applies to both convex and non-convex fluxes/Hamiltonians. We demonstrate the relevance of the proposed method with a variety of numerical tests, including conservation laws in one or two spatial dimensions and problems arising in fluid dynamics.

  19. Optimal Control Problem of Feeding Adaptations of Daphnia and Neural Network Simulation

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ova, Maria

    2010-09-01

    A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints and open final time. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network [9] and a recurrent neural network for solving nonlinear projection equations [10]. The proposed simulation method is illustrated by the optimal control problem of feeding adaptation of filter feeders of Daphnia. Results show that the adaptive critic based systematic approach and the neural network solution of nonlinear equations hold promise for obtaining the optimal control with control and state constraints and open final time.

  20. Solving a class of geometric programming problems by an efficient dynamic model

    NASA Astrophysics Data System (ADS)

    Nazemi, Alireza; Sharifi, Elahe

    2013-03-01

    In this paper, a neural network model is constructed on the basis of the duality theory, optimization theory, convex analysis theory, Lyapunov stability theory and LaSalle invariance principle to solve geometric programming (GP) problems. The main idea is to convert the GP problem into an equivalent convex optimization problem. A neural network model is then constructed for solving the obtained convex programming problem. By employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact optimal solution of the original problem. The simulation results also show that the proposed neural network is feasible and efficient.
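
    The conversion step the abstract refers to can be illustrated independently of the neural network model (the example posynomial below is an arbitrary choice, not taken from the paper): with the change of variables x = exp(y), a posynomial objective becomes a convex log-sum-exp function of y, which any smooth convex solver can minimize; mapping back gives the GP optimum.

      # GP -> convex reformulation for the posynomial f(x) = x1*x2 + x1/x2 + 1/x1.
      # With x = exp(y), log f(exp(y)) = logsumexp(A y + log c) is convex in y.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import logsumexp

      A = np.array([[ 1.0,  1.0],    # exponents of the three posynomial terms
                    [ 1.0, -1.0],
                    [-1.0,  0.0]])
      log_c = np.zeros(3)            # all term coefficients equal to 1

      def convex_objective(y):
          return logsumexp(A @ y + log_c)

      res = minimize(convex_objective, x0=np.zeros(2), method="BFGS")
      x_opt = np.exp(res.x)
      print("x* =", x_opt, " f(x*) =", np.exp(res.fun))   # about (0.707, 1.0), 2*sqrt(2)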

  1. Active Batch Selection via Convex Relaxations with Guaranteed Solution Bounds.

    PubMed

    Chakraborty, Shayok; Balasubramanian, Vineeth; Sun, Qian; Panchanathan, Sethuraman; Ye, Jieping

    2015-10-01

    Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar instances for manual annotation. More recently, there have been attempts towards a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. In this paper, we propose two novel batch mode active learning (BMAL) algorithms: BatchRank and BatchRand. We first formulate the batch selection task as an NP-hard optimization problem; we then propose two convex relaxations, one based on linear programming and the other based on semi-definite programming to solve the batch selection problem. Finally, a deterministic bound is derived on the solution quality for the first relaxation and a probabilistic bound for the second. To the best of our knowledge, this is the first research effort to derive mathematical guarantees on the solution quality of the BMAL problem. Our extensive empirical studies on 15 binary, multi-class and multi-label challenging datasets corroborate that the proposed algorithms perform at par with the state-of-the-art techniques, deliver high quality solutions and are robust to real-world issues like label noise and class imbalance.

  2. The Convex Geometry of Linear Inverse Problems

    DTIC Science & Technology

    2010-12-02

    equator. Via elementary trigonometry, the solid angle that K subtends is given by π/2 − sin⁻¹(h). Hence, if h(β) is the largest number such that β caps of... 1107–1130. [34] Harris, J., Algebraic Geometry: A First Course. Springer. [35] Haupt, J., Bajwa, W., Raz, G., and Nowak, R. (2008). Toeplitz

  3. Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control

    SciTech Connect

    Gaitsgory, Vladimir; Rossomakhine, Sergey

    2015-04-15

    The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but the possibility of extending the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  4. A sparse superlinearly convergent SQP with applications to two-dimensional shape optimization.

    SciTech Connect

    Anitescu, M.

    1998-04-15

    Discretization of optimal shape design problems leads to very large nonlinear optimization problems. For attaining maximum computational efficiency, a sequential quadratic programming (SQP) algorithm should achieve superlinear convergence while preserving sparsity and convexity of the resulting quadratic programs. Most classical SQP approaches violate at least one of these requirements. We show that, for a very large class of optimization problems, one can design SQP algorithms that satisfy all three requirements. The improvements in computational efficiency are demonstrated for a cam design problem.

  5. Generalized Hill Climbing Algorithms For Discrete Optimization Problems

    DTIC Science & Technology

    1997-01-09

    problems. The three problems include: (a) a flexible assembly system design (FASD) problem (Kumar and Jacobson [1996]), (b) a generic configuration... 1991, pg 424]). The same seed, 123, was used to initiate all experiments. 6.1 Flexible Assembly System Design Problem The FASD problem is a precedence... show that the FASD problem is NP-complete (Garey and Johnson [1979, pg 17]). Jacobson et al. [1996] propose a simple matrix-based, polynomial-time

  6. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled by mixed integer nonlinear programming. This paper proposes to solve such problems with the modified spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.

  7. Methods of centers and methods of feasible directions for the solution of optimal control problems.

    NASA Technical Reports Server (NTRS)

    Polak, E.; Mukai, H.; Pironneau, O.

    1971-01-01

    Demonstration of the applicability of methods of centers and of methods of feasible directions to optimal control problems. Presented experimental results show that extensions of Frank-Wolfe (1956), Zoutendijk (1960), and Pironneau-Polak (1971) algorithms for nonlinear programming problems can be quite efficient in solving optimal control problems.

  8. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2006-04-01

    Fortran 77 software is given for least squares smoothing of data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics
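
    A minimal sketch of the core convex-fit subproblem that such a package solves on each subrange (least squares subject to nonnegative second divided differences), using a general-purpose SLSQP solver in place of the package's specialized B-spline active-set method; the data below are synthetic.

      # Convex least squares fit: minimize ||x - y||^2 subject to the second
      # differences of x being nonnegative (a convex fitted sequence).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      t = np.linspace(-1.0, 1.0, 30)
      y = t**2 + 0.05 * rng.standard_normal(t.size)      # noisy convex data

      def objective(x):
          return np.sum((x - y) ** 2)

      def second_differences(x):
          return x[2:] - 2.0 * x[1:-1] + x[:-2]           # must be >= 0 (convexity)

      res = minimize(objective, x0=y.copy(), method="SLSQP",
                     constraints=[{"type": "ineq", "fun": second_differences}])
      print("min second difference of fit:", second_differences(res.x).min())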

  9. Test Problems for Large-Scale Multiobjective and Many-Objective Optimization.

    PubMed

    Cheng, Ran; Jin, Yaochu; Olhofer, Markus; Sendhoff, Bernhard

    2016-08-26

    The interests in multiobjective and many-objective optimization have been rapidly increasing in the evolutionary computation community. However, most studies on multiobjective and many-objective optimization are limited to small-scale problems, despite the fact that many real-world multiobjective and many-objective optimization problems may involve a large number of decision variables. As has been evident in the history of evolutionary optimization, the development of evolutionary algorithms (EAs) for solving a particular type of optimization problems has undergone a co-evolution with the development of test problems. To promote the research on large-scale multiobjective and many-objective optimization, we propose a set of generic test problems based on design principles widely used in the literature of multiobjective and many-objective optimization. In order for the test problems to be able to reflect challenges in real-world applications, we consider mixed separability between decision variables and nonuniform correlation between decision variables and objective functions. To assess the proposed test problems, six representative evolutionary multiobjective and many-objective EAs are tested on the proposed test problems. Our empirical results indicate that although the compared algorithms exhibit slightly different capabilities in dealing with the challenges in the test problems, none of them are able to efficiently solve these optimization problems, calling for the need for developing new EAs dedicated to large-scale multiobjective and many-objective optimization.

  10. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    NASA Astrophysics Data System (ADS)

    Gao, Qian

    For both the conventional radio frequency and the comparably recent optical wireless communication systems, extensive effort from academia has been made in improving the network spectrum efficiency and/or reducing the error rate. To achieve these goals, many fundamental challenges such as power efficient constellation design, nonlinear distortion mitigation, channel training design, network scheduling, etc., need to be properly addressed. In this dissertation, novel schemes are proposed accordingly to deal with specific problems falling in the category of these challenges. Rigorous proofs and analyses are provided for each piece of our work, with fair comparisons against the corresponding peer works, to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias, conventionally used solely for biasing purposes, as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher dimensional space, and in turn power efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, MSM-JDCM has many other merits, such as being capable of mitigating nonlinear distortion by including a peak-to-average power ratio (PAPR) constraint, minimizing inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reducing the bit error rate (BER) by combining with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with cross-talks. Our novel constellation design scheme, termed CSK-Advanced, is

  11. Generalized geometrically convex functions and inequalities.

    PubMed

    Noor, Muhammad Aslam; Noor, Khalida Inayat; Safdar, Farhat

    2017-01-01

    In this paper, we introduce and study a new class of generalized functions, called generalized geometrically convex functions. We establish several basic inequalities related to generalized geometrically convex functions. We also derive several new inequalities of the Hermite-Hadamard type for generalized geometrically convex functions. Several special cases are discussed, which can be deduced from our main results.
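
    For orientation, the classical Hermite-Hadamard inequality for an ordinary convex function f on [a, b], which inequalities of the type studied here generalize, reads

      \[
      f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(t)\,dt \;\le\; \frac{f(a)+f(b)}{2}.
      \]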

  12. Detection of Convexity and Concavity in Context

    ERIC Educational Resources Information Center

    Bertamini, Marco

    2008-01-01

    Sensitivity to shape changes was measured, in particular detection of convexity and concavity changes. The available data are contradictory. The author used a change detection task and simple polygons to systematically manipulate convexity/concavity. Performance was high for detecting a change of sign (a new concave vertex along a convex contour…

  13. Revisiting separation properties of convex fuzzy sets

    USDA-ARS?s Scientific Manuscript database

    Separation of convex sets by hyperplanes has been extensively studied for crisp sets. In a seminal paper, separability and convexity are investigated; however, there is a flaw in the definition of the degree of separation. We revisited separation on convex fuzzy sets that have level-wise (crisp) disjointne...

  15. On the application of deterministic optimization methods to stochastic control problems

    NASA Technical Reports Server (NTRS)

    Kramer, L. C.; Athans, M.

    1974-01-01

    A technique is presented by which deterministic optimization techniques, for example, the maximum principle of Pontriagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.

  16. A numerical method for solving optimal control problems using state parametrization

    NASA Astrophysics Data System (ADS)

    Mehne, H.; Borzabadi, A.

    2006-06-01

    A numerical method for solving a special class of optimal control problems is given. The solution is based on state parametrization as a polynomial with unknown coefficients. This converts the problem to a non-linear optimization problem. To facilitate the computation of optimal coefficients, an improved iterative method is suggested. Convergence of this iterative method and its implementation for numerical examples are also given.
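
    A minimal example of the state-parametrization idea (not the paper's iterative scheme): for min ∫_0^1 u(t)^2 dt with x' = u, x(0) = 0, x(1) = 1, write x(t) = t + t(1 - t) p(t) with a polynomial p of free coefficients c, so the boundary conditions hold automatically; then u = x' and the cost is a smooth function of c. The exact optimum is u = 1 with cost 1, attained at c = 0.

      # State parametrization of a toy optimal control problem.
      import numpy as np
      from numpy.polynomial import Polynomial
      from scipy.optimize import minimize

      t_nodes, w = np.polynomial.legendre.leggauss(30)
      t_nodes = 0.5 * (t_nodes + 1.0)            # map quadrature to [0, 1]
      w = 0.5 * w

      def cost(c):
          p = Polynomial(c)
          x = Polynomial([0.0, 1.0]) + Polynomial([0.0, 1.0, -1.0]) * p   # t + t(1-t)p(t)
          u = x.deriv()                           # control recovered from x' = u
          return float(np.sum(w * u(t_nodes) ** 2))

      res = minimize(cost, x0=np.array([0.5, -0.3, 0.2]), method="BFGS")
      print("optimal coefficients ~ 0:", res.x, " cost ~ 1:", res.fun)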

  17. Pontriagin's maximum principle in problems of the optimization of diffraction electronic instruments

    NASA Astrophysics Data System (ADS)

    Tarasov, M. M.; Tretiakov, O. A.; Shestopalov, V. P.; Shmatko, A. A.

    The possibilities afforded by the use of Pontriagin's maximum principle in electronics are demonstrated by applying it to an optimization problem of practical interest. The problem consists of optimizing the parameters of a maximum-efficiency diffraction oscillator for a given power output over a specified frequency range for a selected electron gun. As an example, the optimization problem is solved numerically for an oscillator with a Gaussian distribution of the high-frequency field envelope in an open resonator.

  18. A maximum principle for smooth optimal impulsive control problems with multipoint state constraints

    NASA Astrophysics Data System (ADS)

    Dykhta, V. A.; Samsonyuk, O. N.

    2009-06-01

    A nonlinear optimal impulsive control problem with trajectories of bounded variation subject to intermediate state constraints at a finite number of nonfixed instants of time is considered. Features of this problem are discussed from the viewpoint of the extension of the classical optimal control problem with the corresponding state constraints. A necessary optimality condition is formulated in the form of a smooth maximum principle; thorough comments are given, a short proof is presented, and examples are discussed.

  19. Plant/Controller Optimization by Convex Methods

    DTIC Science & Technology

    1994-06-01

    Issac Kaminer for his direction and patient instruction. I shall always be impressed by Issac's breadth of understanding in the subtleties of control... The following algorithm outlines the Newton search for the analytic center, x*(λ): 1. Initialize the Newton search... 3. Determine the Newton decrement, δ, and the damping factor, α: δ(x^(k)) = √( g(x^(k))^T H(x^(k))^{-1} g(x^(k)) )

  20. On optimal solutions of the constrained ℓ0 regularization and its penalty problem

    NASA Astrophysics Data System (ADS)

    Zhang, Na; Li, Qia

    2017-02-01

    The constrained ℓ0 regularization plays an important role in sparse reconstruction. A widely used approach for solving this problem is the penalty method, of which the least square penalty problem is a special case. However, the connections between global minimizers of the constrained ℓ0 problem and its penalty problem have never been studied in a systematic way. This work provides a comprehensive investigation on optimal solutions of these two problems and their connections. We give detailed descriptions of optimal solutions of the two problems, including existence, stability with respect to the parameter, cardinality and strictness. In particular, we find that the optimal solution set of the penalty problem is piecewise constant with respect to the penalty parameter. Then we analyze in-depth the relationship between optimal solutions of the two problems. It is shown that, in the noisy case the least square penalty problem probably has no common optimal solutions with the constrained ℓ0 problem for any penalty parameter. Under a mild condition on the penalty function, we establish that the penalty problem has the same optimal solution set as the constrained ℓ0 problem when the penalty parameter is sufficiently large. Based on the conditions, we further propose exact penalty problems for the constrained ℓ0 problem. Finally, we present a numerical example to illustrate our main theoretical results.
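
    As a point of reference (the notation below is a common convention for this problem class, not necessarily the paper's), a constrained ℓ0 problem and a corresponding least square penalty problem can be written as

      \[
      \min_{x}\ \|x\|_0 \ \ \text{s.t.}\ \ \|Ax-b\|_2 \le \epsilon
      \qquad\text{and}\qquad
      \min_{x}\ \lambda\|x\|_0 + \|Ax-b\|_2^2 ,
      \]

    and the paper studies when global minimizers of such a pair coincide as the penalty parameter varies.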

  1. A comparative study of three simulation optimization algorithms for solving high dimensional multi-objective optimization problems in water resources

    NASA Astrophysics Data System (ADS)

    Schütze, Niels; Wöhling, Thomas; de Play, Michael

    2010-05-01

    Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose, multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO, when solving three real-case multi-objective optimization problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems, considering formulations ranging from 40 up to 120 decision variables and 2 to 4 objectives. The computational effort required by each algorithm to reach the true Pareto front is also analyzed.

  2. Convexity preserving C2 rational quadratic trigonometric spline

    NASA Astrophysics Data System (ADS)

    Dube, Mridula; Tiwari, Preeti

    2012-09-01

    A C2 rational quadratic trigonometric spline interpolation has been studied using two kinds of rational quadratic trigonometric splines. It is shown that under some natural conditions the solution of the problem exists and is unique. The necessary and sufficient conditions that constrain the interpolation curves to be convex in the interpolating interval or subintervals are derived.

  3. Method of orthogonal simplexes and its applications to convex programming

    NASA Astrophysics Data System (ADS)

    Bulatov, V. P.

    2008-04-01

    Numerical methods for solving a convex programming problem are considered whose guaranteed convergence rate depends only on the space dimension. On average, the ratio of the corresponding geometric progression is better than that in the basis model of ellipsoids or simplexes. Results of numerical experiments are presented.

  4. Non-convex entropies for conservation laws with involutions.

    PubMed

    Dafermos, Constantine M

    2013-12-28

    The paper discusses systems of conservation laws endowed with involutions and contingent entropies. Under the assumption that the contingent entropy function is convex merely in the direction of a cone in state space, associated with the involution, it is shown that the Cauchy problem is locally well posed in the class of classical solutions, and that classical solutions are unique and stable even within the broader class of weak solutions that satisfy an entropy inequality. This is on a par with the classical theory of solutions to hyperbolic systems of conservation laws endowed with a convex entropy. The equations of elastodynamics provide the prototypical example for the above setting.

  5. Approximation of the optimal-time problem for controlled differential inclusions

    SciTech Connect

    Otakulov, S.

    1995-01-01

    One of the common methods for numerical solution of optimal control problems constructs an approximating sequence of discrete control problems. The approximation method is also attractive because it can be used as an effective tool for analyzing optimality conditions and other topics in optimization theory. In this paper, we consider the approximation of optimal-time problems for controlled differential inclusions. The sequence of approximating problems is constructed using a finite-difference scheme, i.e., the differential inclusions are replaced with difference inclusions.

  6. Haar wavelet operational matrix method for solving constrained nonlinear quadratic optimal control problem

    NASA Astrophysics Data System (ADS)

    Swaidan, Waleeda; Hussin, Amran

    2015-10-01

    Most direct methods solve finite time horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. This method uses the quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted to quadratic programming constraints by using the Haar wavelet collocation method. The proposed method has been applied to solve the optimal control of a multi-item inventory model. The accuracy of the states, controls and cost can be improved by increasing the Haar wavelet resolution.

  7. A global optimization algorithm for simulation-based problems via the extended DIRECT scheme

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Xu, Shengli; Wang, Xiaofang; Wu, Junnan; Song, Yang

    2015-11-01

    This article presents a global optimization algorithm via the extension of the DIviding RECTangles (DIRECT) scheme to handle problems with computationally expensive simulations efficiently. The new optimization strategy improves the regular partition scheme of DIRECT to a flexible irregular partition scheme in order to utilize information from irregular points. The metamodelling technique is introduced to work with the flexible partition scheme to speed up the convergence, which is meaningful for simulation-based problems. Comparative results on eight representative benchmark problems and an engineering application with some existing global optimization algorithms indicate that the proposed global optimization strategy is promising for simulation-based problems in terms of efficiency and accuracy.

  8. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2016-11-07

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.

  9. A novel geometric approach to binary classification based on scaled convex hulls.

    PubMed

    Liu, Zhenbing; Liu, J G; Pan, Chao; Wang, Guoyou

    2009-07-01

    Geometric methods are very intuitive and provide a theoretical foundation to many optimization problems in the fields of pattern recognition and machine learning. In this brief, the notion of scaled convex hull (SCH) is defined and a set of theoretical results are exploited to support it. These results allow the existing nearest point algorithms to be directly applied to solve both the separable and nonseparable classification problems successfully and efficiently. Then, the popular S-K algorithm has been presented to solve the nonseparable problems in the context of the SCH framework. The theoretical analysis and some experiments show that the proposed method may achieve better performance than the state-of-the-art methods in terms of the number of kernel evaluations and the execution time.

  10. One-class classification based on the convex hull for bearing fault detection

    NASA Astrophysics Data System (ADS)

    Zeng, Ming; Yang, Yu; Luo, Songrong; Cheng, Junsheng

    2016-12-01

    Originating from a nearest point problem, a novel method called one-class classification based on the convex hull (OCCCH) is proposed for one-class classification problems. The basic goal of OCCCH is to find the nearest point to the origin from the reduced convex hull of training samples. A generalized Gilbert algorithm is proposed to solve the nearest point problem. It is a geometric algorithm with high computational efficiency. OCCCH has two different forms, i.e., OCCCH-1 and OCCCH-2. The relationships among OCCCH-1, OCCCH-2 and one-class support vector machine (OCSVM) are investigated theoretically. The classification accuracy and the computational efficiency of the three methods are compared through the experiments conducted on several benchmark datasets. Experimental results show that OCCCH (including OCCCH-1 and OCCCH-2) using the generalized Gilbert algorithm performs more efficiently than OCSVM using the well-known sequential minimal optimization (SMO) algorithm; at the same time, OCCCH-2 can always obtain comparable classification accuracies to OCSVM. Finally, these methods are applied to the monitoring model constructions for bearing fault detection. Compared with OCCCH-2 and OCSVM, OCCCH-1 can significantly decrease the false alarm ratio while detecting the bearing fault successfully.
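
    A compact sketch of a classical Gilbert-style iteration for the underlying nearest point problem (finding the point of the convex hull of a sample set closest to the origin); the paper's generalized algorithm for the reduced convex hull and the OCCCH decision rules are not reproduced here.

      # Gilbert / Frank-Wolfe style iteration for the nearest hull point to the origin.
      import numpy as np

      def gilbert_nearest_point(P, iters=200, tol=1e-10):
          """P: array of shape (n_points, dim). Returns approx. nearest hull point to 0."""
          x = P[0].astype(float)
          for _ in range(iters):
              s = P[np.argmin(P @ x)]            # support point minimizing <x, p>
              gap = float(x @ (x - s))           # duality-gap style stopping measure
              if gap < tol:
                  break
              d = x - s
              t = np.clip(gap / (d @ d), 0.0, 1.0)
              x = x - t * d                      # exact line search toward s
          return x

      rng = np.random.default_rng(1)
      P = rng.normal(size=(200, 2)) + np.array([3.0, 2.0])   # hull away from the origin
      x_star = gilbert_nearest_point(P)
      print("nearest point:", x_star, " distance:", np.linalg.norm(x_star))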

  11. Convex Diffraction Grating Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Chrisp, Michael P. (Inventor)

    1999-01-01

    A 1:1 Offner mirror system for imaging off-axis objects is modified by replacing a concave spherical primary mirror that is concentric with a convex secondary mirror with two concave spherical mirrors M1 and M2 of the same or different radii positioned with their respective distances d1 and d2 from a concentric convex spherical diffraction grating having its grooves parallel to the entrance slit of the spectrometer which replaces the convex secondary mirror. By adjusting their distances d1 and d2 and their respective angles of reflection alpha and beta, defined as the respective angles between their incident and reflected rays, all aberrations are corrected without the need to increase the spectrometer size for a given entrance slit size to reduce astigmatism, thus allowing the imaging spectrometer volume to be less for a given application than would be possible with conventional imaging spectrometers and still give excellent spatial and spectral imaging of the slit image spectra over the focal plane.

  12. Schur-convexity, Schur-geometric and Schur-harmonic convexity for a composite function of complete symmetric function.

    PubMed

    Shi, Huan-Nan; Zhang, Jing; Ma, Qing-Hua

    2016-01-01

    In this paper, using the properties of Schur-convex function, Schur-geometrically convex function and Schur-harmonically convex function, we provide much simpler proofs of the Schur-convexity, Schur-geometric convexity and Schur-harmonic convexity for a composite function of the complete symmetric function.

  13. Identifying Model Inaccuracies and Solution Uncertainties in Non-Invasive Activation-Based Imaging of Cardiac Excitation using Convex Relaxation

    PubMed Central

    Erem, Burak; van Dam, Peter M.; Brooks, Dana H.

    2014-01-01

    Noninvasive imaging of cardiac electrical function has begun to move towards clinical adoption. Here we consider one common formulation of the problem, in which the goal is to estimate the spatial distribution of electrical activation times during a cardiac cycle. We address the challenge of understanding the robustness and uncertainty of solutions to this formulation. This formulation poses a non-convex, non-linear least squares optimization problem. We show that it can be relaxed to be convex, at the cost of some degree of physiological realism of the solution set, and that this relaxation can be used as a framework to study model inaccuracy and solution uncertainty. We present two examples, one using data from a healthy human subject and the other synthesized with the ECGSIM software package. In the first case, we consider uncertainty in the initial guess and regularization parameter. In the second case, we mimic the presence of an ischemic zone in the heart in a way which violates a model assumption. We show that the convex relaxation allows understanding of spatial distribution of parameter sensitivity in the first case, and identification of model violation in the second. PMID:24710159

  14. Problem Formulation for Optimal Array Modeling and Planning

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Lee, Charles H.; Ho, Jeannie

    2006-01-01

    In this paper we describe an optimal modeling and planning framework for the future large array of DSN antennas. This framework takes into account the array link performance models, reliability models, constraint models, and objective functions, and determines the optimal sub-array cluster configuration that will support the maximum number of concurrent missions based on mission link properties, antenna element reliabilities, mission requests, and array operation constraints.

  15. Newton's method for large bound-constrained optimization problems.

    SciTech Connect

    Lin, C.-J.; More, J. J.; Mathematics and Computer Science

    1999-01-01

    We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.

  16. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
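
    A bare-bones illustration of the cascade idea (several optimizers run one after another, with a small pseudorandom perturbation of the design variables between stages); SciPy's general-purpose methods and the Rosenbrock test function stand in for the ten specialized optimizers and the aircraft and engine problems of the study.

      # Cascade of optimizers with pseudorandom perturbation between stages.
      import numpy as np
      from scipy.optimize import minimize, rosen

      def cascade(x0, methods=("Nelder-Mead", "Powell", "BFGS"), seed=0):
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          for i, method in enumerate(methods):
              if i > 0:
                  x = x + 0.01 * rng.standard_normal(x.size)   # perturb between stages
              res = minimize(rosen, x, method=method)
              x = res.x
              print(f"{method:12s} -> f = {res.fun:.3e}")
          return x

      x_final = cascade(x0=np.full(5, -1.5))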

  17. Realization of first-order optical systems using thin convex lenses of fixed focal length.

    PubMed

    Yasir, P A Ameen; Ivan, J Solomon

    2014-09-01

    A general axially symmetric first-order optical system is realized using free propagation and thin convex lenses of fixed focal length. It is shown that not more than five convex lenses of fixed focal length are required to realize the most general first-order optical system, with the required number of lenses depending on the situation. The free propagation distances are evaluated explicitly in each situation. The optimality of the decomposition obtained in each situation is brought out. Decompositions for some familiar subgroups of SL2(R) are also worked out. Convex or concave lenses of arbitrary focal length are realized using three or two convex lenses of fixed focal length, respectively. It is further shown that three convex lenses of arbitrary focal length are sufficient to realize the most general first-order optical system.
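
    The ray-transfer (ABCD) matrix bookkeeping behind such realizations can be sketched as follows (not the paper's decomposition): composing lens-gap-lens with two convex lenses of fixed focal length f gives combined power 1/f_eff = 2/f - d/f^2, which is negative once the separation d exceeds 2f, i.e. the pair acts as a diverging element of tunable focal length; the leftover free-propagation factors are what the full decompositions absorb into the drift distances.

      # ABCD matrices for a thin lens and free propagation, composed lens-gap-lens.
      import numpy as np

      def thin_lens(f):
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      def free_propagation(d):
          return np.array([[1.0, d], [0.0, 1.0]])

      f, d = 0.10, 0.30                                   # fixed focal length, separation > 2f
      system = thin_lens(f) @ free_propagation(d) @ thin_lens(f)
      f_eff_matrix = -1.0 / system[1, 0]                  # effective focal length from the C entry
      f_eff_formula = 1.0 / (2.0 / f - d / f**2)
      print(f_eff_matrix, f_eff_formula)                  # both -0.1: a diverging system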

  18. A faster optimization method based on support vector regression for aerodynamic problems

    NASA Astrophysics Data System (ADS)

    Yang, Xixiang; Zhang, Weihua

    2013-09-01

    In this paper, a new strategy for the optimal design of complex aerodynamic configurations with reasonably low computational effort is proposed. In order to solve the formulated aerodynamic optimization problem with heavy computational complexity, two steps are taken: (1) a sequential approximation method based on support vector regression (SVR) and a hybrid cross-validation strategy is proposed to predict aerodynamic coefficients, and thus to approximate the objective function and constraint conditions of the originally formulated optimization problem with the given limited sample points; (2) a sequential optimization algorithm is proposed to ensure that the optimal solution obtained by solving the approximate optimization problem in step (1) is very close to the optimal solution of the originally formulated optimization problem. In the end, we adopt a complex aerodynamic design problem, that is, the optimal aerodynamic design of a flight vehicle with grid fins, to demonstrate the proposed optimization methods, and numerical results show that better results can be obtained with a significantly lower computational effort than with classical optimization techniques.
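
    A toy version of the two-step idea (fit an SVR surrogate to a handful of samples of an expensive function, then optimize the cheap surrogate); scikit-learn's SVR and a simple analytic stand-in for the expensive aerodynamic analysis are used, and the paper's hybrid cross-validation and sequential refinement are omitted.

      # Surrogate-based optimization: fit SVR to samples, then optimize the surrogate.
      import numpy as np
      from sklearn.svm import SVR
      from scipy.optimize import minimize_scalar

      def expensive(x):                      # stand-in for a costly aerodynamic analysis
          return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

      X_train = np.linspace(0.0, 1.0, 25).reshape(-1, 1)
      y_train = expensive(X_train).ravel()

      surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.001).fit(X_train, y_train)

      res = minimize_scalar(lambda x: surrogate.predict([[x]])[0],
                            bounds=(0.0, 1.0), method="bounded")
      print("surrogate optimum x =", res.x, " true f there =", expensive(res.x))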

  19. Modelling the Pareto-optimal set using B-spline basis functions for continuous multi-objective optimization problems

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Piyush; Dasgupta, Bhaskar; Deb, Kalyanmoy

    2014-07-01

    In the past few years, multi-objective optimization algorithms have been extensively applied in several fields including engineering design problems. A major reason is the advancement of evolutionary multi-objective optimization (EMO) algorithms that are able to find a set of non-dominated points spread on the respective Pareto-optimal front in a single simulation. Besides just finding a set of Pareto-optimal solutions, one is often interested in capturing knowledge about the variation of variable values over the Pareto-optimal front. Recent innovization approaches for knowledge discovery from Pareto-optimal solutions remain as a major activity in this direction. In this article, a different data-fitting approach for continuous parameterization of the Pareto-optimal front is presented. Cubic B-spline basis functions are used for fitting the data returned by an EMO procedure in a continuous variable space. No prior knowledge about the order in the data is assumed. An automatic procedure for detecting gaps in the Pareto-optimal front is also implemented. The algorithm takes points returned by the EMO as input and returns the control points of the B-spline manifold representing the Pareto-optimal set. Results for several standard and engineering, bi-objective and tri-objective optimization problems demonstrate the usefulness of the proposed procedure.

  20. Solving Continuous-Time Optimal-Control Problems with a Spreadsheet.

    ERIC Educational Resources Information Center

    Naevdal, Eric

    2003-01-01

    Explains how optimal control problems can be solved with a spreadsheet, such as Microsoft Excel. Suggests the method can be used by students, teachers, and researchers as a tool to find numerical solutions for optimal control problems. Provides several examples that range from simple to advanced. (JEH)
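
    The spreadsheet approach amounts to discretizing the control trajectory into cells and letting a generic solver adjust them; a rough Python analogue (not the article's worksheet) on a made-up linear-quadratic toy problem is sketched below.

```python
import numpy as np
from scipy.optimize import minimize

# Toy discretized optimal-control problem (assumed for illustration):
# minimize sum_t (x_t^2 + u_t^2) * dt  subject to  x_{t+1} = x_t + u_t * dt,  x_0 = 1.
T, dt, x_init = 20, 0.1, 1.0

def cost(u):
    x, total = x_init, 0.0
    for u_t in u:
        total += (x**2 + u_t**2) * dt   # running cost: one "row" per period
        x += u_t * dt                   # state update: the next row's formula
    return total

res = minimize(cost, x0=np.zeros(T), method="BFGS")
print("optimal cost:", round(res.fun, 4))
print("first few controls:", np.round(res.x[:5], 3))
```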

  1. Improving the efficiency of solving discrete optimization problems: The case of VRP

    NASA Astrophysics Data System (ADS)

    Belov, A.; Slastnikov, S.

    2016-02-01

    The paper is devoted to constructing efficient metaheuristic algorithms for discrete optimization problems. In particular, we consider the vehicle routing problem and apply an original ant colony optimization method to solve it. In addition, some parts of the algorithm are separated for parallel computing. Experimental results are presented to compare the efficiency of these methods.
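
    A minimal ant-colony sketch for a simplified single-vehicle routing (TSP-like) instance is shown below to make the construction/evaporation/deposit loop concrete. It is a generic textbook ACO, not the authors' algorithm, and a real VRP variant would add vehicle capacities and customer demands; the per-ant construction loop is the part that naturally lends itself to the parallel computing mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up instance: 12 customers with random coordinates, one vehicle.
n = 12
coords = rng.uniform(0.0, 100.0, size=(n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)

pheromone = np.ones((n, n))
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 100
best_len, best_tour = np.inf, None

for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):                      # independent ants: parallelizable
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            w = pheromone[i, cand]**alpha * (1.0 / dist[i, cand])**beta
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt)
            unvisited.remove(nxt)
        tours.append(tour)
    pheromone *= (1.0 - rho)                     # evaporation
    for tour in tours:                           # deposit proportional to tour quality
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            pheromone[a, b] += 1.0 / length
            pheromone[b, a] += 1.0 / length

print("best tour length:", round(best_len, 1))
```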

  2. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…
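
    For reference, the classic problem mentioned above can be worked symbolically in a few lines; the all-four-sides rectangular formulation is one common variant and is assumed here.

```python
import sympy as sp

x, A = sp.symbols("x A", positive=True)

# Rectangle of width x enclosing a fixed area A: the other side is A/x,
# so the fencing (perimeter) to minimize is P(x) = 2*(x + A/x).
P = 2 * (x + A / x)

critical = sp.solve(sp.diff(P, x), x)        # dP/dx = 0
print(critical)                              # [sqrt(A)]  -> a square pen is optimal
print(sp.simplify(P.subs(x, sp.sqrt(A))))    # minimal fencing: 4*sqrt(A)
```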

  5. Finite-horizon optimal investment with transaction costs: A parabolic double obstacle problem

    NASA Astrophysics Data System (ADS)

    Dai, Min; Yi, Fahuai

    This paper concerns the optimal investment problem of a CRRA investor who faces proportional transaction costs and a finite time horizon. From the angle of stochastic control, it is a singular control problem whose value function is governed by a time-dependent HJB equation with gradient constraints. We reveal that the problem is equivalent to a parabolic double obstacle problem involving two free boundaries that correspond to the optimal buying and selling policies. This enables us to make use of the well-developed theory of obstacle problems to attack the problem. The C^{2,1} regularity of the value function is proven and the behaviors of the free boundaries are completely characterized.

  6. Optimal birth control of age-dependent competitive species III. Overtaking problem

    NASA Astrophysics Data System (ADS)

    He, Ze-Rong; Cheng, Ji-Shu; Zhang, Chun-Guo

    2008-01-01

    A study is made of an overtaking optimal problem for a population system consisting of two competing species, which is controlled by fertilities. The existence of optimal policy is proved and a maximum principle is carefully derived under less restrictive conditions. Weak and strong turnpike properties of optimal trajectories are established.

  7. Optimizing material properties of composite plates for sound transmission problem

    NASA Astrophysics Data System (ADS)

    Tsai, Yu-Ting; Pawar, S. J.; Huang, Jin H.

    2015-01-01

    To calculate the specific transmission loss (TL) of a composite plate, the conjugate gradient optimization method is utilized to estimate and optimize the material properties of the composite plate in this study. For an n-layer composite plate, a nonlinear dynamic stiffness matrix based on thick plate theory is formulated. To avoid the huge computational effort arising from the combination of different composite material plates, a transfer matrix approach is proposed to reduce the dynamic stiffness matrix of the composite plate to a 4×4 matrix. Moreover, the transfer matrix approach is also used to simplify the gradient of the objective function for the optimization method. Numerical simulations are performed to validate the present algorithm by comparing the TL of the optimal composite plate with that of the original plate. The small number of iterations required during convergence tests illustrates the efficiency of the optimization method. The results indicate that an excellent estimation for the composite plate can be obtained for the desired sound transmission.
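
    At its core, the estimation step is a gradient-based fit of material parameters to a target transmission-loss curve. The generic conjugate-gradient sketch below shows that pattern with a made-up two-parameter forward model standing in for the dynamic-stiffness/transfer-matrix computation; parameter names and values are assumptions, and the gradient is left to finite differences rather than the analytic form derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

freqs = np.linspace(100.0, 2000.0, 50)        # Hz

def forward_model(params, f):
    """Hypothetical smooth stand-in for the plate transmission-loss model."""
    stiffness, damping = params
    return 20.0 * np.log10(1.0 + stiffness * f / 1000.0) + damping * np.sqrt(f) / 10.0

true_params = np.array([3.0, 1.5])
target_tl = forward_model(true_params, freqs)  # desired/measured TL curve

def objective(params):
    r = forward_model(params, freqs) - target_tl
    return 0.5 * float(r @ r)

res = minimize(objective, x0=np.array([1.0, 0.0]), method="CG")
print("recovered parameters:", np.round(res.x, 3))   # should land near [3.0, 1.5]
```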

  8. An approach to the multi-axis problem in manual control. [optimal pilot model

    NASA Technical Reports Server (NTRS)

    Harrington, W. W.

    1977-01-01

    The multiaxis control problem is addressed within the context of the optimal pilot model. The problem is developed to provide efficient adaptation of the optimal pilot model to complex aircraft systems and real world, multiaxis tasks. This is accomplished by establishing separability of the longitudinal and lateral control problems subject to the constraints of multiaxis attention and control allocation. Control solution adaptation to the constrained single axis attention allocations is provided by an optimal control frequency response algorithm. An algorithm is developed to solve the multiaxis control problem. The algorithm is then applied to an attitude hold task for a bare airframe fighter aircraft case with interesting multiaxis properties.

  9. The effect of model uncertainty on some optimal routing problems

    NASA Technical Reports Server (NTRS)

    Mohanty, Bibhu; Cassandras, Christos G.

    1991-01-01

    The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
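
    The variance dependence of the optimal Bernoulli split can be reproduced with the Pollaczek-Khinchine formula for parallel M/G/1 queues. The two-server sketch below uses made-up arrival and service parameters and a simple grid search over the routing probability, and is only meant to illustrate the effect described in the abstract.

```python
import numpy as np

def mg1_delay(lam, mean_s, scv):
    """Mean system time of an M/G/1 queue (Pollaczek-Khinchine);
    scv is the squared coefficient of variation of the service time."""
    rho = lam * mean_s
    if rho >= 1.0:
        return np.inf
    es2 = (scv + 1.0) * mean_s**2                   # E[S^2] from mean and SCV
    return mean_s + lam * es2 / (2.0 * (1.0 - rho))

lam_total, mean_s = 1.4, (0.5, 0.9)                 # assumed arrival rate and service means

def system_delay(p, scv):
    """Arrival-weighted mean delay when a fraction p is routed to queue 1."""
    lam1, lam2 = p * lam_total, (1.0 - p) * lam_total
    return p * mg1_delay(lam1, mean_s[0], scv) + (1.0 - p) * mg1_delay(lam2, mean_s[1], scv)

ps = np.linspace(0.01, 0.99, 981)
for scv in (0.0, 1.0, 4.0):                         # deterministic, exponential, bursty service
    delays = np.array([system_delay(p, scv) for p in ps])
    best = int(np.argmin(delays))
    print(f"SCV={scv}: optimal split p={ps[best]:.3f}, mean delay={delays[best]:.3f}")
```

    Raising lam_total toward the combined capacity shrinks the set of stabilizing splits, and the optimal split drifts toward a variance-independent value, which is consistent with the convergence result described in the abstract.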

  10. Designing convex repulsive pair potentials that favor assembly of kagome and snub square lattices

    NASA Astrophysics Data System (ADS)

    Piñeros, William D.; Baldea, Michael; Truskett, Thomas M.

    2016-08-01

    Building on a recently introduced inverse strategy, isotropic and convex repulsive pair potentials were designed that favor assembly of particles into kagome and equilateral snub square lattices. The former interactions were obtained by a numerical solution of a variational problem that maximizes the range of density for which the ground state of the potential is the kagome lattice. Similar optimizations targeting the snub square lattice were also carried out, employing a constraint that required a minimum chemical potential advantage of the target over select competing structures. This constraint helped to discover isotropic interactions that meaningfully favored the snub square lattice as the ground state structure despite the asymmetric spatial distribution of particles in its coordination shells and the presence of tightly competing structures. Consistent with earlier published results [W. Piñeros et al., J. Chem. Phys. 144, 084502 (2016)], enforcement of greater chemical potential advantages for the target lattice in the interaction optimization led to assemblies with enhanced thermal stability.

  11. Designing convex repulsive pair potentials that favor assembly of kagome and snub square lattices.

    PubMed

    Piñeros, William D; Baldea, Michael; Truskett, Thomas M

    2016-08-07

    Building on a recently introduced inverse strategy, isotropic and convex repulsive pair potentials were designed that favor assembly of particles into kagome and equilateral snub square lattices. The former interactions were obtained by a numerical solution of a variational problem that maximizes the range of density for which the ground state of the potential is the kagome lattice. Similar optimizations targeting the snub square lattice were also carried out, employing a constraint that required a minimum chemical potential advantage of the target over select competing structures. This constraint helped to discover isotropic interactions that meaningfully favored the snub square lattice as the ground state structure despite the asymmetric spatial distribution of particles in its coordination shells and the presence of tightly competing structures. Consistent with earlier published results [W. Piñeros et al., J. Chem. Phys. 144, 084502 (2016)], enforcement of greater chemical potential advantages for the target lattice in the interaction optimization led to assemblies with enhanced thermal stability.

  12. A Bayesian observer replicates convexity context effects in figure-ground perception.

    PubMed

    Goldreich, Daniel; Peterson, Mary A

    2012-01-01

    Peterson and Salvagio (2008) demonstrated convexity context effects in figure-ground perception. Subjects shown displays consisting of unfamiliar alternating convex and concave regions identified the convex regions as foreground objects progressively more frequently as the number of regions increased; this occurred only when the concave regions were homogeneously colored. The origins of these effects have been unclear. Here, we present a two-free-parameter Bayesian observer that replicates convexity context effects. The Bayesian observer incorporates two plausible expectations regarding three-dimensional scenes: (1) objects tend to be convex rather than concave, and (2) backgrounds tend (more than foreground objects) to be homogeneously colored. The Bayesian observer estimates the probability that a depicted scene is three-dimensional, and that the convex regions are figures. It responds stochastically by sampling from its posterior distributions. Like human observers, the Bayesian observer shows convexity context effects only for images with homogeneously colored concave regions. With optimal parameter settings, it performs similarly to the average human subject on the four display types tested. We propose that object convexity and background color homogeneity are environmental regularities exploited by human visual perception; vision achieves figure-ground perception by interpreting ambiguous images in light of these and other expected regularities in natural scenes.
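
    The qualitative effect can be reproduced with a back-of-the-envelope Bayes calculation (a simplified toy, not the authors' two-parameter observer, and all probability values below are assumptions): as the number of regions grows, a homogeneously colored set of concave regions becomes increasingly improbable under the hypothesis that those regions are separate figures, so the posterior shifts toward the convex-regions-as-figures interpretation.

```python
# Toy Bayes calculation in the spirit of the observer described above.
prior_convex_figure = 0.6    # assumed prior: objects tend to be convex
p_bg_homogeneous = 0.9       # assumed: a single background is usually one color
p_color_match = 0.2          # assumed: chance two independent objects share a color

for n_regions in (2, 4, 6, 8):
    k_concave = n_regions // 2
    # Likelihood of the homogeneously colored concave regions under each hypothesis.
    like_convex_fig = p_bg_homogeneous                    # concave regions form one background
    like_concave_fig = p_color_match ** (k_concave - 1)   # k separate objects match by chance
    post = (prior_convex_figure * like_convex_fig) / (
        prior_convex_figure * like_convex_fig
        + (1.0 - prior_convex_figure) * like_concave_fig)
    # A posterior-sampling observer reports "convex is figure" with this probability.
    print(f"{n_regions} regions: P(convex = figure | homogeneous concave) = {post:.3f}")
```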

  13. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843
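
    One common way to implement the CCCP/local-linear-approximation idea for a non-convex penalty is to iterate weighted-lasso fits whose weights come from the penalty derivative at the current estimate. The sketch below does this for the SCAD penalty with a hand-rolled coordinate descent on a made-up sparse instance; it is a generic illustration, not the authors' calibrated algorithm, and it fixes the tuning parameter instead of selecting it with the proposed high-dimensional BIC.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty, a standard non-convex penalty."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_lasso_cd(X, y, w, n_iter=100):
    """Coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    z = (X**2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]                 # partial residual
            rho = float(X[:, j] @ r) / n
            b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / z[j]
    return b

def lla_scad(X, y, lam, steps=3):
    """CCCP/LLA-style loop: each step is a weighted lasso with SCAD-derivative weights."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        b = weighted_lasso_cd(X, y, scad_deriv(b, lam))
    return b

# Hypothetical sparse regression instance with more covariates than observations.
rng = np.random.default_rng(3)
n, p = 80, 120
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:4] = (2.0, -1.5, 1.0, 0.8)
y = X @ beta_true + 0.3 * rng.standard_normal(n)

beta_hat = lla_scad(X, y, lam=0.15)
print("selected features:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```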

  14. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.

  15. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

    Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight-tested by NASA.

  17. Optimization instances for deterministic and stochastic problems on energy efficient investments planning at the building level.

    PubMed

    Cano, Emilio L; Moguerza, Javier M; Alonso-Ayuso, Antonio

    2015-12-01

    Optimization instances relate to the input and output data stemming from optimization problems in general. Typically, an optimization problem consists of an objective function to be optimized (either minimized or maximized) and a set of constraints. Thus, objective and constraints are jointly a set of equations in the optimization model. Such equations are a combination of decision variables and known parameters, which are usually related to a set domain. When this combination is a linear combination, we are facing a classical Linear Programming (LP) problem. An optimization instance is related to an optimization model. We refer to that model as the Symbolic Model Specification (SMS) containing all the sets, variables, and parameters symbols and relations. Thus, a whole instance is composed by the SMS, the elements in each set, the data values for all the parameters, and, eventually, the optimal decisions resulting from the optimization solution. This data article contains several optimization instances from a real-world optimization problem relating to investment planning on energy efficient technologies at the building level.
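
    To make the distinction between the symbolic model and an instance concrete, the minimal (entirely hypothetical) LP below fixes a symbolic model once and feeds it one set of instance data; only the numbers and set elements would change from instance to instance.

```python
import numpy as np
from scipy.optimize import linprog

# Symbolic model (SMS): choose investment levels x_t for technologies t in T,
#   minimize   sum_t cost[t] * x_t
#   subject to sum_t saving[t] * x_t >= target,   0 <= x_t <= cap[t].
# One instance = concrete set elements and parameter values (made up here).
technologies = ["insulation", "heat_pump", "pv_panels"]
cost = np.array([120.0, 300.0, 250.0])     # investment cost per unit
saving = np.array([8.0, 15.0, 10.0])       # energy saved per unit
bounds = [(0.0, 40.0), (0.0, 10.0), (0.0, 20.0)]
target = 350.0

res = linprog(c=cost, A_ub=[-saving], b_ub=[-target], bounds=bounds, method="highs")
print("optimal decisions:", dict(zip(technologies, np.round(res.x, 2))))
print("total cost:", round(float(res.fun), 2))
```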

  18. Optimization instances for deterministic and stochastic problems on energy efficient investments planning at the building level

    PubMed Central

    Cano, Emilio L.; Moguerza, Javier M.; Alonso-Ayuso, Antonio

    2015-01-01

    Optimization instances relate to the input and output data stemming from optimization problems in general. Typically, an optimization problem consists of an objective function to be optimized (either minimized or maximized) and a set of constraints. Thus, objective and constraints are jointly a set of equations in the optimization model. Such equations are a combination of decision variables and known parameters, which are usually related to a set domain. When this combination is a linear combination, we are facing a classical Linear Programming (LP) problem. An optimization instance is related to an optimization model. We refer to that model as the Symbolic Model Specification (SMS) containing all the sets, variables, and parameters symbols and relations. Thus, a whole instance is composed by the SMS, the elements in each set, the data values for all the parameters, and, eventually, the optimal decisions resulting from the optimization solution. This data article contains several optimization instances from a real-world optimization problem relating to investment planning on energy efficient technologies at the building level. PMID:26693515

  19. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    The general technique uses a modified version of an existing method termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used with problems involving an inequality constraint.
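
    The brief gives no implementation details, but the underlying compass/pattern-search idea is easy to sketch. Below is a generic coordinate pattern search that handles a single inequality constraint through a quadratic penalty; the parallel move strategy itself is not reproduced, and the test problem is an arbitrary assumption.

```python
import numpy as np

def pattern_search(f, g, x0, step=0.5, shrink=0.5, tol=1e-6, penalty=100.0):
    """Minimize f(x) subject to g(x) <= 0 with a basic compass/pattern search;
    infeasible points are discouraged by a quadratic penalty."""
    merit = lambda v: f(v) + penalty * max(g(v), 0.0)**2
    x = np.asarray(x0, dtype=float)
    while step > tol:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                if merit(trial) < merit(x):
                    x, improved = trial, True
        if not improved:
            step *= shrink            # no exploratory move helped: refine the mesh
    return x

# Hypothetical test problem: minimize (x-3)^2 + (y-2)^2 subject to x + y <= 4.
f = lambda v: (v[0] - 3.0)**2 + (v[1] - 2.0)**2
g = lambda v: v[0] + v[1] - 4.0
x_opt = pattern_search(f, g, x0=[0.0, 0.0])
print(np.round(x_opt, 3))   # expect a point near (2.5, 1.5) on the constraint boundary
```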

  20. A Convex Atomic-Norm Approach to Multiple Sequence Alignment and Motif Discovery

    PubMed Central

    Yen, Ian E. H.; Lin, Xin; Zhang, Jiong; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    Multiple Sequence Alignment and Motif Discovery, both known to be NP-hard, are two fundamental tasks in Bioinformatics. Existing approaches to these two problems are based either on local search methods, such as Expectation Maximization (EM) and Gibbs Sampling, or on greedy heuristic methods. In this work, we develop a convex relaxation approach to both problems based on the recent concept of atomic norm and develop a new algorithm, termed Greedy Direction Method of Multiplier, for solving the convex relaxation with two convex atomic constraints. Experiments show that our convex relaxation approach produces solutions of higher quality than the standard tools widely used in the Bioinformatics community on the Multiple Sequence Alignment and Motif Discovery problems. PMID:27559428