Sample records for convex optimization algorithms

  1. Optimal Stochastic Approximation Algorithms for Strongly Convex ...

    E-print Network

    2012-06-18

    Jul 1, 2010 ... then SA algorithms became widely used in stochastic optimization (see, e.g., ... where X is a closed convex set in Euclidean space E, X(x) is a simple ..... Proof. The definition of u? and the fact V (˜x,·) is a differentiable convex ...
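
    A minimal sketch of the kind of method this record refers to: projected stochastic (sub)gradient descent for a strongly convex objective over a closed convex set, with the classical 1/(mu*t) step size. The problem, data model, and parameter values below are illustrative assumptions, not taken from the paper.

    # Minimal sketch of projected stochastic gradient descent for a strongly convex
    # objective over a Euclidean ball, with the classic O(1/(mu*t)) step size.
    # Toy problem (illustrative): minimize E[(a^T x - b)^2]/2 + (mu/2)*||x||^2 over ||x|| <= R.
    import numpy as np

    rng = np.random.default_rng(0)
    d, mu, R, T = 5, 0.1, 10.0, 5000
    x_true = rng.normal(size=d)
    x = np.zeros(d)

    def project_ball(z, radius):
        """Euclidean projection onto the ball of given radius."""
        norm = np.linalg.norm(z)
        return z if norm <= radius else z * (radius / norm)

    for t in range(1, T + 1):
        a = rng.normal(size=d)                     # one stochastic sample
        b = a @ x_true + 0.01 * rng.normal()
        g = (a @ x - b) * a + mu * x               # stochastic gradient
        x = project_ball(x - g / (mu * t), R)      # step size 1/(mu*t), then project

    print("estimation error:", np.linalg.norm(x - x_true))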

  2. An Optimal Algorithm for Intersecting Three-Dimensional Convex Polyhedra

    Microsoft Academic Search

    Bernard Chazelle

    1992-01-01

    This paper describes a linear-time algorithm for computing the intersection of two convex polyhedra in 3-space. Applications of this result to computing intersections, convex hulls, and Voronoi diagrams are also given.

  3. An optimal real-time algorithm for planar convex hulls

    Microsoft Academic Search

    Franco P. Preparata

    1979-01-01

    An algorithm is described for the construction in real-time of the convex hull of a set of n points in the plane. Using an appropriate data structure, the algorithm constructs the convex hull by successive updates, each taking time O(log n), thereby achieving a total processing time O(n log n).

  4. Nonsmooth and nonconvex structural analysis algorithms based on difference convex optimization techniques

    Microsoft Academic Search

    G. E. Stavroulakis; L. N. Polyakova

    1996-01-01

    The impact of difference convex optimization techniques on structural analysis algorithms for nonsmooth and non-convex problems is investigated in this paper. Algorithms for the numerical solutions are proposed and studied. The relation to more general optimization techniques and to computational mechanics algorithms is also discussed. The theory is illustrated by a composite beam delamination example.

  5. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
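
    For readers unfamiliar with the Chambolle-Pock (CP) primal-dual iteration referenced above, here is a minimal sketch applied to a toy l1-regularized least-squares problem rather than the CT reconstruction models of the paper; the splitting f(y) = 0.5*||y - b||^2, g(x) = lambda*||x||_1 and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, lam = 40, 100, 0.1
    K = rng.normal(size=(m, n))
    x_true = np.zeros(n); x_true[:5] = rng.normal(size=5)
    b = K @ x_true + 0.01 * rng.normal(size=m)

    L = np.linalg.norm(K, 2)          # operator norm of K
    tau = sigma = 0.9 / L             # step sizes with tau*sigma*L^2 < 1

    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(500):
        y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)  # prox of sigma*f*
        x_new = soft(x - tau * (K.T @ y), tau * lam)               # prox of tau*g
        x_bar = 2.0 * x_new - x                                    # over-relaxation (theta = 1)
        x = x_new

    print("objective:", lam * np.abs(x).sum() + 0.5 * np.sum((K @ x - b) ** 2))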

  6. Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms With Directed Gossip Communication

    Microsoft Academic Search

    Dusan Jakovetic; João Xavier; José M. F. Moura

    2011-01-01

    We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest. The objective function of the corresponding optimization problem is the sum of private (known only to a node), convex, per-node objectives, and each node imposes a private convex constraint on the allowed values. We solve this ...

  7. A cutting surface algorithm for semi-infinite convex programming ...

    E-print Network

    2013-06-16

    Jun 16, 2013 ... convex optimization problems, and use it to develop an algorithm for ...... is an (N + 1)-dimensional convex set contained in the convex hull of the points ..... built, which requires considerably more planar cuts than surface cuts.

  8. Convex Programming for Disjunctive Convex Optimization

    Microsoft Academic Search

    Sebastián Ceria

    1998-01-01

    Given a finite number of closed convex sets whose algebraic representation is known, we study the problem of optimizing a convex function over the closure of the convex hull of the union of these sets. We derive an algebraic characterization of the feasible region in a higher-dimensional space and propose a solution procedure akin to the interior-point approach for convex programming. Research partly supported by NSF.

  9. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
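
    The final linear step described in this abstract (fitting the coefficients once the data parameterization and knot vector are fixed) reduces to ordinary linear least squares, which can be solved with the SVD. The sketch below illustrates that step only, with a simple polynomial basis standing in for B-spline basis functions; all names and values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0, 1, size=50))            # assumed data parameterization
    data = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    data += 0.01 * rng.normal(size=data.shape)         # noisy 2-D points to fit

    degree = 5
    B = np.vander(t, degree + 1, increasing=True)      # design matrix of basis functions

    # Solve min_C ||B C - data||_F^2 with the SVD pseudo-inverse (rank-safe).
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    tol = max(B.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    coeffs = Vt.T @ (s_inv[:, None] * (U.T @ data))

    print("fit residual:", np.linalg.norm(B @ coeffs - data))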

  10. Advances in Convex Optimization

    Microsoft Academic Search

    S. Boyd; L. Vandenberghe; M. Grant

    2006-01-01

    In this talk I will give an overview of general convex optimization, which can be thought of as an extension of linear programming, and some recently developed subfamilies such as second-order cone, semidefinite, and geometric programming. Like linear programming, we have a fairly complete duality theory, and very effective numerical methods for these problem classes; in addition, recently developed software

  11. Convex hull properties and algorithms

    Microsoft Academic Search

    Xianquan Zhang; Zhenjun Tang; Jinhui Yu; Mingming Guo; Lianyuan Jiang

    2010-01-01

    The convex hull (CH) is widely used in computer graphics, image processing, CAD/CAM, and pattern recognition. We investigate CH properties and derive new ones: (1) the coordinates of CH vertices monotonically increase or decrease; (2) the edge slopes monotonically decrease. Using these properties, we propose two algorithms: a CH algorithm for a planar point set, and a CH algorithm for two available CHs. The main ...
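
    The monotonicity properties quoted above are exactly what the standard monotone chain (Andrew's) hull exploits. The sketch below is that well-known O(n log n) algorithm, not one of the two algorithms proposed in the record.

    def convex_hull(points):
        """Return hull vertices in counter-clockwise order for a list of (x, y) pairs."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):  # z-component of (a - o) x (b - o)
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:                      # build lower hull
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):            # build upper hull
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]     # drop duplicated endpoints

    print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0.5)]))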

  12. Subspace identification via convex optimization

    E-print Network

    Saunderson, James (James Francis)

    2011-01-01

    In this thesis we consider convex optimization-based approaches to the classical problem of identifying a subspace from noisy measurements of a random process taking values in the subspace. We focus on the case where the ...

  13. Privacy constraints in regularized convex optimization

    E-print Network

    Chaudhuri, Kamalika

    2009-01-01

    Privacy concerns are becoming more important as more personal data moves online. It is therefore of interest to develop versions of data-processing algorithms which can be guaranteed to preserve the privacy of individuals' data. A general method for creating a privacy-preserving version of a convex optimization problem is described, along with applications to machine learning, statistics, and resource allocation problems.

  14. Convex Optimization: from Real-Time Embedded to Large-Scale Distributed

    E-print Network

    Hall, Julian

    Convex Optimization: from Real-Time Embedded to Large-Scale Distributed. Stephen Boyd and Neal Parikh. University of Edinburgh, June 25, 2014. (Slide excerpt; outline: Convex Optimization, Real-Time Embedded Optimization, Large-Scale Distributed Optimization, Summary.)

  15. A new fuzzy adaptive hybrid particle swarm optimization algorithm for non-linear, non-smooth and non-convex economic dispatch problem

    Microsoft Academic Search

    Taher Niknam

    2010-01-01

    Economic dispatch (ED) plays an important role in power system operation. ED problem is a non-smooth and non-convex problem when valve-point effects of generation units are taken into account. This paper presents an efficient hybrid evolutionary approach for solving the ED problem considering the valve-point effect. The proposed algorithm combines a fuzzy adaptive particle swarm optimization (FAPSO) algorithm with Nelder–Mead

  16. Privacy Preserving Online Convex Optimization

    E-print Network

    Jain, Prateek; Thakurta, Abhradeep

    2011-01-01

    In this paper, we consider the problem of preserving privacy for online convex programming (OCP), an important online learning paradigm. We use the notion of differential privacy as our privacy measure. For this problem, we distill two critical attributes a private OCP algorithm should have, namely, linearly decreasing sensitivity and sub-linear regret bound. Assuming these two conditions, we provide a general framework for OCP that preserves privacy while guaranteeing sub-linear regret bound. We then analyze the Implicit Gradient Descent (IGD) algorithm for OCP in our framework, and show an $\tilde{O}(\sqrt{T})$ regret bound while preserving differential privacy for Lipschitz continuous, strongly convex cost functions. We also analyze the Generalized Infinitesimal Gradient Ascent (GIGA) method, a popular OCP algorithm, in our privacy preserving framework to obtain an $\tilde{O}(\sqrt{T})$ regret bound, albeit for a slightly more restricted class of strongly convex functions with Lipschitz continuous gradient. We then co...

  17. Convex programming for disjunctive convex optimization

    Microsoft Academic Search

    Sebastián Ceria; João Soares

    1999-01-01

    Given a finite number of closed convex sets whose algebraic representation is known, we study the problem of finding the minimum of a convex function on the closure of the convex hull of the union of those sets. We derive an algebraic characterization of the feasible region in a higher-dimensional space and propose a solution procedure akin to the ...

  18. Convex optimization methods for model reduction

    E-print Network

    Sou, Kin Cheong, 1979-

    2008-01-01

    Model reduction and convex optimization are prevalent in science and engineering applications. In this thesis, convex optimization solution techniques to three different model reduction problems are studied.Parameterized ...

  19. August 2000 (Convex Optimization ) JP Goux The mega title ...

    E-print Network

    An interior point cutting plane method for convex feasibility problem with ..... A Fast Algorithm for Edge-Preserving Variational Multichannel Image Restoration ..... Problem Formulations for Simulation-based Design Optimization using Statistical ...

  20. Kernel regression for travel time estimation via convex optimization

    E-print Network

    Kernel regression for travel time estimation via convex optimization. Sébastien Blandin, Laurent El Ghaoui and Alexandre Bayen. Abstract: We develop an algorithm aimed at estimating travel time on segments of a road network using a convex optimization framework. Sampled travel time from probe vehicles ...

  1. Distributionally Robust Convex Optimization

    E-print Network

    2013-09-22

    Finally, there are deep and insightful connections between classical robust optimization, distributionally ...... The right graph reports the performance of the corresponding .... Statistical Inference. Duxbury Thomson Learning, 2nd edition, 2002.

  2. Convex optimization techniques in system identification

    E-print Network

    Vandenberghe, Lieven

    Convex optimization techniques in system identification. Lieven Vandenberghe, Electrical Engineering. Abstract: In recent years there has been growing interest in convex optimization techniques for system identification. ... The 1-norm and nuclear norm techniques can be extended in several interesting ways.

  3. Time and VLSI-Optimal Convex Hull Computation on Meshes with Multiple Broadcasting

    Microsoft Academic Search

    Venkatavasu Bokka; Himabindu Gurla; Stephan Olariu; James L. Schwing

    1995-01-01

    Computing the convex hull of a planar set of points is one of the most extensively investigated topics in computational geometry. Our main contribution is to present the first known general-case, time- and VLSI-optimal algorithm for convex hull computation on meshes with multiple broadcasting. Specifically, we show that for every choice of a positive integer constant c, the convex hull of a set of ...

  4. Joint Equalization and Decoding via Convex Optimization 

    E-print Network

    Kim, Byung Hak

    2012-07-16

    The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC...

  5. Optimization over the Pareto Outcome set associated with a Convex ...

    E-print Network

    2014-02-01

    Keywords Optimization over the Pareto image set · Multi-objective Stochastic ... which is generally not explicitly given and not convex (even in the linear case). ... Else, a new efficient point is generated and one triangle is reduced by two new ...... approximation algorithm for generating all efficient extreme points in the out-.

  6. Active Learning as Non-Convex Optimization Andrew Guillory

    E-print Network

    Noble, William Stafford

    We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms ... and non-convex losses. Finally, we discuss and show empirically how viewing ... certain active learning algorithms achieve better generalization error than passive learning algorithms.

  7. GLOBAL OPTIMIZATION IN COMPUTER VISION: CONVEXITY, CUTS AND

    E-print Network

    Lunds Universitet

    GLOBAL OPTIMIZATION IN COMPUTER VISION: CONVEXITY, CUTS AND APPROXIMATION ALGORITHMS. Carl Olsson. ... in computer vision. Numerous problems in this field as well as in image analysis and other branches ... International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 2007. C. Olsson, F. Kahl, R. ...

  8. SEGMENTATION OF IMAGES WITH SEPARATING LAYERS BY FUZZY C-MEANS AND CONVEX OPTIMIZATION

    E-print Network

    Steidl, Gabriele

    SEGMENTATION OF IMAGES WITH SEPARATING LAYERS BY FUZZY C-MEANS AND CONVEX OPTIMIZATION. B. Shafei ... images containing separated layers. We tackle this problem by combining the fuzzy c-means algorithm with a convex model. We compute the segment prototypes by the fuzzy c-means algorithm (FCM). Then we use ...

  9. Advances in dual algorithms and convex approximation methods

    NASA Technical Reports Server (NTRS)

    Smaoui, H.; Fleury, C.; Schmit, L. A.

    1988-01-01

    A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.
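
    Much simpler than the active-set/variable-metric dual method described above, but useful for intuition: plain dual gradient ascent on a separable quadratic problem, where the separable structure gives a closed-form inner minimization. The problem data and step size below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 6
    c = rng.uniform(1.0, 3.0, size=n)    # strictly convex weights
    d = rng.uniform(-1.0, 1.0, size=n)
    budget = 2.0

    # minimize sum_i 0.5*c_i*(x_i - d_i)^2  subject to  sum_i x_i = budget
    lam, step = 0.0, 0.2
    for _ in range(200):
        x = d - lam / c                  # argmin of the Lagrangian, coordinate-wise
        lam += step * (x.sum() - budget) # gradient ascent on the dual function

    print("constraint residual:", x.sum() - budget)
    print("x =", np.round(x, 4))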

  10. 1 Automatic Code Generation for Real-Time Convex Optimization

    E-print Network

    ... Press, 2009. This chapter concerns the use of convex optimization in real-time embedded systems. In real-time embedded convex optimization the same optimization problem is solved many times ... code generation system for real-time embedded convex optimization. Such a system scans a description ...

  11. New Algorithms for the Dual of the Convex Cost Network Flow Problem with Application to Computer Vision

    Microsoft Academic Search

    Vladimir Kolmogorov; Akiyoshi Shioura

    Motivated by various applications to computer vision, we consider an integer convex optimization problem which is the dual of the convex cost network flow problem. In this paper, we first propose a new primal algorithm for computing an optimal solution of the problem. Our primal algorithm iteratively updates primal variables by solving associated minimum cut problems. The main contribution in ...

  12. A randomized scheme for speeding up algorithms for linear and convex programming problems with high constraints-to-variables ratio

    Microsoft Academic Search

    Ilan Adler; Ron Shamir

    1993-01-01

    We extend Clarkson's randomized algorithm for linear programming to a general scheme for solving convex optimization problems. The scheme can be used to speed up existing algorithms on problems which have many more constraints than variables. In particular, we give a randomized algorithm for solving convex quadratic and linear programs, which uses that scheme together with a variant of Karmarkar's

  13. Convex Analysis and Optimization with Submodular Functions: a Tutorial

    E-print Network

    Boyer, Edmond

    Convex Analysis and Optimization with Submodular Functions: a Tutorial. Francis Bach, INRIA - Willow project team. (Table-of-contents excerpt; includes a section on cut functions.)

  14. CONVEX MATROID OPTIMIZATION SIAM J. DISCRETE MATH. c 2003 Society for Industrial and Applied Mathematics

    E-print Network

    Onn, Shmuel

    CONVEX MATROID OPTIMIZATION. Shmuel Onn. SIAM J. Discrete Math., (c) 2003 Society for Industrial and Applied Mathematics. ... functionals over matroid bases. It is richly expressive and captures certain quadratic assignment problems. Keywords: matroid, greedy algorithm, quadratic assignment, polytope, polynomial time, strongly polynomial time.

  15. Supporting Global Numerical Optimization of Rational Functions by Generic Symbolic Convexity Tests

    NASA Astrophysics Data System (ADS)

    Neun, Winfried; Sturm, Thomas; Vigerske, Stefan

    Convexity is an important property in nonlinear optimization since it allows to apply efficient local methods for finding global solutions. We propose to apply symbolic methods to prove or disprove convexity of rational functions over a polyhedral domain. Our algorithms reduce convexity questions to real quantifier elimination problems. Our methods are implemented and publicly available in the open source computer algebra system Reduce. Our long term goal is to integrate Reduce as a "workhorse" for symbolic computations into a numerical solver.

  16. Real-Time Convex Optimization in Signal Processing

    Microsoft Academic Search

    Jacob Mattingley; Stephen Boyd

    2010-01-01

    This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard

  17. Convex Duality in Constrained Portfolio Optimization

    Microsoft Academic Search

    Jaksa Cvitanic; Ioannis Karatzas

    1992-01-01

    We study the stochastic control problem of maximizing expected utility from terminal wealth and/or consumption, when the portfolio is constrained to take values in a given closed, convex subset of $\mathscr{R}^d$. The setting is that of a continuous-time, Ito process model for the underlying asset prices. General existence results are established for optimal portfolio/consumption strategies, by suitably embedding the constrained ...

  18. An approximation algorithm for convex multiplicative programming problems

    Microsoft Academic Search

    Lizhen Shao; Matthias Ehrgott

    2011-01-01

    Multiplicative programming problems are difficult global optimization problems known to be NP-hard. In this paper we propose a method for approximately solving convex multiplicative programming problems. This work is based on our previous work ...

  19. Portfolio Optimization under Convex Incentive Schemes

    E-print Network

    Bichuch, Maxim

    2011-01-01

    We consider the utility maximization problem of terminal wealth from the point of view of a portfolio manager paid by an incentive scheme given as a convex function $g$ of the terminal wealth. The manager's own utility function $U$ is assumed to be smooth and strictly concave, however the resulting utility function $U \\circ g$ fails to be concave. As a consequence, this problem does not fit into the classical portfolio optimization theory. Using duality theory, we prove wealth-independent existence and uniqueness of the optimal wealth in general (incomplete) semimartingale markets as long as the unique optimizer of the dual problem has no atom with respect to the Lebesgue measure. In many cases, this fact is independent of the incentive scheme and depends only on the structure of the set of equivalent local martingale measures. As example we discuss stochastic volatility models and show that existence and uniqueness of an optimizer are guaranteed as long as the market price of risk satisfies a certain (Mallia...

  20. A capacity scaling algorithm for convex cost submodular flows

    SciTech Connect

    Iwata, Satoru [Kyoto Univ. (Japan)]

    1996-12-31

    This paper presents a scaling scheme for submodular functions. A small but strictly submodular function is added before scaling so that the resulting functions remain submodular. This scaling scheme leads to a weakly polynomial algorithm to solve minimum cost integral submodular flow problems with separable convex cost functions, provided that an oracle for exchange capacities is available.

  1. Optimization Algorithms in Machine Learning Stephen Wright

    E-print Network

    Wright, Steve

    Optimization Algorithms in Machine Learning. Stephen Wright, University of Wisconsin-Madison. NIPS Tutorial, 6 Dec 2010. (Slide excerpt; discusses convex quadratic objectives f(x) = (1/2) x^T A x with µI ≼ A ≼ LI and condition number L/µ.)

  2. A Survey of Algorithms for Convex Multicommodity Flow Problems

    Microsoft Academic Search

    A. Ouorou; P. Mahey; J.-Ph. Vial

    2000-01-01

    Routing problems appear frequently when dealing with the operation of communication or transportation networks. Among them, the message routing problem plays a determinant role in the optimization of network performance. Much of the motivation for this work comes from this problem which is shown to belong to the class of nonlinear convex multicommodity flow problems. This paper emphasizes the message

  3. Pulse Construction in OFDM Systems via convex optimization

    E-print Network

    Strohmer, Thomas

    Pulse Construction in OFDM Systems via convex optimization. Jiadong Xu and Thomas Strohmer. ... a family of intersymbol-interference-free polynomial pulses (POLY) was designed, whose Fourier transform ... we set up an optimization problem to find the transmission pulse which maximizes the signal ...

  4. Design of PI Controllers based on Non-Convex Optimization

    Microsoft Academic Search

    K. J. ÅSTRÖM; H. PANAGOPOULOS; T. HÄGGLUND

    1998-01-01

    This paper presents an efficient numerical method for designing PI controllers. The design is based on optimization of load disturbance rejection with constraints on sensitivity and weighting of set point response. Thus, the formulation of the design problem captures three essential aspects of industrial control problems, leading to a non-convex optimization problem. Efficient ways to solve the problem are presented.

  5. Adaptive Algorithms for Planar Convex Hull Problems

    Microsoft Academic Search

    Hee-Kap Ahn; Yoshio Okamoto

    2010-01-01

    We study problems in computational geometry from the viewpoint of adaptive algorithms. Adaptive algorithms have been extensively studied for the sorting problem, and in this paper we generalize the framework to geometric problems. To this end, we think of geometric problems as permutation (or rearranging) problems of arrays, and define the "presortedness" ...

  6. A VU-algorithm for convex minimization

    E-print Network

    Mifflin, Robert

    A VU-algorithm for convex minimization. Robert Mifflin and Claudia Sagastizábal. Revised April 18, 2005. Abstract: For convex minimization we introduce an algorithm based on VU ... Keywords: convex minimization, proximal points, bundle methods, VU-decomposition, superlinear ...

  7. An improved cellular automata based algorithm for the 45-convex hull problem

    E-print Network

    Graham, Nick

    An improved cellular automata based algorithm for the 45-convex hull problem. Adam Clarridge and Kai Salomaa. July 2008. Abstract: We give a cellular automaton algorithm for solving a version of the convex hull problem. Introduction: Given a set of planar points, the two-dimensional convex hull problem ...

  9. Modeling flow statistics using convex optimization

    E-print Network

    Hoepffner, Jérôme

    (Slide excerpt.) P = E[x x^H], M = E[w w^H]. At steady state, the Lyapunov equation A P + P A^H + M = 0 holds, where A is the dynamic operator and P is the state covariance of disturbances. The Lyapunov cone: P and M are covariance matrices, P ⪰ 0, M ⪰ 0, A P + P A^H ⪯ 0; the operator A generates a convex cone. Lyapunov theorem: M ⪰ 0 implies there exists P ⪰ 0 with A P + P A^H + M = 0 ...

  10. Global convergence of the Heavy-ball method for convex optimization

    E-print Network

    2014-12-20

    Department of Automatic Control, School of Electrical Engineering and ACCESS Linnaeus Center, Royal Institute of ... convex optimization methods are still open [6]. The basic ...... convex optimization," Technical Report, 2014. [Online].
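
    For reference, the heavy-ball (Polyak momentum) iteration x_{k+1} = x_k - alpha*grad f(x_k) + beta*(x_k - x_{k-1}) analyzed in this record, sketched on a toy convex quadratic; the step-size and momentum values are illustrative, not those of the report.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(20, 10))
    Q = A.T @ A + 0.1 * np.eye(10)       # positive definite Hessian
    b = rng.normal(size=10)
    grad = lambda x: Q @ x - b           # gradient of 0.5*x^T Q x - b^T x

    L_smooth = np.linalg.eigvalsh(Q).max()
    alpha, beta = 1.0 / L_smooth, 0.9    # step size and momentum coefficient (illustrative)

    x_prev = x = np.zeros(10)
    for _ in range(300):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

    print("gradient norm:", np.linalg.norm(grad(x)))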

  11. A new algorithm for computing the convex hull of a planar point set

    Microsoft Academic Search

    Guang-hui Liu; Chuan-bo Chen

    2007-01-01

    When the edges of a convex polygon are traversed along one direction, the interior of the convex polygon is always on the same side of the edges. Based on this characteristic of convex polygons, a new algorithm for computing the convex hull of a simple polygon is proposed in this paper, which is then extended to a new algorithm for ...

  12. Convexity-Oriented Method for the Topology Optimization of Ferromagnetic Moving Parts in Electromagnetic Actuators Using Magnetic Energy

    Microsoft Academic Search

    Thibaut Labbe; Bruno Dehez

    2010-01-01

    When applied to the topology optimization of electromagnetic devices, gradient-based algorithms suffer from a lack of convexity. They usually converge to local minimizers and the obtained designs depend on the initial material distributions. This paper focuses on avoiding these local minimizers for the optimization of ferromagnetic moving parts in electromagnetic actuators. The proposed method intends to maximize the average reluctant

  13. Optimal convex error estimators for classification

    Microsoft Academic Search

    Chao Sima; Edward R. Dougherty

    A cross-validation error estimator is obtained by repeatedly leaving out some data points, deriving classifiers on the remaining points, computing errors for these classifiers on the left-out points, and then averaging these errors. The 0.632 bootstrap estimator is obtained by averaging the errors of classifiers designed from points drawn with replacement and then taking a convex combination of this ...

  14. Solving quadratically constrained convex optimization problems with an interior-point method

    Microsoft Academic Search

    Csaba Mészáros

    2011-01-01

    The paper describes the design of our interior-point implementation to solve large-scale quadratically constrained convex optimization problems. We outline the details of the implemented algorithm, which is based on the primal-dual interior-point method. Our discussion includes topics related to starting point strategies and to the implementation of the numerical algebra employed, with emphasis on sparsity and stability issues. Computational results

  15. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  16. The Convergence Rate of the Sandwich Algorithm for Approximating Convex Functions

    Microsoft Academic Search

    Günter Rote

    1992-01-01

    The Sandwich algorithm approximates a convex function of one variable over an interval by evaluating the function and its derivative at a sequence of points. The connection of the obtained points is a piecewise linear upper approximation, and the tangents yield a piecewise linear lower approximation. Similarly, a planar convex figure can be approximated by convex polygons. Different versions of
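
    A small sketch of the sandwiching idea summarized above: for a convex function of one variable, chords between evaluation points give a piecewise-linear upper bound and tangents at those points give a piecewise-linear lower bound. The test function and evaluation points are illustrative assumptions.

    import numpy as np

    f = lambda x: np.exp(x)              # convex test function
    df = lambda x: np.exp(x)             # its derivative

    knots = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # evaluation points on [0, 2]
    fk, dk = f(knots), df(knots)

    def upper(x):
        """Piecewise-linear interpolation of the chords (upper bound for convex f)."""
        return np.interp(x, knots, fk)

    def lower(x):
        """Pointwise maximum of the tangents at the knots (lower bound)."""
        return np.max(fk[:, None] + dk[:, None] * (x[None, :] - knots[:, None]), axis=0)

    x = np.linspace(0.0, 2.0, 9)
    assert np.all(lower(x) <= f(x) + 1e-12) and np.all(f(x) <= upper(x) + 1e-12)
    print("max gap between bounds:", np.max(upper(x) - lower(x)))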

  17. Star Splaying: An Algorithm for Repairing Delaunay Triangulations and Convex Hulls

    E-print Network

    California at Berkeley, University of

    Star Splaying: An Algorithm for Repairing Delaunay Triangulations and Convex Hulls. Jonathan Richard Shewchuk. ... takes a triangulation or an approximation of a convex hull, and produces the Delaunay triangulation, weighted Delaunay triangulation, or convex hull of the vertices in the input. If the input is "nearly Delaunay" or "nearly ...

  18. First and second order convex approximation strategies in structural optimization

    NASA Technical Reports Server (NTRS)

    Fleury, C.

    1989-01-01

    In this paper, various methods based on convex approximation schemes are discussed that have demonstrated strong potential for efficient solution of structural optimization problems. First, the convex linearization method (Conlin) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both Conlin and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semiempirical rules. Attention is next directed toward methods that use diagonal second derivatives in order to provide a sound basis for building up high-quality explicit approximations of the behavior constraints. In particular, it is shown how second-order information can be effectively used without demanding a prohibitive computational cost. Various first-order and second-order approaches are compared by applying them to simple problems that have a closed form solution.

  19. sensitivity analysis in convex quadratic optimization: invariant ...

    E-print Network

    Aug 20, 2004 ... Keywords: Parametric optimization; Sensitivity analysis; Quadratic opti- ... when the problem has multiple optimal (and thus degenerate) solutions, ...... present, and future, Computers and Chemical Engineering, 23, 667–. 682.

  20. A convex programming framework for optimal and bounded suboptimal well field management

    NASA Astrophysics Data System (ADS)

    Dorini, G. F.; Thordarson, F. Ö.; Bauer-Gottwein, P.; Madsen, H.; Rosbjerg, D.; Madsen, H.

    2012-06-01

    This paper presents a groundwater management model, considering the interaction between a confined aquifer and an unlooped Water Distribution Network (WDN) conveying the groundwater into the Water Works distribution mains. The pumps are controlled by regulating the characteristic curves. The objective of the management is to minimize the total cost of pump operations over a multistep time horizon, while fulfilling a set of time-varying management constraints. Optimization in groundwater management and pressurized WDNs has been widely investigated in the literature. Problem formulations are often convex, hence global optimality can be attained by a wealth of algorithms. Among these, Interior Point methods are extensively employed for practical applications, as they are capable of efficiently solving large-scale problems. Despite this, management models explicitly embedding both systems without simplifications are rare, and they usually involve heuristic techniques. The main limitation with heuristics is that neither optimality nor suboptimality bounds can be guaranteed. This paper extends the proof of convexity to mixed management models, enabling the use of Interior Point techniques to compute globally optimal management solutions. If convexity is not achieved, it is shown how suboptimal solutions can be computed, and how to bound their deviation from optimality. Experimental results obtained by testing the methodology on a well field located near Copenhagen (DK) show that management solutions can consistently perform within 99.9% of the true optimum. Furthermore, it is shown that not considering the Water Distribution Network in the optimization is likely to result in infeasible management solutions.

  1. Ordered subsets convex algorithm for 3D terahertz transmission tomography.

    PubMed

    Recur, B; Balacey, H; Bou Sleiman, J; Perraud, J B; Guillet, J-P; Kingston, A; Mounaix, P

    2014-09-22

    We investigate in this paper a new reconstruction method in order to perform 3D Terahertz (THz) tomography using a continuous wave acquisition setup in transmission mode. This method is based on the Maximum Likelihood for TRansmission tomography (ML-TR) first developed for X-ray imaging. We optimize the Ordered Subsets Convex (OSC) implementation of the ML-TR by including the Gaussian propagation model of THz waves and take into account the intensity distributions of both blank calibration scan and dark-field measured on THz detectors. THz ML-TR reconstruction quality and accuracy are discussed and compared to other tomographic reconstructions. PMID:25321798

  2. Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging

    E-print Network

    Girard, Julien N; Starck, Jean Luc; Corbel, Stéphane; Woiselle, Arnaud; Tasse, Cyril; McKean, John P; Bobin, Jérôme

    2015-01-01

    Compressed sensing theory is slowly making its way to solve more and more astronomical inverse problems. We address here the application of sparse representations, convex optimization and proximal theory to radio interferometric imaging. First, we expose the theory behind interferometric imaging, sparse representations and convex optimization, and second, we illustrate their application with numerical tests with SASIR, an implementation of the FISTA, a Forward-Backward splitting algorithm hosted in a LOFAR imager. Various tests have been conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution of a factor ~2) with point sources as compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five time less residuals) of the extended emissions as compared to CLEAN. With the advent of large radiotel...

  3. On the Complexity of Optimization Problems for 3-Dimensional Convex Polyhedra and Decision Trees

    E-print Network

    Goodrich, Michael T.

    ... for realizing a planar 3-connected triangulation as a convex polyhedron, which may be of independent interest. Key words: convex polyhedra, approximation, Steinitz's theorem, planar graphs, art gallery theorems. ... They are the product of convex hull algorithms, and are key components for problems in robot motion planning ...

  4. A Deterministic Analysis of an Online Convex Mixture of Experts Algorithm.

    PubMed

    Ozkan, Huseyin; Donmez, Mehmet A; Tunc, Sait; Kozat, Suleyman S

    2015-07-01

    We analyze an online learning algorithm that adaptively combines outputs of two constituent algorithms (or the experts) running in parallel to estimate an unknown desired signal. This online learning algorithm is shown to achieve and in some cases outperform the mean-square error (MSE) performance of the best constituent algorithm in the steady state. However, the MSE analysis of this algorithm in the literature uses approximations and relies on statistical models on the underlying signals. Hence, such an analysis may not be useful or valid for signals generated by various real-life systems that show high degrees of nonstationarity, limit cycles and that are even chaotic in many cases. In this brief, we produce results in an individual sequence manner. In particular, we relate the time-accumulated squared estimation error of this online algorithm at any time over any interval to the one of the optimal convex mixture of the constituent algorithms directly tuned to the underlying signal in a deterministic sense without any statistical assumptions. In this sense, our analysis provides the transient, steady-state, and tracking behavior of this algorithm in a strong sense without any approximations in the derivations or statistical assumptions on the underlying signals such that our results are guaranteed to hold. We illustrate the introduced results through examples. PMID:25167557
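
    A generic illustration of a convexly constrained mixture of two experts, with the combination weight updated by projected online gradient descent on the squared prediction error; this is not necessarily the exact algorithm analyzed in the record, and all signals and parameters below are synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    T, eta = 2000, 0.05
    w = 0.5                                            # mixture weight, kept in [0, 1]

    signal = np.sin(0.01 * np.arange(T))
    expert1 = signal + 0.05 * rng.normal(size=T)       # good expert
    expert2 = 0.3 * signal + 0.5 * rng.normal(size=T)  # poor expert

    sq_err = 0.0
    for t in range(T):
        y_hat = w * expert1[t] + (1 - w) * expert2[t]  # convex combination of the experts
        err = y_hat - signal[t]
        sq_err += err ** 2
        w = np.clip(w - eta * err * (expert1[t] - expert2[t]), 0.0, 1.0)  # projected step

    print("final weight on expert 1:", round(float(w), 3), "avg sq. error:", sq_err / T)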

  5. Convex Optimization-based Beamforming: From Receive to Transmit and Network Designs

    Microsoft Academic Search

    Alex B. Gershman; Nicholas D. Sidiropoulos; Shahram Shahbazpanahi; Mats Bengtsson

    In this article, an overview of advanced convex optimization approaches to multi-sensor beamforming is pre- sented, and connections are drawn between different types of optimization-based beamformers that apply to a broad class of receive, transmit, and network beamformer design problems. It is demonstrated that convex optimization provides an indispensable set of tools for beamforming, enabling rigorous formulation and effective solution

  6. Convex dynamics: Unavoidable difficulties in bounding some greedy algorithms

    NASA Astrophysics Data System (ADS)

    Nowicki, Tomasz; Tresser, Charles

    2004-03-01

    A greedy algorithm for scheduling and digital printing with inputs in a convex polytope, and vertices of this polytope as successive outputs, has recently been proven to be bounded for any convex polytope in any dimension. This boundedness property follows readily from the existence of some invariant region for a dynamical system equivalent to the algorithm, which is what one proves. While the proof, and some constructions of invariant regions that can be made to depend on a single parameter, are reasonably simple for convex polygons in the plane, the proof of boundedness gets quite complicated in dimension three and above. We show here that such complexity is somehow justified by proving that the most natural generalization of the construction that works for polygons does not work in any dimension above two, even if we allow for as many parameters as there are faces. We first prove that some polytopes in dimension greater than two admit no invariant region to which they are combinatorially equivalent. We then modify these examples to get polytopes such that no invariant region can be obtained by pushing out the borders of the half spaces that intersect to form the polytope. We also show that another mechanism prevents some simplices (the simplest polytopes in any dimension) from admitting invariant regions to which they would be similar. By contrast in dimension two, one can always get an invariant region by pushing these borders far enough in some correlated way; for instance, pushing all borders by the same distance builds an invariant region for any polygon if the push is at a distance big enough for that polygon. To motivate the examples that we provide, we discuss briefly the bifurcations of polyhedra associated with pushing half spaces in parallel to themselves. In dimension three, the elementary codimension one bifurcation resembles the unfolding of the elementary degenerate singularity for codimension one foliations on surfaces. As the subject of this paper is new for the communities most interested in Chaos, we take some care in describing various links of our problem to classical issues (in particular linked to Diophantine approximation) as well as to various technological or commercial issues, exemplified, respectively, by digital printing and a problem in scheduling.
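
    A rough sketch of the greedy scheme as we read this abstract: inputs lie in a convex polytope, each output is a vertex, and the accumulated error is kept bounded by always emitting the vertex closest to (error + input). The polytope, input distribution, and horizon below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    # Vertices of the polytope (here: a triangle in the plane).
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

    error = np.zeros(2)
    max_error_norm = 0.0
    for _ in range(10000):
        w = rng.dirichlet(np.ones(3))            # random convex combination weights
        u = w @ vertices                         # input point inside the polytope
        target = error + u
        v = vertices[np.argmin(np.linalg.norm(vertices - target, axis=1))]  # greedy vertex
        error = target - v                       # accumulated error (bounded, cf. the record)
        max_error_norm = max(max_error_norm, np.linalg.norm(error))

    print("max accumulated-error norm:", max_error_norm)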

  7. Firefly Algorithms for Multimodal Optimization

    E-print Network

    Yang, Xin-She

    2010-01-01

    Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research.
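
    A compact sketch of the core firefly update described above (attractiveness decaying with squared distance, plus a small random walk), applied to a toy multimodal function; the parameter values are illustrative, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    obj = lambda x: (x[..., 0] ** 2 + x[..., 1] ** 2) + 2.0 * np.sin(3 * x[..., 0]) ** 2

    n, dim, iters = 15, 2, 200
    beta0, gamma, alpha = 1.0, 1.0, 0.05
    X = rng.uniform(-3, 3, size=(n, dim))

    for _ in range(iters):
        f = obj(X)                                       # brightness = objective value
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                          # firefly j is "brighter"
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
        alpha *= 0.97                                    # cool down the random walk

    best = X[np.argmin(obj(X))]
    print("best point:", np.round(best, 3), "value:", round(float(obj(best)), 4))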

  8. Optimal static-dynamic hedges for exotic options under convex risk measures

    Microsoft Academic Search

    Aytaç İlhan; Mattias Jonsson; Ronnie Sircar

    2009-01-01

    We study the problem of optimally hedging exotic derivatives positions using a combination of dynamic trading strategies in underlying stocks and static positions in vanilla options when the performance is quantified by a convex risk measure. We establish conditions for the existence of an optimal static position for general convex risk measures, and then analyze in detail the case of

  9. On an Extension of Condition Number Theory to Non-Conic Convex Optimization

    E-print Network

    Ordóñez, Fernando

    On an Extension of Condition Number Theory to Non-Conic Convex Optimization. Robert M. Freund. ... the modern theory of condition numbers for conic convex optimization: z* := min_x c^T x s.t. Ax - b ∈ C_Y, x ∈ C_X ... extend the modern theory of condition numbers to the problem format (GPd). As a byproduct, we are able ...

  10. Foundations Algorithm Components Numerical Optimization Genetic Programming Genetic Algorithms

    E-print Network

    Kjellström, Hedvig

    (Slide excerpt; outline: Foundations, Algorithm Components, Numerical Optimization, Genetic Programming, Genetic Algorithms.)

  12. libCreme: An optimization library for evaluating convex-roof entanglement measures

    E-print Network

    Beat Röthlisberger; Jörg Lehmann; Daniel Loss

    2011-07-22

    We present the software library libCreme which we have previously used to successfully calculate convex-roof entanglement measures of mixed quantum states appearing in realistic physical systems. Evaluating the amount of entanglement in such states is in general a non-trivial task requiring the solution of a highly non-linear complex optimization problem. The algorithms provided here are able to do this for a large and important class of entanglement measures. The library is mostly written in the Matlab programming language, but is fully compatible with the free and open-source Octave platform. Some inefficient subroutines are written in C/C++ for better performance. This manuscript discusses the most important theoretical concepts and workings of the algorithms, focussing on the actual implementation and usage within the library. Detailed examples in the end should make it easy for the user to apply libCreme to specific problems.

  13. Optimal bit allocation via the generalized BFOS algorithm

    Microsoft Academic Search

    Eve A. Riskin

    1991-01-01

    We analyze the use of the generalized Breiman, Friedman, Olshen, and Stone (BFOS) algorithm, a recently developed technique for variable rate vector quantizer design, for optimal bit allocation. It is shown that if each source has a convex quantizer function then the complexity of the algorithm is low. Key words: bit allocation, vector quantization, tree coding. Introduction: In bit allocation, a given number of bits ...
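
    Not the tree-based generalized BFOS procedure itself, but the simpler greedy marginal-return allocation it relates to: with convex, decreasing per-source distortion curves, giving each bit to the source with the largest distortion drop yields an optimal integer allocation. The distortion model below is an assumption for illustration.

    import numpy as np

    # Assumed per-source distortion model: D_i(b) = var_i * 2**(-2*b)  (convex in b).
    variances = np.array([4.0, 1.0, 0.25, 0.0625])
    distortion = lambda var, b: var * 2.0 ** (-2 * b)

    total_bits = 8
    bits = np.zeros(len(variances), dtype=int)
    for _ in range(total_bits):
        drops = [distortion(v, b) - distortion(v, b + 1) for v, b in zip(variances, bits)]
        bits[int(np.argmax(drops))] += 1          # give the bit where it helps most

    total = sum(distortion(v, b) for v, b in zip(variances, bits))
    print("allocation:", bits, "total distortion:", total)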

  14. A Primal-Dual Algorithmic Framework for Constrained Convex ...

    E-print Network

    2014-06-20

    Jun 20, 2014 ... f via reformulations, such as lifting, as in the interior point method using ... lower bounds on the objective (or its gradient), and aid the theoretical design ...... for convex models and applications to image restoration, registration.

  15. BROADBAND SENSOR LOCATION SELECTION USING CONVEX OPTIMIZATION IN VERY LARGE SCALE ARRAYS

    E-print Network

    Balan, Radu V.

    BROADBAND SENSOR LOCATION SELECTION USING CONVEX OPTIMIZATION IN VERY LARGE SCALE ARRAYS. Yenming M. ... Keywords: pattern design, sensor location selection, very large scale arrays, convex optimization, simulated annealing. Introduction: Consider a large scale sensor array having N sensors that monitors a surveillance ...

  16. Optimal transportation in for a distance cost with a convex constraint

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Jiang, Feida; Yang, Xiao-Ping

    2015-06-01

    We prove existence of an optimal transportation map for the Monge-Kantorovich problem associated with a cost function c(x, y) with a convex constraint. The cost function coincides with the Euclidean distance |x - y| if the displacement y - x belongs to a given closed convex set C with at most countably many flat parts, and it is infinite otherwise.

  17. A Perspective-Based Convex Relaxation for Switched-Affine Optimal Control

    E-print Network

    ... a mixed-integer convex program (MICP), based on perspective functions. Relaxing the integer constraints of this MICP ... a well-known MICP reduction (via conversion to a mixed logical dynamical system); our numerical study indicates ... lower bounds on the optimal value can be obtained by relaxing the integer ...

  18. Ant Algorithms for Discrete Optimization

    E-print Network

    Ducatelle, Frederick

    Ant Algorithms for Discrete Optimization. Marco Dorigo and Gianni Di Caro, IRIDIA, Université Libre de Bruxelles; Luca M. Gambardella, IDSIA, Switzerland. Abstract: This paper overviews recent work on ant algorithms, that is, algorithms for discrete optimization which took inspiration from the observation of ant colonies' foraging behavior ...

  19. Ant Algorithms for Discrete Optimization

    E-print Network

    Gambardella, Luca Maria

    Ant Algorithms for Discrete Optimization. Marco Dorigo and Gianni Di Caro, IRIDIA, Université Libre de Bruxelles; Luca M. Gambardella, IDSIA, Switzerland. Abstract: This paper overviews recent work on ant algorithms, that is, algorithms for discrete optimization which took inspiration from the observation of ant colonies' foraging behavior ...

  20. Dynamic Planar Convex Hull with Optimal Query Time and O(log n log log n) Update Time

    E-print Network

    Riko Jacob

    Dynamic Planar Convex Hull with Optimal Query Time and O(log n · log log n) Update Time. Gerth Stølting Brodal and Riko Jacob ({gerth,rjacob}@brics.dk). Abstract: The dynamic maintenance of the convex hull of a set of points ... in O(log n · log log n) time, and various queries about the convex hull in optimal O(log n) worst-case time ...

  1. Computational comparisons of dual conjugate gradient algorithms for strictly convex networks

    Microsoft Academic Search

    Chih-hang Wu; Jose A. Ventura; Sharon Browning

    1998-01-01

    This paper presents a Lagrangian dual conjugate gradient algorithm for solving a variety of nonlinear network problems with strictly convex, differentiable, and separable objective functions. The proposed algorithm belongs to an iterative dual scheme, which converges to a point within a given tolerance-relative residual error. By exploiting the special structure of the network constraints, this approach is able to solve

  2. Fast Laser Cutting Optimization Algorithm

    Microsoft Academic Search

    B. Adelmann; R. Hellmann

    2011-01-01

    To obtain high quality results in laser fusion cutting, generally, a time and cost intensive optimization process has to be run. We report on a fast algorithm to optimize the laser parameters to get a burr free laser cut. The algorithm includes design of experiments and one-factor-at-a-time methods. The algorithm describes the whole optimization from the first to the optimum

  3. Stochastic methods for large-scale linear problems, variational inequalities, and convex optimization

    E-print Network

    Wang, Mengdi

    2013-01-01

    This thesis considers stochastic methods for large-scale linear systems, variational inequalities, and convex optimization problems. I focus on special structures that lend themselves to sampling, such as when the ...

  4. Two Algorithms for Determining Volumes of Convex Polyhedra

    Microsoft Academic Search

    Jacques Cohen; Timothy J. Hickey

    1979-01-01

    Determining volumes of convex n-dimensional polyhedra defined by a linear system of inequalities is useful in program analysis. Two methods for computing these volumes are proposed: (1) summing the volumes of simplices which form the polyhedron, and (2) summing the volumes of (increasingly smaller) parallelepipeds which can be fit into the polyhedron. Assuming that roundoff errors are small, the first ...

  5. A Comparison of Two Fast Algorithms for Computing the Distance between Convex Polyhedra

    Microsoft Academic Search

    Stephen Cameron

    1996-01-01

    The problem of tracking the distance between two convex polyhedra is finding applications in many areas of robotics. The algorithm of Lin and Canny is a well-known fast solution to this problem, but by recasting the algorithms into configuration space we show that a minor modification to the earlier algorithm of Gilbert, Johnson and Keerthi also gives this algorithm the same expected cost. Introduction: Methods for computing ...

  6. Analysis of Backtrack Algorithms for Listing All Vertices and All Faces of a Convex Polyhedron

    Microsoft Academic Search

    Komei Fukuda; Thomas M. Liebling; François Margot

    1997-01-01

    In this paper, we investigate the applicability of the backtrack technique to solve the vertex enumeration problem and the face enumeration problem for a convex polyhedron given by a system of linear inequalities. We show that there is a linear-time backtrack algorithm for the face enumeration problem whose space complexity is polynomial in the input size, but the vertex ...

  7. Non-euclidean restricted memory level method for large-scale convex optimization

    Microsoft Academic Search

    Aharon Ben-tal; Arkadi Nemirovski

    2005-01-01

    We propose a new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over simple domains. The characteristic features of the method are (a) the possibility to adjust the scheme to the geometry of the feasible set, thus allowing to get (nearly) dimension-independent (and nearly optimal in the large-scale case) rate-of-convergence results for minimization of a convex Lipschitz continuous function ...

  8. Swarm Optimization Algorithms Incorporating Design Sensitivities

    Microsoft Academic Search

    Kazuhiro Izui; Shinji Nishiwaki; Masataka Yoshimura

    Swarm algorithms such as Particle Swarm Optimization (PSO) are non-gradient probabilistic optimization algorithms that have been successfully applied to obtain global optimal solutions for complex problems such as multi-peak problems. However, these algorithms have not been applied to complicated structural and mechanical optimization problems since their local optimization capability is still inferior to general numerical optimization methods. This paper ...

  9. Fairness in optimal routing algorithms 

    E-print Network

    Goos, Jeffrey Alan

    1988-01-01

    A study of fairness in multiple path optimal routing algorithms is discussed. Fairness measures are developed to evaluate multiple path routing in virtual circuit and datagram implementations. Several objective ...

  10. Analog circuit optimization using evolutionary algorithms and convex optimization

    E-print Network

    Aggarwal, Varun

    2007-01-01

    In this thesis, we analyze state-of-art techniques for analog circuit sizing and compare them on various metrics. We ascertain that a methodology which improves the accuracy of sizing without increasing the run time or the ...

  11. Ant Algorithms for Discrete Optimization

    E-print Network

    Hutter, Frank

    Ant Algorithms for Discrete Optimization. Marco Dorigo and Gianni Di Caro, IRIDIA, Université Libre de Bruxelles; Luca M. Gambardella, IDSIA, Lugano, Switzerland. Keywords: ant algorithms, ant colony optimization, swarm intelligence, metaheuristics, natural computation. Abstract: ...

  12. Evolutionary Algorithms and Matroid Optimization Joachim Reichel #

    E-print Network

    Evolutionary Algorithms and Matroid Optimization Problems Joachim Reichel # Department the performance of evolutionary algorithms on various matroid optimization problems that encompass a vast number, Performance Keywords evolutionary algorithms, matroids, minimum weight basis, matroid intersection, randomized

  13. An Effective Branch-and-Bound Algorithm for Convex Quadratic ...

    E-print Network

    Christoph Buchheim

    ... singular, so that E(Q, x) may be a degenerate ellipsoid. Moreover, for a parameter in R+, ...... vectors, we use the Block Korkin-Zolotarev basis reduction algorithm [16] implemented in NTL [17]. ..... by ΣΔ modulation: the case of circulant quadratic forms.

  14. A scalable projective scaling algorithm for l(p) loss with convex penalizations.

    PubMed

    Zhou, Hongbo; Cheng, Qiang

    2015-02-01

    This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions, which have an lp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but there is rarely any report on explicit linear scalability with respect to the problem size. The proposed approach starts with developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions, which satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it only involves matrix-vector multiplication and avoids matrix inversion operations. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems. PMID:25608289

  15. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.

  16. A Study of Near-Field Direct Antenna Modulation Systems Using Convex Optimization

    E-print Network

    Hajimiri, Ali

    A Study of Near-Field Direct Antenna Modulation Systems Using Convex Optimization. Javad Lavaei. This work concerns the design of a class of communication systems referred to as near-field direct antenna modulation (NFDAM) systems. The objective is to propose a s...

  17. Non-Euclidean Restricted Memory Level Method for Large-Scale Convex Optimization

    E-print Network

    Nemirovski, Arkadi

    Non-Euclidean Restricted Memory Level Method for Large-Scale Convex Optimization. Aharon Ben-Tal. ... a Lipschitz continuous function over a Euclidean ball, a standard simplex, and a spectahedron (the set ...). In this paper we propose a new subgradient-type method, the Non-Euclidean Restricted Memory Level (NERML) method.

  18. Minimum-Landing-Error Powered-Descent Guidance for Mars Landing Using Convex Optimization

    E-print Network

    Williams, Brian C.

    Minimum-Landing-Error Powered-Descent Guidance for Mars Landing Using Convex Optimization. Lars Blackmore et al. ... Mars rovers [2]. The 2009 Mars Science Laboratory mission aims to achieve a landing ellipse of around 10 km [3]. ... to Mars and to enable sample return missions, the accuracy with which a lander can be delivered ...

  19. A Convex Optimization Approach to Feedback Scheduling Mongi Ben Gaid, Daniel Simon and Olivier Sename

    E-print Network

    Paris-Sud XI, Université de

    A Convex Optimization Approach to Feedback Scheduling. Mongi Ben Gaid, Daniel Simon and Olivier Sename. M. Ben Gaid and D. Simon are with the NeCS Project-Team, INRIA Rhône-Alpes, Montbonnot Saint Martin, France (e-mails: {Mohamed.Bengaid,Daniel.Simon}@inrialpes.fr). O. Sename is with the Control Systems Department, Gipsa-lab.

  20. Single-Ballot Risk-Limiting Audits Using Convex Optimization Stephen Checkoway

    E-print Network

    Zhou, Yuanyuan

    Single-Ballot Risk-Limiting Audits Using Convex Optimization. Stephen Checkoway (UC San Diego), Anand ... First, audits must be risk-limiting [22]. If the audit certifies the outcome reported in the initial ... enough evidence that the reported outcome is incorrect when it actually is. An audit is risk...

  1. A convex optimization approach to ARMA modeling Tryphon T. Georgiou and Anders Lindquist

    E-print Network

    Georgiou, Tryphon T.

    A convex optimization approach to ARMA modeling. Tryphon T. Georgiou and Anders Lindquist. Abstract: ... systems. Yet many problems concerning modeling of linear systems remain open. A case in point is ARMA (autoregressive moving average) modeling of time-series. In ARMA modeling, least-squares techniques often lead

  2. Regularization Constants in LS-SVMs: a Fast Estimate via Convex Optimization

    E-print Network

    Regularization Constants in LS-SVMs: a Fast Estimate via Convex Optimization. Kristiaan Pelckmans. ... Least Squares Support Vector Machines (LS-SVMs) for regression and classification is considered. The formulation of the LS-SVM training and regularization constant tuning problem (w.r.t. the validation performance ...

  3. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and the experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  4. Constrained multiobjective biogeography optimization algorithm.

    PubMed

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and the experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  5. A modified particle swarm optimization algorithm

    Microsoft Academic Search

    Junjun Li; Xihuai Wang

    2004-01-01

    A modified particle swarm optimization (PSO) algorithm is proposed. This method integrates particle swarm optimization with the simulated annealing algorithm. It can alleviate the problem of the particle swarm becoming trapped in local minima and continually narrows the field of search, so it has higher search efficiency. This algorithm is applied to the function optimization problem and simulation shows
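    A minimal sketch of the kind of hybrid the abstract describes: standard PSO velocity and position updates combined with a simulated-annealing style acceptance test. The coefficients, cooling schedule, and the choice to apply the acceptance test to each particle's personal best are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def pso_sa(f, dim=2, n_particles=20, iters=300, seed=0):
    """Toy hybrid: PSO updates, but a particle's personal best may also accept a
    worse position with probability exp(-delta/T), which helps escape local minima."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    i0 = pbest_val.argmin()
    gbest, gbest_val = pbest[i0].copy(), pbest_val[i0]
    w, c1, c2, T = 0.7, 1.5, 1.5, 1.0                  # assumed coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        delta = vals - pbest_val
        accept = (delta < 0) | (rng.random(n_particles) < np.exp(-np.maximum(delta, 0) / T))
        pbest[accept], pbest_val[accept] = x[accept], vals[accept]
        i = pbest_val.argmin()
        if pbest_val[i] < gbest_val:                   # global best only improves
            gbest, gbest_val = pbest[i].copy(), pbest_val[i]
        T *= 0.98                                      # cooling schedule (assumed)
    return gbest, gbest_val

if __name__ == "__main__":
    rastrigin = lambda p: 10 * p.size + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
    print(pso_sa(rastrigin))
```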

  6. EMPIRICAL ANALYSIS OF OPTIMIZATION ALGORITHMS

    E-print Network

    Magdon-Ismail, Malik

    Empirical Analysis of Optimization Algorithms for Portfolio Allocation. A thesis by Andrew Bolin. Contents include an introduction and historical review, modern portfolio theory, minimizing MDD subject to a return constraint (Min-MDD), and clustering.

  7. Genetic algorithm optimization of entanglement

    SciTech Connect

    Navarro-Mun'oz, Jorge C.; Rosu, H. C.; Lopez-Sandoval, R. [Potosinian Institute of Science and Technology, Apartado Postal 3-74 Tangamanga, 78231 San Luis Potosi (Mexico)

    2006-11-15

    We present an application of the genetic algorithmic computational method to the optimization of the concurrence measure of entanglement for the cases of one dimensional chains, as well as square and triangular lattices in a simple tight-binding approach in which the hopping of electrons is much stronger than the phonon dissipation.

  8. Genetic algorithm optimization of entanglement

    E-print Network

    Jorge C. Navarro-Munoz; H. C. Rosu; R. Lopez-Sandoval

    2006-11-13

    We present an application of a genetic algorithmic computational method to the optimization of the concurrence measure of entanglement for the cases of one dimensional chains, as well as square and triangular lattices in a simple tight-binding approach in which the hopping of electrons is much stronger than the phonon dissipation

  9. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  10. Optimal seismic deconvolution: distributed algorithms

    Microsoft Academic Search

    Konstantinos N. Plataniotis; Sokratis K. Katsikas; Demetrios G. Lainiotis; Anastasios N. Venetsanopoulos

    1998-01-01

    Deconvolution is one of the most important aspects of seismic signal processing. The objective of the deconvolution procedure is to remove the obscuring effect of the wavelet's replica making up the seismic trace and therefore obtain an estimate of the reflection coefficient sequence. This paper introduces a new deconvolution algorithm. Optimal distributed estimators and smoothers are utilized in the proposed

  11. EE 231 Convex Optimization in Engineering Applications (Winter 2014)

    E-print Network

    Mohsenian-Rad, Hamed

    theory. Gradient, Steepest Descent, and Newton's Methods. Applications in engineering. Textbook: S. Boyd. [Chapters 9-11]: Gradient Descent Method, Steepest Descent Method, Newton's Method for Unconstrained Optimization, Newton's Method for Equality Constrained Optimization. Prerequisites: EE 230: Mathematical Methods
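    As a small companion to the course topics listed above, the sketch below contrasts a fixed-step gradient descent update with a Newton update on a smooth, strictly convex test function; the function and step size are made up for illustration.

```python
import numpy as np

# Smooth strictly convex test function: f(x) = log(exp(a.x) + exp(-a.x)) + 0.5*||x||^2
a = np.array([1.0, 2.0])

def f(x):    return np.logaddexp(a @ x, -(a @ x)) + 0.5 * (x @ x)
def grad(x): return np.tanh(a @ x) * a + x
def hess(x): return (1 - np.tanh(a @ x) ** 2) * np.outer(a, a) + np.eye(2)

x_gd = x_nt = np.array([3.0, -2.0])
for k in range(10):
    x_gd = x_gd - 0.1 * grad(x_gd)                          # gradient descent, fixed step
    x_nt = x_nt - np.linalg.solve(hess(x_nt), grad(x_nt))   # Newton step
    print(k, "gd:", round(float(f(x_gd)), 6), "newton:", round(float(f(x_nt)), 6))
```

    Newton's method reaches the minimum in a handful of iterations here, while the fixed-step gradient method approaches it more slowly; this is the contrast the course topics above are built around.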

  12. Firefly Algorithm, Lévy Flights and Global Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She

    Nature-inspired algorithms such as Particle Swarm Optimization and Firefly Algorithm are among the most powerful algorithms for optimization. In this paper, we intend to formulate a new metaheuristic algorithm by combining Lévy flights with the search strategy via the Firefly Algorithm. Numerical studies and results suggest that the proposed Lévy-flight firefly algorithm is superior to existing metaheuristic algorithms. Finally implications for further research and wider applications will be discussed.

  13. Firefly Algorithm, Levy Flights and Global Optimization

    E-print Network

    Yang, Xin-She

    2010-01-01

    Nature-inspired algorithms such as Particle Swarm Optimization and Firefly Algorithm are among the most powerful algorithms for optimization. In this paper, we intend to formulate a new metaheuristic algorithm by combining Levy flights with the search strategy via the Firefly Algorithm. Numerical studies and results suggest that the proposed Levy-flight firefly algorithm is superior to existing metaheuristic algorithms. Finally implications for further research and wider applications will be discussed.
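    A compact sketch of the two ingredients these papers combine: the attraction move of the Firefly Algorithm and a heavy-tailed Lévy-type random step (here generated with the commonly used Mantegna approximation). All parameter values are assumptions, not the paper's settings.

```python
import numpy as np
from math import gamma

def levy_step(rng, dim, beta=1.5):
    """Mantegna-style heavy-tailed step, often used to approximate Levy flights."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_firefly(f, dim=2, n=15, iters=100, alpha=0.1, beta0=1.0, gam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    light = np.array([f(p) for p in x])          # lower value = brighter firefly
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # i is attracted to brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += beta0 * np.exp(-gam * r2) * (x[j] - x[i]) \
                            + alpha * levy_step(rng, dim)
                    light[i] = f(x[i])
        alpha *= 0.97                            # gradually reduce randomness (assumed)
    best = light.argmin()
    return x[best], light[best]

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))
    print(levy_firefly(sphere))
```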

  14. Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.

    PubMed

    Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe

    2015-01-15

    Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. PMID:25462697

  15. Parallel Selective Algorithms for Nonconvex Big Data Optimization

    NASA Astrophysics Data System (ADS)

    Facchinei, Francisco; Scutari, Gesualdo; Sagratella, Simone

    2015-04-01

    We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between" with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.
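    A toy sketch of the Jacobi-versus-Gauss-Seidel distinction mentioned above, using exact block-coordinate minimization on a smooth quadratic (no nonsmooth term and none of the paper's selection rules); the matrix and blocks are made up.

```python
import numpy as np

def block_cd(A, b, blocks, sweeps=50, mode="gauss-seidel"):
    """Minimize 0.5*x'Ax - b'x by exact block-coordinate minimization.
    'jacobi': every block update reads the previous sweep's iterate (parallelizable).
    'gauss-seidel': each block update reads the freshest values (sequential)."""
    x = np.zeros(len(b))
    for _ in range(sweeps):
        snapshot = x.copy()                      # what Jacobi updates read from
        new = x.copy()
        for blk in blocks:
            rest = [i for i in range(len(b)) if i not in blk]
            src = snapshot if mode == "jacobi" else new
            rhs = b[blk] - A[np.ix_(blk, rest)] @ src[rest]
            new[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)
        x = new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((6, 6))
    A = M @ M.T + 20 * np.eye(6)                 # well-conditioned SPD matrix
    b = rng.standard_normal(6)
    blocks = [[0, 1], [2, 3], [4, 5]]
    for mode in ("jacobi", "gauss-seidel"):
        x = block_cd(A, b, blocks, mode=mode)
        print(mode, "residual:", np.linalg.norm(A @ x - b))
```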

  16. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration

    NASA Astrophysics Data System (ADS)

    Chen, Peijun; Huang, Jianguo; Zhang, Xiaoqun

    2013-02-01

    Recently, the minimization of a sum of two convex functions has received considerable interest in a variational image restoration model. In this paper, we propose a general algorithmic framework for solving a separable convex minimization problem from the point of view of fixed point algorithms based on proximity operators (Moreau 1962 C. R. Acad. Sci., Paris I 255 2897-99). Motivated by proximal forward-backward splitting proposed in Combettes and Wajs (2005 Multiscale Model. Simul. 4 1168-200) and fixed point algorithms based on the proximity operator (FP2O) for image denoising (Micchelli et al 2011 Inverse Problems 27 45009-38), we design a primal-dual fixed point algorithm based on the proximity operator (PDFP2Oκ for κ ∈ [0, 1)) and obtain a scheme with a closed-form solution for each iteration. Using the firmly nonexpansive properties of the proximity operator and with the help of a special norm over a product space, we achieve the convergence of the proposed PDFP2Oκ algorithm. Moreover, under some stronger assumptions, we can prove the global linear convergence of the proposed algorithm. We also give the connection of the proposed algorithm with other existing first-order methods. Finally, we illustrate the efficiency of PDFP2Oκ through some numerical examples on image super-resolution, computerized tomographic reconstruction and parallel magnetic resonance imaging. Generally speaking, our method PDFP2O (κ = 0) is comparable with other state-of-the-art methods in numerical performance, while it has some advantages on parameter selection in real applications.
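    The building block of this family of methods is the proximity operator. As a generic illustration (not the PDFP2Oκ iteration itself), the sketch below uses the closed-form prox of the l1 norm, soft thresholding, inside a plain proximal forward-backward loop (forward gradient step, backward prox step) on a synthetic sparse recovery problem.

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of t*||.||_1 (soft thresholding), available in closed form."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam=0.1, iters=300):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal forward-backward splitting:
    a gradient step on the smooth term followed by the prox of the nonsmooth term."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_l1(x - t * (A.T @ (A @ x - b)), lam * t)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = [3, -2, 4, 1.5, -3]
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x = forward_backward(A, b, lam=0.2)
    print("large coefficients recovered at indices:", np.flatnonzero(np.abs(x) > 0.5))
```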

  17. Convexity-Oriented Mapping Method for the Topology Optimization of Electromagnetic Devices Composed of Iron and Coils

    Microsoft Academic Search

    Thibaut Labbe; Bruno Dehez

    2010-01-01

    In order to perform parameter or shape optimizations, an initial topology is required which affects the final solution. This constraint is released in topology optimization methods. They are based on a splitting of the design space into cells, in which they attempt to distribute optimally predefined materials. In topology optimization, a lack of convexity has already been observed by several

  18. Experimental Validation of a Numerical Controller Using Convex Optimization with Linear Matrix Inequalities on a Quarter-Car Suspension System 

    E-print Network

    Chintala, Rohit

    2012-10-19

    Numerical methods of designing control systems are currently an active area of research. Convex optimization with linear matrix inequalities (LMIs) is one such method. Control objectives like minimizing the H_2, H_infinity norms, limiting...
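    A minimal sketch of the kind of LMI feasibility problem such numerical controller designs rest on: finding a Lyapunov matrix P > 0 with A^T P + P A < 0 certifies stability of x_dot = A x. It assumes the cvxpy package (with a default conic solver such as SCS) is installed; the system matrix is made up, and this is not the thesis's H_2/H_infinity synthesis.

```python
import numpy as np
import cvxpy as cp

# Toy stable system x_dot = A x (data assumed for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility problem
prob.solve()

print(prob.status)   # 'optimal' here means a Lyapunov certificate was found
print(P.value)
```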

  19. A Recursive Algorithm for Minimizing Non-Smooth Convex Function over n-Dimensional Simplex

    NASA Astrophysics Data System (ADS)

    Morozova, E. Y.

    2008-09-01

    A new method for solving a constrained minimization problem when the feasible region is an n-dimensional simplex is presented. This method generalizes a one-dimensional bisection method to the case n > 1 using a recursive procedure. The convergence of the method for the class of strictly convex functions is proved. The method does not require differentiability of the function, and is guaranteed to converge to the minimizer of non-smooth functions. Computational results demonstrating the effectiveness of the algorithm for minimizing non-smooth functions are presented.

  20. A Weiszfeld-like algorithm for a Weber location problem constrained to a closed and convex set

    E-print Network

    Torres, Germán A

    2012-01-01

    The Weber problem consists of finding a point in $\mathbb{R}^n$ that minimizes the weighted sum of distances from $m$ points in $\mathbb{R}^n$ that are not collinear. An application that motivated this problem is the optimal location of facilities in the 2-dimensional case. A classical method to solve the Weber problem, proposed by Weiszfeld in 1937, is based on a fixed point iteration. In this work a Weber problem constrained to a closed and convex set is considered. A Weiszfeld-like algorithm, well defined even when an iterate is a vertex, is presented. The iteration function $Q$ that defines the proposed algorithm is based mainly on an orthogonal projection over the feasible set, combined with the iteration function of the modified Weiszfeld algorithm presented by Vardi and Zhang in 2001. It can be proved that $x^*$ is a fixed point of the iteration function $Q$ if and only if $x^*$ is the solution of the constrained Weber problem. Besides that, under certain hypotheses, $x^*$ satisfies the KKT optimali...
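    A heuristic sketch of the idea: one classical Weiszfeld step for the weighted Weber objective followed by an orthogonal projection onto the feasible set (a box here). The handling of iterates that coincide with a data point is simplified to a small perturbation, so this is not the paper's operator Q; points, weights, and the box are made up.

```python
import numpy as np

def projected_weiszfeld(points, weights, project, x0, iters=200, eps=1e-9):
    """One Weiszfeld step for minimizing sum_i w_i * ||x - a_i||, followed by
    projection onto a closed convex set.  Distances are floored at eps to avoid
    division by zero when an iterate lands exactly on a data point."""
    x = x0.astype(float)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)
        w = weights / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()   # Weiszfeld update
        x = project(x_new)                                    # keep iterate feasible
    return x

if __name__ == "__main__":
    pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
    wts = np.array([1.0, 1.0, 1.0, 2.0])
    box = lambda z: np.clip(z, [2.0, 2.0], [6.0, 6.0])        # projection onto a box
    print(projected_weiszfeld(pts, wts, box, x0=np.array([3.0, 3.0])))
```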

  1. Mixed H2\\/H? control for discrete-time systems via convex optimization

    Microsoft Academic Search

    Isaac Kaminer; Pramod P. Khargonekar; Mario A. Rotea

    1992-01-01

    A mixed H2\\/H? control problem for discrete-time systems is considered for both state-feedback and output-feedback cases. It is shown that these problems can be effectively solved by reducing them to convex programming problems. In the state-feedback case, nearly optimal controllers can be chosen to be static gains, while in the output-feedback case the controller dimension does not exceed the plant

  2. Computing Optimized Representations for Non-convex Polyhedra by Detection and Removal of Redundant Linear Constraints

    Microsoft Academic Search

    Christoph Scholl; Stefan Disch; Florian Pigorsch; Stefan Kupferschmid

    2009-01-01

    We present a method which computes optimized representations for non-convex polyhedra. Our method detects so-called redundant linear constraints in these representations by using an incremental SMT (Satisfiability Modulo Theories) solver and then removes the redundant constraints based on Craig interpolation. The approach is motivated by applications in the context of model checking for Linear Hybrid Automata. Basically, it can be

  3. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve real-world optimization problems, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  4. Worst-Case Violation of Sampled Convex Programs for Optimization ...

    E-print Network

    2008-12-23

    Uncertain programs have been developed to deal with optimization problems including inexact data, i.e., ... Even in such a situation, we can derive a meaningful upper bound ... The above inequality is proved by using the binomial formula. See Theorem 3.1 ... respectively. Here, K2 denotes a cone with a vertex at ū.

  5. Evolutionary optimization algorithm by entropic sampling

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Yong; Han, Seung Kee

    1998-03-01

    A combinatorial optimization algorithm, the genetic-entropic algorithm, is proposed. This optimization algorithm is based on genetic algorithms and natural selection via entropic sampling. With entropic sampling, the algorithm helps to escape local optima in complex optimization problems. To test the performance of the algorithm, we adopt the NK model (N is the number of bits in the string and K is the degree of epistasis) and compare the performance of the proposed algorithm with that of the canonical genetic algorithm. It is found that the higher the K value, the better this algorithm can escape local optima and search near the global optimum. The characteristics of this algorithm in terms of the power spectrum analysis, together with the differences between the two algorithms, are discussed.

  6. Comparative Study of Derivative Free Optimization Algorithms

    Microsoft Academic Search

    Nam Pham; A. Malinowski; T. Bartczak

    2011-01-01

    Derivative free optimization algorithms are often used when it is difficult to find function derivatives, or if finding such derivatives is time consuming. The Nelder-Mead simplex method is one of the most popular derivative free optimization algorithms in the fields of engineering, statistics, and science. This algorithm is favored and widely used because of its fast convergence and

  7. Newton-Raphson consensus for distributed convex optimization

    E-print Network

    Schenato, Luca

    Newton-Raphson consensus for distributed convex optimization. Luca Schenato, University of Padova, April 28th, 2011. Slides on distributed optimization in which neighboring nodes cooperate to find the optimum of a sum of local cost functions f1, ..., f7.

  8. Global optimization algorithms for a CAD workstation

    Microsoft Academic Search

    W. L. Price

    1987-01-01

    This paper describes two new versions of the controlled random search procedure for global optimization (CRS). Designed primarily to suit the user of a CAD workstation, these algorithms can also be used effectively in other contexts. The first, known as CRS3, speeds the final convergence of the optimization by combining a local optimization algorithm with the global search procedure. The

  9. A novel bee swarm optimization algorithm for numerical function optimization

    Microsoft Academic Search

    Reza Akbari; Alireza Mohammadi; Koorush Ziarati

    2010-01-01

    The optimization algorithms which are inspired from intelligent behavior of honey bees are among the most recently introduced population based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and its two extensions for improving its performance are presented. The BSO is a population based optimization technique which is inspired from foraging behavior of honey bees.

  10. Experimental Comparisons of Derivative Free Optimization Algorithms

    E-print Network

    Auger, Anne; Zerpa, Jorge M Perez; Ros, Raymond; Schoenauer, Marc

    2010-01-01

    In this paper, the performances of the quasi-Newton BFGS algorithm, the NEWUOA derivative free optimizer, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Differential Evolution (DE) algorithm and Particle Swarm Optimizers (PSO) are compared experimentally on benchmark functions reflecting important challenges encountered in real-world optimization problems. The dependence of performance on the conditioning of the problem and on the rotational invariance of the algorithms is in particular investigated.

  11. Simulated annealing algorithm for optimal capital growth

    NASA Astrophysics Data System (ADS)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, and this motivates the investigation of applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
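    A toy sketch of the setup: simulated annealing over long-only, fully invested portfolio weights to maximize the sample mean of log(1 + w·r). The returns are synthetic, and the neighborhood move and cooling schedule are assumptions, not the paper's choices.

```python
import numpy as np

def sample_log_growth(w, returns):
    """Empirical expected log growth rate of a fixed-weight portfolio."""
    return np.mean(np.log1p(returns @ w))

def anneal_weights(returns, iters=5000, T0=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = returns.shape[1]
    w = np.full(n, 1.0 / n)                       # start from equal weights
    best_w, best_val = w.copy(), sample_log_growth(w, returns)
    cur_val, T = best_val, T0
    for k in range(iters):
        # neighbor: shift a little mass between two assets, keeping w >= 0, sum(w) = 1
        i, j = rng.choice(n, 2, replace=False)
        step = min(rng.uniform(0, 0.05), w[i])
        cand = w.copy(); cand[i] -= step; cand[j] += step
        val = sample_log_growth(cand, returns)
        if val > cur_val or rng.random() < np.exp((val - cur_val) / T):
            w, cur_val = cand, val
            if val > best_val:
                best_w, best_val = cand.copy(), val
        T = T0 * (1 - k / iters) + 1e-6           # simple linear cooling (assumed)
    return best_w, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    R = 0.001 + 0.02 * rng.standard_normal((1000, 4))   # synthetic daily returns
    w, g = anneal_weights(R)
    print("weights:", np.round(w, 3), "log-growth:", round(float(g), 6))
```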

  12. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T. [Department of Computer Science, Graduate Center, City University of New York, New York, New York 10016 (United States); Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Department of Computer Science, Graduate Center, City University of New York, New York, New York 10016 (United States)

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  13. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    PubMed Central

    Chen, Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-01-01

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK’s interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization. PMID:20964213

  14. SOME RECENT DEVELOPMENTS IN NONLINEAR OPTIMIZATION ALGORITHMS

    Microsoft Academic Search

    A. Sartenaer

    2003-01-01

    This article provides a condensed overview of some of today's major features (both classical and recently developed) used in the design and development of algorithms to solve nonlinear continuous optimization problems. We first consider the unconstrained optimization case to introduce the line-search and trust-region approaches as globalization techniques that force an algorithm to converge from any starting point. We

  15. Optimal RF design using smart evolutionary algorithms

    Microsoft Academic Search

    Peter J. Vancorenland; Carl De Ranter; Michiel Steyaert; Georges G. E. Gielen

    2000-01-01

    This paper presents an optimization algorithm that is able to significantly increase the speed of RF circuit optimizations. The algorithm consists of a series of consecutive evolutionary optimizations of the circuit itself and of a modeled version thereof. The speed increase arises from the difference in evaluation time between the real simulation and the fit evaluation. As circuit ap...

  16. Stochastic Search for Signal Processing Algorithm Optimization

    E-print Network

    Stochastic Search for Signal Processing Algorithm Optimization. Bryan Singer and Manuela Veloso. ... We address the complex task of signal processing optimization. We first introduce and discuss the complexities of this domain. In general, a single signal processing algorithm can be represented by a very

  17. An improved particle swarm optimization algorithm

    Microsoft Academic Search

    Yan Jiang; Tiesong Hu; Chongchao Huang; Xianing Wu

    2007-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed in this paper. In the new algorithm, a population of points is sampled randomly from the feasible space. The population is then partitioned into several sub-swarms, each of which is made to evolve based on the particle swarm optimization (PSO) algorithm. At periodic stages in the evolution, the entire population is shuffled, and then

  18. A modification to particle swarm optimization algorithm

    Microsoft Academic Search

    Huiyuan Fan

    2002-01-01

    In this paper, a modification strategy is proposed for the particle swarm optimization (PSO) algorithm. The strategy adds an adaptive scaling term into the algorithm, which aims to increase its convergence rate and thereby to obtain an acceptable solution with a lower number of objective function evaluations. Such an improvement can be useful in many practical engineering optimizations where the

  19. An adaptive simple particle swarm optimization algorithm

    Microsoft Academic Search

    Fan Chunxia; Wan Youhong

    2008-01-01

    The particle swarm optimization algorithm with constriction factor (CFPSO) has some demerits, such as relapsing into local extrema, slow convergence velocity, and low convergence precision in the late evolutionary stage. An adaptive simple particle swarm optimization with constriction factor (AsCFPSO) is combined with chaotic optimization, and a new CFPSO is developed, i.e., a chaotic optimization-based adaptive simple particle swarm optimization equation

  20. A modified particle swarm optimization algorithm

    Microsoft Academic Search

    Qian-Li Zhang; Xing Li; Quang-Ahn Tran

    2005-01-01

    A modified particle swarm optimization (PSO) algorithm is proposed in this paper to avoid premature convergence with the introduction of mutation operation. The performance of this algorithm is compared to the standard PSO algorithm and experiments indicate that it has better performance with little overhead.

  1. Intelligent perturbation algorithms to space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.

  2. Efficient Convex-Elastic Net Algorithm to Solve the Euclidean Traveling Salesman Problem (IEEE Transactions on Systems, Man, and Cybernetics--Part B: Cybernetics, Vol. 28, No. 4, August 1998)

    E-print Network

    Almulhem, Ahmad

    Efficient Convex-Elastic Net Algorithm to Solve the Euclidean Traveling Salesman Problem. Muhammed Al-Mulhem and Tareq Al-Maghrabi. Abstract--This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem.

  3. An Algorithm for Optimal PLA Folding

    Microsoft Academic Search

    Gary D. Hachtel; A. Richard Newton; Alberto L. Sangiovanni-vincentelli

    1982-01-01

    In this paper we present a graph-theoretic formulation of the optimal PLA folding problem. The class of admissible PLA foldings is defined. Necessary and sufficient conditions for obtaining the optimal folding are given. A subproblem of the optimal problem is shown to be NP-complete, and a heuristic algorithm is given which has proven to be effective on a number of

  4. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 60's, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), which is closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655

  5. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 60's, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), which is closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
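    Since the review singles out Differential Evolution as the overall best performer, a compact reference sketch of the classic DE/rand/1/bin variant may be useful; F and CR below are common default values, not the review's settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, iters=200, F=0.8, CR=0.9, seed=0):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two random
    members, apply binomial crossover, keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.choice([k for k in range(pop) if k != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                    # greedy selection
                X[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return X[best], fit[best]

if __name__ == "__main__":
    rastrigin = lambda p: 10 * p.size + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
    print(differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 5))
```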

  6. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (especially in cases with more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  7. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (especially in cases with more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  8. An efficient algorithm for decomposing a planar polygon into star-convex components and for identifying the kernel of each component

    E-print Network

    Alford, Jennifer Reynolds

    1987-01-01

    ) time, where N is the number of reflex vertices of an n-sided polygon. The thesis reviews the current state of the problem, including algorithms for star-convex decompositions. ... with the interior of the polygon on the left of each edge. A vertex, v_i, is said to be reflex if the angle at the intersection of e_i and e_{i+1} is greater than 180 degrees with respect to the interior of the polygon; otherwise it is said to be convex. A convex polygon is one...

  9. A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones

    E-print Network

    Kojima, Masakazu

    ...-and-project procedure for 0-1 IPs by Balas-Ceria-Cornuéjols '93. (d) SCRM (Successive Convex Relaxation Method) for QOPs by Kojima...

  10. Approximate algorithms for Space Station Maneuver Optimization 

    E-print Network

    Mur-Dongil, Andres

    1998-01-01

    APPROXIMATE ALGORITHMS FOR SPACE STATION MANEUVER OPTIMIZATION A Thesis by ANDRE S MUR-DONGIL Submitted to the OAice of Graduate Studies of Texas ARM University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE... August 1998 Major Subject: Aerospace Engineering APPROXIMATE ALGORITHMS FOR SPACE STATION MANEUVER OPTIMIZATION A Thesis by ANDRES MUR-DONGIL Submitted to Texas A&M University in partial fulfillment of the requirements for the degree of MASTER...

  11. Algorithms for optimizing hydropower system operation

    Microsoft Academic Search

    Jan C. Grygier; Jery R. Stedinger

    1985-01-01

    Successive linear programming, an optimal control algorithm, and a combination of linear programming and dynamic programming (LP-DP) are employed to optimize the operation of multireservoir hydrosystems given a deterministic inflow forecast. The algorithms maximize the value of energy produced at on-peak rates, plus the estimated value of water remaining in storage at the end of the 12-month planning period. The

  12. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases. PMID:25298971

  13. Dynamic Resource Allocation in Cognitive Radio Networks: A Convex Optimization Perspective

    E-print Network

    Zhang, Rui; Cui, Shuguang

    2010-01-01

    This article provides an overview of the state-of-art results on communication resource allocation over space, time, and frequency for emerging cognitive radio (CR) wireless networks. Focusing on the interference-power/interference-temperature (IT) constraint approach for CRs to protect primary radio transmissions, many new and challenging problems regarding the design of CR systems are formulated, and some of the corresponding solutions are shown to be obtainable by restructuring some classic results known for traditional (non-CR) wireless networks. It is demonstrated that convex optimization plays an essential role in solving these problems, in a both rigorous and efficient way. Promising research directions on interference management for CR and other related multiuser communication systems are discussed.

  14. Ant Algorithms Solve Difficult Optimization Problems

    E-print Network

    Libre de Bruxelles, Université

    Ant Algorithms Solve Difficult Optimization Problems Marco Dorigo IRIDIA Universit´e Libre de Bruxelles 50 Avenue F. Roosevelt B-1050 Brussels, Belgium mdorigo@ulb.ac.be Abstract. The ant algorithms research field builds on the idea that the study of the behavior of ant colonies or other social insects

  15. Finding Tradeoffs by Using Multiobjective Optimization Algorithms

    Microsoft Academic Search

    Shigeru Obayashi; Daisuke Sasaki; Akira Oyama

    2005-01-01

    The objective of the present study is to demonstrate performances of Evolutionary Algorithms (EAs) and conventional gradient-based methods for finding Pareto fronts. The multiobjective optimization algorithms are applied to analytical test problems as well as to the real-world problems of a compressor design. The comparison results clearly indicate the superiority of EAs in finding tradeoffs.

  16. Reservoir Operation by Ant Colony Optimization Algorithms

    Microsoft Academic Search

    M. R. Jalali

    2006-01-01

    In this paper, ant colony optimization (ACO) algorithms are proposed for reservoir operation. Through a collection of cooperative agents called ants, a near optimum solution to the reservoir operation problem can be effectively achieved. To apply ACO algorithms, the problem is approached by considering a finite horizon with a time series of inflows, classifying the reservoir volume into several intervals, and

  17. Statistical analysis of optimization algorithms with R

    Microsoft Academic Search

    Thomas Bartz-Beielstein; Mike Preuß; Martin Zaefferer

    2012-01-01

    Based on experiences from several (rather theoretical) tutorials and workshops devoted to the experimental analysis of algorithms at the world's leading conferences in the field of Computational Intelligence, a practical, hands-on tutorial for the statistical analysis of optimization algorithms is presented. This tutorial demonstrates how to analyze results from real experimental studies, e.g., experimental studies in EC, and gives a

  18. An Improved Algorithm for Hydropower Optimization

    Microsoft Academic Search

    K. K. Reznicek; S. P. Simonovic

    1990-01-01

    A new algorithm named energy management by successive linear programming (EMSLP) was developed to solve the optimization problem of the hydropower system operation. The EMSLP algorithm has two iteration levels: at the first level a stable solution is sought, and at the second the interior of the feasible region is searched to improve the objective function whenever its value decreases.

  19. A Modified Particle Swarm Optimizer Algorithm

    Microsoft Academic Search

    Yang Guangyou

    2007-01-01

    This paper presents a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm is introduced, and the particles' diversity is improved by periodically monitoring the aggregation degree of the swarm. In the later stages of the PSO run, a Gaussian mutation strategy is applied to the best particle's position, which enhances the particles' capacity

  20. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a new swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  1. An Emotional Particle Swarm Optimization Algorithm

    Microsoft Academic Search

    Yang Ge; Zhang Rubo

    2005-01-01

    This paper presents a modification of the particle swarm optimization algorithm (PSO) intended to introduce a psychological factor of emotion into the algorithm. In the new algorithm, which is based on a simple perception and emotion psychology model, each particle has its own feeling and reaction to the current position, and it also has a specified emotional factor towards the sense

  2. Randomized Parallel Algorithms in Optimization Stephen Wright

    E-print Network

    Randomized Parallel Algorithms in Optimization. Stephen Wright, University of Wisconsin-Madison, July 2013. Slide fragment: from central memory, evaluate g := f_{i_k}(x); for nonzero components g_v do x_v <- x_v - g_v.

  3. Graph algorithms for clock schedule optimization

    Microsoft Academic Search

    Narendra V. Shenoy; Robert K. Brayton; Alberto L. Sangiovanni-Vincentelli

    1992-01-01

    Performance driven synthesis of sequential circuits relies on techniques such as optimal clocking, retiming and resynthesis. In this paper we address the optimal clocking problem and demonstrate that it is reducible to a parametric shortest path problem. We use constraints that take into account both the short and long paths. The main contributions are efficient graph algorithms to solve the set of
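    A hedged sketch of the reduction the abstract mentions: for a candidate clock period T, setup and hold requirements become difference constraints whose feasibility is a negative-cycle check on a constraint graph (Bellman-Ford), and the smallest feasible T can be found by binary search because setup edge weights grow with T. The constraint form used here is a textbook simplification (no explicit setup time, made-up delays), not the paper's exact formulation.

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford over a difference-constraint graph with an implicit virtual
    source (all distances start at 0).  edges: (u, v, w) encodes x_v - x_u <= w."""
    dist = [0.0] * n
    for _ in range(n):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False
    return True                                   # still relaxing => negative cycle

def min_clock_period(n, paths, hold=0.0, lo=0.0, hi=100.0, tol=1e-4):
    """Binary search for the smallest period T whose timing constraints, written
    as difference constraints on the clock arrival times s_i, are feasible.
    paths: (i, j, dmin, dmax) delay ranges of register-to-register paths."""
    while hi - lo > tol:
        T = 0.5 * (lo + hi)
        edges = []
        for i, j, dmin, dmax in paths:
            edges.append((j, i, T - dmax))        # setup: s_i - s_j <= T - dmax(i,j)
            edges.append((i, j, dmin - hold))     # hold:  s_j - s_i <= dmin(i,j) - hold
        if has_negative_cycle(n, edges):
            lo = T                                # infeasible -> need a longer period
        else:
            hi = T
    return hi

if __name__ == "__main__":
    # three registers in a ring; (dmin, dmax) per path are made-up numbers
    paths = [(0, 1, 1.0, 4.0), (1, 2, 2.0, 6.0), (2, 0, 1.0, 3.0)]
    print(round(min_clock_period(3, paths, hold=0.5), 3))
```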

  4. A data locality optimizing algorithm

    Microsoft Academic Search

    Monica S. Lam

    1991-01-01

    This paper proposes an algorithm that improves the locality of a loop nest by transforming the code via interchange, reversal, skewing and tiling. The loop transformation algorithm is based on two concepts: a mathematical formulation of reuse and locality, and a loop transformation theory that unifies the various transforms as unimodular matrix transformations. The algorithm has been implemented in the SUIF (Stanford

  5. A data locality optimizing algorithm

    Microsoft Academic Search

    Michael E. Wolf; Monica S. Lam

    1991-01-01

    This paper proposes an algorithm that improves the locality of a loop nest by transforming the code via interchange, reversal, skewing and tiling. The loop transformation algorithm is based on two concepts: a mathematical formulation of reuse and locality, and a loop transformation theory that unifies the various transforms as unimodular matrix transformations. The algorithm has been implemented in the

  6. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  7. FILTER-BANK OPTIMIZATION WITH CONVEX OBJECTIVES, AND THE OPTIMALITY OF PRINCIPAL COMPONENT FORMS1

    Microsoft Academic Search

    Sony Akkarakaran; P. P. Vaidyanathan

    This paper proposes a general framework for the optimization of orthonormal filter banks (FB's) for given input statistics. This includes as special cases, many recent results on filter bank optimization for compression. It also solves problems that have not been considered thus far. FB optimization for coding gain maximization (for compression applications) has been well studied before. The optimum FB

  8. Filterbank optimization with convex objectives and the optimality of principal component forms

    Microsoft Academic Search

    Sony Akkarakaran; P. P. Vaidyanathan

    2001-01-01

    This paper proposes a general framework for the optimization of orthonormal filterbanks (FBs) for given input statistics. This includes as special cases, many previous results on FB optimization for compression. It also solves problems that have not been considered thus far. FB optimization for coding gain maximization (for compression applications) has been well studied before. The optimum FB has been

  9. Two stochastic optimization algorithms applied to nuclear reactor core design

    Microsoft Academic Search

    Wagner F. Sacco; Cassiano R. E. de oliveira; Cláudio M. N. A. Pereira

    2006-01-01

    Two stochastic optimization algorithms conceptually similar to Simulated Annealing are presented and applied to a core design optimization problem previously solved with Genetic Algorithms. The two algorithms are the novel Particle Collision Algorithm (PCA), which is introduced in detail, and Dueck's Great Deluge Algorithm (GDA). The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and

  10. Optimal Hops-Based Adaptive Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

    This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the cluster forms so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load, and optimal distance theory is applied to discover the practical optimal routing path that minimizes the total energy for transmission. Simulation results show that OHACA prolongs the life of the network, improves the utilization rate, and transmits more data because of energy balance.

  11. A review of the book "Functional analysis and applied optimization in Banach spaces - Applications to non-convex variational problems" by Fabio Botelho,

    E-print Network

    Henrion, Didier

    A review of the book "Functional analysis and applied optimization in Banach spaces - Applications to non-convex variational problems" by Fabio Botelho, Springer, Cham, Switzerland, 2014. The book draws extensively on the landmark book [I. Ekeland, R. Temam. Convex analysis and variational problems. Elsevier

  12. An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.

    PubMed

    Bechikh, Slim; Chaabani, Abir; Said, Lamjed Ben

    2014-10-30

    Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This fact has made it, nowadays, one of the most powerful search algorithms in solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quasi-linear average time complexity quick nondominated sorting algorithm, thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front. PMID:25373137
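    Since the contribution above hinges on speeding up nondominated sorting, a short reference sketch of the classical fast nondominated sort (as in NSGA-II) shows what is being accelerated; this is the standard O(M N^2) procedure, not the authors' quasi-linear variant.

```python
def fast_nondominated_sort(objectives):
    """Classical fast nondominated sort: returns a list of fronts, each a list of
    indices; front 0 is the Pareto-optimal set.  All criteria are minimized."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]                      # drop the trailing empty front

if __name__ == "__main__":
    objs = [(1, 5), (2, 3), (3, 1), (2, 4), (4, 4), (1, 1)]
    print(fast_nondominated_sort(objs))
```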

  13. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    SciTech Connect

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States)]; Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China)]; Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China)]; Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China)]; Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China)]; Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China)]; Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)]

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient CBCT dataset.

  14. Feature Selection via Modified Gravitational Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2015-03-01

    Feature selection is the process of selecting a subset of relevant and most informative features, which efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which is based on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables, which are called masses and are defined based on extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods on FLAIR and T1-weighted MR brain images. There are two classes: normal and abnormal tissues. Extracted features are divided into groups of five features. The best feature in each group is selected using the N-dimensional gravitational optimization algorithm and a support vector machine classifier. The selected features from each group then form several new groups of five features, and so on, until the desired number of features is selected. The advantage of the NGOA algorithm is that the possibility of being drawn into a local optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real data and simulated brain tumor data.
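
    A minimal sketch of the group-wise selection wrapper described above, with the gravitational search replaced by exhaustive per-group evaluation for brevity (a simplification, not NGOA itself). It assumes scikit-learn is available and uses cross-validated SVM accuracy as the per-feature score; the data and parameter values are illustrative.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def groupwise_select(X, y, n_keep, group_size=5, cv=3):
          """Repeatedly split the features into groups and keep the best feature per group."""
          candidates = list(range(X.shape[1]))
          while len(candidates) > n_keep:
              survivors = []
              for start in range(0, len(candidates), group_size):
                  group = candidates[start:start + group_size]
                  # Score each single feature in the group by cross-validated SVM accuracy.
                  scores = [cross_val_score(SVC(kernel="linear"), X[:, [f]], y, cv=cv).mean()
                            for f in group]
                  survivors.append(group[int(np.argmax(scores))])
              if len(survivors) >= len(candidates):   # no further reduction possible
                  break
              candidates = survivors
          return candidates[:n_keep]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X = rng.normal(size=(120, 25))
          y = (X[:, 3] > 0).astype(int)        # feature 3 is the informative one
          print(groupwise_select(X, y, n_keep=1))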

  15. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration. PMID:25147850
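
    A minimal, generic sketch of the memory mechanism described above: candidates are registered as potential local optima according to their fitness and their distance to already-stored solutions, and the archive is periodically depurated. The class name, distance threshold and capacity are illustrative assumptions, not the MCS reference implementation.

      import math

      class OptimaMemory:
          """Archive of potential local optima for multimodal minimization.

          A candidate is stored if it is farther than `min_dist` from every stored
          solution; otherwise it replaces the nearby entry only when it is fitter.
          """
          def __init__(self, min_dist, capacity=20):
              self.min_dist = min_dist
              self.capacity = capacity
              self.entries = []   # list of (position tuple, fitness)

          def register(self, x, fx):
              for i, (xi, fi) in enumerate(self.entries):
                  if math.dist(x, xi) < self.min_dist:
                      if fx < fi:                 # better solution in the same basin
                          self.entries[i] = (tuple(x), fx)
                      return
              self.entries.append((tuple(x), fx))
              # Depuration: keep only the fittest entries when the archive overflows.
              self.entries.sort(key=lambda e: e[1])
              del self.entries[self.capacity:]

      # Usage on a 1-D multimodal function with minima near x = -3, 0, 3.
      f = lambda x: (x[0] ** 2) * (x[0] - 3.0) ** 2 * (x[0] + 3.0) ** 2 / 100.0
      mem = OptimaMemory(min_dist=1.0)
      for k in range(-60, 61):
          x = (k / 10.0,)
          mem.register(x, f(x))
      print(mem.entries[:3])   # the three fittest registered optima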

  16. Algorithm selection in structural optimization

    E-print Network

    Clune, Rory P. (Rory Patrick)

    2013-01-01

    Structural optimization is largely unused as a practical design tool, despite an extensive academic literature which demonstrates its potential to dramatically improve design processes and outcomes. Many factors inhibit ...

  17. Source optimization using particle swarm optimization algorithm in photolithography

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Li, Sikun; Wang, Xiangzhao; Yan, Guanyong; Yang, Chaoxing

    2015-03-01

    In recent years, with the availability of freeform sources, source optimization has emerged as one of the key techniques for achieving higher resolution without increasing the complexity of mask design. In this paper, an efficient source optimization approach using particle swarm optimization algorithm is proposed. The sources are represented by pixels and encoded into particles. The pattern fidelity is adopted as the fitness function to evaluate these particles. The source optimization approach is implemented by updating the velocities and positions of these particles. The approach is demonstrated by using two typical mask patterns, including a periodic array of contact holes and a vertical line/space design. The pattern errors are reduced by 66.1% and 39.3% respectively. Compared with the source optimization approach using genetic algorithm, the proposed approach leads to faster convergence while improving the image quality at the same time. The robustness of the proposed approach to initial sources is also verified.
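
    For reference, a minimal generic particle swarm optimizer of the kind used above (a textbook global-best PSO, not the authors' lithography code). In the application described, the pixelated source would be flattened into the position vector and the fitness would measure pattern fidelity; here a simple quadratic stand-in fitness is used, and all parameter values are illustrative.

      import numpy as np

      def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
              lo=0.0, hi=1.0, seed=0):
          """Standard global-best PSO minimizing `fitness` over the box [lo, hi]^dim."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, size=(n_particles, dim))        # positions
          v = np.zeros_like(x)                                    # velocities
          pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()                    # global best

          for _ in range(iters):
              r1 = rng.uniform(size=x.shape)
              r2 = rng.uniform(size=x.shape)
              # Velocity update: inertia + cognitive pull + social pull.
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([fitness(p) for p in x])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

      # Stand-in fitness: squared distance of the "source" vector from a target pattern.
      target = np.linspace(0.0, 1.0, 16)
      best, best_f = pso(lambda s: float(np.sum((s - target) ** 2)), dim=16)
      print(round(best_f, 6))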

  18. Optimal parallel quantum query algorithms

    E-print Network

    Stacey Jeffery; Frederic Magniez; Ronald de Wolf

    2015-02-20

    We study the complexity of quantum query algorithms that make p queries in parallel in each timestep. This model is in part motivated by the fact that decoherence times of qubits are typically small, so it makes sense to parallelize quantum algorithms as much as possible. We show tight bounds for a number of problems, specifically Theta((n/p)^{2/3}) p-parallel queries for element distinctness and Theta((n/p)^{k/(k+1)}) for k-sum. Our upper bounds are obtained by parallelized quantum walk algorithms, and our lower bounds are based on a relatively small modification of the adversary lower bound method, combined with recent results of Belovs et al. on learning graphs. We also prove some general bounds, in particular that quantum and classical p-parallel complexity are polynomially related for all total functions f when p is small compared to f's block sensitivity.

  19. An Optimal Drum Scheduling Algorithm

    Microsoft Academic Search

    SAMUEL H. FULLER

    1972-01-01

    Suppose a set of N records must be read or written from a drum, fixed-head disk, or similar storage unit of a computer system. The records vary in length and are arbitrarily located on the surface of the drum. The problem considered here is to find an algorithm that schedules the processing of these records with the minimal total amount

  20. Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.

    2011-01-01

    An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed that the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of the Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
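
    As a toy illustration of posing powered descent as a convex program, the sketch below sets up a heavily simplified second-order cone program with cvxpy (an assumption: the cvxpy package and an SOCP-capable solver are installed). It uses double-integrator dynamics, a thrust-acceleration bound and a fixed time of flight, and omits the mass dynamics, lossless convexification, glide-slope constraint and the enhancements listed above; all numbers are illustrative.

      import numpy as np
      import cvxpy as cp

      N, dt = 40, 1.0                            # discretization steps and step length (s)
      g = np.array([0.0, 0.0, -3.71])            # Mars-like gravity (m/s^2)
      u_max = 12.0                               # max thrust acceleration (m/s^2)
      r0 = np.array([2000.0, 500.0, 1500.0])     # initial position (m)
      v0 = np.array([-30.0, 10.0, -60.0])        # initial velocity (m/s)

      r = cp.Variable((N + 1, 3))                # position trajectory
      v = cp.Variable((N + 1, 3))                # velocity trajectory
      u = cp.Variable((N, 3))                    # commanded thrust acceleration

      constraints = [r[0] == r0, v[0] == v0,     # initial state
                     r[N] == 0, v[N] == 0,       # land at the origin, at rest
                     r[:, 2] >= 0]               # stay above the surface
      for k in range(N):
          constraints += [
              v[k + 1] == v[k] + dt * (u[k] + g),
              r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * (u[k] + g),
              cp.norm(u[k]) <= u_max,            # second-order cone constraint
          ]

      # Fuel use is roughly proportional to the integral of thrust magnitude.
      objective = cp.Minimize(dt * sum(cp.norm(u[k]) for k in range(N)))
      prob = cp.Problem(objective, constraints)
      prob.solve()
      print(prob.status, round(prob.value, 2))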

  1. A Binary Particle Swarm Optimization Algorithm for Lot Sizing Problem

    Microsoft Academic Search

    M. Fatih; Yun-Chia Liang

    This paper presents a binary particle swarm optimization algorithm for the lot sizing problem. The problem is to find order quantities which will minimize the total ordering and holding costs of ordering decisions. Test problems are constructed randomly and solved optimally by the Wagner-Whitin algorithm. Then a binary particle swarm optimization algorithm and a traditional genetic algorithm are coded
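
    Since the benchmark solutions above come from the Wagner-Whitin algorithm, a minimal dynamic-programming sketch of that algorithm is included here for reference (a textbook O(T^2) version assuming a fixed ordering cost per order and a linear per-unit, per-period holding cost; not the paper's binary PSO). The demand data is illustrative.

      def wagner_whitin(demand, order_cost, hold_cost):
          """Minimum total ordering + holding cost for the dynamic lot sizing problem.

          demand:     list of demands per period d_1..d_T
          order_cost: fixed cost K incurred whenever an order is placed
          hold_cost:  cost h of carrying one unit for one period
          Returns (optimal cost, sorted list of ordering periods, 1-indexed).
          """
          T = len(demand)
          best = [0.0] * (T + 1)          # best[t] = optimal cost for periods 1..t
          last_order = [0] * (T + 1)
          for t in range(1, T + 1):
              best[t] = float("inf")
              for j in range(1, t + 1):   # last order placed in period j covers j..t
                  holding = sum(hold_cost * (i - j) * demand[i - 1] for i in range(j, t + 1))
                  cost = best[j - 1] + order_cost + holding
                  if cost < best[t]:
                      best[t], last_order[t] = cost, j
          # Recover the ordering periods by walking back through last_order.
          orders, t = [], T
          while t > 0:
              orders.append(last_order[t])
              t = last_order[t] - 1
          return best[T], sorted(orders)

      print(wagner_whitin([20, 50, 10, 50, 50, 10], order_cost=100, hold_cost=1))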

  2. Chaotic Particle Swarm Optimization Algorithm for Traveling Salesman Problem

    Microsoft Academic Search

    Zhenglei Yuan; Liliang Yang; Yaohua Wu; Li Liao; Guoqiang Li

    2007-01-01

    In this paper, a novel algorithm based on particle swarm optimization (PSO) and chaos optimization algorithm (COA) is presented to solve the traveling salesman problem. Some new operators are proposed to overcome the difficulties of implementing PSO for discrete problems. Meanwhile, embedded with the chaos optimization algorithm (COA), it can enhance a particle's global searching ability so as not to converge

  3. A Simulative Bionic Intelligent Optimization Algorithm: Artificial Searching Swarm Algorithm and Its Performance Analysis

    Microsoft Academic Search

    Tanggong Chen

    2009-01-01

    In this paper, a novel optimization algorithm, the artificial searching swarm algorithm (ASSA), is presented by analyzing the operating principle and uniform framework of bionic intelligent optimization algorithms. ASSA maps the process of solving an optimal design problem onto the process of a searching swarm seeking an optimal goal under a set of rules, and finds the optimal solution through the search

  4. An Improved Algorithm for Hydropower Optimization

    NASA Astrophysics Data System (ADS)

    Reznicek, K. K.; Simonovic, S. P.

    1990-02-01

    A new algorithm named energy management by successive linear programming (EMSLP) was developed to solve the optimization problem of hydropower system operation. The EMSLP algorithm has two iteration levels: at the first level a stable solution is sought, and at the second the interior of the feasible region is searched to improve the objective function whenever its value decreases. The EMSLP algorithm has been tested using Manitoba Hydro system data applied to a single-reservoir system. To evaluate the performance of the algorithm, a comparison has been made with the results obtained by the energy management and maintenance analysis (EMMA) program used in Manitoba Hydro practice. The paper describes the EMSLP algorithm and presents the results of the comparison with EMMA.

  5. Multi-phase generalization of the particle swarm optimization algorithm

    Microsoft Academic Search

    Buthainah Al-kazemi; Chilukuri K. Mohan

    2002-01-01

    Multi-phase particle swarm optimization is a new algorithm to be used for discrete and continuous problems. In this algorithm, different groups of particles have trajectories that proceed with differing goals in different phases of the algorithm. On several benchmark problems, the algorithm outperforms standard particle swarm optimization, genetic algorithm, and evolution programming

  6. The particle swarm optimization algorithm in size and shape optimization

    Microsoft Academic Search

    P. C. Fourie; A. A. Groenwold

    2002-01-01

    Shape and size optimization problems in structural design are addressed using the particle swarm optimization algorithm (PSOA). In our implementation of the PSOA, the social behaviour of birds is mimicked. Individual birds exchange information about their position, velocity and fitness, and the behaviour of the flock is then influenced to increase the probability of migration to regions of high fitness.

  7. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune. PMID:24785823

  8. Automatic Segmentation of Neonatal Images Using Convex Optimization and Coupled Level Sets

    PubMed Central

    Wang, Li; Shi, Feng; Lin, Weili; Gilmore, John H.; Shen, Dinggang

    2011-01-01

    Accurate segmentation of neonatal brain MR images remains challenging mainly due to their poor spatial resolution, inverted contrast between white matter and gray matter, and high intensity inhomogeneity. Most existing methods for neonatal brain segmentation are atlas-based and voxel-wise. Although active contour/surface models with geometric information constraint have been successfully applied to adult brain segmentation, they are not fully explored in the neonatal image segmentation. In this paper, we propose a novel neonatal image segmentation method by combining local intensity information, atlas spatial prior, and cortical thickness constraint in a single level-set framework. Besides, we also provide a robust and reliable tissue surface initialization for the proposed method by using a convex optimization technique. Thus, tissue segmentation, as well as inner and outer cortical surface reconstruction, can be obtained simultaneously. The proposed method has been tested on a large neonatal dataset, and the validation on 10 neonatal brain images (with manual segmentations) shows very promising results. PMID:21763443

  9. A Simple But Effective Evolutionary Algorithm for Complicated Optimization Problems

    E-print Network

    Xu, Y.G.

    A simple but effective evolutionary algorithm is proposed in this paper for solving complicated optimization problems. The new algorithm presents two hybridization operations incorporated with the conventional genetic ...

  10. Genetic algorithms in truss topological optimization

    Microsoft Academic Search

    P. Hajela; E. Lee

    1995-01-01

    The present paper describes the use of a stochastic search procedure that is the basis of genetic algorithms, in developing near-optimal topologies of load-bearing truss structures. The problem addressed is one wherein the structural geometry is created from a specification of load conditions and available support points in the design space. The development of this geometry must satisfy kinematic stability

  11. Optimization algorithm for compact slab lasers

    Microsoft Academic Search

    Changqing Cao; Xiaodong Zeng; Yuying An

    2010-01-01

    The pump structure greatly influences the characteristics of a diode side-pumped laser. To achieve high absorption efficiency and a homogeneous pump-beam distribution simultaneously, a systemic algorithm has been established to optimize the pump structure, where multiple reflections occur on the internal wall of the reflector inside the pump chamber. A novel design of an efficient, highly reliable, and good beam

  12. Reactive power optimization by genetic algorithm

    Microsoft Academic Search

    K. Iba

    1994-01-01

    This paper presents a new approach to optimal reactive power planning based on a genetic algorithm. Many outstanding methods for this problem have been proposed in the past. However, most of these approaches have the common defect of being caught in a local minimum solution. The integer problem, which yields integer-value solutions for discrete controllers/banks, still remains a difficult

  13. Lightweight telescope structure optimized by genetic algorithm

    Microsoft Academic Search

    Mikio Kurita; Hiroshi Ohmori; Masashi Kunda; Hiroaki Kawamura; Noriaki Noda; Takayuki Seki; Yuji Nishimura; Michitoshi Yoshida; Shuji Sato; Tetsuya Nagata

    2010-01-01

    We designed the optics supporting structure (OSS) of a 3.8 m segmented mirror telescope by applying genetic algorithm optimization. The telescope is the first segmented mirror telescope in Japan whose primary mirror consists of 18 petal shaped segment mirrors. The whole mirror is supported by 54 actuators (3 actuators per each segment). In order to realize light-weight and stiff telescope

  14. Optimizing continuous berth allocation by immune algorithm

    Microsoft Academic Search

    Zhi-Hua Hu; Xiao-Long Han; Yi-Zhong Ding

    2009-01-01

    The continuous berth allocation problem (BAPC) solves the BAP with continuous berth space and continuous time to optimize the utilization of space and time of the ports. A nonlinear programming (NLP) model is built and an immune algorithm (IA) is proposed to solve it. The effects of the number of vessels, the berth space length and the length of planning

  15. Genetic Algorithms for Real Parameter Optimization

    Microsoft Academic Search

    Alden H. Wright

    1991-01-01

    This paper is concerned with the application of genetic algorithms to optimization problems over several real parameters. It is shown that k-point crossover (for k small relative to the number of parameters) can be viewed as a crossover operation on the vector of parameters plus perturbations of some of the parameters. Mutation can also be considered as a

  16. GENERALIZED OPTIMIZATION ALGORITHM FOR SPEECH RECOGNITION TRANSDUCERS

    E-print Network

    Allauzen, Cyril

    Weighted automata and transducers provide a common representation for the components of a speech recognition system. In previous work, we ... determinization. However, not all weighted automata and transducers used in large-vocabulary speech recognition

  17. GENERALIZED OPTIMIZATION ALGORITHM FOR SPEECH RECOGNITION TRANSDUCERS

    E-print Network

    Mohri, Mehryar

    Weighted automata and transducers provide a common representation for the components of a speech recognition system. In previous work, we ... determinization. However, not all weighted automata and transducers used in large-vocabulary speech recognition

  18. Row-action Optimization Methods: Bregman's Algorithm

    E-print Network

    Sra, Suvrit

    Row-action Optimization Methods: Bregman's Algorithm. Suvrit Sra, Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712. Bregman's method cyclically enforces one constraint at a time (a linear constraint relating ai, x and bi).

  19. Decomposition and Nondifferentiable Optimization with the Projective Algorithm

    Microsoft Academic Search

    J. L. Goffin; A. Haurie; J. P. Vial

    1992-01-01

    This paper deals with an application of a variant of Karmarkar's projective algorithm for linear programming to the solution of a generic nondifferentiable minimization problem. This problem is closely related to the Dantzig-Wolfe decomposition technique used in large-scale convex programming. The proposed method is based on a column generation technique defining a sequence of primal linear programming maximization problems. Associated

  20. Algorithm for fixed-range optimal trajectories

    NASA Technical Reports Server (NTRS)

    Lee, H. Q.; Erzberger, H.

    1980-01-01

    An algorithm for synthesizing optimal aircraft trajectories for specified range was developed and implemented in a computer program written in FORTRAN IV. The algorithm, its computer implementation, and a set of example optimum trajectories for the Boeing 727-100 aircraft are described. The algorithm optimizes trajectories with respect to a cost function that is the weighted sum of fuel cost and time cost. The optimum trajectory consists of at most three segments: climb, cruise, and descent. The climb and descent profiles are generated by integrating a simplified set of kinematic and dynamic equations wherein the total energy of the aircraft is the independent or time-like variable. At each energy level the optimum airspeeds and thrust settings are obtained as the values that minimize the variational Hamiltonian. Although the emphasis is on an off-line, open-loop computation, eventually the most important application will be in an on-board flight management system.

  1. PIBEA: Prospect Indicator Based Evolutionary Algorithm for Multiobjective Optimization Problems

    E-print Network

    Suzuki, Jun

    This paper proposes an evolutionary multiobjective optimization algorithm (EMOA) that uses a new quality indicator, called the prospect indicator, for parent selection and environmental selection operators. The prospect indicator measures the potential

  2. Algorithmic and Complexity Results for Cutting Planes Derived from Maximal Lattice-Free Convex Sets

    E-print Network

    Basu, Amitabh; Köppe, Matthias

    2011-01-01

    We study a mixed integer linear program with m integer variables and k non-negative continuous variables in the form of the relaxation of the corner polyhedron that was introduced by Andersen, Louveaux, Weismantel and Wolsey [Inequalities from two rows of a simplex tableau, Proc. IPCO 2007, LNCS, vol. 4513, Springer, pp. 1--15]. We describe the facets of this mixed integer linear program via the extreme points of a well-defined polyhedron. We then utilize this description to give polynomial time algorithms to derive valid inequalities with optimal l_p norm for arbitrary, but fixed m. For the case of m=2, we give a refinement and a new proof of a characterization of the facets by Cornuejols and Margot [On the facets of mixed integer programs with two integer variables and two constraints, Math. Programming 120 (2009), 429--456]. The key point of our approach is that the conditions are much more explicit and can be tested in a more direct manner, removing the need for a reduction algorithm. These results allow ...

  3. Tropical Convex Hull Computations

    Microsoft Academic Search

    Michael Joswig

    2008-01-01

    This is a survey on tropical polytopes from the combinatorial point of view and with a focus on algorithms. Tropical convexity is interesting because it relates a number of combinatorial concepts including ordinary convexity, monomial ideals, subdivisions of products of simplices, matroid theory, finite metric spaces, and the tropical Grassmannians. The relationship between these topics is explained via one running

  4. Non-convex optimization for the design of sparse FIR filters

    E-print Network

    Wei, Dennis

    This paper presents a method for designing sparse FIR filters by means of a sequence of p-norm minimization problems with p gradually decreasing from 1 toward 0. The lack of convexity for p < 1 is partially overcome by ...

  5. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.

  6. Stroke volume optimization: the new hemodynamic algorithm.

    PubMed

    Johnson, Alexander; Ahrens, Thomas

    2015-02-01

    Critical care practices have evolved to rely more on physical assessments for monitoring cardiac output and evaluating fluid volume status because these assessments are less invasive and more convenient to use than is a pulmonary artery catheter. Despite this trend, level of consciousness, central venous pressure, urine output, heart rate, and blood pressure remain assessments that are slow to be changed, potentially misleading, and often manifested as late indications of decreased cardiac output. The hemodynamic optimization strategy called stroke volume optimization might provide a proactive guide for clinicians to optimize a patient's status before late indications of a worsening condition occur. The evidence supporting use of the stroke volume optimization algorithm to treat hypovolemia is increasing. Many of the cardiac output monitor technologies today measure stroke volume, as well as the parameters that comprise stroke volume: preload, afterload, and contractility. PMID:25639574

  7. Wireless sensor network path optimization based on particle swarm algorithm

    Microsoft Academic Search

    Xia Zhu; Yulin Zhang

    2011-01-01

    This paper proposes a particle swarm optimization algorithm for Wireless Sensor Network (WSN) path optimization. It designs and adds a mutation operator. The algorithm can find effective optimized WSN routes: not only is the solution quality superior to that of a genetic algorithm, but the success rate also increases. Experimental results verify that the proposed PSO-WSN intelligent method can escape from

  8. Genetic Algorithms Compared to Other Techniques for Pipe Optimization

    Microsoft Academic Search

    Angus R. Simpson; Graeme C. Dandy; Laurence J. Murphy

    1994-01-01

    The genetic algorithm technique is a relatively new optimization technique. In this paper we present a methodology for optimizing pipe networks using genetic algorithms. Unknown decision variables are coded as binary strings. We investigate a three-operator genetic algorithm comprising reproduction, crossover, and mutation. Results are compared with the techniques of complete enumeration and nonlinear programming. We apply the optimization

  9. The Use of Genetic Algorithms in Multilayer Mirror Optimization

    E-print Network

    Hart, Gus

    We have applied the genetic algorithm to extreme ultraviolet (XUV) multilayer mirror optimization. We have adapted the genetic algorithm to design optimal bifunctional mirrors for the IMAGE

  10. Global Optimization Algorithms for Training Product Unit Neural

    E-print Network

    Neumaier, Arnold

    Gradient descent (GD) is possibly the most popular optimization algorithm used to train multilayer NNs. This work shows that particle swarm optimization, genetic algorithms and LeapFrog are efficient alternatives for training product unit neural networks. While GD has shown

  11. Experimental Comparisons of Derivative Free Optimization Algorithms1

    E-print Network

    Paris-Sud XI, Université de

    Evolution Strategies (ES), the Differential Evolution (DE) algorithm and Particle Swarm Optimizers (PSO) are compared experimentally on benchmarks for Black-Box Optimization (BBO). Invited paper at the 8th International Symposium on Experimental Algorithms.

  12. Distribution Systems Reconfiguration using a modified particle swarm optimization algorithm

    Microsoft Academic Search

    A. Y. Abdelaziz; F. M. Mohammed; S. F. Mekhamer; M. A. L. Badr

    2009-01-01

    This paper presents the particle swarm optimization (PSO) algorithm for solving the optimal distribution system reconfiguration problem for power loss minimization. The PSO is a relatively new and powerful intelligence evolution algorithm for solving optimization problems. It is a population-based approach. The PSO is originally inspired from the social behavior of bird flocks and fish schools. The proposed PSO algorithm

  13. DIRECT algorithm : A new definition of potentially optimal ...

    E-print Network

    chiter

    2005-08-26

    The algorithm converges to the global optimal function value if the objective ... Future work should be done on numerical tests to compare ... [5] D. E. Finkel, C. T. Kelley, New Analysis of the DIRECT Algorithm, Copper.

  14. A Genetic Algorithm for Minimax Optimization Problems Jeffrey W. Herrmann

    E-print Network

    Herrmann, Jeffrey W.

    This paper presents the two-space genetic algorithm as a general technique to solve minimax optimization problems. This algorithm maintains ... To illustrate its potential, we use the two-space genetic algorithm to solve a parallel machine

  15. A fuzzy-controlled Hooke-Jeeves optimization algorithm

    Microsoft Academic Search

    Deepak Sankar Somasundaram; Mohamed B. Trabia

    2011-01-01

    This article presents an approach to enhance the Hooke-Jeeves optimization algorithm through the use of fuzzy logic. The Hooke-Jeeves algorithm, similar to many other optimization algorithms, uses predetermined fixed parameters. These parameters do not depend on the objective function values in the current search region. In the proposed algorithm, several fuzzy logic controllers are integrated at the various stages of

  16. Learning Computer Programs with the Bayesian Optimization Algorithm

    E-print Network

    Fernandez, Thomas

    This work applies the Bayesian Optimization Algorithm (BOA), a probabilistic model building genetic algorithm, to the domain of program tree evolution. The new system, BOA programming (BOAP), improves significantly on previous algorithms, such as the (hierarchical) Bayesian Optimization Algorithm (BOA) [6]. BOA is asymptotically more

  17. Multi Swarm and Multi Best particle swarm optimization algorithm

    Microsoft Academic Search

    Junliang Li; Xinping Xiao

    2008-01-01

    This paper proposes a novel particle swarm optimization algorithm: the Multi-Swarm and Multi-Best particle swarm optimization algorithm. The novel algorithm divides the initialized particles into several populations randomly. After calculating certain generations respectively, every population is combined into one population, which continues to calculate until the stop condition is satisfied. At the same time, the novel algorithm updates particles' velocities and positions

  18. Optimality Zone Algorithms for Hybrid Systems: Efficient Algorithms for Optimal Location and Control Computation

    Microsoft Academic Search

    Peter E. Caines; M. Shahid Shaikh

    2006-01-01

    A general Hybrid Minimum Principle (HMP) for hybrid optimal control problems (HOCPs) is presented in [1, 2, 3, 4], and in [4, 5] a class of efficient, provably convergent Hybrid Minimum Principle (HMP) algorithms was obtained based upon the HMP. The notion of optimality zones (OZs) ([3, 4]) provides a theoretical framework for the computation of optimal location (i.e. discrete

  19. Shuffled Frog Leaping Algorithm Based Optimal Reactive Power Flow

    Microsoft Academic Search

    Qingzheng Li

    2009-01-01

    A new approach to ORPF (optimal reactive power flow) based on the SFLA (shuffled frog leaping algorithm) is proposed, and the steps for applying the algorithm to the ORPF problem are given. By applying the algorithm to the IEEE 30-bus system and comparing with the particle swarm optimization (PSO) algorithm and a simple genetic algorithm (SGA), the experimental results show that the algorithm is indeed capable of obtaining

  20. Efficient algorithms for globally optimal trajectories

    Microsoft Academic Search

    John N. Tsitsiklis

    1995-01-01

    We present serial and parallel algorithms for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated to a trajectory optimization problem of the following type. A vehicle starts at a prespecified point xo and follows a unit speed trajectory x(t) inside a region in ℛm until an unspecified time T that the region is

  1. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study for a launch-vehicle fuel-tank and structural design of a wing. The strengths and limitations of GA for launch vehicle design optimization are studied.
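
    A minimal sketch of a genetic algorithm of the kind described above, with a mixed design vector containing one discrete variable (a number-of-engines-like choice) and a few continuous variables. The encoding, the toy fitness and the parameter values are illustrative assumptions, and tournament selection is used here instead of fitness-proportional reproduction for simplicity.

      import random

      random.seed(1)
      ENGINE_CHOICES = [1, 2, 3, 4]          # discrete design variable
      N_CONT = 3                             # continuous design variables in [0, 1]

      def random_design():
          return [random.choice(ENGINE_CHOICES)] + [random.random() for _ in range(N_CONT)]

      def fitness(d):
          # Toy objective (lower is better): prefer 3 engines and continuous values near 0.5.
          return (d[0] - 3) ** 2 + sum((x - 0.5) ** 2 for x in d[1:])

      def crossover(a, b):
          cut = random.randint(1, len(a) - 1)
          return a[:cut] + b[cut:]

      def mutate(d, rate=0.2):
          d = d[:]
          if random.random() < rate:
              d[0] = random.choice(ENGINE_CHOICES)               # discrete mutation
          for i in range(1, len(d)):
              if random.random() < rate:
                  d[i] = min(1.0, max(0.0, d[i] + random.gauss(0.0, 0.1)))  # continuous mutation
          return d

      def tournament(pop, k=3):
          return min(random.sample(pop, k), key=fitness)

      pop = [random_design() for _ in range(40)]
      for gen in range(60):
          pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(len(pop))]
      best = min(pop, key=fitness)
      print(best, round(fitness(best), 4))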

  2. A convex analysis approach for convex multiplicative programming

    Microsoft Academic Search

    Rúbia M. Oliveira; Paulo A. V. Ferreira

    2008-01-01

    Global optimization problems involving the minimization of a product of convex functions on a convex set are addressed in this paper. Elements of convex analysis are used to obtain a suitable representation of the convex multiplicative problem in the outcome space, where its global solution is reduced to the solution of a sequence of quasiconcave minimizations on polytopes. Computational experiments

  3. An FPTAS for Minimizing a Class of Low-Rank Quasi-Concave Functions over a Convex Set

    Microsoft Academic Search

    Vineet Goyal; R. Ravi

    We consider minimizing a class of low rank quasi-concave functions over a convex set and give a fully polynomial time approximation scheme (FPTAS) for the problem. The algorithm is based on a binary search for the optimal objective value which is guided by solving a polynomial number of linear minimization problems over the convex set with appropriate objective functions. Our

  4. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
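
    As a worked illustration of the mean-variance case mentioned above, consider a toy model (not the authors' full formulation) in which ray i contributes variance a_i / f_i and dose c_i * f_i. Minimizing the mean variance subject to a fixed dose budget D then has the closed-form solution f_i proportional to sqrt(a_i / c_i), which follows from the Lagrangian stationarity condition a_i / f_i^2 = lambda * c_i. The coefficient values below are illustrative.

      import numpy as np

      def mean_variance_fluence(a, c, dose_budget):
          """Closed-form fluence minimizing sum(a_i/f_i) subject to sum(c_i*f_i) = D.

          From the stationarity condition a_i/f_i^2 = lam*c_i, each f_i is
          proportional to sqrt(a_i/c_i); the dose budget fixes the overall scale.
          """
          a, c = np.asarray(a, float), np.asarray(c, float)
          shape = np.sqrt(a / c)
          return dose_budget * shape / np.sum(c * shape)

      a = np.array([1.0, 4.0, 9.0])      # per-ray variance coefficients
      c = np.array([1.0, 1.0, 2.0])      # per-ray dose cost coefficients
      f = mean_variance_fluence(a, c, dose_budget=10.0)
      print(f, np.sum(a / f))            # optimal fluence and resulting total variance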

  5. Exact and Approximate Sizes of Convex Datacubes

    NASA Astrophysics Data System (ADS)

    Nedjar, Sébastien

    In various approaches, data cubes are pre-computed in order to efficiently answer Olap queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes or emerging cubes. Previously, we have introduced the concept of convex cube, which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Actually, knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage and the data warehouse administration can be improved. However, the main interest of this size knowledge is to best choose the constraints to apply in order to get a workable result. For aided calibration of the constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of the convex cube, as well as an upper bound which can be yielded very quickly. Moreover, we adapt the nearly optimal algorithm HyperLogLog in order to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is really close to their exact size and can be computed quasi immediately.

  6. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most of the genetic algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions: parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
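
    A minimal sketch of the offspring construction described above: a weighted point on the line connecting two parents, perturbed by normally distributed deviations parallel and orthogonal to that line. The function and parameter names (weight, sigma_par, sigma_orth) and their values are illustrative assumptions, not the BCB reference code.

      import numpy as np

      def bcb_child(parent_a, parent_b, weight=0.5, sigma_par=0.2, sigma_orth=0.2, rng=None):
          """Generate one child from two parents via a bell-curve geometric construct."""
          rng = rng or np.random.default_rng()
          a, b = np.asarray(parent_a, float), np.asarray(parent_b, float)
          base = a + weight * (b - a)              # weighted point on the connecting line
          direction = b - a
          norm = np.linalg.norm(direction)
          if norm == 0.0:                          # identical parents: pure Gaussian jitter
              return base + rng.normal(0.0, sigma_par, size=a.shape)
          unit = direction / norm
          # Random vector with its component along `unit` removed -> an orthogonal direction.
          r = rng.normal(size=a.shape)
          orth = r - np.dot(r, unit) * unit
          orth_norm = np.linalg.norm(orth)
          orth = orth / orth_norm if orth_norm > 0 else np.zeros_like(unit)
          # Normally distributed deviations parallel and orthogonal to the parent line.
          return (base
                  + rng.normal(0.0, sigma_par) * norm * unit
                  + rng.normal(0.0, sigma_orth) * norm * orth)

      rng = np.random.default_rng(0)
      print(bcb_child([0.0, 0.0], [1.0, 1.0], rng=rng))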

  7. Maxima in Convex Regions Mordecai J. Golin*

    E-print Network

    Golin, Mordecai J.

    The expected number of maximal points affects the running time of some convex hull finding algorithms; the classic example is in the analysis of gift wrapping. The corresponding question for convex hull points has been well studied. Renyi and Sulanke showed that the expected number of convex hull points is O(log n) while, if C is convex and has a doubly continuously-differentiable boundary

  8. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to the classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach that combines the efficient strategy from interval global optimization methods and the robustness of evolutionary algorithms is proposed. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in the search for the global optimum.
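
    A minimal sketch of the interval-bounding idea that the hybrid above builds on: evaluate the objective with interval arithmetic over a box, discard boxes whose lower bound exceeds the best known upper bound, and bisect the rest. This is a plain interval branch-and-bound on a toy quadratic, without the evolutionary layer described in the abstract; the objective and tolerance are illustrative.

      def interval_add(a, b): return (a[0] + b[0], a[1] + b[1])

      def interval_square(a):
          lo, hi = a
          candidates = [lo * lo, hi * hi]
          return (0.0 if lo <= 0.0 <= hi else min(candidates), max(candidates))

      def f_interval(box):
          # Interval extension of f(x, y) = (x - 1)^2 + (y + 2)^2 over box = ((xlo,xhi),(ylo,yhi)).
          (xlo, xhi), (ylo, yhi) = box
          return interval_add(interval_square((xlo - 1.0, xhi - 1.0)),
                              interval_square((ylo + 2.0, yhi + 2.0)))

      def f_point(x, y): return (x - 1.0) ** 2 + (y + 2.0) ** 2

      def interval_minimize(box, tol=1e-3):
          best_upper = f_point(*(sum(d) / 2.0 for d in box))   # objective at box midpoint
          work = [box]
          while work:
              b = work.pop()
              lo, _ = f_interval(b)
              if lo > best_upper:                 # the whole box cannot contain the optimum
                  continue
              mid = tuple(sum(d) / 2.0 for d in b)
              best_upper = min(best_upper, f_point(*mid))
              widths = [d[1] - d[0] for d in b]
              if max(widths) < tol:
                  continue
              i = widths.index(max(widths))       # bisect along the widest dimension
              left = tuple((d if j != i else (d[0], mid[i])) for j, d in enumerate(b))
              right = tuple((d if j != i else (mid[i], d[1])) for j, d in enumerate(b))
              work += [left, right]
          return best_upper

      print(round(interval_minimize(((-5.0, 5.0), (-5.0, 5.0))), 6))   # close to 0 at (1, -2)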

  9. Convex optimization for control analysis application to the steam generator water level

    Microsoft Academic Search

    Slim Hbaieb; Pascale Bendotti; Clement-Marc Falinower

    2002-01-01

    In this paper a convex synthesis methodology is applied to a benchmark problem. The objective is to control the water level in the steam generator of a Pressurized Water Reactor. Both robustness and performance constraints are specified for this benchmark. The proposed approach explicitly takes into account constraints in both time and frequency domains. In this method, specifications are expressed

  10. Practical iterative image reconstruction in digital breast tomosynthesis by non-convex TpV optimization

    E-print Network

    Kurien, Susan

    Conventional projection imaging has the limitation that the complex 3D breast structure is projected into a plane, so lesions can be obscured; digital breast tomosynthesis addresses this limitation. (Affiliations include Los Alamos National Laboratory, Los Alamos, NM, and Massachusetts General Hospital, Boston, MA.)

  11. Toward a Proof of Convergence for the Optimal Stepsize Algorithm

    E-print Network

    Keinan, Alon

    Peter Frazier (advisor: Warren Powell), Department of Operations Research and Financial Engineering, Princeton University. The talk concerns the optimal stepsize algorithm's rate of convergence. The optimal stepsize algorithm (OSA) developed by George and Powell [2

  12. Optimal Separable Algorithms to Compute the Reverse Euclidean Distance Transformation

    E-print Network

    Paris-Sud XI, Université de

    Optimal Separable Algorithms to Compute the Reverse Euclidean Distance Transformation and Discrete. In this paper, we present time optimal algorithms to solve the reverse Euclidean distance transformation algorithms to compute the error-free Euclidean Distance Transformation (EDT) for d-dimensional binary images

  13. Computationally efficient optimal power allocation algorithms for multicarrier communication systems

    Microsoft Academic Search

    Brian S. Krongold; Kannan Ramchandran; Douglas L. Jones

    2000-01-01

    We present an optimal, computationally efficient, integer-bit power allocation algorithm for discrete multitone modulation. Using efficient lookup table searches and a Lagrange-multiplier bisection search, our algorithm converges faster to the optimal solution than existing techniques and can replace the use of suboptimal methods because of its low computational complexity. Fast algorithms are developed for the data rate and performance margin
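
    A minimal sketch of the Lagrange-multiplier bisection idea mentioned above, applied to the continuous water-filling relaxation of multicarrier power allocation (the paper treats the discrete integer-bit case with lookup tables; this simplification only illustrates the bisection on the multiplier, and the channel gains and power budget are illustrative).

      import math

      def waterfill_bisection(gains, total_power, tol=1e-9):
          """Allocate power across subcarriers to maximize sum log2(1 + g_i * p_i).

          The optimality condition p_i = max(0, mu - 1/g_i) depends on a single water
          level mu; bisection finds the mu whose total allocated power meets the budget.
          """
          inv = [1.0 / g for g in gains]
          lo, hi = 0.0, max(inv) + total_power          # bracket for the water level
          def allocated(mu):
              return sum(max(0.0, mu - v) for v in inv)
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if allocated(mid) > total_power:
                  hi = mid                               # water level too high
              else:
                  lo = mid
          mu = 0.5 * (lo + hi)
          powers = [max(0.0, mu - v) for v in inv]
          rate = sum(math.log2(1.0 + g * p) for g, p in zip(gains, powers))
          return powers, rate

      powers, rate = waterfill_bisection([2.0, 1.0, 0.2, 0.05], total_power=4.0)
      print([round(p, 3) for p in powers], round(rate, 3))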

  14. Computationally efficient optimal power allocation algorithm for multicarrier communication systems

    Microsoft Academic Search

    Brian S. Krongold; Kannan Ramchandran; Douglas L. Jones

    1998-01-01

    We present an optimal, efficient power allocation algorithm for discrete multitone modulation (DMT). Using efficient lookup table searches and a Lagrange multiplier bisection search, our algorithm converges much faster to the optimal solution than existing techniques and can replace the use of suboptimal methods because of its low computational complexity. A fast algorithm is developed and a pseudocode is provided

  15. Genetic Algorithms for Combinatorial Optimization: The Assembly Line Balancing Problem

    E-print Network

    Ferris, Michael C.

    We consider the application of the genetic algorithm to a particular problem, the Assembly Line Balancing Problem. A general description of genetic algorithms is given, and their specialized use on our

  16. Towards a Genetic Algorithm for Function Optimization Sonja Novkovic

    E-print Network

    This article analyses a version of the genetic algorithm (GA, Holland 1975) designed for function optimization, with features such as non-coding segments, elitist selection and multiple crossover. Key words: Genetic algorithm, Royal

  17. An Estimation of Distribution Particle Swarm Optimization Algorithm

    Microsoft Academic Search

    Mudassar Iqbal; Marco Antonio Montes De Oca

    2006-01-01

    In this paper we present an estimation of distribution particle swarm optimization algorithm that borrows ideas from recent developments in ant colony optimization, which can be considered an estimation of distribution algorithm. In the classical particle swarm optimization algorithm, particles exploit their individual memory to explore the search space. However, the swarm as a whole has

  18. Optimal algorithms and lower partial moment: ex post results

    Microsoft Academic Search

    David N. Nawrocki

    1991-01-01

    Portfolio management in the finance literature has typically used optimization algorithms to determine security allocations within a portfolio in order to obtain the best trade-off between risk and return. These algorithms, despite some improvements, are restrictive in terms of an investor's risk aversion (utility function). Since individual investors have different levels of risk aversion, this paper proposes two portfolio-optimization algorithms

  19. Optimizing Parametric BIST Using Bio-inspired Computing Algorithms

    Microsoft Academic Search

    Nastaran Nemati; Amirhossein Simjour; Amirali Ghofrani; Zainalabedin Navabi

    2009-01-01

    Optimizing the BIST configuration based on the characteristics of the design under test is a complicated and challenging task for test engineers. Since this problem has multiple optimization factors, trapping in local optima is very plausible. Therefore, regular computing algorithms cannot efficiently resolve this problem and utilization of some algorithms is required. In this work, by applying genetic algorithm (GA)

  20. Optimal control of trading algorithms: a general impulse control approach

    E-print Network

    Paris-Sud XI, Université de

    The inclusion of a new effect in the "optimal-control-oriented" original framework gave birth to a specific impulse control formulation of intra-day trading based on the control of trading algorithms. Given a generic parameterized algorithm, we control

  1. The Leap-Frog Algorithm and Optimal Control: Theoretical Aspects

    E-print Network

    Noakes, Lyle

    The Leap-Frog Algorithm and Optimal Control: Theoretical Aspects. C. Yalçın Kaya, School ... (@maths.uwa.edu.au). Abstract: The Leap-Frog Algorithm was originally devised to find geodesics in connected complete ... This work is concerned with generalizing the mathematical rigour of the leap-frog algorithm to a class of optimal control problems

  2. A particle swarm optimization algorithm based on orthogonal design

    Microsoft Academic Search

    Jie Yang; Abdesselam Bouzerdoum; Son Lam Phung

    2010-01-01

    The last decade has witnessed a great interest in using evolutionary algorithms, such as genetic algorithms, evolutionary strategies and particle swarm optimization (PSO), for multivariate optimization. This paper presents a hybrid algorithm for searching a complex domain space, by combining the PSO and orthogonal design. In the standard PSO, each particle focuses only on the error propagated back from the

  3. Simulation of a new hybrid particle swarm optimization algorithm

    Microsoft Academic Search

    Mathew Mithra Noel; Thomas C. Jannett

    2004-01-01

    In this paper a new hybrid particle swarm optimization (PSO) algorithm is introduced which makes use of gradient information to achieve faster convergence without getting trapped in local minima. Simulation results comparing the standard PSO algorithm to the new hybrid PSO algorithm are presented. The De Jong test suite of optimization problems is used to test the performance of all

  4. Gaussian swarm: a novel particle swarm optimization algorithm

    Microsoft Academic Search

    Renato A. Krohling; Lehrstuhl Elektrische

    2004-01-01

    In this paper, a novel particle swarm optimization algorithm based on the Gaussian probability distribution is proposed. The standard particle swarm optimization (PSO) algorithm has some parameters that need to be specified before using the algorithm, e.g., the accelerating constants c1 and c2, the inertia weight w, the maximum velocity Vmax, and the number of particles of the swarm. The
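
    For reference, the parameters listed in this record (w, c1, c2, Vmax) enter the standard PSO velocity and position update, which the Gaussian variant seeks to eliminate. The following is a minimal Python sketch of that standard update for a minimization problem; the objective f, the list-based data layout and the default parameter values are illustrative assumptions, not the paper's own code.

      import random

      def pso_step(positions, velocities, pbest, gbest, f,
                   w=0.7, c1=1.5, c2=1.5, vmax=1.0):
          """One iteration of the standard global-best PSO update (minimization)."""
          for i, x in enumerate(positions):
              for d in range(len(x)):
                  r1, r2 = random.random(), random.random()
                  v = (w * velocities[i][d]
                       + c1 * r1 * (pbest[i][d] - x[d])      # cognitive pull toward personal best
                       + c2 * r2 * (gbest[d] - x[d]))        # social pull toward global best
                  velocities[i][d] = max(-vmax, min(vmax, v))  # clamp to the maximum velocity
                  x[d] += velocities[i][d]
              if f(x) < f(pbest[i]):
                  pbest[i] = list(x)
                  if f(pbest[i]) < f(gbest):
                      gbest[:] = pbest[i]
          return positions, velocities, pbest, gbest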

  5. An evolutionary game based particle swarm optimization algorithm

    Microsoft Academic Search

    Wei-Bing Liu; Xian-Jia Wang

    2008-01-01

    Particle swarm optimization (PSO) is an extensively used evolutionary algorithm. This paper presents a new particle swarm optimizer based on evolutionary games (EGPSO). We map the particles' search for an optimal solution in the PSO algorithm to players' pursuit of maximum utility by choosing strategies in evolutionary games, using replicator dynamics to model the behavior of particles. In order to overcome premature convergence a

  6. Hierarchical particle swarm optimizer for minimizing the non-convex potential energy of molecular structure.

    PubMed

    Cheung, Ngaam J; Shen, Hong-Bin

    2014-11-01

    The stable conformation of a molecule is greatly important for uncovering the secrets of its properties and functions. Generally, the conformation of a molecule is most stable when its potential energy is at a minimum. Accordingly, determining the conformation can be cast as an optimization problem. It is, however, not an easy task to find the single conformation with the lowest energy among all the potential ones, because of the high complexity of the energy landscape and the exponential growth of computation with molecular size. In this paper, we develop a hierarchical and heterogeneous particle swarm optimizer (HHPSO) to deal with the problem of minimizing the potential energy. The proposed method is evaluated over a scalable simplified molecular potential energy function with up to 200 degrees of freedom and a realistic energy function of the pseudo-ethane molecule. The experimental results are compared with six other PSO variants and four genetic algorithms. The results show HHPSO is significantly better than the compared PSOs, with p-values less than 0.01277, over the molecular potential energy function. PMID:25459763

  7. Drilling path optimization by the particle swarm optimization algorithm with global convergence characteristics

    Microsoft Academic Search

    Guang-Yu Zhu; Wei-Bo Zhang

    2008-01-01

    Drilling path optimization is one of the key problems in holes-machining. This paper presents a new approach to solve the drilling path optimization problem belonging to discrete space, based on the particle swarm optimization (PSO) algorithm. Since the standard PSO algorithm is not guaranteed to be global convergent or local convergent, based on the mathematical model, the algorithm is improved

  8. Optimization Online Digest -- July 2012

    E-print Network

    Interior point methods for sufficient LCP in a wide neighborhood of the central path with ... A variable smoothing algorithm for solving convex optimization problems ... Mixed Integer Linear Programming Formulation Techniques ... POST

  9. Evolving the Structure of the Particle Swarm Optimization Algorithms

    Microsoft Academic Search

    Laura Diosan; Mihai Oltean

    2006-01-01

    A new model for evolving the structure of a Particle Swarm Optimization (PSO) algorithm is proposed in this paper. The model is a hybrid technique that combines a Genetic Algorithm (GA) and a PSO algorithm. Each GA chromosome is an array encoding a meaning for updating the particles of the PSO algorithm. The evolved PSO algo- rithm is compared to

  10. Modified artificial bee colony algorithm for reactive power optimization

    NASA Astrophysics Data System (ADS)

    Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani

    2015-05-01

    Bio-inspired algorithms (BIAs) implemented to solve various optimization problems have shown promising results, which is very important in this severely complex real world. The Artificial Bee Colony (ABC) algorithm, a kind of BIA, has demonstrated tremendous results as compared to other optimization algorithms. This paper presents a new modified ABC algorithm, referred to as JA-ABC3, with the aim of enhancing convergence speed and avoiding premature convergence. The proposed algorithm has been simulated on ten commonly used benchmark functions. Its performance has also been compared with other existing ABC variants. To justify its robust applicability, the proposed algorithm has been tested on the Reactive Power Optimization problem. The results have shown that the proposed algorithm has superior performance to other existing ABC variants, e.g. GABC, BABC1, BABC2, BsfABC and IABC, in terms of convergence speed. Furthermore, the proposed algorithm has also demonstrated excellent performance in solving the Reactive Power Optimization problem.

  11. Development and Optimization of Regularized Tomographic Reconstruction Algorithms Utilizing

    E-print Network

    Soatto, Stefano

    ... in object or Fourier domain, which unavoidably introduces noise in the reconstructed images [4, 6]. ... two new algorithms for tomographic reconstruction which incorporate the technique of Equally-Sloped

  12. The Use of Genetic Algorithms in Multilayer Mirror Optimization

    E-print Network

    Hart, Gus

    The Use of Genetic Algorithms in Multilayer Mirror Optimization, by Shannon Lunt, March 1999. [Table-of-contents fragment: encoding of the chromosomes; flow chart of the genetic algorithm; outline.]

  13. Two improved harmony search algorithms for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Jaberipour, Majid; Khorram, Esmaile

    2010-11-01

    This paper describes two new harmony search (HS) meta-heuristic algorithms for engineering optimization problems with continuous design variables. The key difference between these algorithms and the traditional HS method lies in the way the bandwidth (bw) is adjusted. bw is a very important factor for the high efficiency of harmony search algorithms and can be potentially useful in adjusting the convergence rate of the algorithms to the optimal solution. The first algorithm, proposed harmony search (PHS), introduces a new definition of the bandwidth (bw). The second algorithm, improving proposed harmony search (IPHS), is employed to enhance the accuracy and convergence rate of the PHS algorithm. In IPHS, a non-uniform mutation operation is introduced, which is a combination of the Yang bandwidth and the PHS bandwidth. Various engineering optimization problems, including mathematical function minimization problems and structural engineering optimization problems, are presented to demonstrate the effectiveness and robustness of these algorithms. In all cases, the solutions obtained using IPHS are in agreement with or better than those obtained from other methods.
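
    As background for how bw enters the search, the sketch below shows one improvisation step of a basic harmony search in Python; the harmony-memory consideration rate (hmcr), pitch-adjustment rate (par) and the uniform pitch perturbation are standard HS ingredients, while the specific bandwidth definitions of PHS and IPHS are given only in the paper.

      import random

      def improvise(harmony_memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
          """Build one new harmony; bw scales the pitch-adjustment step."""
          new = []
          for d in range(len(lower)):
              if random.random() < hmcr:                      # take a value from memory
                  value = random.choice(harmony_memory)[d]
                  if random.random() < par:                   # pitch adjustment using bw
                      value += bw * random.uniform(-1.0, 1.0)
              else:                                           # otherwise sample at random
                  value = random.uniform(lower[d], upper[d])
              new.append(min(max(value, lower[d]), upper[d]))
          return new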

  14. A Distributed Particle Swarm Optimization Algorithm for Swarm Robotic Applications

    Microsoft Academic Search

    James M. Hereford

    2006-01-01

    We have derived a version of the particle swarm optimization algorithm that is suitable for a swarm consisting of a large number of small, mobile robots. The algorithm, called the distributed PSO (dPSO), is for

  15. Optimizing two-pass connected-component labeling algorithms

    Microsoft Academic Search

    Kesheng Wu; Ekow J. Otoo; Kenji Suzuki

    2009-01-01

    We present two optimization strategies to improve connected-component labeling algorithms. Taken together, they form an efficient two-pass labeling algorithm that is fast and theoretically optimal. The first optimization strategy reduces the number of neighboring pixels accessed through the use of a decision tree, and the second one streamlines the union-find algorithms used to track equivalent labels. We show that the first strategy reduces the average number of neighbors accessed by a
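
    The second strategy above concerns the union-find structure used to track equivalent labels between the two passes. A minimal Python sketch of union-find with path compression is given below; the array-based representation and the choice of keeping the smaller label as root are common conventions and not necessarily the exact variant optimized in the paper.

      def find(parent, i):
          """Return the root label of i, compressing the path along the way."""
          root = i
          while parent[root] != root:
              root = parent[root]
          while parent[i] != root:              # path compression
              parent[i], i = root, parent[i]
          return root

      def union(parent, a, b):
          """Merge the equivalence classes of labels a and b."""
          ra, rb = find(parent, a), find(parent, b)
          if ra != rb:
              parent[max(ra, rb)] = min(ra, rb)   # keep the smaller label as the root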

  16. Optimization of Algorithms for Ion Mobility Calculations

    SciTech Connect

    Shvartsburg, Alexandre A.; Mashkevich, Stefan V.; Baker, Erin Shammel; Smith, Richard D.

    2007-02-15

    Ion mobility spectrometry (IMS) is increasingly employed to probe the structures of gas-phase ions, particularly those of proteins and other biological macromolecules. This process involves comparing measured mobilities with those computed for potential geometries, which requires evaluation of orientationally averaged cross sections using some approximate treatment of ion-buffer gas collisions. Two common models are the Projection Approximation (PA) and Exact Hard-Spheres Scattering (EHSS) that represent ions as collections of hard spheres. Though calculations for large ions and/or conformer ensembles take significant time, no algorithmic optimization had been explored. Previous EHSS programs were dominated by ion rotation operations that allow orientational averaging. We have developed two new algorithms for PA and EHSS calculations: one simplifies those operations and greatly reduces their number, and the other disposes of them altogether by propagating trajectories from a random origin. The new algorithms were tested for a representative set of seven ion geometries including diverse sizes and shapes. While the best choice depends on the geometry in a non-obvious way, the difference between the two codes is generally modest. Both are much more efficient than the existing software, for example faster than the widely used Mobcal (implementing EHSS) ~10 - 30 fold.

  17. Genetic algorithm and particle swarm optimization combined with Powell method

    NASA Astrophysics Data System (ADS)

    Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui

    2013-10-01

    In recent years, population-based algorithms have become increasingly robust and easy to use. Based on Darwin's theory of evolution, they search for the best solution by evolving a population over several generations. This paper presents variants of a hybrid genetic algorithm (GA) and of a bio-inspired hybrid algorithm, particle swarm optimization (PSO), both combined with a local method, the Powell method. The developed methods were tested on twelve test functions from the unconstrained optimization context.

  18. Bacterial Foraging Optimization Algorithm: Theoretical Foundations, Analysis, and Applications

    Microsoft Academic Search

    Swagatam Das; Arijit Biswas; Sambarta Dasgupta; Ajith Abraham

    2009-01-01

    Bacterial foraging optimization algorithm (BFOA) has been widely accepted as a global optimization algorithm of current interest for distributed optimization and control. BFOA is inspired by the social foraging behavior of Escherichia coli. BFOA has already drawn the attention of researchers because of its efficiency in solving real-world optimization problems arising in several application domains. The underlying biology behind the

  19. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improved strategies are adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters. This is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantage of each algorithm. The method is validated on the universal standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed new method outperforms single algorithms in the accuracy of the calculated protein sequence energy value, which proves it to be an effective way to predict the structure of proteins. PMID:25069136

  20. Optimal path planning in Rapid Prototyping based on genetic algorithm

    Microsoft Academic Search

    Yang Weidong

    2009-01-01

    One of the important research topics in rapid prototyping (RP) is path-planning optimization, which affects the efficiency and build quality of an RP system. It is very difficult to solve this optimization by traditional methods. Genetic algorithms (GAs) are excellent approaches to solving such complex optimization problems with difficult constraints. The classic path-planning optimization problem has been shown

  1. Optimizing clustering algorithm in mobile ad hoc networks using genetic algorithmic approach

    Microsoft Academic Search

    Damla Turgut; Sajal K. Das; Ramez Elmasri; Begumhan Turgut

    2002-01-01

    We show how genetic algorithms can be useful in enhancing the performance of clustering algorithms in mobile ad hoc networks. In particular, we optimize our recently proposed weighted clustering algorithm (WCA). The problem formulation, along with the parameters, is mapped to individual chromosomes as input to the genetic algorithmic technique. Encoding the individual chromosomes is an essential part of the

  2. Routing Optimization Heuristics Algorithms for Urban Solid Waste Transportation Management

    Microsoft Academic Search

    NIKOLAOS V. KARADIMAS; NIKOLAOS DOUKAS

    2008-01-01

    During the last decade, metaheuristics have become increasingly popular for effectively confronting difficult combinatorial optimization problems. In the present paper, two individual metaheuristic algorithmic solutions, the ArcGIS Network Analyst and the Ant Colony System (ACS) algorithm, are introduced, implemented and discussed for the identification of optimal routes in the case of Municipal Solid Waste (MSW) collection. Both proposed applications are

  3. The Routing Optimization Based on Improved Artificial Fish Swarm Algorithm

    Microsoft Academic Search

    Xiaojuan Shan; Mingyan Jiang; Jingpeng Li

    2006-01-01

    A novel algorithm named artificial fish swarm algorithm (AFSA) is discussed to solve the problem of routing optimization in computer communication networks. An improved AFSA (IAFSA) with the taboo table and a new parameter is proposed to increase the global optimum capability and the neighborhood search ability of AFSA. A mathematical model of routing optimization based on the minimal time

  4. Optimization of combined cycle power plants using evolutionary algorithms

    Microsoft Academic Search

    Christoph Koch; Frank Cziesla; George Tsatsaronis

    2007-01-01

    This paper deals with the application of an evolutionary algorithm to the minimization of the product cost of complex combined cycle power plants. Both the design configuration (process structure) and the process variables are optimized simultaneously. The optimization algorithm can choose among several design options included in a superstructure of the power plant such as different gas turbine systems available

  5. Chaotically encoded particle swarm optimization algorithm and its applications

    Microsoft Academic Search

    Bilal Alatas; Erhan Akin

    2009-01-01

    This paper proposes a novel particle swarm optimization (PSO) algorithm, the chaotically encoded particle swarm optimization algorithm (CENPSOA), based on the notion of chaos numbers, which have recently been proposed to give a novel meaning to numbers. In this paper, the various chaos arithmetic and evaluation measures that can be used in CENPSOA are described. Furthermore, CENPSOA has been designed to be

  6. A dynamic inertia weight particle swarm optimization algorithm

    Microsoft Academic Search

    Bin Jiao; Zhigang Lian; Xingsheng Gu

    2008-01-01

    The particle swarm optimization (PSO) algorithm has developed rapidly and has been applied widely since it was introduced, as it is easily understood and realized. This paper presents an improved particle swarm optimization algorithm (IPSO) to improve the performance of standard PSO, which uses a dynamic inertia weight that decreases as the iteration count increases. It is tested with a
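
    One common way to realize a generation-dependent inertia weight is a linear decrease from a large initial value to a small final one, as sketched below in Python; the exact dynamic rule used by IPSO is defined in the paper, so this is only an illustrative baseline.

      def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
          """Linearly decreasing inertia weight over iterations t = 0, ..., t_max."""
          return w_start - (w_start - w_end) * t / t_max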

  7. An improved particle swarm optimization algorithm for unit commitment

    Microsoft Academic Search

    B. Zhao; C. X. Guo; B. R. Bai; Y. J. Cao

    2006-01-01

    This paper presents an improved particle swarm optimization algorithm (IPSO) for power system unit commitment. IPSO is an extension of the standard particle swarm optimization algorithm (PSO) which uses more particles’ information to control the mutation operation, and is similar to the social society in that a group of leaders could make better decisions. The convergence property of the proposed

  8. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm developed by inspiration from the mating and fertilization process of honey bees, and it is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds the Levy flight algorithm for the queen's mating flight and a neighboring search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
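
    Levy-flight steps such as those added to the queen's mating flight are commonly generated with Mantegna's algorithm; a small Python sketch is shown below. How IMBO maps these steps onto the mating flight is specified in the paper, so the routine here is only the generic step generator.

      import math
      import random

      def levy_step(beta=1.5):
          """Draw one Levy-distributed step length via Mantegna's algorithm."""
          sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                     / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = random.gauss(0.0, sigma_u)   # numerator ~ N(0, sigma_u^2)
          v = random.gauss(0.0, 1.0)       # denominator ~ N(0, 1)
          return u / abs(v) ** (1 / beta)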

  9. An Improved Particle Swarm Optimization Algorithm Mimicking Territorial Dispute Between Groups for Multimodal Function Optimization Problems

    Microsoft Academic Search

    Jang-Ho Seo; Chang-Hwan Im; Sang-Yeop Kwak; Cheol-Gyun Lee; Hyun-Kyo Jung

    2008-01-01

    In the present paper, an improved particle swarm optimization (PSO) algorithm for multimodal function optimization is proposed. The new algorithm, named the auto-tuning multigrouped PSO (AT-MGPSO) algorithm, mimics natural phenomena in ecosystems such as territorial disputes between different group members and the immigration of weak groups, resulting in automatic determination of the size of each group's territory and robust convergence. The usefulness

  10. A Double Smoothing Technique for Constrained Convex ...

    E-print Network

    2011-02-27

    Key words. convex optimization, fast gradient methods, complexity theory, smoothing tech- .... method [19] for smooth and strongly convex functions and describe its rate of conver- ...... Boolean variables (this is a well-known NP-hard problem).

  11. Performance evaluation of TRIBES, an adaptive particle swarm optimization algorithm

    Microsoft Academic Search

    Yann Cooren; Maurice Clerc; Patrick Siarry

    2009-01-01

    This paper presents a study of the performance of TRIBES, an adaptive particle swarm optimization algorithm. Particle Swarm Optimization (PSO) is a biologically-inspired optimization method. Recently, researchers have used it effectively in solving various optimization problems. However, like most optimization heuristics, PSO suffers from the drawback of being greatly influenced by the selection of its parameter values. Thus, the common

  12. Discrete Optimization Algorithms in Real-Time Visual Tracking

    Microsoft Academic Search

    Miguel A. Patricio; Iván Dotú; Jesús García; Antonio Berlanga; José M. Molina

    2009-01-01

    In this work we introduce a novel formulation of the association problem in visual tracking systems as a discrete optimization problem. The full data association problem is formulated as a search for the best tracking configuration to match hypotheses. We have implemented three local search algorithms: Hill Climbing, Simulated Annealing, and Tabu Search. These algorithms are guided by heuristic

  13. An Optimal Coarse-grained Arc Consistency Algorithm

    E-print Network

    Paris-Sud XI, Université de

    Christian Bessiere, LIRMM-CNRS (UMR 5506) ... the propagation in an efficient and effective fashion. There are two classes of propagation algorithms for general constraints: fine-grained algorithms where the removal of a value for a variable

  14. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  15. Distributed Genetic Algorithms with New Sharing Approach Multiobjective Optimization Problems

    E-print Network

    Coello, Carlos A. Coello

    In this paper, a new distributed genetic algorithm for multiobjective optimization problems is proposed, with a new sharing approach ... solutions in a trade-off relationship ... genetic algorithms are powerful optimization methods based on the mechanics

  16. On convexity of H-infinity Riccati solutions

    NASA Technical Reports Server (NTRS)

    Li, X. P.; Chang, B. C.

    1991-01-01

    The authors revealed several important eigen properties of the stabilizing solutions of the two H-infinity Riccati equations and their product. Among them, the most prominent one is that the spectral radius of the product of these two Riccati solutions is a continuous, nonincreasing, convex function of gamma in the domain of interest. Based on these properties, quadratically convergent algorithms are developed to compute the optimal H-infinity norm. Two examples are used to illustrate the algorithms.

  17. Quantum-Behaved Particle Swarm Optimization Algorithm with Controlled Diversity

    Microsoft Academic Search

    Jun Sun; Wenbo Xu; Wei Fang

    2006-01-01

    Premature convergence, the major problem that confronts evolutionary algorithms, is also encountered with the Particle Swarm Optimization (PSO) algorithm. In previous work [11], [12], [13], the Quantum-behaved Particle Swarm Optimization (QPSO) was proposed. This novel algorithm is global-convergence-guaranteed and has a better search ability than the original PSO. But like other evolutionary optimization techniques, premature convergence in the QPSO is

  18. Genetic-Algorithm Tool For Search And Optimization

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven

    1995-01-01

    SPLICER is a computer program used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." The algorithms apply genetically inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or nearly optimal solution to the problem at hand. Written in Think C.

  19. Optimizing qubit Hamiltonian parameter estimation algorithms using PSO

    E-print Network

    Alexandr Sergeevich; Stephen D. Bartlett

    2012-06-18

    We develop qubit Hamiltonian single parameter estimation techniques using a Bayesian approach. The algorithms considered are restricted to projective measurements in a fixed basis, and are derived under the assumption that the qubit measurement is much slower than the characteristic qubit evolution. We optimize a non-adaptive algorithm using particle swarm optimization (PSO) and compare with a previously-developed locally-optimal scheme.

  20. A New Optimized GA-RBF Neural Network Algorithm

    PubMed Central

    Zhao, Dean; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on the genetic algorithm (GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new hybrid encoding and optimizes both simultaneously. Binary encoding is used for the number of hidden-layer neurons and real encoding for the connection weights. The number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning, and finally obtain a new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid. PMID:25371666
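
    The hybrid encoding described above can be pictured as a chromosome with a binary part selecting hidden neurons and a real-coded part holding weights. The decoding sketch below is a plausible Python illustration of that idea; the actual chromosome layout, the selection of centers and the LMS refinement step follow the paper.

      def decode_chromosome(bits, weights, max_hidden):
          """Split a hybrid chromosome into (number of hidden neurons, their weights).

          bits    : binary genes, one per candidate hidden neuron (1 = keep it)
          weights : real-coded genes, one output weight per candidate neuron
          """
          active = [i for i, b in enumerate(bits[:max_hidden]) if b == 1]
          return len(active), [weights[i] for i in active]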

  1. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on the genetic algorithm (GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new hybrid encoding and optimizes both simultaneously. Binary encoding is used for the number of hidden-layer neurons and real encoding for the connection weights. The number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning, and finally obtain a new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid. PMID:25371666

  2. A Probabilistic Analysis of a Simplified Biogeography-Based Optimization Algorithm

    E-print Network

    Simon, Dan

    Biogeography-based optimization (BBO) is a population-based evolutionary algorithm ... Key words: biogeography-based optimization, evolutionary algorithms, probability, Markov

  3. A Genetic Algorithm for Multiobjective Design Optimization in Aerodynamics and

    E-print Network

    Coello, Carlos A. Coello

    ... The objective functions in the optimization problem measure the aerodynamic feasibility based on the drag ... been optimized with respect to only one discipline, such as aerodynamics or electromagnetics. Although

  4. Optimal management of MicroGrid using Bacterial Foraging Algorithm

    Microsoft Academic Search

    R. Noroozian; H. Vahedi

    2010-01-01

    This paper focuses on an optimal operating strategy and cost optimization scheme for a MicroGrid using the Bacterial Foraging Algorithm. Prior to the optimization of the microgrid itself, the system model components are constructed from real manufacturer data. The proposed cost function takes into consideration the costs of the NOx, SO2, and CO2 emissions as well as the operation and

  5. A Parallel Particle Swarm Optimization Algorithm with Communication Strategies

    Microsoft Academic Search

    Jui-fang Chang; Shu-chuan Chu; John F. Roddick; Jeng-shyang Pan

    2005-01-01

    Particle swarm optimization (PSO) is an alternative population-based evolutionary computation technique. It has been shown to be capable of optimizing hard mathematical problems in continuous or binary space. We present here a parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data. The first strategy

  6. Optimization Online - Integer Programming Submissions - 2012

    E-print Network

    A conic representation of the convex hull of disjunctive sets and conic cuts for integer ... Two-stage Models and Algorithms for Optimizing Infrastructure Design and ... Solving mixed integer nonlinear programming problems for mine production ...

  7. A VU-proximal point algorithm for minimization Robert Mi in and Claudia Sagastiz abal y

    E-print Network

    Mifflin, Robert

    A VU-proximal point algorithm for minimization. Robert Mifflin and Claudia Sagastizábal. June 1, 2002. Abstract: For convex optimization, VU-space decomposition algorithms make a minimization ... Keywords: convex minimization, proximal points, bundle methods, VU-decomposition.

  8. Evaluation of a Particle Swarm Algorithm For Biomechanical Optimization

    PubMed Central

    Schutte, Jaco F.; Koh, Byung; Reinbolt, Jeffrey A.; Haftka, Raphael T.; George, Alan D.; Fregly, Benjamin J.

    2006-01-01

    Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm’s global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353

  9. Optimal and nearly optimal algorithms for approximating polynomial zeros

    Microsoft Academic Search

    V. Y. Pan

    1996-01-01

    We substantially improve the known algorithms for approximating all the complex zeros of an nth degree polynomial p(x). Our new algorithms save both Boolean and arithmetic sequential time, versus the previous best algorithms of Schönhage [1], Pan [2], and Neff and Reif [3]. In parallel (NC) implementation, we dramatically decrease the number of processors, versus the parallel algorithm of Neff

  10. Convexity, complexity, and high dimensions

    Microsoft Academic Search

    Stanislaw J. Szarek

    We discuss metric, algorithmic and geometric issues related to broadly understood complexity of high dimensional convex sets. The specific topics we bring up include metric entropy and its duality, derandomization of constructions of normed spaces or of convex bodies, and different fundamental questions related to geometric diversity of such bodies, as measured by various isomorphic (as opposed to isometric) invariants.

  11. Krill herd: A new bio-inspired optimization algorithm

    NASA Astrophysics Data System (ADS)

    Gandomi, Amir Hossein; Alavi, Amir Hossein

    2012-12-01

    In this paper, a novel biologically-inspired algorithm, namely krill herd (KH), is proposed for solving optimization tasks. The KH algorithm is based on the simulation of the herding behavior of krill individuals. The minimum distances of each individual krill from food and from the highest density of the herd are considered as the objective function for the krill movement. The time-dependent position of the krill individuals is formulated by three main factors: (i) movement induced by the presence of other individuals, (ii) foraging activity, and (iii) random diffusion. For more precise modeling of the krill behavior, two adaptive genetic operators are added to the algorithm. The proposed method is verified using several benchmark problems commonly used in the area of optimization. Further, the KH algorithm is compared with eight well-known methods in the literature. The KH algorithm is capable of efficiently solving a wide range of benchmark optimization problems and outperforms the existing algorithms.

  12. Particle Swarm Optimization Algorithm Based on the Idea of Simulated Annealing

    Microsoft Academic Search

    DONG Chaojun; QIU Zulian

    2006-01-01

    Particle swarm optimization (PSO) is a new population-intelligence algorithm with good optimization performance. After analyzing the standard PSO algorithm and the idea of the simulated annealing algorithm, the probabilistic acceptance of the Metropolis rule from simulated annealing is introduced into the PSO algorithm, yielding a simulated annealing-particle swarm optimization. Simulation
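
    The Metropolis rule referred to above accepts a worse candidate with a probability that decays with the cost increase and the current temperature. A minimal Python sketch follows; how the rule is wired into the particle updates and how the temperature is cooled are details of the paper, not of this snippet.

      import math
      import random

      def metropolis_accept(new_cost, old_cost, temperature):
          """Accept a worse candidate with probability exp(-(new_cost - old_cost)/T)."""
          if new_cost <= old_cost:
              return True
          return random.random() < math.exp(-(new_cost - old_cost) / temperature)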

  13. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
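
    For context, the multiple strategies that the unified expression replaces are variations of the classic DE mutation and crossover; the DE/rand/1/bin baseline is sketched below in Python. The paper's single unifying equation and its self-adaptive control parameters are not reproduced here.

      import random

      def de_trial_vector(pop, i, F=0.8, CR=0.9):
          """Classic DE/rand/1/bin: mutate with three distinct random vectors, then cross over."""
          candidates = [j for j in range(len(pop)) if j != i]
          r1, r2, r3 = random.sample(candidates, 3)
          dim = len(pop[i])
          jrand = random.randrange(dim)        # guarantees at least one mutated component
          trial = list(pop[i])
          for j in range(dim):
              if random.random() < CR or j == jrand:
                  trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
          return trial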

  14. Polyhedral approximations of strictly convex compacta

    E-print Network

    Balashov, Maxim V

    2011-01-01

    We consider polyhedral approximations of strictly convex compacta in finite dimensional Euclidean spaces (such compacta are also uniformly convex). We obtain the best possible estimates for errors of considered approximations in the Hausdorff metric. We also obtain new estimates of an approximate algorithm for finding the convex hulls.

  15. The Convex Hull of Freeform Surfaces

    Microsoft Academic Search

    Joon-kyung Seong; Gershon Elber; John K. Johnstone; Myung-soo Kim

    2004-01-01

    We present an algorithm for computing the convex hull of freeform rational surfaces. The convex hull problem is reformulated as one of finding the zero-sets of polynomial equations; using these zero-sets we characterize developable surface patches and planar patches that belong to the boundary of the convex hull.

  16. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with several other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of the ABC local search process and of the bee movement, or solution improvement, equation still has some weaknesses. The ABC is good at avoiding trapping at the local optimum, but it spends its time searching around unpromising, randomly selected solutions. Inspired by the PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
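
    The solution improvement equation whose weaknesses are discussed above is, in the basic ABC, a one-dimensional perturbation of a food source toward a randomly chosen peer; a Python sketch is given below. HPABC replaces this movement with a particle-style update, which is described only in the paper.

      import random

      def abc_neighbour(foods, i):
          """Basic ABC candidate: perturb one dimension of food source i toward a random peer."""
          k = random.choice([j for j in range(len(foods)) if j != i])
          d = random.randrange(len(foods[i]))
          phi = random.uniform(-1.0, 1.0)
          candidate = list(foods[i])
          candidate[d] = foods[i][d] + phi * (foods[i][d] - foods[k][d])
          return candidate   # kept only if it improves fitness (greedy selection)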

  17. Average case analysis of dynamic geometric optimization

    E-print Network

    Eppstein, David

    ... for constructing geometric structures such as convex hulls and arrangements. Such algorithms can also be used ... structures: convex hulls, arrangements, and the like. However, problems of geometric optimization have been ... spanning tree of a planar point set, as points are inserted or deleted, in O(log^3 n) expected time per

  18. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  19. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  20. Improved PSO algorithms for electromagnetic optimization

    Microsoft Academic Search

    L. Matekovits; M. Mussetta; P. Pirinoli; S. Selleri; R. E. Zich

    2005-01-01

    Some variations over the basic particle swarm algorithm are here proposed, aimed at a more efficient search over the solution space and exhibiting a negligible overhead in complexity and speed. The proposed algorithms are then applied to the test case of a microwave filter to show their superior capabilities with respect to the conventional algorithm.

  1. SPIRAL out of convexity: sparsity-regularized algorithms for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Marcia, Roummel F.; Willett, Rebecca M.

    2010-01-01

    The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be accomplished by minimizing a conventional l2-l1 objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse representation. The optimization formulation considered in this paper uses a negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem. In particular, the proposed approach incorporates key ideas of using quadratic separable approximations to the objective function at each iteration and computationally efficient partition-based multiscale estimation methods.
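
    To make the objective concrete, the sketch below writes the negative Poisson log-likelihood for y ~ Poisson(A f + b) and one projected-gradient step onto the nonnegative orthant, using NumPy. The sparsity penalty and the quadratic separable approximations that define the actual iterations of the paper are omitted, and the small background term b is an assumption added to keep the logarithm finite.

      import numpy as np

      def poisson_nll(f, A, y, b=1e-6):
          """Negative Poisson log-likelihood (up to a constant) for y ~ Poisson(A f + b)."""
          lam = A @ f + b
          return np.sum(lam - y * np.log(lam))

      def projected_gradient_step(f, A, y, step=1e-3, b=1e-6):
          """Gradient step on the negative log-likelihood, then projection onto f >= 0."""
          lam = A @ f + b
          grad = A.T @ (1.0 - y / lam)            # gradient of sum(lam - y*log(lam))
          return np.maximum(f - step * grad, 0.0)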

  2. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  3. Algorithm 896: LSA: Algorithms for large-scale optimization

    Microsoft Academic Search

    Ladislav Luksan; Ctirad Matonoha; Jan Vlcek

    2009-01-01

    We present 14 basic Fortran subroutines for large-scale unconstrained and box constrained optimization and large-scale systems of nonlinear equations. Subroutines PLIS and PLIP, intended for dense general optimization problems, are based on limited-memory variable metric methods. Subroutine PNET, also intended for dense general optimization problems, is based on an inexact truncated Newton method. Subroutines PNED and PNEC, intended for sparse

  4. Application of a gradient-based algorithm to structural optimization

    E-print Network

    Ghisbain, Pierre

    2009-01-01

    Optimization methods have shown to be efficient at improving structural design, but their use is limited in the engineering practice by the difficulty of adapting state-of-the-art algorithms to particular engineering ...

  5. Provably Good Approximation Algorithms for Optimal Kinodynamic Planning: Robots with

    E-print Network

    Richardson, David

    Patrick Xavier, Sandia National Laboratories, Albuquerque NM 87185-0951. Keywords: robot motion planning, kinodynamics, polyhedral obstacles. Abstract: We consider the following problem: given a robot system, find

  6. Optimal scaling of the ADMM algorithm for distributed quadratic ...

    E-print Network

    2014-12-11

    Dec 11, 2014 ... algorithm for a class of distributed quadratic programming problems. ... Numerical simulations justify our results and highlight the benefits of optimally .... the past iterates when computing the next. ...... E. Chu, B. Peleato, and J. Eckstein, “

  7. Genetic Algorithms applications to optimization and system identification

    E-print Network

    Lin, Yun-Jeng

    1998-01-01

    Genetic Algorithms (GAs) are very different from traditional optimization techniques. The GA is a new generation of artificial intelligence whose principles mimic the behavior of biological genes in the natural world. Its execution is simple...

  8. Lossless Convexification of a Class of Optimal Control Problems with Non-Convex Control Constraints

    E-print Network

    Williams, Brian C.

    ... are needed for manned and robotic missions to Mars, the Moon, and asteroids. The optimal control problems ...

  9. A methodology to ensure local mass conservation for porous media models under finite element formulations based on convex optimization

    NASA Astrophysics Data System (ADS)

    Chang, J.; Nakshatrala, K.

    2014-12-01

    It is well known that standard finite element methods, in general, do not satisfy element-wise mass/species balance properties. It is, however, desirable to have the element-wise mass balance property in subsurface modeling. Several studies over the years have aimed to overcome this drawback of finite element formulations. Currently, a post-processing optimization-based methodology is commonly employed to recover local mass balance for porous media models. However, such a post-processing technique does not respect the underlying variational structure that the finite element formulation may enjoy. Motivated by this, a consistent methodology to satisfy element-wise local mass balance for porous media models is constructed using convex optimization techniques. The assembled system of global equations is reconstructed into a quadratic programming problem subjected to bounded equality constraints that ensure conservation at the element level. The proposed methodology can be applied to any computational mesh and to any non-locally conservative nodal-based finite element method. Herein, we integrate our proposed methodology into the framework of the classical mixed Galerkin formulation using Taylor-Hood elements and the least-squares finite element formulation. Our numerical studies will include computational cost, numerical convergence, and comparison with popular methods. In particular, it will be shown that the accuracy of the solutions is comparable with that of several popular locally conservative finite element formulations, such as the lowest-order Raviart-Thomas formulation. We believe the proposed optimization-based approach is a viable way to preserve local mass balance on general computational grids and is amenable to large-scale parallel implementation.
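
    The post-processing step outlined above amounts to finding the smallest weighted correction of the finite element solution that restores the element-wise balance constraints, i.e. an equality-constrained quadratic program. The dense Python sketch below solves such a program through its KKT system; the weighting matrix W, constraint matrix C and right-hand side d are placeholders, since the actual operators come from the discretization described in the paper.

      import numpy as np

      def conservative_correction(x0, W, C, d):
          """Solve  min 0.5*(x - x0)^T W (x - x0)  s.t.  C x = d  via its KKT system.

          Assumes W is symmetric positive definite and C has full row rank.
          """
          n, m = W.shape[0], C.shape[0]
          K = np.block([[W, C.T],
                        [C, np.zeros((m, m))]])
          rhs = np.concatenate([W @ x0, d])
          sol = np.linalg.solve(K, rhs)
          return sol[:n]                     # corrected, locally conservative field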

  10. Parallel projected variable metric algorithms for unconstrained optimization

    NASA Technical Reports Server (NTRS)

    Freeman, T. L.

    1989-01-01

    The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and the possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, researchers can generalize Straeter's algorithm to a family of parallel projected variable metric algorithms which do not suffer the above drawbacks and which retain quadratic termination. Finally researchers consider the numerical performance of one member of the family on several standard example problems and illustrate how the choice of the displacement vectors affects the performance of the algorithm.

  11. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  12. Optimization of image processing algorithms on mobile platforms

    Microsoft Academic Search

    Pramod Poudel; Mukul Shirvaikar

    2011-01-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time

  13. Increasing EtherCAT performance using frame size optimization algorithm

    Microsoft Academic Search

    Mladen Knezic; Branko Dokic; Zeljko Ivanovic

    2011-01-01

    EtherCAT protocol is a popular Real-Time Ethernet solution that offers the highest communication efficiency in a number of operating conditions. However, for huge networks with several hundreds of devices distributed in the field, EtherCAT performance can become a critical factor. In this paper, we propose a solution for increasing EtherCAT performance using a frame size optimization algorithm. The optimization algorithm

  14. Optimizing Hydropower Reservoir Operation Using Hybrid Genetic Algorithm and Chaos

    Microsoft Academic Search

    Chun-Tian Cheng; Wen-Chuan Wang; Dong-Mei Xu; K. W. Chau

    2008-01-01

    Genetic algorithms (GA) have been widely applied to solve water resources system optimization. With the increase of the complexity and the larger problem scale of water resources systems, GAs are most frequently faced with the problems of premature convergence, slow iterations to reach the global optimal solution and getting stuck at a local optimum. A novel chaos genetic algorithm (CGA)

  15. Standard Harmony Search Algorithm for Structural Design Optimization

    Microsoft Academic Search

    Kang Seok Lee

    Most engineering optimization algorithms are based on numerical linear and nonlinear programming methods that require substantial gradient information and usually seek to improve the solution in the neighborhood of a starting point. These algorithms, however, reveal a limited approach to complicated real-world optimization problems. If there is more than one local optimum in the problem, the result may depend on

  16. An Improved Particle Swarm Optimization Algorithm with Disturbance Term

    Microsoft Academic Search

    Qingyuan He; Chuanjiu Han

    2006-01-01

    \\u000a The standard particle swarm optimization (PSO) algorithm, existing improvements and their influence to the performance of\\u000a standard PSO are introduced. The framework of PSO basic formula is analyzed. Implied by its three-term structure, the inherent\\u000a shortcoming that trends to local optima is indicated. Then a modified velocity updating formula of particle swarm optimization\\u000a algorithm is declared. The addition of the

  17. Reliability-Based Optimization Using Evolutionary Algorithms

    E-print Network

    Deb, Kalyanmoy

    Uncertainties in design variables and problem parameters are often inevitable and must be considered in an optimization task if reliable optimal solutions are sought. Besides a number of sampling techniques, there exist ...

  18. The Vector Model of Artificial Physics Optimization Algorithm for Global Optimization Problems

    Microsoft Academic Search

    Liping Xie; Jianchao Zeng; Zhuihua Cui

    2009-01-01

    To solve complex global optimization problems, Artificial Physics Optimization (APO) algorithm is presented based on Physicomimetics framework, which is a population-based stochastic algorithm inspired by physical force. The solutions (particles) sampled from the feasible region of the problems are treated as physical individuals. Each individual has a mass, position and velocity. The mass of each individual corresponds to a user-defined

  19. Applying new optimization algorithms to model predictive control

    SciTech Connect

    Wright, S.J.

    1996-03-01

    The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.
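
    To illustrate the structure being exploited, the sketch below condenses a linear finite-horizon problem with quadratic cost into a QP in the stacked input vector, using NumPy; constraints are left out, and in the constrained case the resulting QP is what interior-point or other quadratic programming methods would solve. All names and the cost convention are illustrative assumptions, not the paper's formulation.

      import numpy as np

      def condensed_mpc_qp(A, B, Q, R, N):
          """Stack x_{k+1} = A x_k + B u_k over a horizon N and build the condensed QP data.

          Prediction: X = Sx x0 + Su U, with X = (x_1, ..., x_N) and U = (u_0, ..., u_{N-1}).
          Cost X^T Qbar X + U^T Rbar U becomes U^T H U + 2 (F x0)^T U + const.
          """
          n, m = B.shape
          Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
          Su = np.zeros((N * n, N * m))
          for k in range(N):
              for j in range(k + 1):
                  Su[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
          Qbar = np.kron(np.eye(N), Q)
          Rbar = np.kron(np.eye(N), R)
          H = Su.T @ Qbar @ Su + Rbar
          F = Su.T @ Qbar @ Sx
          return H, F          # unconstrained minimizer: U* = -np.linalg.solve(H, F @ x0)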

  20. Controlling Convergence of Space-Mapping Algorithms for Engineering Optimization

    Microsoft Academic Search

    Slawomir Koziel; John W. Bandler

    2007-01-01

    The problem of convergence properties of space mapping optimization algorithms is addressed. A new weighting scheme in the parameter extraction procedure is introduced that allows us to control the behavior of the space mapping algorithm and force it to converge after a reasonable number of fine model evaluations. An application example is provided.

  1. Optimal reactive power dispatch using an adaptive genetic algorithm

    Microsoft Academic Search

    Q. H. Wu; Y. J. Cao; J. Y. Wen

    1998-01-01

    This paper presents an adaptive genetic algorithm (AGA) for optimal reactive power dispatch and voltage control of power systems. In the adaptive genetic algorithm, the probabilities of crossover and mutation, pc and pm, are varied depending on the fitness values of the solutions and the normalized fitness distances between the solutions in the evolution process to prevent premature convergence and

  2. Dynamic Particle Swarm Optimization Algorithm for Resolution of Overlapping Chromatograms

    Microsoft Academic Search

    Yufeng Li

    2009-01-01

    A dynamic particle swarm optimization algorithm is proposed in this paper to resolve overlapping chromatographic peaks. To accelerate the convergence speed, clustering degree and evolution velocity are considered simultaneously to adjust the inertia weight adaptively. The algorithm is tested on both simulated overlapping chromatographic peaks, which are based on the exponentially modified Gaussian convolution model, and experimental overlapping chromatographic peaks of multi-component which

  3. MultiObjective Optimization by Genetic Algorithms : A Review

    E-print Network

    Coello, Carlos A. Coello

    Hisashi Tamaki, Department of Electrical and Electronics Engineering, Kobe University, Rokkodai, Nada-ku, Kobe 657, Japan. Abstract: This paper reviews several genetic algorithm (GA

  4. A Bee Colony Optimization Algorithm for Traveling Salesman Problem

    Microsoft Academic Search

    Li-pei Wong; Malcolm Yoke-hean Low; Chin Soon Chong

    2008-01-01

    A bee colony optimization (BCO) algorithm for traveling salesman problem (TSP) is presented in this paper. The BCO model is constructed algorithmically based on the collective intelligence shown in bee foraging behaviour. Experimental results comparing the proposed BCO model with some existing approaches on a set of benchmark problems are presented.

  5. Genetic algorithm optimization applied to electromagnetics: a review

    Microsoft Academic Search

    Daniel S. Weile; Eric Michielssen

    1997-01-01

    Genetic algorithms are on the rise in electromagnetics as design tools and problem solvers because of their versatility and ability to optimize in complex multimodal search spaces. This paper describes the basic genetic algorithm and recounts its history in the electromagnetics literature. Also, the application of advanced genetic operators to the field of electromagnetics is described, and design results are

  6. Serial and Parallel Genetic Algorithms as Function Optimizers

    Microsoft Academic Search

    V. Scott Gordon; L. Darrell Whitley

    1993-01-01

    Parallel genetic algorithms are often very different from the "traditional" genetic algorithm proposed by Holland, especially with regards to population structure and selection mechanisms. In this paper we compare several parallel genetic algorithms across a wide range of optimization functions in an attempt to determine whether these changes have positive or negative impact on their problem-solving capabilities. The findings indicate that the parallel structures perform as well as or ...

  7. Multiobjective Optimization by Nessy Algorithm Mario Kppen1

    E-print Network

    Coello, Carlos A. Coello

    What makes it different from Neuro-GA approaches are its redefined genetic operators. The neuron-based implementation of the Nessy algorithm uses only one neuron in the output layer. By using more than one output neuron, the Nessy algorithm is well suited to multi-objective optimization

  8. Frankenstein's PSO: A Composite Particle Swarm Optimization Algorithm

    Microsoft Academic Search

    Marco Antonio Montes de Oca; Thomas Stützle; Mauro Birattari; Marco Dorigo

    2009-01-01

    During the last decade, many variants of the original particle swarm optimization (PSO) algorithm have been proposed. In many cases, the difference between two variants can be seen as an algorithmic component being present in one variant but not in the other. In the first part of the paper, we present the results and insights obtained from a detailed empirical

  9. Wavelet Threshold Optimization with Artificial Fish Swarm Algorithm

    Microsoft Academic Search

    Mingyan Jiang; Dongfeng Yuan

    2005-01-01

    The artificial fish swarm algorithm (AFSA) is discussed in this paper, and an improved optimal wavelet threshold algorithm is presented based on AFSA in signal denoising processing. Simulations show that our method has better performance than the conventional wavelet-based denoising threshold method

  10. Augmented Lagrangian Algorithm for Optimizing Analog Circuit Design

    E-print Network

    Eindhoven, Technische Universiteit

    derivatives of the design metrics, and numerical noise is inherently present (for instance due to adaptive ...), which makes it hard to approximate derivatives. One of the two optimization algorithms available in Adapt is the Nelder-Mead (NM) method. The Nelder-Mead algorithm is very robust but has rather poor performance characteristics

  11. Simplex Optimization Localization Algorithm for Wireless Sensor Networks

    Microsoft Academic Search

    Shaoping Zhang; Guohui Li

    2010-01-01

    Accurate, distributed localization algorithms are needed for large scale dense wireless sensor network applications. The Nelder-Mead Simplex Optimization Method (SOM) is applied to solve nondifferentiable problems and can often handle high discontinuity, particularly if it does not occur near the solution. This article proposes a distributed iterative multilateral algorithm. It formulates a function that presents the sum of range errors

  12. An improved particle swarm optimization algorithm for flowshop scheduling problem

    Microsoft Academic Search

    Changsheng Zhang; Jigui Sun; Xingjun Zhu; Qingyun Yang

    2008-01-01

    The flowshop scheduling problem has been widely studied and many techniques have been applied to it, but few algorithms based on particle swarm optimization (PSO) have been proposed to solve it. In this paper, an improved PSO algorithm (IPSO) based on the “alldifferent” constraint is proposed to solve the flow shop scheduling problem with the objective of minimizing makespan. It

  13. Particle Swarm Optimization Algorithm in Signal Detection and Blind Extraction

    Microsoft Academic Search

    Ying Zhao; Junli Zheng

    2004-01-01

    The particle swarm optimization (PSO) algorithm, which originated as a simulation of a simplified social system, is an evolutionary computation technique. In this paper the binary and real-valued versions of the PSO algorithm are exploited in two important signal processing paradigms: multiuser detection (MUD) and blind extraction of sources (BES), respectively. The novel approaches are effective and efficient with parallel processing

  14. A simulated annealing algorithm for constrained Multi-Objective Optimization

    Microsoft Academic Search

    Hemant Kumar Singh; Amitay Isaacs; Tapabrata Ray; Warren Smith

    2008-01-01

    In this paper, we introduce a simulated annealing algorithm for constrained Multi-Objective Optimization (MOO). When searching in the feasible region, the algorithm behaves like recently proposed Archived Multi-Objective Simulated Annealing (AMOSA) algorithm [1], whereas when operating in the infeasible region, it tries to minimize constraint violation by moving along Approximate Descent Direction (ADD) [2]. An Archive of non-dominated solutions found

  15. A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain

    Microsoft Academic Search

    P. Robertson; E. Villebrun; P. Hoeher

    1995-01-01

    For estimating the states or outputs of a Markov process, the symbol-by-symbol MAP algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of nonlinear functions and a high number of additions and multiplications. MAP like algorithms operating in the logarithmic domain presented in the past solve the numerical

  16. What About Wednesday? Approximation Algorithms for Multistage Stochastic Optimization

    Microsoft Academic Search

    Anupam Gupta; Martin Pál; Ramamoorthi Ravi; Amitabh Sinha

    2005-01-01

    The field of stochastic optimization studies decision making under uncertainty, when only probabilistic information about the future is available. Finding approximate solutions to well-studied optimization problems (such as Steiner tree, Vertex Cover, and Facility Location, to name but a few) presents new challenges when investigated in this framework, which has prompted much research in approximation algorithms. There has been

  17. Algorithms for the Electrical Optimization of Digital MOS Circuits

    E-print Network

    North Carolina at Chapel Hill, University of

    to the transistor sizes in the circuit; no changes in the circuit structure, number of gates, or clocking are introduced. Linear algorithms are presented for computing optimal transistor sizes to minimize delay and area. ... through the circuit and computes the optimal transistor sizes to achieve the performance objectives

  18. Hedging Uncertainty: Approximation Algorithms for Stochastic Optimization Problems

    Microsoft Academic Search

    R. Ravi; Amitabh Sinha

    2004-01-01

    We study the design of approximation algorithms for stochastic combinatorial optimization problems. We formulate the problems in the framework of two-stage stochastic optimization, and provide nearly tight approximations. Our problems range from the simple (shortest path, vertex cover, bin packing) to complex (facility location, set cover), and contain representatives with different approximation ratios. The approximation ratio of the stochastic

  19. Using modifications to Grover's Search algorithm for quantum global optimization

    Microsoft Academic Search

    Yipeng Liu; Gary J. Koehler

    2010-01-01

    We study the problem of finding a global optimal solution to discrete optimization problems using a heuristic based on quantum computing methods. (Knowledge of quantum computing ideas is not necessary to read this paper.) We focus on a successful quantum computing method introduced by Baritompa, Bulger, and Wood, that we refer to as the BBW algorithm, and develop two modifications.

  20. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  1. Using neural networks to speed up optimization algorithms

    Microsoft Academic Search

    M. Bazan; S. Russenschuck

    2000-01-01

    The paper presents the application of Radial-basis-function (RBF) neural networks to speed up deterministic search algorithms used for the design and optimization of superconducting LHC magnets. The optimization of the iron yoke of the main dipoles requires a number of numerical field computations per trial solution as the field quality depends on the excitation of the magnets. This results in

  2. Mesh Adaptive Direct Search Algorithms for Constrained Optimization

    Microsoft Academic Search

    Charles Audet; J. E. Dennis Jr.

    2006-01-01

    This paper introduces the Mesh Adaptive Direct Search (MADS) class of algorithms for nonlinear optimization. MADS extends the Generalized Pattern Search (GPS) class by allowing local exploration, called polling, in an asymptotically dense set of directions in the space of optimization variables. This means that under certain hypotheses, including a weak constraint qualification due to Rockafellar, MADS can treat constraints

  3. Multiobjective Evolutionary Algorithm for Software Project Portfolio Optimization

    E-print Network

    Keywords: optimization, project portfolio management. The project selection problem (PSP) [1, 6] ... The complexity of the PSP is based on the often high number of projects from which a subset has ... the portfolio should adhere to. The PSP is an NP-hard problem [3], so there is no exact algorithm that solves

  4. Algorithmic funnel-and-gate system design optimization

    Microsoft Academic Search

    Claudius M. Bürger; Peter Bayer; Michael Finkel

    2007-01-01

    Funnel-and-gate systems (FGSs), which constitute a common variant of permeable reactive barriers used for in situ treatment of groundwater, pose particular challenges to the task of design optimization. Because of the complex interplay of funnels and gates, the evolutionary algorithms applied have to cope with multimodality, nonseparability, and nonlinearity of the optimization task. We analyze these features in a test

  5. Genetic Algorithms for Optimal Scheduling of Chlorine Dosing in Water

    E-print Network

    Coello, Carlos A. Coello

    for determining the optimal schedule of chlorine dosing within a water distribution system considering multiple ... is in progress. Controlling the levels of chlorine within the distribution system ...

  6. A grid algorithm for bound constrained optimization of noisy functions

    E-print Network

    Neumaier, Arnold

    ... Nelder-Mead in the noisy case. If performance is measured solely by the number of function evaluations (as in the optimization of experiments), the new algorithm is also significantly faster than Nelder-Mead. Revised version, February 1995. Key words: bound constrained optimization, noisy functions, Nelder-Mead method, quasi-

  7. Simplex Algorithm Math 364: Principles of Optimization, Lecture 8

    E-print Network

    Li, Haijun

    x = (x1, x2, ..., xn), b = (b1, b2, ..., bm). Note that some variables xi may be slack and/or excess variables.
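
    To make the standard-form setup concrete, here is a small textbook-style LP in inequality form; the simplex method handles such problems by introducing slack variables. SciPy's linprog (HiGHS backend) is used below only as a convenient solver, and the numbers are an illustrative example rather than part of the lecture material.

        from scipy.optimize import linprog

        # maximize 3*x1 + 5*x2  subject to  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
        c = [-3.0, -5.0]                                   # linprog minimizes, so negate the objective
        A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
        b_ub = [4.0, 12.0, 18.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
        print(res.x, -res.fun)                             # optimum at x = (2, 6) with objective 36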

  8. PARALLEL ALGORITHMS FOR A MULTI-LEVEL NETWORK OPTIMIZATION PROBLEM

    E-print Network

    Cruz, Frederico

    ACM classifications: ... integer programming; G.2.2 [Discrete Mathematics]: Graph Theory - network problems; G.4 [Mathematics of Computing]. F.R.B. Cruz and G.R. Mateus, MG, Brazil (received 15 February 2000; in final form 12 March 2001). Multi-level network optimization (MLNO

  9. Convex Nondifferentiable Optimization: a Survey Focussed on the Analytic Center Cutting Plane Method

    Microsoft Academic Search

    J.-L. Goffin; JEAN-PHILIPPE VIAL

    1999-01-01

    We present a survey of nondifferentiable optimization problems and methods, with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis

  10. The Application of Hybrid Genetic Particle Swarm Optimization Algorithm in the Distribution Network Reconfigurations Multi-Objective Optimization

    Microsoft Academic Search

    Caiqing Zhang; Jingjing Zhang; Xihua Gu

    2007-01-01

    Since most distribution network reconfiguration (DNR) approaches consider only a single objective, this paper presents a multi-objective distribution network optimization model with network loss, load balancing, and power supply voltage as objectives. Combining the evolutionary ideas of the genetic algorithm (GA) with the population intelligence of the particle swarm optimization (PSO) algorithm, it applies a hybrid genetic particle swarm optimization algorithm (HGPSOA) to

  11. Imperialist competitive algorithm combined with chaos for global optimization

    NASA Astrophysics Data System (ADS)

    Talatahari, S.; Farahmand Azar, B.; Sheikholeslami, R.; Gandomi, A. H.

    2012-03-01

    A novel chaotic improved imperialist competitive algorithm (CICA) is presented for global optimization. The ICA is a new meta-heuristic optimization developed based on a socio-politically motivated strategy and contains two main steps: the movement of the colonies and the imperialistic competition. Here different chaotic maps are utilized to improve the movement step of the algorithm. Seven different chaotic maps are investigated and the Logistic and Sinusoidal maps are found as the best choices. Comparing the new algorithm with the other ICA-based methods demonstrates the superiority of the CICA for the benchmark functions.
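
    As an illustration of the mechanism, the sketch below generates a logistic-map sequence that could stand in for uniform random draws in the colony-movement step; the map parameter, seed, and usage are assumptions, not the authors' exact CICA update.

        def logistic_sequence(x0=0.7, n=10, r=4.0):
            """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k); illustrative parameters."""
            values, x = [], x0
            for _ in range(n):
                x = r * x * (1.0 - x)
                values.append(x)
            return values

        print(logistic_sequence())   # a deterministic but chaotic sequence in (0, 1)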

  12. Quantum-behaved particle swarm optimization algorithm for economic load dispatch of power system

    Microsoft Academic Search

    Zhisheng Zhang

    2010-01-01

    The quantum-behaved particle swarm optimization algorithm is applied to economic load dispatch of a power system for the first time in this paper. Quantum-behaved particle swarm optimization integrates the particle swarm optimization algorithm with quantum computing theory. The superposition characteristic and probability representation of quantum methodology are combined into the particle swarm optimization algorithm. This allows a single particle to be expressed by

  13. OPTIMIZATION OF LONG RURAL FEEDERS USING A GENETIC ALGORITHM

    SciTech Connect

    Wishart, Michael; Ledwich, Gerard; Ghosh, Arindam [Queensland University of Technology, Brisbane, Queensland (Australia); Ivanovich, Grujica [Ergon Energy, Toowoomba, Queensland (Australia)

    2010-06-15

    This paper describes the optimization of conductor size and the voltage regulator location and magnitude of long rural distribution lines. The optimization minimizes the lifetime cost of the lines, including capital costs and losses while observing voltage drop and operational constraints using a Genetic Algorithm (GA). The GA optimization is applied to a real Single Wire Earth Return (SWER) network in regional Queensland and results are presented.

  14. Ant Colony Learning Algorithm for Optimal Control

    Microsoft Academic Search

    Jelmer Marinus van Ast; Robert Babuska; Bart De Schutter

    2010-01-01

    Ant colony optimization (ACO) is an optimization heuristic for solving combinatorial optimization problems and is inspired by the swarming behavior of foraging ants. ACO has been successfully applied in various domains, such as routing and scheduling. In particular, the agents, called ants here, are very efficient at sampling the problem space and quickly finding good solutions. Motivated by the advantages

  15. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to the results obtained by conventional gradient search method.

  16. A Discrete Lagrangian Algorithm for Optimal Routing Problems

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E. [University of Peloponnese, 22100 Tripoli (Greece)

    2008-11-06

    The ideas of discrete Lagrangian methods for conservative systems are exploited for the construction of algorithms applicable to optimal ship routing problems. The algorithm presented here is based on the discretization of Hamilton's principle of stationary action, and specifically on the direct discretization of the Lagrange-Hamilton principle for a conservative system. Since, in contrast to the differential equations, the discrete Euler-Lagrange equations serve as constraints for the optimization of a given cost functional, in the present work we utilize this feature in order to minimize the cost function for optimal ship routing.

  17. An Optimal Technology Mapping Algorithm for Delay Optimization in Lookup-Table Based FPGA Designs

    E-print Network

    Cong, Jason "Jingsheng"

    We present a technology mapping algorithm, FlowMap, that optimally solves the LUT-based FPGA technology mapping problem for depth minimization for general Boolean networks. The LUT-based FPGA is a popular architecture used by several FPGA manufacturers

  18. Optimization of Signal Processing Algorithms Raza Ahmed and Brian L. Evans

    E-print Network

    Evans, Brian L.

    ... algorithms. Our prototype environment is written in Mathematica. We optimize implementations of one-dimensional and multidimensional signal processing algorithms by rewriting subexpressions

  19. Efficient algorithms for robustness in matroid optimization

    SciTech Connect

    Frederickson, G.N.; Solis-Oba, R. [Purdue Univ., West Layfayette, IN (United States)

    1997-06-01

    The robustness function of a matroid measures the maximum increase in the weight of its minimum weight bases that can be obtained by increases of a given total cost on the weights of its elements. We present an algorithm for computing this function, that runs in strongly polynomial time for matroids in which independence can be tested in strongly polynomial time. We identify key properties of transversal, scheduling, and partition matroids, and exploit them to design robustness algorithms that are more efficient than our general algorithm.

  20. Statistically Optimal Combination of Algorithms Marek Petrik

    E-print Network

    Shenoy, Prashant

    ... is with regard to a training set of weighted instances that represent the domain. This research has been supported in part by grant VEGA 1/0131/03. First, we define a framework in which the algorithms

  1. A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization

    E-print Network

    Roos, Kees

    ... complexity. AMS Subject Classification: 90C22, 90C31. Second-order conic optimization (SOCO) ... feasible, without mentioning y. It is well known that SOCO problems include linear and convex quadratic programs as special cases. On the other hand, SOCO problems are special cases of semidefinite optimization

  2. Optimal multisensor decision fusion of mine detection algorithms

    NASA Astrophysics Data System (ADS)

    Liao, Yuwei; Nolte, Loren W.; Collins, Leslie M.

    2003-09-01

    Numerous detection algorithms, using various sensor modalities, have been developed for the detection of mines in cluttered and noisy backgrounds. The performance of each detection algorithm is typically reported in terms of the Receiver Operating Characteristic (ROC), which is a plot of the probability of detection versus false alarm as a function of the threshold setting on the output decision variable of each algorithm. In this paper we present multi-sensor decision fusion algorithms that combine the local decisions of existing detection algorithms for different sensors. This offers, in certain situations, an expedient, attractive and much simpler alternative to "starting over" with the redesign of a new algorithm which fuses multiple sensors at the data level. The goal in our multi-sensor decision fusion approach is to exploit complementary strengths of existing multi-sensor algorithms so as to achieve performance (ROC) that exceeds the performance of any sensor algorithm operating in isolation. Our approach to multi-sensor decision fusion is based on optimal signal detection theory, using the likelihood ratio. We consider the optimal fusion of local decisions for two sensors, GPR (ground penetrating radar) and MD (metal detector). A new robust algorithm for decision fusion is presented that addresses the problem that the statistics of the training data are not likely to exactly match the statistics of the test data. ROCs are presented and compared for real data.
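
    For intuition, the sketch below implements the classical likelihood-ratio fusion rule for independent local decisions (the Chair-Varshney form); the paper's robust algorithm is more elaborate, and the detection/false-alarm rates below are invented for the example.

        import math

        def fused_log_lr(decisions, pd, pfa):
            """decisions[i] in {0, 1}; pd[i]/pfa[i] are sensor i's detection/false-alarm rates."""
            llr = 0.0
            for u, d, f in zip(decisions, pd, pfa):
                llr += math.log(d / f) if u == 1 else math.log((1 - d) / (1 - f))
            return llr

        # Two sensors, e.g. GPR and MD, with assumed operating points; declare a mine if llr > 0:
        print(fused_log_lr([1, 0], pd=[0.90, 0.80], pfa=[0.10, 0.20]) > 0.0)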

  3. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  4. A Genetic Algorithm Approach to Multiple-Response Optimization

    SciTech Connect

    Ortiz, Francisco; Simpson, James R.; Pignatiello, Joseph J.; Heredia-Langner, Alejandro

    2004-10-01

    Many designed experiments require the simultaneous optimization of multiple responses. A common approach is to use a desirability function combined with an optimization algorithm to find the most desirable settings of the controllable factors. However, as the problem grows even moderately in either the number of factors or the number of responses, conventional optimization algorithms can fail to find the global optimum. An alternative approach is to use a heuristic search procedure such as a genetic algorithm (GA). This paper proposes and develops a multiple-response solution technique using a GA in conjunction with an unconstrained desirability function. The GA requires that several parameters be determined in order for the algorithm to operate effectively. We perform a robust designed experiment in order to tune the genetic algorithm to perform well regardless of the complexity of the multiple-response optimization problem. The performance of the proposed GA method is evaluated and compared with the performance of the method that combines the desirability with the generalized reduced gradient (GRG) optimization. The evaluation shows that only the proposed GA approach consistently and effectively solves multiple-response problems of varying complexity.
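
    As a reminder of how a desirability function turns several responses into one objective for an optimizer such as the GA to maximize, the sketch below combines individual desirabilities by a geometric mean (Derringer-Suich style); the response ranges and values are invented, and this is not the paper's tuned GA.

        def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
            """Map a larger-is-better response onto [0, 1]."""
            if y <= y_min:
                return 0.0
            if y >= y_max:
                return 1.0
            return ((y - y_min) / (y_max - y_min)) ** weight

        def overall_desirability(d_values):
            """Geometric mean of the individual desirabilities."""
            prod = 1.0
            for d in d_values:
                prod *= d
            return prod ** (1.0 / len(d_values))

        d = [desirability_larger_is_better(72.0, 60.0, 80.0),
             desirability_larger_is_better(0.65, 0.5, 1.0)]
        print(overall_desirability(d))   # single objective value for the optimizer to maximize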

  5. A Solution Quality Assessment Method for Swarm Intelligence Optimization Algorithms

    PubMed Central

    Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm on practical problems. This greatly limits its application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," the "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions were clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845

  6. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    PubMed

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  7. Dynamic Planar Convex Hull with Optimal Query Time and O(log n · log log n ) Update Time

    Microsoft Academic Search

    Gerth Stølting Brodal; Riko Jacob

    The dynamic maintenance of the convex hull of a set of points in the plane is one of the most important problems in computational geometry. We present a data structure supporting point insertions in amortized O(log n · log log log n) time, point deletions in amortized O(log n · log log n) time, and various queries about the convex

  8. Approximation algorithms for combinatorial optimization under uncertainty

    E-print Network

    Minkoff, Maria, 1976-

    2003-01-01

    Combinatorial optimization problems arise in many fields of industry and technology, where they are frequently used in production planning, transportation, and communication network design. Whereas in the context of classical ...

  9. A SEQUENTIAL QUADRATIC OPTIMIZATION ALGORITHM WITH ...

    E-print Network

    2013-08-22

  10. Application of coevolutionary genetic algorithms for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Liu, Jian-guo; Li, Zu-shu; Wu, Wei-ping

    2007-12-01

    Multiobjective optimization is clearly one of the most important classes of problems in science and engineering. The solution of a real problem involving multiobjective optimization must satisfy all optimization objectives simultaneously, and in general the solution is a set of indeterminate points. The task of multiobjective optimization is to estimate the distribution of this solution set and then to find a satisfying solution in it. Many methods for solving multiobjective optimization problems using genetic algorithms have been proposed in the past twenty years. But these approaches can work poorly, causing the population to converge to a small number of solutions due to random genetic drift. To avoid this phenomenon, a multiobjective coevolutionary genetic algorithm (MoCGA) for multiobjective optimization is proposed. The primary design goal of the proposed approach is to produce a reasonably good approximation of the true Pareto front of a problem. In the algorithm, each objective corresponds to a population. At each generation, these populations compete among themselves. An ecological population-density competition equation is used to describe the relation between the multiple objectives and to direct the adjustment of this relation at the individual and population levels. The proposed approach stores the Pareto-optimal points obtained along the evolutionary process in an external set. The proposed approach is validated using Schaffer's test function f II and it is compared with the Niched Pareto GA (nPGA). Simulation experiments show that the algorithm performs better in finding Pareto solutions, and that the MoCGA can have advantages over the other algorithms under consideration in convergence to the Pareto-optimal front.

  11. CONVEX_HULL—A pascal program for determining the convex hull for planar sets

    NASA Astrophysics Data System (ADS)

    Yamamoto, Jorge Kazuo

    1997-08-01

    Computer aided graphical display of geological data is usually based on a regular grid, interpolated from a scattered data set. However, the interpolation function is valid only inside the domain of sampling points, or a closed boundary which limits all the sampling points. This closed boundary, named convex hull, can be determined with the aid of an algorithm. The convex hull of a planar set of points is defined as the minimum area convex polygon containing all the points. This paper presents a review of current methods for determining the convex hull, and the computer program CONVEX_HULL, written in Pascal language and based on a new algorithm.
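
    For reference, a minimal Python sketch of one standard convex-hull method (Andrew's monotone chain, O(n log n)) is given below; it illustrates the concept but is not the Pascal program CONVEX_HULL or the new algorithm described above.

        def cross(o, a, b):
            """Cross product of vectors OA and OB; positive for a counter-clockwise turn."""
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def convex_hull(points):
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts
            lower, upper = [], []
            for p in pts:
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]   # hull vertices in counter-clockwise order

        print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))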

  12. Optimizing welding sequence with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kadivar, M. H.; Jafarpur, K.; Baradaran, G. H.

    The genetic algorithm method has been utilized with a thermomechanical model to determine an optimum welding sequence. The thermomechanical model developed for this purpose predicts residual stress and distortion in thin plates. The thermal history of the plate is computed using a transient two-dimensional finite element model which serves as an input to the mechanical analysis. The mechanical response of the plate is estimated through a thermoelastic-viscoplastic finite element model. The proposed model is verified by comparison with the experimental data where available. By choosing the appropriate objective function for the considered case, an optimum welding sequence is determined by a genetic algorithm.

  13. The Guided Improvement Algorithm for Exact, General-Purpose, Many-Objective Combinatorial Optimization

    E-print Network

    Jackson, Daniel

    2009-07-03

    This paper presents a new general-purpose algorithm for exact solving of combinatorial many-objective optimization problems. We call this new algorithm the guided improvement algorithm. The algorithm is implemented on top ...

  14. Using neural networks to speed up optimization algorithms

    NASA Astrophysics Data System (ADS)

    Bazan, M.; Russenschuck, S.

    2000-11-01

    The paper presents the application of Radial-basis-function (RBF) neural networks to speed up deterministic search algorithms used for the design and optimization of superconducting LHC magnets. The optimization of the iron yoke of the main dipoles requires a number of numerical field computations per trial solution as the field quality depends on the excitation of the magnets. This results in computation times of about 30 minutes for each objective function evaluation (on a DEC-Alpha 600/333) and only the most robust (deterministic) optimization algorithms can be applied. Using a RBF function approximator, the achieved speed-up of the search algorithm is in the order of 25% for problems with two parameters and about 18% for problems with three and five design variables.

  15. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.

  16. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective, genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electronic propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight time optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method that assigns a better fitness value to the solutions that are dominated by a fewer number of other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.

  17. Solving Multiobjective Optimization Problems using Evolutionary Algorithm

    E-print Network

    Coello, Carlos A. Coello

    ... Abbass, and Charles Newton, School of Computer Science, University of New South Wales, ADFA Campus. ... to be useful for solving MOPs (Zitzler and Thiele 1999). EAs have some advantages over traditional ... results when compared with the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele 1999

  18. Terminating Decision Algorithms Optimally Tuomas Sandholm

    E-print Network

    Gordon, Geoffrey J.

    a probability estimate that a solution exists. Let us define the following symbols: SOLt = "solution found by time t" (so, if a solution is found at time t, then SOLt' = 1 for all t' >= t), and NOSOLt = "no solution found by time t" ... a statistical performance profile, p(SOLt|Y), of the algorithm, i.e., the probability of finding

  19. Optimal Speedup of Las Vegas Algorithms

    Microsoft Academic Search

    Michael Luby; Alistair Sinclair; David Zuckerman

    1993-01-01

    Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. We consider the problem of minimizing the expected time required to obtain an answer from A using strategies which simulate A as follows: run A for a fixed amount of time t1, then
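
    The universal strategy analyzed in this line of work cycles through run-length budgets 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, ... (the "Luby sequence"); a minimal generator for the i-th term is sketched below, with the indexing convention chosen here for illustration.

        def luby(i):
            """Return the i-th term (1-indexed) of the Luby restart sequence."""
            k = 1
            while (1 << k) - 1 < i:      # smallest k with i <= 2**k - 1
                k += 1
            if i == (1 << k) - 1:        # i = 2**k - 1  ->  the term is 2**(k-1)
                return 1 << (k - 1)
            return luby(i - (1 << (k - 1)) + 1)

        print([luby(i) for i in range(1, 16)])   # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]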

  20. Groundwater Remediation Strategy Using Global Optimization Algorithms

    E-print Network

    Neumaier, Arnold

    CE Database keywords: Ground water; Remedial action; Algorithms; Ground-water management. DOI: 10.1061/(ASCE)0733-9496(2002)128:6(431). ... Jonoski, and Dimitri P. Solomatine. The contamination of groundwater is a widespread problem. The remediation of groundwater contamination by pumping

  1. A training algorithm for optimal margin classifiers

    Microsoft Academic Search

    Bernhard E. Boser; Isabelle M. Guyon; Vladimir N. Vapnik

    1992-01-01

    A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of
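
    In the same spirit, the sketch below fits a margin-maximizing linear classifier on toy data using scikit-learn's SVC with a large C (approximating a hard margin); the data are invented, and this only illustrates the idea that the solution is a linear combination of supporting patterns, not the authors' original training algorithm.

        from sklearn.svm import SVC

        X = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [3.0, 3.0], [4.0, 3.0], [3.0, 4.0]]
        y = [0, 0, 0, 1, 1, 1]

        clf = SVC(kernel="linear", C=1e6)        # large C approximates a hard margin
        clf.fit(X, y)
        print(clf.support_vectors_)              # the patterns that define the margin
        print(clf.dual_coef_, clf.intercept_)    # their coefficients and the bias term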

  2. DERIVATIVE-FREE OPTIMIZATION Algorithms, software and

    E-print Network

    Grossmann, Ignacio E.

    Methods covered include direct local search: the Nelder-Mead simplex algorithm, generalized pattern search, and generating set search. [Slide figure: Nelder-Mead simplex operations - reflection, expansion, inside contraction, outside contraction.] Software discussed: FMINSEARCH (Nelder-Mead), DAKOTA PATTERN (PPS), HOPSPACK (PPS), SID-PSM (simplex gradient PPS), NOMAD.

  3. Restarted local search algorithms for continuous black box optimization.

    PubMed

    Pošík, Petr; Huyer, Waltraud

    2012-01-01

    Several local search algorithms for real-valued domains (axis parallel line search, Nelder-Mead simplex search, Rosenbrock's algorithm, quasi-Newton method, NEWUOA, and VXQR) are described and thoroughly compared in this article, embedding them in a multi-start method. Their comparison aims (1) to help the researchers from the evolutionary community to choose the right opponent for their algorithm (to choose an opponent that would constitute a hard-to-beat baseline algorithm), (2) to describe individual features of these algorithms and show how they influence the algorithm on different problems, and (3) to provide inspiration for the hybridization of evolutionary algorithms with these local optimizers. The recently proposed Comparing Continuous Optimizers (COCO) methodology was adopted as the basis for the comparison. The results show that in low dimensional spaces, the old method of Nelder and Mead is still the most successful among those compared, while in spaces of higher dimensions, it is better to choose an algorithm based on quadratic modeling, such as NEWUOA or a quasi-Newton method. PMID:22779407
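
    To give a concrete picture of the multi-start embedding, here is a minimal sketch that restarts SciPy's Nelder-Mead from random points on the Rosenbrock function; the restart policy and budget are simplifications and not the article's COCO-based protocol.

        import numpy as np
        from scipy.optimize import minimize, rosen

        def multistart_nelder_mead(f, dim, n_starts=20, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            best = None
            for _ in range(n_starts):
                x0 = rng.uniform(bounds[0], bounds[1], size=dim)
                res = minimize(f, x0, method="Nelder-Mead")   # one local search per restart
                if best is None or res.fun < best.fun:
                    best = res
            return best

        print(multistart_nelder_mead(rosen, dim=3).x)   # should approach [1, 1, 1]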

  4. Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.

    2006-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.

  5. A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2014-12-01

    Planning of the shortest/optimal route is essential for efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First we have framed an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.

  6. An optimal on-line algorithm for metrical task system

    Microsoft Academic Search

    Allan Borodin; Nathan Linial; Michael E. Saks

    1992-01-01

    In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all on-line

  7. Multiobjective Optimization by a Modified Artificial Immune System Algorithm

    Microsoft Academic Search

    Fabio Freschi; Maurizio Repetto

    2005-01-01

    http://www.polito.it/cadema Abstract: The aim of this work is to propose and validate a new multi-objective optimization algorithm based on the emulation of the immune system behavior. The rationale of this work is that the artificial immune system has, in its elementary structure, the main features required by other multiobjective evolutionary algorithms described in the literature. The proposed approach is

  8. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    SciTech Connect

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

    This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It settles the BOA into the framework of problem decomposition used frequently in order to model and understand the behavior of simple genetic algorithms. The growth of the population size and the number of generations until convergence with respect to the size of a problem is theoretically analyzed. The theoretical results are supported by a number of experiments.

  9. Particle Swarm Optimization Algorithm for Permutation Flowshop Sequencing Problem

    Microsoft Academic Search

    Mehmet Fatih Tasgetiren; Mehmet Sevkli; Yun-chia Liang; Gunes Gencyilmaz

    2004-01-01

    This paper presents a particle swarm optimization algorithm (PSO) to solve the permutation flowshop sequencing problem (PFSP) with makespan criterion. Simple but very efficient local search based on the variable neighborhood search (VNS) is embedded in the PSO algorithm to solve the benchmark suites in the literature. The results are presented and compared to the best known approaches in the

  10. Shape Optimization of Rubber Bushing Using Differential Evolution Algorithm

    PubMed Central

    2014-01-01

    The objective of this study is to design rubber bushing at desired level of stiffness characteristics in order to achieve the ride quality of the vehicle. A differential evolution algorithm based approach is developed to optimize the rubber bushing through integrating a finite element code running in batch mode to compute the objective function values for each generation. Two case studies were given to illustrate the application of proposed approach. Optimum shape parameters of 2D bushing model were determined by shape optimization using differential evolution algorithm. PMID:25276848
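
    For readers unfamiliar with the setup, the sketch below shows the overall optimization loop with SciPy's differential evolution standing in for the authors' implementation; the quadratic "stiffness error" objective and the parameter bounds are placeholders for the finite-element evaluation described above.

        import numpy as np
        from scipy.optimize import differential_evolution

        def stiffness_error(shape_params):
            # Placeholder objective: distance of assumed shape parameters from a target.
            target = np.array([2.0, 0.5])
            return float(np.sum((np.asarray(shape_params) - target) ** 2))

        bounds = [(0.5, 5.0), (0.1, 2.0)]            # assumed ranges of two shape parameters
        result = differential_evolution(stiffness_error, bounds, seed=1, tol=1e-8)
        print(result.x, result.fun)                  # converges to the target parameters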

  11. Chaos time series prediction based on membrane optimization algorithms.

    PubMed

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong

    2015-01-01

    This paper puts forward a prediction model based on a membrane computing optimization algorithm for chaotic time series; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ²) by using the membrane computing optimization algorithm. It is an important basis for spectrum management to predict accurately the change trend of parameters in the electromagnetic environment, which can help decision makers to adopt an optimal action. Then, the model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the proposed forecast model with conventional similar models. The experimental results show that, whether for single-step prediction or multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
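
    The (τ, m) parameters mentioned above control a delay embedding of the time series; a minimal sketch of that reconstruction step is shown below (the τ and m values are arbitrary, and this is not the membrane-computing optimizer or the LS-SVM itself).

        import numpy as np

        def delay_embed(series, m=3, tau=2):
            """Rows are delay vectors [x_t, x_{t+tau}, ..., x_{t+(m-1)*tau}]."""
            x = np.asarray(series, dtype=float)
            n = len(x) - (m - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

        print(delay_embed(np.sin(np.linspace(0, 10, 50))).shape)   # (46, 3)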

  12. Optimization Algorithm for the Generation of ONCV Pseudopotentials

    E-print Network

    Schlipf, Martin

    2015-01-01

    We present an optimization algorithm to construct pseudopotentials and use it to generate a set of Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials for elements up to Z=83 (Bi) (excluding Lanthanides). We introduce a quality function that assesses the agreement of a pseudopotential calculation with all-electron FLAPW results, and the necessary plane-wave energy cutoff. This quality function allows us to use a Nelder-Mead optimization algorithm on a training set of materials to optimize the input parameters of the pseudopotential construction for most of the periodic table. We control the accuracy of the resulting pseudopotentials on a test set of materials independent of the training set. We find that the automatically constructed pseudopotentials provide a good agreement with the all-electron results obtained using the FLEUR code with a plane-wave energy cutoff of approximately 60 Ry.

  13. UROP Joint Project Nature-inspired algorithms such as Particle Swarm Optimization and Evolutionary

    E-print Network

    Krause, Rolf

    Nature-inspired algorithms such as Particle Swarm Optimization and Evolutionary Algorithms are among the most powerful algorithms for global optimization problems. In 2009 Yang formulated the Lévy-flight firefly algorithm ... 2) Summary of the results and description of the Lévy-flight firefly algorithm. 3)

  14. Optimal classification of standoff bioaerosol measurements using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Nyhavn, Ragnhild; Moen, Hans J. F.; Farsund, Øystein; Rustad, Gunnar

    2011-05-01

    Early warning systems based on standoff detection of biological aerosols require real-time signal processing of a large quantity of high-dimensional data, challenging the system's efficiency in terms of both computational complexity and classification accuracy. Hence, optimal feature selection is essential in forming a stable and efficient classification system. This involves finding optimal signal processing parameters, characteristic spectral frequencies and other data transformations in a variable space of large magnitude, underscoring the need for an efficient and smart search algorithm. Evolutionary algorithms are population-based optimization methods inspired by Darwinian evolutionary theory. These methods focus on the application of selection, mutation and recombination to a population of competing solutions and optimize this set by evolving the population of solutions at each generation. We have employed genetic algorithms in the search for optimal feature selection and signal processing parameters for classification of biological agents. The experimental data were acquired with a spectrally resolved lidar based on ultraviolet laser induced fluorescence, and included several releases of 5 common simulants. The genetic algorithm outperforms benchmark methods involving analytic, sequential and random methods like support vector machines, Fisher's linear discriminant and principal component analysis, with significantly improved classification accuracy compared to the best classical method.

  15. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.

  16. Benchmarking Derivative-Free Optimization Algorithms

    E-print Network

    2008-05-13

  17. Wind Mill Pattern Optimization using Evolutionary Algorithms

    E-print Network

    ... as many wind turbines as possible in a given area is inefficient. There are different wake vortex models ... Efforts to discover techniques for efficiently installing wind farms, both onshore and offshore, have increased dramatically; Gonzalez's recent review [2] lists almost 150 bibliographic references for the optimal wind-turbine

  18. A software-optimized encryption algorithm

    Microsoft Academic Search

    Phillip Rogaway; Don Coppersmith

    We describe a fast, software-oriented, encryption algorithm. Computational cost on a 32-bit processor is about 5 elementary machine instructions per byte of text. The cipher is a pseudorandom function; under control of a key (first pre-processed into an internal table) it stretches a short index into a much longer pseudorandom string. This string can be used as a one-time pad.

  19. A Software-Optimized Encryption Algorithm

    Microsoft Academic Search

    Phillip Rogaway; Don Coppersmith

    1998-01-01

    We describe the software-efficient encryption algorithm SEAL 3.0. Computational cost on a modern 32-bit processor is about 4 clock cycles per byte of text. The cipher is a pseudorandom function family: under control of a key (first preprocessed into an internal table) it stretches a 32-bit position index into a long, pseudorandom string. This string can be used as

  20. Spacecraft thermal design with the Generalized Extremal Optimization Algorithm

    Microsoft Academic Search

    Roberto L. Galski; Fabiano L. De Sousa; Fernando M. Ramos; Issamu Muraoka

    2007-01-01

    This article describes an application of the Generalized Extremal Optimization (GEO) algorithm to the inverse design of a spacecraft thermal control system. GEO is a recently proposed global search meta-heuristic (Sousa, F.L. and Ramos, F.M., 2002, Function optimization using extremal dynamics. In: Proceedings of the 4th International Conference on Inverse Problems in Engineering (cd-rom), Rio de Janeiro, Brazil.; Sousa, F.L.,

  1. Faster optimal parallel prefix circuits: New algorithmic construction

    Microsoft Academic Search

    Yen-chun Lin; Chin-yu Su

    2005-01-01

    Parallel prefix circuits are parallel prefix algorithms on the combinational circuit model. A prefix circuit with n inputs is depth-size optimal if its depth plus size equals 2n-2. Smaller depth implies faster computation, while smaller size implies less power consumption, less VLSI area, and less cost. To be of practical use, the depth and fan-out of a depth-size optimal prefix

  2. Optimization of heat pump using fuzzy logic and genetic algorithm

    Microsoft Academic Search

    Arzu Şencan Şahin; Bayram Kılıç; Ulaş Kılıç

    Heat pumps offer economical alternatives of recovering heat from different sources for use in various industrial, commercial and residential applications. In this study, a single-stage air-source vapor compression heat pump system has been optimized using genetic algorithm (GA) and fuzzy logic (FL). The necessary thermodynamic properties for optimization were calculated by FL. Thermodynamic properties obtained with FL were compared with actual

  3. A Service Self-Optimization Algorithm based on Autonomic Computing

    Microsoft Academic Search

    Ruijuan Zheng; Mingchuan Zhang; Qingtao Wu; Guanfeng Li; Wangyang Wei

    2009-01-01

    Under intrusion or abnormal attack, how to autonomously supply undegraded service to users is the ultimate goal of network security technology. Firstly, combined with the martingale difference principle, a service self-optimization algorithm based on autonomic computing (S2OAC) is proposed. Secondly, according to prior self-optimizing knowledge and parameter information of the inner environment, S2OAC searches the convergence trend of self

  4. A derivative-free optimization algorithm based on conditional moments

    NASA Astrophysics Data System (ADS)

    Wang, Xiaogang; Liang, Dong; Feng, Xingdong; Ye, Lu

    2007-07-01

    In this paper we propose a derivative-free optimization algorithm based on conditional moments for finding the maximizer of an objective function. The proposed algorithm does not require calculation or approximation of any order derivative of the objective function. The step size in iteration is determined adaptively according to the local geometrical feature of the objective function and a pre-specified quantity representing the desired precision. The theoretical properties including convergence of the method are presented. Numerical experiments comparing with the Newton, Quasi-Newton and trust region methods are given to illustrate the effectiveness of the algorithm.
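
    The entry above describes a derivative-free method whose step size adapts to the local geometry of the objective. As a rough illustration of that general idea (not the conditional-moments method itself), the following sketch maximizes a function by probing each coordinate and shrinking the step until a desired precision is reached; the test function and constants are assumed.

```python
# A generic derivative-free maximizer sketch (coordinate search with an
# adaptive step size), illustrating the idea of the entry above; it is NOT
# the conditional-moments method of Wang et al.
import numpy as np

def maximize_derivative_free(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Maximize f without derivatives: probe +/- step along each axis,
    move when an improvement is found, otherwise shrink the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft > fx:                 # accept any improving move
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5                     # adapt the step to the local geometry
            if step < tol:                  # desired precision reached
                break
    return x, fx

# usage: maximize a smooth concave test function (assumed example)
xopt, fopt = maximize_derivative_free(lambda x: -np.sum((x - 2.0) ** 2), np.zeros(3))
print(xopt, fopt)
```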

  5. A modified particle swarm optimization algorithm and its application in optimal power flow problem

    Microsoft Academic Search

    Cui-Ru Wang; He-Jin Yuan; Zhi-Qiang Huang; Jiang-Wei Zhang; Chen-Jun Sun

    2005-01-01

    A modified particle swarm optimization (MPSO) algorithm is presented. In the new algorithm, each particle learns not only from itself and the best particle but also from other individuals. With this enhanced learning behavior, the opportunity to find the global optimum is increased and the influence of the particles' initial positions is decreased. Finally, the method adopting MPSO

  6. Implementation and analysis of an optimized rainfalling watershed algorithm

    NASA Astrophysics Data System (ADS)

    De Smet, Patrick; Pires, Rui Luis V. P. M.

    2000-04-01

    In this paper we discuss a new implementation of a floating-point based rainfalling watershed algorithm. First, we analyze and compare our proposed algorithm and its implementation with two implementations based on the well-known discrete Vincent-Soille flooding watershed algorithms. Next, we show that by carefully designing and optimizing our algorithm a memory (bandwidth) efficient and high-speed implementation can be realized. We report on timing and memory usage results for different compiler settings, computer systems and algorithmic parameters. Our optimized implementation turns out to be significantly faster than the two Vincent-Soille based implementations with which we compare it. Finally, we include some segmentation results to illustrate that visually acceptable and almost identical segmentation results can always be obtained for all algorithms being compared. We also explain how, in combination with other pre- or post-processing techniques, the problem of oversegmentation (a typical problem of all raw watershed algorithms) can be (partially) overcome. All these properties make our proposed implementation an excellent candidate for use in various practical applications where high-speed performance and/or efficient memory usage is needed.

  7. Optimal and sub-optimal maximum a posteriori algorithms suitable for turbo decoding

    Microsoft Academic Search

    Patrick Robertson; Peter Hoeher; Emmanuelle Villebrun

    1997-01-01

    For estimating the states or outputs of a Markov process, the symbol-by-symbol maximum a posteriori (MAP) algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of non-linear functions and a high number of additions and multiplications. MAP-like algorithms operating in the logarithmic domain presented in the past solve the numerical problem

  8. A social learning particle swarm optimization algorithm for scalable optimization

    E-print Network

    Jin, Yaochu

    A social learning mechanism is introduced into particle swarm optimization (PSO) to develop a social learning PSO (SL-PSO). Unlike classical PSO variants, the proposed SL-PSO learns from any better particles (termed demonstrators) in the current swarm. In addition, to ease the burden of parameter settings, the proposed SL-PSO adopts a dimension-dependent parameter

  9. Gbest-guided artificial bee colony algorithm for numerical function optimization

    Microsoft Academic Search

    Guopu Zhu; Sam Kwong

    2010-01-01

    The artificial bee colony (ABC) algorithm, invented recently by Karaboga, is a biologically inspired optimization algorithm which has been shown to be competitive with some conventional biologically inspired algorithms, such as the genetic algorithm (GA), differential evolution (DE) and particle swarm optimization (PSO). However, there is still an insufficiency in the ABC algorithm regarding its solution search equation, which is good at exploration but poor

  10. Optimal Design of Geodetic Network Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Vajedian, Sanaz; Bagheri, Hosein

    2010-05-01

    A geodetic network is a network measured precisely by terrestrial surveying techniques based on angle and distance measurements; it can be used to control the stability of dams, towers and their surrounding land and to monitor surface deformation. The main goals of an optimal geodetic network design process are to find the proper locations of control stations (first-order design) and the proper weights of observations (second-order design) so as to satisfy all the criteria considered for the quality of the network, which is itself evaluated by the network's accuracy, reliability (internal and external), sensitivity and cost. The first-order design problem can be treated as a numerical optimization problem. In this design process, finding the unknown coordinates of the network stations is a central issue: the geodetic observations, namely angle and distance measurements, must be entered into an adjustment method, which requires inverse problem algorithms. Inverse problem algorithms are methods for finding optimal solutions to given problems and include both classical and evolutionary computations. The classical approaches are analytical methods and are useful for finding the optimum of a continuous and differentiable function; the least squares (LS) method is one such classical technique, deriving estimates of stochastic variables and their distribution parameters from observed samples. Evolutionary algorithms are adaptive optimization and search procedures that find solutions to problems by mimicking the mechanisms of natural evolution; they generate new points in the search space by applying operators to current points, statistically moving toward more optimal regions of the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper. It starts with the definition of an initial population, after which the operators of selection, replication and variation are applied to obtain the solution of the problem. In this research, the first step is to design a geodetic network and observe the distances and angles between the network's stations. The second step is to use the optimization algorithms to estimate the unknown station coordinates from the length and angle equations. The results indicate that genetic algorithms can be successfully employed for solving inverse problems in engineering disciplines, and that many complex problems can be solved better using genetic algorithms than with conventional methods.
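
    As a minimal illustration of the GA loop sketched above (initial population, selection, replication/crossover, variation), the following code evolves candidate coordinate vectors against a hypothetical stand-in criterion; it is not a geodetic adjustment, and all parameters are assumed.

```python
# Minimal real-coded genetic algorithm sketch mirroring the steps described
# above (initial population, selection, replication/crossover, variation).
# The objective below is a hypothetical stand-in for a network design
# criterion, not an actual geodetic adjustment.
import numpy as np

rng = np.random.default_rng(1)

def network_criterion(coords):
    # hypothetical quality measure of candidate station coordinates
    return np.sum((coords - 3.0) ** 2)

def genetic_algorithm(n_gen=200, pop_size=40, dim=6, p_mut=0.1):
    pop = rng.uniform(-10, 10, (pop_size, dim))            # initial population
    for _ in range(n_gen):
        fitness = np.array([network_criterion(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]               # selection (truncation)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random(dim)
            child = alpha * a + (1 - alpha) * b              # arithmetic crossover
            mask = rng.random(dim) < p_mut
            child[mask] += rng.normal(0, 0.5, mask.sum())    # mutation (variation)
            children.append(child)
        pop = np.vstack([parents, children])
    best = min(pop, key=network_criterion)
    return best, network_criterion(best)

print(genetic_algorithm())
```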

  11. Fast Approximate Convex Decomposition 

    E-print Network

    Ghosh, Mukulika

    2012-10-19

    Approximate convex decomposition (ACD) is a technique that partitions an input object into "approximately convex" components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can...

  12. Propeller performance analysis and multidisciplinary optimization using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Burger, Christoph

    A propeller performance analysis program has been developed and integrated into a Genetic Algorithm for design optimization. The design tool will produce optimal propeller geometries for a given goal, which includes performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds number based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep and the local airfoil sections, produced blades with favorable tradeoffs between single and multiple point optimizations of propeller performance and acoustic noise signatures. Optimizations using a binary encoded IMPROVE(c) Genetic Algorithm (GA) and a real encoded GA were obtained after optimization runs with some premature convergence. The newly developed real encoded GA was used to obtain the majority of the results which produced generally better convergence characteristics when compared to the binary encoded GA. The optimization trade-offs show that single point optimized propellers have favorable performance, but circulation distributions were less smooth when compared to dual point or multiobjective optimizations. Some of the single point optimizations generated propellers with proplets which show a loading shift to the blade tip region. When noise is included into the objective functions some propellers indicate a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition the propeller number was increased in some optimizations to reduce the acoustic blade signature.

  13. A TRUST REGION INTERIOR POINT ALGORITHM FOR LINEARLY CONSTRAINED OPTIMIZATION

    Microsoft Academic Search

    J. Frédéric Bonnans; Cécilia Pola

    We present an extension, for nonlinear optimization under linear constraints, of an algorithm for quadratic programming using a trust region idea introduced by Ye and Tse (Math. Programming, 44 (1989), pp. 157–179) and extended by Bonnans and Bouhtou (RAIRO Rech. Opér., 29 (1995), pp. 195–217). Due to the nonlinearity of the cost, we use a linesearch in order to

  14. An optimal bit allocation algorithm for sub-band coding

    Microsoft Academic Search

    P. H. Westerink; J. Biemond; D. E. Boekee

    1988-01-01

    An optimal bit allocation algorithm is presented that is suitable for all practical situations. Each source to be coded is assumed to have its own set of admissible quantizers (which can be either scalar or vector quantizers) which do not need to have integer bit rates. The distortion versus rate characteristic of each quantizer set may have an arbitrary shape.

  15. Allocating optimal index positions on tool magazines using genetic algorithms

    Microsoft Academic Search

    Türkay Dereli; I. Hüseyin Filiz

    2000-01-01

    This paper presents an optimisation system software developed for the determination of optimal index positions of cutting tools on the automatic tool changer (ATC) or turret magazine of CNC machine tools. Position selection is performed using a genetic algorithm (GA) which takes a list of cutting tools assigned to certain machining operations together with total number of index positions available

  16. Intelligent evolutionary algorithms for large parameter optimization problems

    Microsoft Academic Search

    Shinn-ying Ho; Li-sun Shu; Jian-hung Chen

    2004-01-01

    This work proposes two intelligent evolutionary algorithms IEA and IMOEA using a novel intelligent gene collector (IGC) to solve single and multiobjective large parameter optimization problems, respectively. IGC is the main phase in an intelligent recombination operator of IEA and IMOEA. Based on orthogonal experimental design, IGC uses a divide-and-conquer approach, which consists of adaptively dividing two individuals of parents

  17. Constrained Optimization with Genetic Algorithm: Improving Profitability of Targeted Marketing

    Microsoft Academic Search

    Geng Cui; Man Leung Wong; Xiang Wan

    2010-01-01

    Direct marketing forecasting models have focused on estimating the response probabilities of consumer purchases and neglected the profitability of customers. This study proposes a method of constrained optimization using genetic algorithm to maximize the profitability at the top deciles of a customer list. We apply this method to a direct marketing dataset using tenfold cross validation. The results from this

  18. GENETIC ALGORITHMS AND OPTIMIZING CHEMICAL OXYGEN-IODINE LASERS

    Microsoft Academic Search

    David L. Carroll

    1996-01-01

    This paper presents results from the first known application of the genetic algorithm (GA) technique for optimizing the performance of a laser system (chemical, solid-state, or gaseous). The effects of elitism, single point and uniform crossover, creep mutation, different random number seeds, population size, niching and the number of children per pair of parents on the performance of the GA

  19. Environmental Optimization Using the WAste Reduction Algorithm (WAR)

    EPA Science Inventory

    Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...

  20. A Niched Pareto Genetic Algorithm for Multiobjective Optimization

    Microsoft Academic Search

    Jeffrey Horn; Nicholas Nafpliotis; David E. Goldberg

    1994-01-01

    Many, if not most, optimization problems have multiple objectives. Historically, multiple objectives have been combined ad hoc to form a scalar objective function, usually through a linear combination (weighted sum) of the multiple attributes, or by turning objectives into constraints. The genetic algorithm (GA), however, is readily modified to deal with multiple objectives by incorporating the concept of Pareto domination

  1. Comparison between Genetic Algorithms and Particle Swarm Optimization

    Microsoft Academic Search

    Russell C. Eberhart; Yuhui Shi

    1998-01-01

    This paper compares two evolutionary computation paradigms: genetic algorithms and particle swarm optimization. The operators of each paradigm are reviewed, focusing on how each affects search behavior in the problem space. The goals of the paper are to provide additional insights into how each paradigm works, and to suggest ways in which performance might be improved by incorporating features from

  2. Genetic Algorithm Optimization of Artificial Neural Networks for Hydrological Modelling

    Microsoft Academic Search

    R. J. Abrahart

    2004-01-01

    This paper will consider the case for genetic algorithm optimization in the development of an artificial neural network model. It will provide a methodological evaluation of reported investigations with respect to hydrological forecasting and prediction. The intention in such operations is to develop a superior modelling solution that will be: more accurate in terms of output precision and model

  3. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.

  4. A Clustering Genetic Algorithm for Actuator Optimization in Flow Control

    Microsoft Academic Search

    Michele Milano; Petros Koumoutsakos

    2000-01-01

    Active flow control can provide a leap in the performance of engineering configurations. Although a number of sensor and actuator configurations have been proposed, the task of identifying optimal parameters for control devices is based on engineering intuition, usually gathered from uncontrolled flow experiments. Here we propose a clustering genetic algorithm that adaptively identifies critical points in the

  5. The particle swarm optimization algorithm: convergence analysis and parameter selection

    Microsoft Academic Search

    Ioan Cristian Trelea

    2003-01-01

    The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given.

  6. The Societal Impact of Algorithms in Transport Optimization

    E-print Network

    Zaroliagis, Christos D.

    To achieve these vital socio-economic goals, public transportation systems have to improve the quality ... far, there were only theoretical arguments in favor of the time-dependent approach. In [Pyrga et al.

  7. Optimization flow control—I: basic algorithm and convergence

    Microsoft Academic Search

    Steven H. Low; David E. Lapsley

    1999-01-01

    We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system to solve the dual problem using a gradient projection algorithm. In this system sources select transmission rates that maximize their own benefits,

  8. Elitist Genetic Algorithm Models: Optimization of High Performance Concrete Mixes

    Microsoft Academic Search

    M. A. Jayaram; M. C. Nataraja; C. N. Ravikumar

    2009-01-01

    This article elaborates the development of elitist Genetic Algorithm (GA) models for the optimization of high volume fly ash concrete (HVFAC) mixes. The model consists of two stages. In the first stage, a huge database of 350 mix designs garnered from standard research publications was statistically analyzed to elicit upper and lower bounds of certain range constraints and rational ratio

  9. Extended Semantics and Optimization Algorithms for CP-Networks

    E-print Network

    Dimopoulos, Yannis

    CP-nets were designed to make the process of preference elicitation simpler and more intuitive for lay users by graphically structuring a set of Ceteris Paribus (CP) preference statements

  10. Optimal Sleep-Wakeup Algorithms for Barriers of Wireless Sensors

    E-print Network

    Sinha, Prasun

    The problem of sleep-wakeup has been extensively studied ... the sleep-wakeup problem is NP-Hard for this model; several heuristics exist. For the model of barrier

  11. Algorithms for Optimal Price Regulations

    E-print Network

    Al Hanbali, Ahmad

    ... regulative policy, and compare it to alternative implementations of price regulations. The problem is a three-level mathematical program: the EU determines the price regulative policy that maximizes overall social welfare

  12. Pseudo-Tree Based Hybrid Algorithm for Distributed Constraint Optimization

    E-print Network

    Yeoh, William

    ... multi-agent cooperation. Considering pseudo-tree based search algorithms is important in DCOPs, since their memory ... how to speed up pseudo-tree based search algorithms is one of the major issues in DCOPs

  13. SNOPT: An SQP Algorithm For Large-Scale Constrained Optimization

    Microsoft Academic Search

    Philip E. Gill; Walter Murray; Michael A. Saunders

    1997-01-01

    Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit

  14. An optimal algorithm for closest pair maintenance

    E-print Network

    Bespamyatnikh, Sergei

    Given a set S of n points in k-dimensional space, and an L_t metric, the dynamic closest pair ... a k-tuple of real numbers (p_1, ..., p_k). The closest pair of S is a pair (p, q) of distinct points p, q in S

  15. Inverse radiation analysis using repulsive particle swarm optimization algorithm

    Microsoft Academic Search

    Kyun Ho Lee; Seung Wook Baek; Ki Wan Kim

    2008-01-01

    In this study, an inverse radiation analysis is presented for the estimation of the radiation properties for an absorbing, emitting, and scattering media with diffusely emitting and reflecting opaque boundaries. The repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, is proposed as an effective method for improving the search efficiency for unknown radiative parameters.

  16. Using the Particle Swarm Optimization Algorithm for Robotic Search Applications

    Microsoft Academic Search

    J. M. Hereford; M. Siebold; S. Nichols

    2007-01-01

    This paper describes the experimental results of using the particle swarm optimization (PSO) algorithm to control a suite of robots. In our approach, each bot is one particle in the PSO; each particle/bot makes measurements, updates its own position and velocity, updates its own personal best measurement (pbest) and personal best location (if necessary), and broadcasts to the other bots

  17. A particle swarm optimization algorithm for part–machine grouping

    Microsoft Academic Search

    Carlos Andrés; Sebastián Lozano

    2006-01-01

    Although in the last years different metaheuristic methods have been used to solve the cell formation problem in group technology, this paper presents the first particle swarm optimization (PSO) algorithm designed to address this problem. PSO is a population-based evolutionary computation technique based on a social behavior metaphor. The criterion used to group the machines in cells is based on

  18. Convergence behavior of the fully informed particle swarm optimization algorithm

    Microsoft Academic Search

    Marco Antonio Montes De Oca; Thomas Stützle

    2008-01-01

    The fully informed particle swarm optimization algorithm (FIPS) is very sensitive to changes in the population topology. The velocity update rule used in FIPS considers all the neighbors of a particle to update its velocity instead of just the best one as it is done in most variants. It has been argued that this rule induces a random behavior of

  19. INTERIOR-POINT ALGORITHMS FOR CONVEX OPTIMIZATION BASED ON PRIMAL-DUAL METRICS

    E-print Network

    Tunçel, Levent

    ... and in applications. Part of the success of primal-dual symmetric methods for LP and SDP might stem from the fact

  20. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
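
    Multiobjective methods such as the one above rely on Pareto dominance to rank design candidates. The sketch below is a minimal non-dominated filter over a small set of hypothetical two-objective design points (both objectives treated as minimized); it is an illustration, not the MOEA used in the study.

```python
# A minimal Pareto-dominance filter of the kind used inside a multiobjective
# EA such as the one above (illustrative sketch; objective values are assumed
# to be minimized).
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of an (n_designs x n_objectives) array."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if not keep[i]:
            continue
        # q dominates p if q is <= p in every objective and < in at least one
        dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated.any():
            keep[i] = False
    return points[keep]

# usage: two hypothetical objectives, e.g. (head shortfall, input power)
designs = np.array([[0.2, 5.0], [0.3, 4.0], [0.25, 4.5], [0.4, 4.1]])
print(pareto_front(designs))
```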

  1. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-05-16

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are studied: Si[001] symmetric tilted grain boundaries and the Ag/Au-induced Si(111) surface. The genetic algorithm is found to be very efficient at finding lowest-energy structures in both cases: not only can existing structures from experiments be reproduced, but many new structures can also be predicted. This shows that the genetic algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to explaining an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.

  2. The optimization algorithm based knot and control point automatic adjustment

    NASA Astrophysics Data System (ADS)

    Jia, Xingyue; Zhao, Xiuyang

    2015-03-01

    For point cloud or mesh models that can be approximated by cubic B-spline surfaces, an algorithm for optimizing the knot vector based on a Gaussian Mixture Model (GMM) is proposed in this paper. In addition, the control points at sub-corner points are searched by Particle Swarm Optimization (PSO) in the process of stitching two B-spline surfaces with different knot vectors. Compared with conventional B-spline surface skinning, the proposed algorithms have two advantages. First, the global optimum is easy to find by statistically learning and sampling according to the probability distribution of the best individuals. Second, the stitched surface is much smoother and the precision of the approximating surface is also higher. The effectiveness of the proposed algorithm has been demonstrated on experimental examples.

  3. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed in this paper to solve reliability problems. The IPSO uses two position-updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from being trapped in local optima, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than four other particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature. PMID:20850737
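
    A minimal sketch of the two-phase update plus post-update mutation described above is given below; it follows the stated idea (own best early, most successful particle late, mutation afterwards) on a toy objective, and is not the authors' exact IPSO or its reliability test problems.

```python
# Minimal sketch of the two-phase update plus mutation described above
# (an illustration of the idea, not the authors' exact IPSO).
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                      # placeholder objective (assumed)
    return float(np.sum(x ** 2))

def ipso_like(f, dim=10, n_particles=30, n_iter=300, w=0.7, c=1.5, p_mut=0.05):
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(x) for x in pos])
    g = int(np.argmin(pbest_val))
    for t in range(n_iter):
        early = t < n_iter // 2
        for i in range(n_particles):
            r = rng.random(dim)
            if early:
                # early iterations: rely mainly on the particle's own best
                vel[i] = w * vel[i] + c * r * (pbest[i] - pos[i])
            else:
                # late iterations: rely mainly on the most successful particle
                vel[i] = w * vel[i] + c * r * (pbest[g] - pos[i])
            pos[i] += vel[i]
            # mutation after the position update, to escape local optima
            if rng.random() < p_mut:
                j = rng.integers(dim)
                pos[i, j] += rng.normal(0, 1.0)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i].copy()
                if val < pbest_val[g]:
                    g = i
    return pbest[g], pbest_val[g]

print(ipso_like(sphere))
```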

  4. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the release discretization required for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on how we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) the Simplex method for linear allocation problems, and 2) a quadratic knapsack method for nonlinear problems. The novel idea is to include the nested optimization algorithm in the state transition, which lowers the dimension of the original problem and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity and the computational expense. The algorithms can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested at the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives, including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policies were tested on 10 years (1995-2005) of historical data and compared with the nDP optimal reservoir operation over the same period. The nested algorithms and the optimal reservoir operation results are analysed and explained.

  5. A scatter learning particle swarm optimization algorithm for multimodal problems.

    PubMed

    Ren, Zhigang; Zhang, Aimin; Wen, Changyun; Feng, Zuren

    2014-07-01

    Particle swarm optimization (PSO) has been proved to be an effective tool for function optimization. Its performance depends heavily on the characteristics of the employed exemplars. This necessitates considering both the fitness and the distribution of exemplars in designing PSO algorithms. Following this idea, we propose a novel PSO variant, called scatter learning PSO algorithm (SLPSOA) for multimodal problems. SLPSOA contains some new algorithmic features while following the basic framework of PSO. It constructs an exemplar pool (EP) that is composed of a certain number of relatively high-quality solutions scattered in the solution space, and requires particles to select their exemplars from EP using the roulette wheel rule. By this means, more promising solution regions can be found. In addition, SLPSOA employs Solis and Wets' algorithm as a local searcher to enhance its fine search ability in the newfound solution regions. To verify the efficiency of the proposed algorithm, we test it on a set of 16 benchmark functions and compare it with six existing typical PSO algorithms. Computational results demonstrate that SLPSOA can prevent premature convergence and produce competitive solutions. PMID:24108491
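
    The exemplar-pool mechanism above can be illustrated with a roulette-wheel draw whose probabilities favor better pool members. The weighting rule and the sample data below are assumptions for illustration, not the SLPSOA's exact scheme.

```python
# Sketch of roulette-wheel selection of exemplars from an exemplar pool (EP),
# the mechanism described above; the fitness-to-probability rule is a
# simplifying assumption (minimization, better solutions get larger weight).
import numpy as np

rng = np.random.default_rng(3)

def select_exemplar(exemplar_pool, fitnesses):
    """Pick one exemplar from the pool, with probability decreasing in its
    (minimization) fitness value."""
    fit = np.asarray(fitnesses, dtype=float)
    weights = fit.max() - fit + 1e-12          # smaller fitness -> larger weight
    probs = weights / weights.sum()
    idx = rng.choice(len(exemplar_pool), p=probs)
    return exemplar_pool[idx]

# usage: a pool of 5 scattered solutions in 2-D with their fitness values
pool = np.array([[0.1, 0.2], [1.0, -0.5], [2.0, 2.0], [-1.5, 0.4], [0.0, 3.0]])
fitness = np.array([0.05, 1.25, 8.0, 2.41, 9.0])
print(select_exemplar(pool, fitness))
```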

  6. Global structural optimizations of surface systems with a genetic algorithm

    SciTech Connect

    Chuang, Feng-Chuan

    2005-05-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully, and the results for Si(105) and Si(114) are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  7. Joint optimization of algorithmic suites for EEG analysis.

    PubMed

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable. PMID:25570621

  8. Constructing Strongly Convex Hulls Using Exact or Rounded Arithmetic (Algorithmica, vol. 8 (1992), pp. 345–364)

    E-print Network

    Milenkovic, Victor

    1992-01-01

    rounded arithmetic with rounding unit ¯. This is the first rounded-arithmetic convex hull algorithm which ... convex hulls and triangulations of planar point-sets [1], and Voronoi diagrams [8]. As one would expect ... arrangements, convex hulls, triangulations, or Voronoi diagrams. For example, Fortune's convex hull algorithm

  9. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm

    Microsoft Academic Search

    Dervis Karaboga; Bahriye Basturk

    2007-01-01

    Swarm intelligence is a research branch that models the population of interacting agents or swarms that are able to self-organize. An ant colony, a flock of birds or an immune system is a typical example of a swarm system. Bees' swarming around their hive is another example of swarm intelligence. The Artificial Bee Colony (ABC) Algorithm is an optimization algorithm based

  10. Optimization of circuits using a constructive learning algorithm

    SciTech Connect

    Beiu, V.

    1997-05-01

    The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F_{n,m} class of functions (n inputs and m groups of ones in their truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to more closely estimate the area and the delay of VLSI implementations). Conclusions and further directions of research end the paper.

  11. A data quantity optimization algorithm in terrain visualization

    NASA Astrophysics Data System (ADS)

    Li, Yabin; Gong, Jianhua

    2008-10-01

    In order to determine an appropriate data quantity for terrain visualization on a given computer, a relationship model between FPS and data quantity is put forward in this paper. In this model, the time of a whole terrain visualization cycle is divided into two parts: data-unrelated time and data-related time. Based on the relationship model, a data optimization algorithm is developed, and the influences of timer error and data reading error are considered in the algorithm. The algorithm is tested in a terrain visualization system developed with C++, FLTK and OpenGL. The experimental results show that the algorithm can evaluate and quantify a computer's visualization performance and calculate the precise triangle amount quickly, so that the rendering rate of the terrain visualization system can be controlled accurately.
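
    The relationship model above splits frame time into a data-unrelated part and a data-related part. A minimal sketch under the assumption of a linear model T(n) = a + b*n (with FPS = 1/T) is shown below; the sample timings are hypothetical.

```python
# Sketch of the kind of relationship model described above: frame time is
# split into a data-unrelated part a and a data-related part b*n, i.e.
# T(n) = a + b*n and FPS = 1/T(n). The linear form and the sample timings
# are assumptions for illustration.
import numpy as np

# measured (triangle count, frame time in seconds) pairs -- hypothetical data
tri_counts = np.array([50_000, 100_000, 200_000, 400_000], dtype=float)
frame_times = np.array([0.012, 0.018, 0.031, 0.055])

# least-squares fit of T(n) = a + b*n
A = np.vstack([np.ones_like(tri_counts), tri_counts]).T
(a, b), *_ = np.linalg.lstsq(A, frame_times, rcond=None)

def max_triangles_for_fps(target_fps):
    """Invert the model to get the largest data quantity meeting a target FPS."""
    return (1.0 / target_fps - a) / b

print(f"data-unrelated time a = {a:.4f} s, per-triangle time b = {b:.2e} s")
print(f"max triangles at 30 FPS ~ {max_triangles_for_fps(30):,.0f}")
```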

  12. A Multi-Objective Ant Colony Optimization Algorithm for Infrastructure Routing 

    E-print Network

    McDonald, Walter

    2012-07-16

    An algorithm is presented that is capable of producing Pareto-optimal solutions for multi-objective infrastructure routing problems: the Multi-Objective Ant Colony Optimization (MOACO). This algorithm offers a constructive search technique...

  13. Computational experiments for local search algorithms for binary and mixed integer optimization

    E-print Network

    Zhou, Jingting, S.M. Massachusetts Institute of Technology

    2010-01-01

    In this thesis, we implement and test two algorithms for binary optimization and mixed integer optimization, respectively. We fine tune the parameters of these two algorithms and achieve satisfactory performance. We also ...

  14. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
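
    The benchmark above is template matching by correlation, which the DSP handles efficiently in the frequency domain. The sketch below shows a plain NumPy FFT-based cross-correlation on synthetic data; it is an illustration of the operation being benchmarked, not the OMAP/OpenCV implementation.

```python
# Minimal FFT-based cross-correlation sketch (NumPy only), echoing the
# DFT-friendly formulation mentioned above; an illustration, not the
# OMAP/DSP implementation. The correlation peak gives the best template match.
import numpy as np

def correlate_fft(image, template):
    """Cross-correlate a 2-D image with a (zero-mean) template via the FFT."""
    t = template - template.mean()
    H, W = image.shape
    # zero-pad the template to the image size, correlate in the frequency domain
    F_img = np.fft.rfft2(image, s=(H, W))
    F_tpl = np.fft.rfft2(t, s=(H, W))
    return np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))

# usage with synthetic data: crop a template and recover its location
rng = np.random.default_rng(4)
img = rng.random((128, 128))
tpl = img[40:56, 70:86].copy()
corr = correlate_fft(img, tpl)
print("detected offset:", np.unravel_index(np.argmax(corr), corr.shape))  # ~ (40, 70)
```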

  15. Optimization of an antenna array using genetic algorithms

    SciTech Connect

    Kiehbadroudinezhad, Shahideh; Noordin, Nor Kamariah; Sali, A.; Abidin, Zamri Zainal

    2014-06-01

    An array of antennas is usually used in long distance communication. The observation of celestial objects necessitates a large array of antennas, such as the Giant Metrewave Radio Telescope (GMRT). Optimizing this kind of array is very important when observing a high performance system. The genetic algorithm (GA) is an optimization solution for these kinds of problems that reconfigures the position of antennas to increase the u-v coverage plane or decrease the sidelobe levels (SLLs). This paper presents how to optimize a correlator antenna array using the GA. A brief explanation about the GA and operators used in this paper (mutation and crossover) is provided. Then, the results of optimization are discussed. The results show that the GA provides efficient and optimum solutions among a pool of candidate solutions in order to achieve the desired array performance for the purposes of radio astronomy. The proposed algorithm is able to distribute the u-v plane more efficiently than GMRT with a more than 95% distribution ratio at snapshot, and to fill the u-v plane from a 20% to more than 68% filling ratio as the number of generations increases in the hour tracking observations. Finally, the algorithm is able to reduce the SLL to –21.75 dB.

  16. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness. PMID:24592200

  17. Incremental Sampling-based Algorithms for Optimal Motion Planning

    E-print Network

    Karaman, Sertac

    2010-01-01

    During the last decade, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs) have been shown to work well in practice and to possess theoretical guarantees such as probabilistic completeness. However, no theoretical bounds on the quality of the solution obtained by these algorithms have been established so far. The first contribution of this paper is a negative result: it is proven that, under mild technical conditions, the cost of the best path in the RRT converges almost surely to a non-optimal value. Second, a new algorithm is considered, called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost of the best path in the RRG converges to the optimum almost surely. Third, a tree version of RRG is introduced, called the RRT$^*$ algorithm, which preserves the asymptotic optimality of RRG while maintaining a tree structure like RRT. The analysis of the new algorithms hinges on novel connections between sampling-based motion planning algorit...
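
    For orientation, a minimal 2-D RRT (the baseline algorithm the entry builds on, not RRG or RRT*) looks roughly like the sketch below; the world is obstacle-free and the bounds, step size and tolerances are assumed.

```python
# A minimal 2-D RRT sketch (the baseline algorithm discussed above, not the
# RRG/RRT* variants); obstacle-free world, Euclidean steering, assumed bounds.
import numpy as np

rng = np.random.default_rng(5)

def rrt(start, goal, bounds=(0.0, 10.0), step=0.5, goal_tol=0.5, max_iter=5000):
    nodes = [np.asarray(start, dtype=float)]
    parent = [0]
    for _ in range(max_iter):
        sample = rng.uniform(bounds[0], bounds[1], 2)       # random sample
        dists = [np.linalg.norm(sample - n) for n in nodes]
        i_near = int(np.argmin(dists))                       # nearest tree node
        direction = sample - nodes[i_near]
        new = nodes[i_near] + step * direction / (np.linalg.norm(direction) + 1e-12)
        nodes.append(new)                                    # steer by one step
        parent.append(i_near)
        if np.linalg.norm(new - goal) < goal_tol:            # goal reached
            path, i = [new], len(nodes) - 1
            while i != 0:                                    # backtrack to start
                i = parent[i]
                path.append(nodes[i])
            return path[::-1]
    return None

path = rrt(start=(1.0, 1.0), goal=np.array([9.0, 9.0]))
print("path length (nodes):", None if path is None else len(path))
```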

  18. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model, with vitamin K intake and the cytochrome P450 2C9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating it with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51, warfarin resistant: 0.96 vs 0.77 and warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose. PMID:21358752
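
    The model class above is a multiple linear regression mapping clinical and genotype predictors to dose. The sketch below fits such a regression on synthetic data; the predictor coding and all numbers are hypothetical placeholders, not the published population-specific coefficients.

```python
# Sketch of a multiple linear regression dose model of the general kind
# described above. The predictor names, coding and numbers are hypothetical
# placeholders, NOT the published population-specific algorithm.
import numpy as np

rng = np.random.default_rng(6)
n = 200

# hypothetical predictors: vitamin K intake, CYP2C9 and VKORC1 variant counts
vit_k  = rng.uniform(20, 200, n)             # mcg/day
cyp2c9 = rng.integers(0, 3, n)               # number of *2/*3 alleles
vkorc1 = rng.integers(0, 3, n)               # number of -1639 A alleles
dose   = 6.0 + 0.01 * vit_k - 1.2 * cyp2c9 - 1.5 * vkorc1 + rng.normal(0, 0.5, n)

# fit dose = b0 + b1*vit_k + b2*cyp2c9 + b3*vkorc1 by least squares
X = np.column_stack([np.ones(n), vit_k, cyp2c9, vkorc1])
beta, *_ = np.linalg.lstsq(X, dose, rcond=None)

def predict_dose(vk, c9, vk1):
    return float(np.array([1.0, vk, c9, vk1]) @ beta)

print("fitted coefficients:", np.round(beta, 3))
print("predicted dose for a sample patient:", round(predict_dose(90, 1, 2), 2))
```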

  19. Nonconvex compressed sensing by nature-inspired optimization algorithms.

    PubMed

    Liu, Fang; Lin, Leping; Jiao, Licheng; Li, Lingling; Yang, Shuyuan; Hou, Biao; Ma, Hongmei; Yang, Li; Xu, Jinghuan

    2015-05-01

    The l0-regularized problem in compressed sensing reconstruction is nonconvex with NP-hard computational complexity. Methods available for such problems fall into one of two types: greedy pursuit methods and thresholding methods, which are characterized by suboptimal fast search strategies. Nature-inspired algorithms for combinatorial optimization are known for their efficient global search strategies and superior performance on nonconvex and nonlinear problems. In this paper, we study and propose nonconvex compressed sensing of natural images by nature-inspired optimization algorithms. We obtain measurements by block-based compressed sampling and introduce an overcomplete Ridgelet dictionary for image blocks. An atom of this dictionary is identified by direction, scale and shift parameters; of these, the direction parameter is important for adapting to directional regularity. We therefore propose a two-stage reconstruction scheme (TS_RS) based on nature-inspired optimization algorithms. In the first reconstruction stage, we design a genetic algorithm for a class of image blocks to estimate the atomic combinations in all directions; in the second reconstruction stage, we adopt a clonal selection algorithm to search for better atomic combinations, over the scale and shift parameters, in the sub-dictionary produced by the first stage for each image block. In TS_RS, to reduce the uncertainty and instability of the reconstruction problems, we adopt novel and flexible heuristic search strategies, including carefully designed initialization, operators and evaluation methods. The experimental results show the efficiency and stability of the proposed TS_RS, which outperforms classic greedy and thresholding methods. PMID:25148677

  20. A Probabilistic Analysis of a Simplified Biogeography-Based Optimization Algorithm

    E-print Network

    Simon, Dan

    Biogeography-based optimization (BBO) is a population-based evolutionary algorithm (EA) ... the expected amount of improvement decreases.

  1. Evaluation of global optimization algorithms for parameter calibration of a computationally intensive hydrologic model

    Microsoft Academic Search

    Xuesong Zhang; Raghavan Srinivasan; Kaiguang Zhao; Mike Van Liew

    2009-01-01

    With the popularity of complex hydrologic models, the time taken to run these models is increasing substantially. Comparing and evaluating the efficacy of different optimization algorithms for calibrating computationally intensive hydrologic models is becoming a nontrivial issue. In this study, five global optimization algorithms (genetic algorithms, shuffled complex evolution, particle swarm optimization, differential evolution, and artificial immune system) were tested

  2. A Fast Particle Swarm Optimization Algorithm with Cauchy Mutation and Natural Selection Strategy

    Microsoft Academic Search

    Changhe Li; Yong Liu; Aimin Zhou; Lishan Kang; Hui Wang

    2007-01-01

    The standard Particle Swarm Optimization (PSO) algorithm is a novel evolutionary algorithm in which each particle learns from its own previous best solution and the group's previous best to optimize problems. One problem with PSO is its tendency to become trapped in local optima. In this paper, a fast particle swarm optimization (FPSO) algorithm is proposed by combining PSO and the

  3. Fast source optimization by clustering algorithm based on lithography properties

    NASA Astrophysics Data System (ADS)

    Tawada, Masashi; Hashimoto, Takaki; Sakanushi, Keishi; Nojima, Shigeki; Kotani, Toshiya; Yanagisawa, Masao; Togawa, Nozomu

    2015-03-01

    Lithography is the technology used to create circuit patterns on a wafer. UV light diffracted by a photomask and focused through the lens forms optical images on a photoresist, and the photoresist is dissolved where the exposure exceeds a threshold; the resulting light and dark regions generate the patterns. As the technology node advances, the feature sizes on the photoresist become much smaller, the diffracted UV light is dispersed across the wafer, and exposing the photoresist correctly becomes more difficult. Exposure source optimization (SO) techniques for optimizing the illumination shape have therefore been studied. Although an exposure source has hundreds of grid-points, previous work handles them one by one, which consumes a great deal of running time and increases design time substantially. Reducing the number of parameters to be optimized in SO is the key to decreasing source optimization time. In this paper, we propose a variation-resilient, high-speed, cluster-based exposure source optimization algorithm. We focus on the image log slope (ILS) and use it to generate clusters: when the optical image formed by a source shape has a small ILS value at an EPE (edge placement error) evaluation point, dose/focus variation strongly affects the EPE value, whereas a large ILS value means dose/focus variation affects the EPE value only weakly. Our algorithm clusters grid-points with similar ILS values and thereby reduces the number of parameters to be optimized simultaneously in SO. The clustering algorithm consists of two steps: in Step 1, grid-points are clustered into four groups based on their ILS values at each evaluation point; in Step 2, super clusters are generated from the clusters of Step 1. A set of grid-points in each cluster is treated as a single light-source element, so the SO problem can be optimized very quickly. Experimental results demonstrate that our algorithm achieves a significant speed-up over a conventional algorithm while preserving the EPE values.
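
    The clustering step above groups source grid-points by their ILS values so each group can be optimized as one element. The sketch below bins hypothetical ILS values into four groups by quartiles, which is an assumed grouping rule standing in for the paper's exact procedure.

```python
# Sketch of the grid-point clustering idea described above: source grid-points
# are binned into four groups by their ILS values (quartile binning here is an
# assumption, not necessarily the paper's rule), so each group can then be
# optimized as a single source element.
import numpy as np

rng = np.random.default_rng(7)

def cluster_by_ils(ils_values, n_groups=4):
    """Assign each source grid-point to one of n_groups clusters by ILS."""
    ils = np.asarray(ils_values, dtype=float)
    edges = np.quantile(ils, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(ils, edges)            # labels in {0, ..., n_groups-1}

# usage: hypothetical ILS values for 400 source grid-points
ils = rng.gamma(shape=2.0, scale=1.0, size=400)
labels = cluster_by_ils(ils)
print("grid-points per cluster:", np.bincount(labels, minlength=4))
```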

  4. Size optimization of space trusses using Big Bang–Big Crunch algorithm

    Microsoft Academic Search

    A. Kaveh; S. Talatahari

    2009-01-01

    A Hybrid Big Bang–Big Crunch (HBB–BC) optimization algorithm is employed for optimal design of truss structures. HBB–BC is compared to Big Bang–Big Crunch (BB–BC) method and other optimization methods including Genetic Algorithm, Ant Colony Optimization, Particle Swarm Optimization and Harmony Search. Numerical results demonstrate the efficiency and robustness of the HBB–BC method compared to other heuristic algorithms.

  5. A Honey-bee Mating Optimization Algorithm for Educational Timetabling Problems

    E-print Network

    Qu, Rong

    ... of the Honey-bee Mating Optimization Algorithm for solving educational timetabling problems. The honey-bee algorithm is a nature-inspired algorithm which simulates the process of real honey-bees mating

  6. Improvements to a Newton-Krylov Adjoint Algorithm for Aerodynamic Optimization

    E-print Network

    Zingg, David W.

    ... algorithm for aerodynamic optimization. A Newton-Krylov algorithm is used to solve the compressible Navier-Stokes equations ... of the improvements on the performance of the algorithm is presented.

  7. Optimization of experimental design in fMRI: a general framework using a genetic algorithm

    E-print Network

    Optimization of experimental design in fMRI: a general framework using a genetic algorithm. Tor D. ... uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to ... genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore ...

  8. Genetic algorithms used for the optimization of light-emitting diodes and solar thermal collectors

    E-print Network

    Mayer, Alexandre

    Genetic algorithms used for the optimization of light-emitting diodes and solar thermal collectors. ... of Namur, Rempart de la Vierge 8, 5000 Namur, Belgium. ABSTRACT: We present a genetic algorithm (GA) we ... algorithms for addressing complex problems in physics. Keywords: genetic algorithm, optimization, light ...

  9. Application of Genetic Algorithm in the Optimization of Water Pollution Control Scheme

    Microsoft Academic Search

    Rui-Ming Zhao; Dong-Ping Qian

    2007-01-01

    The Genetic Algorithm (GA for short) is a bionic global-optimization search algorithm that imitates Darwinian biological evolution theory, and it stands at the advancing front of complex nonlinear science and artificial intelligence. After introducing the basic principle of the GA and its optimization procedure, this paper brings the GA into the domain of water pollution control

  10. Approximating convex Pareto surfaces in multiobjective radiotherapy planning

    SciTech Connect

    Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2006-09-15

    Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-to-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing.
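
    For a convex Pareto surface such as the one considered above, individual Pareto optimal plans can be generated by minimizing nonnegative weighted sums of the objectives and sweeping the weights; collecting those solutions gives a database of plans. The toy two-objective example below illustrates that idea with scipy under assumed quadratic objectives standing in for tumor coverage and organ sparing; it is a generic weighted-sum illustration, not the article's algorithm for controlling how well distributed the points are.

      import numpy as np
      from scipy.optimize import minimize

      # Toy convex objectives standing in for "tumor coverage" vs "organ sparing":
      # both are convex in the planning variable x (illustrative forms only).
      f1 = lambda x: float(np.sum((x - 1.0) ** 2))   # penalty for under-dosing the target
      f2 = lambda x: float(np.sum(x ** 2))           # penalty for dose to a critical organ

      def pareto_points(n=11, dim=3):
          # Sample the convex Pareto surface by sweeping weighted-sum scalarizations.
          points = []
          for w in np.linspace(0.0, 1.0, n):
              obj = lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x)
              res = minimize(obj, x0=np.zeros(dim))
              points.append((f1(res.x), f2(res.x)))
          return np.array(points)

      print(pareto_points())   # traces the tradeoff curve between the two objectives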

  11. Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework

    SciTech Connect

    Alicia Hofler, Pavel Evtushenko, Frank Marhauser

    2009-09-01

    Automation of DC photoinjector designs using a genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimal and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation for these two methods for generating field profiles for SRF/RF guns in a GA based injector optimization scheme and provide preliminary results.

  12. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
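
    The dynamic programming that these tree-decomposition methods parallelize is easiest to see in the special case where the graph is itself a tree (tree-width 1). The serial sketch below covers only that special case, with an assumed adjacency-list representation and per-node weights; the paper's algorithm generalizes the table passing to the bags of an arbitrary tree decomposition and distributes it with OpenMP/MPI.

      import sys

      def max_weight_independent_set(tree, weight, root=0):
          # tree: dict node -> list of neighbours (an undirected tree)
          # weight: dict node -> nonnegative weight
          # Returns the optimal total weight; solve(v) gives the best values for
          # the subtree rooted at v with v excluded / included.
          sys.setrecursionlimit(10_000)

          def solve(v, parent):
              exclude, include = 0, weight[v]
              for u in tree[v]:
                  if u == parent:
                      continue
                  ex_u, in_u = solve(u, v)
                  exclude += max(ex_u, in_u)   # child is free to be in or out
                  include += ex_u              # child must be out if v is in
              return exclude, include

          return max(solve(root, None))

      # Example: a path 0-1-2-3 whose optimum picks the two heavy endpoints.
      tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
      weight = {0: 5, 1: 1, 2: 1, 3: 5}
      print(max_weight_independent_set(tree, weight))   # -> 10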

  13. Genetic algorithm application in optimization of wireless sensor networks.

    PubMed

    Norouzi, Ali; Zaim, A Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are the network lifetime and the energy consumed for routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively attractive option thanks to its efficiency for large-scale applications and the ease with which its formulation can be modified through operators. The present survey attempts a comprehensive improvement across all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSN design. Using a genetic algorithm and based on the results of simulations in NS, a specific fitness function was obtained, optimized, and customized for all the operational stages of WSNs. PMID:24693235

  14. Operational Optimal Ship Routing Using a Hybrid Parallel Genetic Algorithm

    E-print Network

    O. T. Kosmas; D. S. Vlachos

    2009-05-04

    Optimization of ship routing depends on several parameters, like ship and cargo characteristics, environmental factors, topography, international navigation rules, crew comfort etc. The complex nature of the problem leads to oversimplifications in analytical techniques, while stochastic methods like simulated annealing can be both time consuming and sensitive to local minima. In this work, a hybrid parallel genetic algorithm - estimation of distribution algorithm is developed in the island model, to operationally calculate the optimal ship routing. The technique, which is applicable not only to clusters but to grids as well, is very fast and has been applied to very difficult environments, like the Greek seas with thousands of islands and extreme micro-climate conditions.

  15. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Weerapurage, Dinesh P [ORNL; Sullivan, Blair D [ORNL; Groer, Christopher S [ORNL

    2013-01-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of required dynamic programming tables and excessive running times of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree-decomposition based approach to solve maximum weighted independent set. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  16. Sampling-based Algorithms for Optimal Motion Planning

    E-print Network

    Karaman, Sertac

    2011-01-01

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically opti...

  17. The Convex Hull of Rational Plane Curves

    Microsoft Academic Search

    Gershon Elber; Myung-soo Kim; Hee-seok Heo

    2001-01-01

    We present an algorithm that computes the convex hull of multiple rational curves in the plane. The problem is reformulated as one of finding the zero-sets of polynomial equations in one or two variables; using these zero-sets we characterize curve segments that belong to the boundary of the convex hull. We also present a preprocessing step that can eliminate many

  18. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.

  19. An Any-space Algorithm for Distributed Constraint Optimization

    Microsoft Academic Search

    Anton Chechetka; Katia Sycara

    The Distributed Constraint Optimization Problem (DCOP) is a powerful formalism for multiagent co- ordination problems, including planning and schedul- ing. This paper presents modifications to a polynomial-space branch-and-bound based algorithm, called NCBB, for solving DCOP, that make the algo- rithm any-space. This enables a continuous tradeoff between O(bp) space, O(bpH+1) time complexity and O(pw + bp) space and O(bHpw+1) time,

  20. Quantum computing-based Ant Colony Optimization algorithm for TSP

    Microsoft Academic Search

    Xiaoming You; Xingwai Miao; Sheng Liu

    2009-01-01

    A novel self-adaptive Ant Colony Optimization algorithm based on quantum mechanisms for the Traveling Salesman Problem (TQACO) is proposed. First, the population of the ant colony is initialized with superpositions of Q-bits. Second, a self-adaptive operator is used: in the early phase a higher probability is used to explore more of the search space and to collect useful global information, while in the later phase a higher probability is used to accelerate

  1. Genetic Algorithms for Municipal Solid Waste Collection and Routing Optimization

    Microsoft Academic Search

    Nikolaos V. Karadimas; Katerina Papatzelou; Vassili G. Loumos

    2007-01-01

    In the present paper, the Genetic Algorithm (GA) is used for the identification of optimal routes in the case of Municipal Solid Waste (MSW) collection. The identification of a route for MSW collection trucks is critical since it has been estimated that, of the total amount of money spent for the collection, transportation, and disposal of solid waste, approximately 60–80%

  2. A Simulated Annealing Based Multi-objective Optimization Algorithm: AMOSA

    Microsoft Academic Search

    Sanghamitra Bandyopadhyay; Sriparna Saha; Ujjwal Maulik; Kalyanmoy Deb

    This article describes a simulated annealing based multi-objective optimization algorithm that incorporates the concept of archive in order to provide a set of trade-off solutions of the problem under consideration. To determine the acceptance probability of a new solution vis-a-vis the current solution, an elaborate procedure is followed that takes into account the domination status of the new solution
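
    The domination-aware acceptance step mentioned above can be illustrated with a small helper: a dominated new solution is accepted with a probability that decays with the amount of domination and the current temperature. The sketch below assumes two-objective minimization and uses a generic simulated-annealing acceptance rule; it is a hedged illustration of the idea, not AMOSA's exact formula or its archive handling.

      import math
      import random

      def dominates(a, b):
          # True if objective vector a dominates b (minimization of all objectives).
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def accept(new, cur, temperature, ranges):
          # new, cur : tuples of objective values; ranges : per-objective spans
          # used to normalize the "amount of domination" (illustrative rule).
          if dominates(new, cur) or new == cur:
              return True                       # clearly no worse: always accept
          delta = sum(max(n - c, 0.0) / r for n, c, r in zip(new, cur, ranges))
          return random.random() < math.exp(-delta / temperature)

      # Example: a slightly dominated candidate is still accepted at high temperature.
      random.seed(0)
      print(accept(new=(1.2, 3.0), cur=(1.0, 3.0), temperature=1.0, ranges=(2.0, 5.0)))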

  3. An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel

    NASA Astrophysics Data System (ADS)

    Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.

    2015-01-01

    This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum material removal rate (MRR), minimum surface roughness, and minimum kerf width for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. The four machining parameters optimized with the APSO algorithm are pulse on-time, pulse off-time, gap voltage, and wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM machine, and output responses such as material removal rate, surface roughness, and kerf width are determined. The empirical relationship between the control factors and the output responses is established using linear regression models in Minitab software. Finally, the APSO algorithm, a nature-inspired metaheuristic technique, is used to optimize the WEDM machining parameters for a higher material removal rate and a lower kerf width, with surface roughness as a constraint. Confirmation experiments carried out at the optimum conditions show that the proposed algorithm is capable of finding numerous optimal input machining parameter sets that can fulfill the wide-ranging requirements of a process engineer working in the WEDM industry.
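
    In a commonly cited simplified form, Accelerated PSO drops the velocity term and the per-particle best memory, moving each particle toward the current global best with a random perturbation whose amplitude shrinks over the iterations. The sketch below shows that update rule on a generic objective; it is an assumed generic APSO illustration, not the WEDM parameter study itself, whose objective would come from the regression models of the responses.

      import numpy as np

      def apso_minimize(f, lb, ub, n_particles=30, iters=200, alpha0=0.3,
                        gamma=0.97, beta=0.5, seed=0):
          # Velocity-free APSO: x <- (1 - beta) * x + beta * gbest + alpha * eps,
          # with the randomization amplitude alpha shrinking geometrically.
          rng = np.random.default_rng(seed)
          lb, ub = np.asarray(lb, float), np.asarray(ub, float)
          x = rng.uniform(lb, ub, size=(n_particles, lb.size))
          scores = np.array([f(p) for p in x])
          gbest = x[scores.argmin()].copy()
          for t in range(iters):
              alpha = alpha0 * gamma ** t
              eps = rng.standard_normal(x.shape) * (ub - lb)
              x = np.clip((1 - beta) * x + beta * gbest + alpha * eps, lb, ub)
              scores = np.array([f(p) for p in x])
              if scores.min() < f(gbest):
                  gbest = x[scores.argmin()].copy()
          return gbest, f(gbest)

      # Example: minimize a simple sphere function over a box.
      print(apso_minimize(lambda p: float(np.sum(p**2)), [-5, -5, -5], [5, 5, 5]))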

  4. Designing stable extrapolators for explicit depth extrapolation of 2D and 3D wavefields using projections onto convex sets

    E-print Network

    van der Baan, Mirko

    PSPI and split-step Fourier. The MPOCS algorithm pro- vides practically stable depth extrapolators the projections-on- to-convex-sets POCS method. The operators are optimal in the sense that they satisfy all-space -x explicit depth extrapolation method is one of the most attractive techniques for performing

  5. A Dynamic Near-Optimal Algorithm for Online Linear Programming

    E-print Network

    Agrawal, Shipra; Ye, Yinyu

    2009-01-01

    We consider the online linear programming problem where the constraint matrix is revealed column by column along with the objective function. We provide a 1-o(1) competitive algorithm for this surprisingly general class of online problems under the assumption of random order of arrival and some mild conditions on the right-hand-side input. Our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals; the price learned from the previous steps is used to determine the decision for the current step. Our result provides a common near-optimal solution to a wide range of online problems including online routing and packing, online combinatorial auctions, online adwords matching, many secretary problems, and various resource allocation and revenue management problems. Apart from online problems, the algorithm can also be applied for fast solution of large linear programs by sampling the columns of the constraint matrix.
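
    A simplified way to see the threshold-price idea is the single-resource special case: after observing an initial fraction of the columns, compute the dual price of the prefix problem (with one resource this is just the reward density at which the scaled capacity runs out), then accept a later column only if its reward exceeds that price times its resource usage, re-learning the price at geometrically spaced times. The sketch below is that assumption-laden special case with (reward, size) pairs, not the paper's general algorithm or its competitive-ratio analysis.

      import random

      def learn_price(items, capacity):
          # Dual price of the fractional single-resource LP on the observed prefix:
          # the reward density of the item at which the (scaled) capacity runs out.
          used = 0.0
          for r, a in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
              used += a
              if used >= capacity:
                  return r / a
          return 0.0   # capacity not binding on the prefix: accept everything

      def online_threshold(stream, capacity, epsilon=0.1):
          # Threshold-price online algorithm, re-learning the price at geometric times.
          n = len(stream)
          accepted, used = [], 0.0
          price = float("inf")                      # learning phase: accept nothing
          next_update = max(1, int(epsilon * n))
          for t, (r, a) in enumerate(stream, start=1):
              if t == next_update and t < n:
                  price = learn_price(stream[:t], capacity * t / n)
                  next_update *= 2
              if r > price * a and used + a <= capacity:
                  accepted.append((r, a))
                  used += a
          return accepted

      # Example with a random-order stream of (reward, size) pairs.
      random.seed(1)
      stream = [(random.random(), random.uniform(0.5, 1.5)) for _ in range(1000)]
      picked = online_threshold(stream, capacity=100.0)
      print(len(picked), round(sum(r for r, _ in picked), 2))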

  6. Optimization of a statistical algorithm for objective comparison of toolmarks.

    PubMed

    Spotts, Ryan; Chumbley, L Scott; Ekstrand, Laura; Zhang, Song; Kreiser, James

    2015-03-01

    Due to historical legal challenges, there is a driving force for the development of objective methods of forensic toolmark identification. This study utilizes an algorithm to separate matching and nonmatching shear cut toolmarks created using fifty sequentially manufactured pliers. Unlike previously analyzed striated screwdriver marks, shear cut marks contain discontinuous groups of striations, posing a more difficult test of algorithm applicability. The algorithm compares correlation between optical 3D toolmark topography data, producing a Wilcoxon rank sum test statistic. Relative magnitude of this metric separates the matching and nonmatching toolmarks. Results show a high degree of statistical separation between matching and nonmatching distributions. Further separation is achieved with optimized input parameters and implementation of a "leash" preventing a previous source of outliers--however complete statistical separation was not achieved. This paper represents further development of objective methods of toolmark identification and further validation of the assumption that toolmarks are identifiably unique. PMID:25425426
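
    The core statistical step, comparing distributions of a correlation-based similarity metric for matching versus nonmatching toolmark pairs with a Wilcoxon rank sum test, can be sketched with scipy. The similarity scores below are placeholder synthetic data assumed purely for illustration; only the rank-sum comparison mirrors the step described in the abstract.

      import numpy as np
      from scipy.stats import ranksums

      rng = np.random.default_rng(0)

      # Placeholder similarity scores (e.g., a windowed cross-correlation metric)
      # for known matching and nonmatching toolmark pairs -- illustrative data only.
      matching_scores = rng.normal(loc=0.8, scale=0.05, size=50)
      nonmatching_scores = rng.normal(loc=0.3, scale=0.10, size=50)

      # Wilcoxon rank sum test: a large statistic / small p-value indicates the two
      # distributions of similarity scores are well separated.
      stat, p_value = ranksums(matching_scores, nonmatching_scores)
      print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.1e}")

      # A simple decision threshold between the two distributions (illustrative):
      threshold = (matching_scores.mean() + nonmatching_scores.mean()) / 2
      accuracy = (np.mean(matching_scores > threshold)
                  + np.mean(nonmatching_scores <= threshold)) / 2
      print(f"separation accuracy at midpoint threshold: {accuracy:.2%}")

    The parameter optimization described in the abstract would then amount to tuning the inputs (windowing, the "leash", etc.) that generate these similarity scores so that the separation between the two distributions is maximized.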

  7. Optimizing remediation of an unconfined aquifer using a hybrid algorithm.

    PubMed

    Hsiao, Chin-Tsai; Chang, Liang-Cheng

    2005-01-01

    We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. GA determines the primary structure of the proposed algorithm, and a chromosome therein implemented by a series of binary digits represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which is embedded a numerical transport model. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to alleviate computational loading. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation. PMID:16324011

  8. Advanced metaheuristic algorithms for laser optimization in optical accelerator technologies

    NASA Astrophysics Data System (ADS)

    Tomizawa, Hiromitsu

    2011-10-01

    Lasers are among the most important experimental tools for user facilities, including synchrotron radiation and free electron lasers (FEL). In the synchrotron radiation field, lasers are widely used for experiments with Pump-Probe techniques. Especially for X-ray-FELs, lasers play important roles as seed light sources or photocathode-illuminating light sources to generate a high-brightness electron bunch. For future accelerators, laser-based technologies such as electro-optic (EO) sampling to measure ultra-short electron bunches and optical-fiber-based femtosecond timing systems have been intensively developed in the last decade. Therefore, controls and optimizations of laser pulse characteristics are strongly required for many kinds of experiments and improvement of accelerator systems. However, people believe that lasers should be tuned and customized for each requirement manually by experts. This makes it difficult for laser systems to be part of the common accelerator infrastructure. Automatic laser tuning requires sophisticated algorithms, and the metaheuristic algorithm is one of the best solutions. The metaheuristic laser tuning system is expected to reduce the human effort and time required for laser preparations. I have shown some successful results on a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and a hill-climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each machine requirement.

  9. Convex Drawings of Hierarchical Plane Graphs Seok-Hee Hong

    E-print Network

    Hong,Seokhee

    Convex Drawings of Hierarchical Plane Graphs. Seok-Hee Hong, School of Information Technologies ... drawn as a convex polygon admits a convex drawing. We present an algorithm which constructs such a drawing. This extends the previously known result that every hierarchical plane graph admits a straight

  10. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  11. A heterogeneous algorithm for PDT dose optimization for prostate

    NASA Astrophysics Data System (ADS)

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S. B.; Hahn, Stephen M.

    2009-02-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT.
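
    The Cimmino feasibility algorithm referenced above is a simultaneous-projection method: each iterate is moved by a weighted average of its projections onto the half-spaces defined by the dose constraints, which is what makes it linear and reliably convergent. The following is a minimal generic sketch for box constraints lower <= A·w <= upper on nonnegative source weights, with an assumed random dose kernel A; it illustrates the method, not the clinical planning code.

      import numpy as np

      def cimmino(A, lower, upper, iters=500, relax=1.0):
          # Cimmino simultaneous projections for the feasibility problem
          #     lower <= A @ w <= upper,  w >= 0.
          # Each iteration averages the projections of w onto every violated dose
          # "slab" (one per evaluation point); nonnegativity of the source weights
          # is enforced by clipping.
          m, n = A.shape
          w = np.zeros(n)
          row_norm2 = np.sum(A * A, axis=1)          # ||a_i||^2 per constraint
          weights = np.full(m, 1.0 / m)              # equal Cimmino weights
          for _ in range(iters):
              dose = A @ w
              violation = np.maximum(dose - upper, 0.0) + np.minimum(dose - lower, 0.0)
              step = (weights * (-violation) / row_norm2) @ A
              w = np.clip(w + relax * step, 0.0, None)
          return w

      # Example: random nonnegative "dose kernel" and loose dose limits.
      rng = np.random.default_rng(0)
      A = rng.random((40, 8))                        # 40 sample points, 8 light sources
      w = cimmino(A, lower=np.full(40, 1.0), upper=np.full(40, 3.0))
      print(np.round(A @ w, 2))                      # doses should end up inside [1, 3]

    Because every constraint is projected at once, the update parallelizes naturally, which is part of what makes this style of feasibility search attractive for treatment planning.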

  12. Coil optimization for electromagnetic levitation using a genetic like algorithm

    NASA Astrophysics Data System (ADS)

    Royer, Z. L.; Tackes, C.; LeSar, R.; Napolitano, R. E.

    2013-06-01

    The technique of electromagnetic levitation (EML) provides a means for thermally processing an electrically conductive specimen in a containerless manner. For the investigation of metallic liquids and related melting or freezing transformations, the elimination of substrate-induced nucleation affords access to much higher undercooling than otherwise attainable. With heating and levitation both arising from the currents induced by the coil, the performance of any EML system depends on controlling the balance between lifting forces and heating effects, as influenced by the levitation coil geometry. In this work, a genetic algorithm is developed and utilized to optimize the design of electromagnetic levitation coils. The optimization is targeted specifically to reduce the steady-state temperature of the stably levitated metallic specimen. Reductions in temperature of nominally 70 K relative to that obtained with the initial design are achieved through coil optimization, and the results are compared with experiments for aluminum. Additionally, the optimization method is shown to be robust, generating a small range of converged results from a variety of initial starting conditions. While our optimization criterion was set to achieve the lowest possible sample temperature, the method is general and can be used to optimize for other criteria as well.

  13. Broadband omnidirectional antireflection coatings optimized by genetic algorithm.

    PubMed

    Poxson, David J; Schubert, Martin F; Mont, Frank W; Schubert, E F; Kim, Jong Kyu

    2009-03-15

    An optimized graded-refractive-index (GRIN) antireflection (AR) coating with broadband and omnidirectional characteristics--as desired for solar cell applications--designed by a genetic algorithm is presented. The optimized three-layer GRIN AR coating consists of a dense TiO2 and two nanoporous SiO2 layers fabricated using oblique-angle deposition. The normal incidence reflectance of the three-layer GRIN AR coating averaged between 400 and 700 nm is 3.9%, which is 37% lower than that of a conventional single-layer Si3N4 coating. Furthermore, measured reflection over the 410-740 nm range and wide incident angles 40 degrees -80 degrees is reduced by 73% in comparison with the single-layer Si3N4 coating, clearly showing enhanced omnidirectionality and broadband characteristics of the optimized three-layer GRIN AR coating. PMID:19282913

  14. The Breeder Genetic Algorithm and its application to optimization problems

    SciTech Connect

    Muehlenbein, H.

    1994-12-31

    The Breeder Genetic Algorithm BGA models artificial selection as performed by human breeders. The science of breeding is based on advanced statistical methods. The well known response to selection equation and the concept of heritability have been used to predict the behavior of the BGA. Selection, recombination and mutation have been analyzed within this framework. The theoretical results have been obtained under the assumption of additive gene effects. For general fitness landscapes regression techniques for estimating the heritability are used to analyze and control the BGA. The BGA has been applied to a number of optimization problems, ranging from the optimization of multimodal continuous functions to the breeding of neural networks. In the talk we will describe the first application in some detail and summarize some combinatorial applications e.g. transport optimization, job shop scheduling, autocorrelation.
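
    The response to selection equation on which the BGA's predictions rest, R = h^2 * S (response equals heritability times the selection differential), can be checked with a tiny truncation-selection simulation. The purely additive model below (genetic value plus independent environmental noise) matches the additive-gene-effects assumption mentioned in the abstract, but the script is an illustrative check, not code from the BGA itself.

      import numpy as np

      rng = np.random.default_rng(0)

      # Additive model: phenotype = genetic value + environmental noise.
      n, h2 = 100_000, 0.6                        # population size, heritability
      var_g, var_e = h2, 1.0 - h2                 # variances chosen so Var(P) = 1
      g = rng.normal(0.0, np.sqrt(var_g), n)
      p = g + rng.normal(0.0, np.sqrt(var_e), n)

      # Truncation selection: breed from the top 20% of phenotypes.
      selected = p > np.quantile(p, 0.8)
      S = p[selected].mean() - p.mean()           # selection differential
      R = g[selected].mean() - g.mean()           # response (shift of the genetic mean)

      print(f"selection differential S = {S:.3f}")
      print(f"observed response      R = {R:.3f}")
      print(f"predicted  h^2 * S       = {h2 * S:.3f}")   # response to selection equation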

  15. Gravitational Lens Modeling with Genetic Algorithms and Particle Swarm Optimizers

    E-print Network

    Rogers, Adam

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automa...

  16. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternatively, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials exactly matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that this gives a large increase in efficiency of the DMSO algorithm.

  17. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods. PMID:23193454

  18. Quantum-based algorithm for optimizing artificial neural networks.

    PubMed

    Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang

    2013-08-01

    This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region by region exploration, and evolves gradually to find promising subspaces for further exploitation. This is helpful to provide a set of appropriate weights when evolving the network structure and to alleviate the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely breast cancer and iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms. PMID:24808566

  19. A New Constrained Multiobjective Optimization Algorithm Based on Artificial Immune Systems

    Microsoft Academic Search

    Hansong Xiao; Jean W. Zu

    2007-01-01

    This paper proposes a new constrained multiobjective optimization algorithm based on artificial immune systems (AIS). To deal with constrained multiobjective optimization problems, the constrained AIS-based multiobjective optimization algorithm is developed by integrating a proposed constraint-handling technique with the unconstrained AIS-based multiobjective optimization algorithm named MOAIS (Xiao and Zu, 2006). We propose the constraint-handling technique by extending a single-objective constraint-handling technique

  20. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P. [Univ. of Maryland] [Univ. of Maryland; Tits, Andre [Univ. of Maryland] [Univ. of Maryland

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.