Science.gov

Sample records for convex optimization algorithms

  1. Implementation of an Interior-Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
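
    The closing sentence mentions thrust allocation for six-degrees-of-freedom control; the sketch below poses a toy version of that task as a small convex program in Python. It is illustrative only and is not the G-OPT flight code or its primal-dual interior-point solver; the thruster matrix B, the commanded wrench w, and the unit upper bound are assumed placeholder values.

```python
import cvxpy as cp
import numpy as np

# Toy 6-DoF thrust-allocation problem: choose nonnegative, bounded thruster
# commands u so that the produced force/torque wrench B @ u matches the
# commanded wrench w as closely as possible.  B, w, and the bounds are
# placeholder values, not a real spacecraft configuration.
rng = np.random.default_rng(1)
B = rng.normal(size=(6, 12))   # maps 12 thruster commands to a 6-DoF wrench
w = rng.normal(size=6)         # commanded force/torque
u = cp.Variable(12)

prob = cp.Problem(cp.Minimize(cp.sum_squares(B @ u - w)),
                  [u >= 0, u <= 1.0])
prob.solve()                   # handled internally by a QP/interior-point solver
print(prob.value, u.value.round(3))
```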

  2. A Cutting Surface Algorithm for Semi-Infinite Convex Programming with an Application to Moment Robust Optimization

    SciTech Connect

    Mehrotra, Sanjay; Papp, Dávid

    2014-01-01

    We present and analyze a central cutting surface algorithm for general semi-infinite convex optimization problems and use it to develop a novel algorithm for distributionally robust optimization problems in which the uncertainty set consists of probability distributions with given bounds on their moments. Moments of arbitrary order, as well as nonpolynomial moments, can be included in the formulation. We show that this gives rise to a hierarchy of optimization problems with decreasing levels of risk-aversion, with classic robust optimization at one end of the spectrum and stochastic programming at the other. Although our primary motivation is to solve distributionally robust optimization problems with moment uncertainty, the cutting surface method for general semi-infinite convex programs is also of independent interest. The proposed method is applicable to problems with nondifferentiable semi-infinite constraints indexed by an infinite dimensional index set. Examples comparing the cutting surface algorithm to the central cutting plane algorithm of Kortanek and No demonstrate the potential of our algorithm even in the solution of traditional semi-infinite convex programming problems, whose constraints are differentiable, and are indexed by an index set of low dimension. After the rate of convergence analysis of the cutting surface algorithm, we extend the authors' moment matching scenario generation algorithm to a probabilistic algorithm that finds optimal probability distributions subject to moment constraints. The combination of this distribution optimization method and the central cutting surface algorithm yields a solution to a family of distributionally robust optimization problems that are considerably more general than the ones proposed to date.

  3. A Cutting Surface Algorithm for Semi-Infinite Convex Programming with an Application to Moment Robust Optimization

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2014-01-01

    We present and analyze a central cutting surface algorithm for general semi-infinite convex optimization problems and use it to develop a novel algorithm for distributionally robust optimization problems in which the uncertainty set consists of probability distributions with given bounds on their moments. Moments of arbitrary order, as well as nonpolynomial moments, can be included in the formulation. We show that this gives rise to a hierarchy of optimization problems with decreasing levels of risk-aversion, with classic robust optimization at one end of the spectrum and stochastic programming at the other. Although our primary motivation is to solve distributionally robust optimization problems with moment uncertainty, the cutting surface method for general semi-infinite convex programs is also of independent interest. The proposed method is applicable to problems with nondifferentiable semi-infinite constraints indexed by an infinite dimensional index set. Examples comparing the cutting surface algorithm to the central cutting plane algorithm of Kortanek and No demonstrate the potential of our algorithm even in the solution of traditional semi-infinite convex programming problems, whose constraints are differentiable, and are indexed by an index set of low dimension. After the rate of convergence analysis of the cutting surface algorithm, we extend the authors' moment matching scenario generation algorithm to a probabilistic algorithm that finds optimal probability distributions subject to moment constraints. The combination of this distribution optimization method and the central cutting surface algorithm yields a solution to a family of distributionally robust optimization problems that are considerably more general than the ones proposed to date.

  4. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
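
    To illustrate the convex subproblem the abstract refers to, the sketch below fits a one-dimensional least-squares spline with the knot vector and the data parameterization held fixed, so that only the control points remain unknown and the problem reduces to linear least squares solved by SVD. It is a simplified stand-in for the paper's CAD/CAM curve-fitting setting; the knot positions and test data are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# With the parameterization u and the clamped knot vector t fixed, the spline
# coefficients solve a linear least-squares problem (convex), handled here by
# the SVD-based solver np.linalg.lstsq.
rng = np.random.default_rng(0)
u = np.sort(rng.uniform(size=60))                        # fixed data parameterization
y = np.sin(2 * np.pi * u) + 0.05 * rng.normal(size=60)   # noisy samples to fit

k = 3                                                    # cubic B-splines
inner = np.linspace(0.2, 0.8, 4)                         # assumed interior knots
t = np.concatenate(([0.0] * (k + 1), inner, [1.0] * (k + 1)))
n = len(t) - k - 1                                       # number of basis functions

B = np.empty((len(u), n))
for j in range(n):
    c = np.zeros(n)
    c[j] = 1.0
    B[:, j] = BSpline(t, c, k)(u)                        # j-th basis column

coef, *_ = np.linalg.lstsq(B, y, rcond=None)             # SVD-based solution
fit = BSpline(t, coef, k)                                # fitted spline curve
print(np.linalg.norm(B @ coef - y))                      # residual of the convex fit
```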

  5. Feature selection for linear SVMs under uncertain data: robust optimization based on difference of convex functions algorithms.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Pham Dinh, Tao

    2014-11-01

    In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, a situation inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose robust schemes to handle data under ellipsoidal and box models of uncertainty. The difficulty of treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations together with Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbations of the data.

  6. Sparse recovery via convex optimization

    NASA Astrophysics Data System (ADS)

    Randall, Paige Alicia

    program which can be written as either a linear program or a second-order cone program, and the well-established machinery of convex optimization used to solve it rapidly.

  7. Advances in dual algorithms and convex approximation methods

    NASA Technical Reports Server (NTRS)

    Smaoui, H.; Fleury, C.; Schmit, L. A.

    1988-01-01

    A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.

  8. Some Randomized Algorithms for Convex Quadratic Programming

    SciTech Connect

    Goldbach, R.

    1999-01-15

    We adapt some randomized algorithms of Clarkson [3] for linear programming to the framework of so-called LP-type problems, which was introduced by Sharir and Welzl [10]. This framework is quite general and allows a unified and elegant presentation and analysis. We also show that LP-type problems include minimization of a convex quadratic function subject to convex quadratic constraints as a special case, for which the algorithms can be implemented efficiently, if only linear constraints are present. We show that the expected running times depend only linearly on the number of constraints, and illustrate this by some numerical results. Even though the framework of LP-type problems may appear rather abstract at first, application of the methods considered in this paper to a given problem of that type is easy and efficient. Moreover, our proofs are in fact rather simple, since many technical details of more explicit problem representations are handled in a uniform manner by our approach. In particular, we do not assume boundedness of the feasible set as required in related methods.

  9. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision, respectively on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
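
    For comparison with the O(1) method described above, the standard O(log N) point-in-convex-polygon test (binary search over the triangle fan rooted at vertex 0) can be sketched as follows; this is the baseline the abstract contrasts against, not the authors' space-subdivision algorithm.

```python
def cross(o, a, b):
    # Signed area of the parallelogram spanned by (a - o) and (b - o).
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(p, poly):
    # poly: vertices of a convex polygon in counter-clockwise order.
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False                      # outside the fan spanned by vertex 0
    lo, hi = 1, n - 1
    while hi - lo > 1:                    # binary search for the containing wedge
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], p) >= 0   # boundary counts as inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_convex_polygon((1, 1), square),   # True
      point_in_convex_polygon((3, 1), square))   # False
```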

  10. First-order convex feasibility algorithms for x-ray CT

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Jorgensen, Jakob S.

    2013-03-15

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization problem of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application.
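
    A minimal illustration of a convex feasibility problem solved by alternating projections (POCS): find an image x satisfying both a linear data constraint Ax = b and a box constraint. This toy sketch is not the accelerated Chambolle-Pock scheme used in the paper, and the matrix sizes are assumed placeholder values.

```python
import numpy as np

def project_affine(x, A, b):
    # Euclidean projection onto the affine set {x : A x = b} via the pseudoinverse.
    z, *_ = np.linalg.lstsq(A, A @ x - b, rcond=None)
    return x - z

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 100))     # toy "system matrix"
x_true = rng.uniform(size=100)     # toy "image" in the box, so the sets intersect
b = A @ x_true                     # toy "measurements"

x = np.zeros(100)
for _ in range(300):               # alternate projections onto the two convex sets
    x = project_box(project_affine(x, A, b))
print(np.linalg.norm(A @ x - b))   # data-constraint residual shrinks with iterations
```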

  11. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369
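
    A short CVXPY usage sketch, showing a simplex-constrained least-squares problem written directly in the natural syntax the abstract describes; the problem data are arbitrary.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0, cp.sum(x) == 1]        # x constrained to the probability simplex
prob = cp.Problem(objective, constraints)
prob.solve()

print(prob.status, round(prob.value, 4))
print(x.value.round(4))
```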

  12. CVXPY: A Python-Embedded Modeling Language for Convex Optimization.

    PubMed

    Diamond, Steven; Boyd, Stephen

    2016-04-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.

  13. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which has not been observed in previous work. Our theoretical results are backed by thorough numerical studies.

  14. Convexity.

    ERIC Educational Resources Information Center

    Berger, Marcel

    1990-01-01

    Discussed are the idea, examples, problems, and applications of convexity. Topics include historical examples, definitions, the John-Loewner ellipsoid, convex functions, polytopes, the algebraic operation of duality and addition, and topology of convex bodies. (KR)

  15. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator has yet been reported that yields an update free from inversions of linear operators when it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  16. Statistical Mechanics of Optimal Convex Inference in High Dimensions

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Ganguli, Surya

    2016-07-01

    A fundamental problem in modern high-dimensional data analysis involves efficiently inferring a set of P unknown model parameters governing the relationship between the inputs and outputs of N noisy measurements. Various methods have been proposed to regress the outputs against the inputs to recover the P parameters. What are fundamental limits on the accuracy of regression, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we optimally combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density α =(N /P )→∞ . However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite α . We employ replica theory to answer these questions for a class of inference algorithms, known in the statistics literature as M-estimators. These algorithms attempt to recover the P model parameters by solving an optimization problem involving minimizing the sum of a loss function that penalizes deviations between the data and model predictions, and a regularizer that leverages prior information about model parameters. Widely cherished algorithms like maximum likelihood (ML) and maximum-a posteriori (MAP) inference arise as special cases of M-estimators. Our analysis uncovers fundamental limits on the inference accuracy of a subclass of M-estimators corresponding to computationally tractable convex optimization problems. These limits generalize classical statistical theorems like the Cramer-Rao bound to the high-dimensional setting with prior information. We further discover the optimal M-estimator for log-concave signal and noise distributions; we demonstrate that it can achieve our high-dimensional limits on inference accuracy, while ML and MAP cannot. Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than

  17. Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.

    PubMed

    Al-Mulhem, M; Al-Maghrabi, T

    1998-01-01

    This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results are given to show that the proposed algorithm can find nearly optimal solutions for the E-TSP that outperform many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
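
    The improvement stage of such hybrid heuristics can be illustrated with a plain 2-opt rearrangement pass over an existing tour, sketched below; this is a generic stand-in for the paper's NII operators, not the authors' exact procedure.

```python
import numpy as np

def tour_length(pts, tour):
    # Total length of the closed tour visiting pts in the given order.
    return sum(np.linalg.norm(pts[tour[i]] - pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    # Repeatedly reverse tour segments while doing so shortens the tour.
    tour, improved = list(tour), True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(pts, candidate) < tour_length(pts, tour):
                    tour, improved = candidate, True
    return tour

rng = np.random.default_rng(4)
pts = rng.uniform(size=(30, 2))                  # random E-TSP instance
tour = two_opt(pts, list(range(30)))             # improve the trivial initial tour
print(round(tour_length(pts, tour), 3))
```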

  18. Revisiting the method of characteristics via a convex hull algorithm

    NASA Astrophysics Data System (ADS)

    LeFloch, Philippe G.; Mercier, Jean-Marc

    2015-10-01

    We revisit the method of characteristics for shock wave solutions to nonlinear hyperbolic problems and we propose a novel numerical algorithm-the convex hull algorithm (CHA)-which allows us to compute both entropy dissipative solutions (satisfying all entropy inequalities) and entropy conservative (or multi-valued) solutions. From the multi-valued solutions determined by the method of characteristics, our algorithm "extracts" the entropy dissipative solutions, even after the formation of shocks. It applies to both convex and non-convex flux/Hamiltonians. We demonstrate the relevance of the proposed method with a variety of numerical tests, including conservation laws in one or two spatial dimensions and problem arising in fluid dynamics.

  19. An Algorithm to Find the Intersection of Two Convex Polygons

    DTIC Science & Technology

    1993-09-01

    NSWCDD/TR-93/345, AD-A274 722. An Algorithm to Find the Intersection of Two Convex Polygons, by Armido... Naval Surface Warfare Center, Dahlgren Division, Dahlgren, Virginia 22448-5000... Division (LIO) of the Strike Systems Department. A description of the analysis and software developed to find the intersection of two convex polygons is...

  20. Multiband RF pulses with improved performance via convex optimization.

    PubMed

    Shang, Hong; Larson, Peder E Z; Kerr, Adam; Reed, Galen; Sukumar, Subramaniam; Elkhaled, Adam; Gordon, Jeremy W; Ohliger, Michael A; Pauly, John M; Lustig, Michael; Vigneron, Daniel B

    2016-01-01

    Selective RF pulses are commonly designed with the desired profile as a low pass filter frequency response. However, for many MRI and NMR applications, the spectrum is sparse with signals existing at a few discrete resonant frequencies. By specifying a multiband profile and releasing the constraint on "don't-care" regions, the RF pulse performance can be improved to enable a shorter duration, sharper transition, or lower peak B1 amplitude. In this project, a framework for designing multiband RF pulses with improved performance was developed based on the Shinnar-Le Roux (SLR) algorithm and convex optimization. It can create several types of RF pulses with multiband magnitude profiles, arbitrary phase profiles and generalized flip angles. The advantage of this framework with a convex optimization approach is the flexible trade-off of different pulse characteristics. Designs for specialized selective RF pulses for balanced SSFP hyperpolarized (HP) (13)C MRI, a dualband saturation RF pulse for (1)H MR spectroscopy, and a pre-saturation pulse for HP (13)C study were developed and tested.

  1. Algorithm for detecting human faces based on convex-hull.

    PubMed

    Park, Minsick; Park, Chang-Woo; Park, Mignon; Lee, Chang-Hoon

    2002-03-25

    In this paper, we propose a new method to detect faces in color images based on the convex hull. We detect two kinds of regions: skin-likeness and hair-likeness regions. After preprocessing, we apply the convex hull to these regions and can find a face from their intersection relationship. The proposed algorithm can accomplish face detection in an image involving rotated and turned faces as well as several faces. To validate the effectiveness of the proposed method, we conduct experiments with various cases.

  2. ε-optimality conditions for weakly convex problems

    SciTech Connect

    Pappalardo, M.

    1994-12-31

    There are several generalizations of the concept of convexity, both for sets and for functions. Weak convexity, among these, has shown many possibilities of application and many theoretical properties. It has, in fact, been applied in several fields of mathematics: see for example geometry and optimization. We want to analyze this generalization of the concept of convexity via the image-space approach. This kind of approach has shown its utility in many fields of optimization. In particular, we introduce a new concept of "image" based on a suitable relaxation or reduction (lower and upper) of the image itself. Moreover, we analyze the main properties of this concept and we show how to utilize it in the study of weakly convex constrained extremum problems in order to obtain ε-optimality conditions. The paper is divided into three parts: in the first we introduce the concept of perturbed image and we investigate its main theoretical properties. In the second we state ε-optimality conditions for weakly convex constrained extremum problems. In the third one we study relationships between this type of image and the augmented Lagrangian.

  3. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/ infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial result on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.
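
    The core convexified subproblem can be sketched as a small SOCP in CVXPY: discretized double-integrator dynamics, a second-order-cone bound on thrust magnitude, and a minimum-"fuel" objective via the slack variable G. All numbers (time grid, gravity, thrust bound, boundary conditions) are assumed toy values, the nonconvex lower thrust bound and mass dynamics are omitted, and this is not the flight algorithm described in the paper.

```python
import cvxpy as cp
import numpy as np

# Toy 3-DoF powered-descent SOCP with assumed values throughout.
N, dt = 40, 1.0
g = np.array([0.0, 0.0, -3.71])                  # Mars-like gravity, m/s^2
a_max = 12.0                                     # max thrust acceleration, m/s^2

r = cp.Variable((N + 1, 3))                      # position
v = cp.Variable((N + 1, 3))                      # velocity
u = cp.Variable((N, 3))                          # thrust acceleration
G = cp.Variable(N)                               # slack on thrust magnitude

cons = [r[0] == np.array([0.0, 0.0, 1500.0]),    # initial position
        v[0] == np.array([30.0, 0.0, -40.0]),    # initial velocity
        r[N] == 0, v[N] == 0,                    # pinpoint soft landing at the origin
        r[:, 2] >= 0]                            # no subsurface flight
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * (u[k] + g),
             cp.norm(u[k]) <= G[k], G[k] <= a_max]

prob = cp.Problem(cp.Minimize(dt * cp.sum(G)), cons)   # minimize integral of |u|
prob.solve()
print(prob.status, round(prob.value, 2))
```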

  4. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
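
    The generic DCA iteration is simple to state: at each step, linearize the concave part −h at the current point and minimize the resulting convex surrogate exactly. The toy sketch below applies it to the scalar DC function f(x) = x² − |x|; it only illustrates the generic DCA update, not the paper's block-clustering formulation.

```python
import numpy as np

def dca_toy(x0, iters=20):
    # Minimize f(x) = x**2 - |x| with DC decomposition g(x) = x**2, h(x) = |x|.
    # DCA step: pick y in the subdifferential of h at x, then minimize the
    # convex surrogate g(x) - y*x exactly (2x - y = 0  =>  x = y / 2).
    x = x0
    for _ in range(iters):
        y = np.sign(x)        # a subgradient of |x| (0 is chosen at x = 0)
        x = y / 2.0
    return x

print(dca_toy(3.0))           # converges to the critical point x = 0.5
```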

  5. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed

  6. First and second order convex approximation strategies in structural optimization

    NASA Technical Reports Server (NTRS)

    Fleury, C.

    1989-01-01

    In this paper, various methods based on convex approximation schemes are discussed that have demonstrated strong potential for efficient solution of structural optimization problems. First, the convex linearization method (Conlin) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both Conlin and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semiempirical rules. Attention is next directed toward methods that use diagonal second derivatives in order to provide a sound basis for building up high-quality explicit approximations of the behavior constraints. In particular, it is shown how second-order information can be effectively used without demanding a prohibitive computational cost. Various first-order and second-order approaches are compared by applying them to simple problems that have a closed form solution.

  7. Directional Convexity and Finite Optimality Conditions.

    DTIC Science & Technology

    1984-03-01

    DAAG29-80-C-0041, Unclassified... system, necessary conditions for optimality. Work Unit Number 5 (Optimization and Large Scale Systems). Istituto di Matematica Applicata, Università... u(·) to reach a boundary point of R(T) at time T [2,7,8]. All of these conditions are obtained from a local analysis: to test the optimality of a...

  8. Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron

    SciTech Connect

    Margot, F.; Fukuda, K.; Liebling, T.

    1994-12-31

    We investigate the applicability of backtrack technique for solving the vertex enumeration problem and the face enumeration problem for a convex polyhedron given by a system of linear inequalities. We show that there is a linear-time backtrack algorithm for the face enumeration problem whose space complexity is polynomial in the input size, but the vertex enumeration problem requires a backtrack algorithm to solve a decision problem, called the restricted vertex problem, for each output, which is shown to be NP-complete. Some related NP-complete problems associated with a system of linear inequalities are also discussed, including the optimal vertex problems for polyhedra and arrangements of hyperplanes.

  9. Scalable analysis of nonlinear systems using convex optimization

    NASA Astrophysics Data System (ADS)

    Papachristodoulou, Antonis

    In this thesis, we investigate how convex optimization can be used to analyze different classes of nonlinear systems at various scales algorithmically. The methodology is based on the construction of appropriate Lyapunov-type certificates using sum of squares techniques. After a brief introduction on the mathematical tools that we will be using, we turn our attention to robust stability and performance analysis of systems described by Ordinary Differential Equations. A general framework for constrained systems analysis is developed, under which stability of systems with polynomial and non-polynomial vector fields and of switching systems, as well as estimating the region of attraction and the L2 gain, can be treated in a unified manner. We apply our results to examples from biology and aerospace. We then consider systems described by Functional Differential Equations (FDEs), i.e., time-delay systems. Their main characteristic is that they are infinite dimensional, which complicates their analysis. We first show how the complete Lyapunov-Krasovskii functional can be constructed algorithmically for linear time-delay systems. Then, we concentrate on delay-independent and delay-dependent stability analysis of nonlinear FDEs using sum of squares techniques. An example from ecology is given. The scalable stability analysis of congestion control algorithms for the Internet is investigated next. The models we use result in an arbitrary interconnection of FDE subsystems, for which we require that stability holds for arbitrary delays, network topologies and link capacities. Through a constructive proof, we develop a Lyapunov functional for FAST, a recently developed network congestion control scheme, so that the Lyapunov stability properties scale with the system size. We also show how other network congestion control schemes can be analyzed in the same way. Finally, we concentrate on systems described by Partial Differential Equations. We show that axially constant perturbations of

  10. Numerical optimization method for packing regular convex polygons

    NASA Astrophysics Data System (ADS)

    Galiev, Sh. I.; Lisafina, M. S.

    2016-08-01

    An algorithm is presented for the approximate solution of the problem of packing regular convex polygons in a given closed bounded domain G so as to maximize the total area of the packed figures. On G a grid is constructed whose nodes generate a finite set W on G, and the centers of the figures to be packed can be placed only at some points of W. The problem of packing these figures with centers in W is reduced to a 0-1 linear programming problem. A two-stage algorithm for solving the resulting problems is proposed. The algorithm finds packings of the indicated figures in an arbitrary closed bounded domain on the plane. Numerical results are presented that demonstrate the effectiveness of the method.

  11. Convex-Optimization-Based Compartmental Pharmacokinetic Analysis for Prostate Tumor Characterization Using DCE-MRI.

    PubMed

    Ambikapathi, ArulMurugan; Chan, Tsung-Han; Lin, Chia-Hsiang; Yang, Fei-Shih; Chi, Chong-Yung; Wang, Yue

    2016-04-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a powerful imaging modality to study the pharmacokinetics in a suspected cancer/tumor tissue. The pharmacokinetic (PK) analysis of prostate cancer includes the estimation of time activity curves (TACs), and thereby, the corresponding kinetic parameters (KPs), and plays a pivotal role in diagnosis and prognosis of prostate cancer. In this paper, we endeavor to develop a blind source separation algorithm, namely convex-optimization-based KPs estimation (COKE) algorithm for PK analysis based on compartmental modeling of DCE-MRI data, for effective prostate tumor detection and its quantification. The COKE algorithm first identifies the best three representative pixels in the DCE-MRI data, corresponding to the plasma, fast-flow, and slow-flow TACs, respectively. The estimation accuracy of the flux rate constants (FRCs) of the fast-flow and slow-flow TACs directly affects the estimation accuracy of the KPs that provide the cancer and normal tissue distribution maps in the prostate region. The COKE algorithm wisely exploits the matrix structure (Toeplitz, lower triangular, and exponential decay) of the original nonconvex FRCs estimation problem, and reformulates it into two convex optimization problems that can reliably estimate the FRCs. After estimation of the FRCs, the KPs can be effectively estimated by solving a pixel-wise constrained curve-fitting (convex) problem. Simulation results demonstrate the efficacy of the proposed COKE algorithm. The COKE algorithm is also evaluated with DCE-MRI data of four different patients with prostate cancer and the obtained results are consistent with clinical observations.

  12. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.

  13. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods

    DTIC Science & Technology

    2016-11-16

    ...parametrized to select the best sensors after an optimization procedure. Due to the positive definite nature of the Fisher information matrix, convex optimization may be used to... Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/6180--16-9711: Using Fisher Information Criteria for Chemical Sensor Selection via Convex...

  14. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.

  15. Convex hull based neuro-retinal optic cup ellipse optimization in glaucoma diagnosis.

    PubMed

    Zhang, Zhuo; Liu, Jiang; Cherian, Neetu Sara; Sun, Ying; Lim, Joo Hwee; Wong, Wing Kee; Tan, Ngan Meng; Lu, Shijian; Li, Huiqi; Wong, Tien Ying

    2009-01-01

    Glaucoma is the second leading cause of blindness. Glaucoma can be diagnosed through measurement of the neuro-retinal optic cup-to-disc ratio (CDR). Automatic calculation of the optic cup boundary is challenging due to the interweavement of blood vessels with the surrounding tissues around the cup. A Convex Hull based Neuro-Retinal Optic Cup Ellipse Optimization algorithm improves the accuracy of the boundary estimation. The algorithm's effectiveness is demonstrated on a data set of 70 clinical patients collected from the Singapore Eye Research Institute. The root mean squared error of the new algorithm is 43% better than that of the state-of-the-art ARGALI system. This further leads to a large clinical evaluation of the algorithm involving 15,000 patients from Australia and Singapore.

  16. Estimation of Saxophone Control Parameters by Convex Optimization

    PubMed Central

    Wang, Cheng-i; Smyth, Tamara; Lipton, Zachary C.

    2015-01-01

    In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not merely one of estimating pitch, as one applied fingering can be used to produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely by the spectral envelope of the produced sound (as it might for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurement of the saxophone configured with all possible fingerings and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering), and a quasi-static reed model generating input pressure at the mouthpiece, with control parameters being blowing pressure and reed stiffness. Applied fingering and reed parameters are estimated for a given recording by formalizing a minimization problem, where the cost function is the error between the recording and the synthesized sound produced by the model having incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and not differentiable and is made solvable using convex optimization. The performance of the fingering identification is evaluated with better accuracy than previously reported values. PMID:27754493

  17. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  18. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    Here, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. Also, this volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.

  19. Optimization-based mesh correction with volume and convexity constraints

    SciTech Connect

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-02-24

    Here, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. Also, this volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.

  20. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2017-01-20

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension convention technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
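
    The basic idea of reducing a class to its convex hull vertices can be sketched with an exact hull computation (Qhull via SciPy); the paper's contribution is a fast approximate selection procedure, which this sketch does not reproduce.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))        # one class of 2-D training samples
hull = ConvexHull(X)                 # exact convex hull via Qhull
X_reduced = X[hull.vertices]         # keep only the hull vertices for training
print(X.shape[0], "->", X_reduced.shape[0], "samples retained")
```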

  1. An algorithm for LQ optimal actuator location

    NASA Astrophysics Data System (ADS)

    Darivandi, Neda; Morris, Kirsten; Khajepour, Amir

    2013-03-01

    The locations of the control hardware are typically a design variable in controller design for distributed parameter systems. In order to obtain the most efficient control system, the locations of control hardware as well as the feedback gain should be optimized. These optimization problems are generally non-convex. In addition, the models for these systems typically have a large number of degrees of freedom. Consequently, existing optimization schemes for optimal actuator placement may be inaccurate or computationally impractical. In this paper, the feedback control is chosen to be an optimal linear quadratic regulator. The optimal actuator location problem is reformulated as a convex optimization problem. A subgradient-based optimization scheme which leads to the global solution of the problem is used to optimize actuator locations. The optimization algorithm is applied to optimize the placement of piezoelectric actuators in vibration control of flexible structures. This method is compared with a genetic algorithm, and is observed to be faster and more accurate. Experiments are performed to verify the efficacy of optimal actuator placement.
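
    A generic subgradient descent loop on a nonsmooth convex objective, here f(x) = ‖Ax − b‖₁, is sketched below to illustrate the kind of subgradient-based scheme mentioned; it is not the paper's actuator-placement formulation, and the problem data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 10))
b = rng.normal(size=40)

x = np.zeros(10)
best = np.inf
for k in range(1, 2001):
    g = A.T @ np.sign(A @ x - b)      # a subgradient of ||Ax - b||_1 at x
    x = x - (1.0 / k) * g             # diminishing step size
    best = min(best, np.abs(A @ x - b).sum())
print(round(best, 4))                 # best objective value found
```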

  2. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectivity of the localization approach is validated by simulation experiment.

  3. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function’s maximum is on the circumference of the ellipse which is the iso-range for its model function’s T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectivity of the localization approach is validated by simulation experiment. PMID:26566031

  4. Ultrafast Quantum Process Tomography via Continuous Measurement and Convex Optimization

    NASA Astrophysics Data System (ADS)

    Baldwin, Charles; Riofrio, Carlos; Deutsch, Ivan

    2013-03-01

    Quantum process tomography (QPT) is an essential tool to diagnose the implementation of a dynamical map. However, the standard protocol is extremely resource intensive. For a Hilbert space of dimension d, it requires d2 different input preparations followed by state tomography via the estimation of the expectation values of d2 - 1 orthogonal observables. We show that when the process is nearly unitary, we can dramatically improve the efficiency and robustness of QPT through a collective continuous measurement protocol on an ensemble of identically prepared systems. Given the measurement history we obtain the process matrix via a convex program that optimizes a desired cost function. We study two estimators: least-squares and compressive sensing. Both allow rapid QPT due to the condition of complete positivity of the map; this is a powerful constraint to force the process to be physical and consistent with the data. We apply the method to a real experimental implementation, where optimal control is used to perform a unitary map on a d = 8 dimensional system of hyperfine levels in cesium atoms, and obtain the measurement record via Faraday spectroscopy of a laser probe. Supported by the NSF

  5. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    PubMed

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2016-12-22

    In this paper, based on CR calculus and penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and converges to an optimal solution of the constrained complex-variable convex optimization finally. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has a lower model complexity and better convergence. Some numerical examples and application are presented to substantiate the effectiveness of the proposed neural network.

  6. An uncertain multidisciplinary design optimization method using interval convex models

    NASA Astrophysics Data System (ADS)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.

  7. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    The proposed solution for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel-optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that will yield the minimum-fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and include a high-fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Constraints are also included to ensure that the vehicle does not run out of propellant or descend below the asteroid's surface, and to satisfy any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem. These equality constraints allow the optimization algorithm to solve the entire problem without including a propagator inside the optimization algorithm.
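
    To make the discretization step concrete, the sketch below poses a simplified minimum-fuel problem with trapezoidal-rule equality constraints in cvxpy. It is a hedged illustration under strong simplifying assumptions (constant mass, a uniform placeholder gravity vector, no rotating frame, no lower thrust bound), with arbitrary values.

        import numpy as np
        import cvxpy as cp

        N, dt = 50, 2.0                           # discretization nodes and step (illustrative)
        g = np.array([0.0, 0.0, -1e-3])           # placeholder uniform gravity [km/s^2]
        m_veh, T_max = 500.0, 20.0                # placeholder mass [kg] and max thrust [N]

        r = cp.Variable((N, 3))                   # position at each node
        v = cp.Variable((N, 3))                   # velocity at each node
        T = cp.Variable((N, 3))                   # thrust vector at each node
        a = [T[k] / m_veh + g for k in range(N)]  # acceleration at each node

        cons = [r[0] == np.array([1.0, 0.5, 2.0]), v[0] == np.zeros(3),      # initial state
                r[N - 1] == np.zeros(3),           v[N - 1] == np.zeros(3)]  # soft landing at target
        for k in range(N - 1):                    # trapezoidal rule as equality constraints
            cons += [v[k + 1] == v[k] + dt / 2 * (a[k] + a[k + 1]),
                     r[k + 1] == r[k] + dt / 2 * (v[k] + v[k + 1])]
        cons += [cp.norm(T, axis=1) <= T_max]     # upper thrust bound (convex)
        # The lower thrust bound and mass depletion are non-convex as stated; the relaxation or
        # iteration needed to handle them is omitted from this sketch.

        cp.Problem(cp.Minimize(dt * cp.sum(cp.norm(T, axis=1))), cons).solve()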

  8. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min over b ∈ ℝ^p of (1/2)‖y − Xb‖²_ℓ2 + λ_1|b|_(1) + λ_2|b|_(2) + ⋯ + λ_p|b|_(p), where λ_1 ≥ λ_2 ≥ … ≥ λ_p ≥ 0 and |b|_(1) ≥ |b|_(2) ≥ ⋯ ≥ |b|_(p) are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank—that is, the stronger the signal—the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λ_BH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α-quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λ_BH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
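
    For illustration, the BH-style regularizing sequence and the sorted-ℓ1 penalty it weights can be computed as in the short sketch below (the values of p and q are arbitrary).

        import numpy as np
        from scipy.stats import norm

        def lambda_bh(p, q):
            """lambda_BH(i) = z(1 - i*q/(2p)), i = 1..p, with z the standard normal quantile."""
            i = np.arange(1, p + 1)
            return norm.ppf(1 - i * q / (2 * p))

        def sorted_l1(b, lam):
            """Sorted-l1 norm: sum_i lam_i * |b|_(i), with |b|_(1) >= ... >= |b|_(p)."""
            return np.sum(lam * np.sort(np.abs(b))[::-1])

        lam = lambda_bh(p=100, q=0.1)
        print(sorted_l1(np.random.randn(100), lam))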

  9. CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.

    PubMed

    Mei, Gang

    2016-01-01

    This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. Those interior points located inside a quadrilateral formed by four extreme points are first discarded, and then the remaining points are distributed into several (typically four) subregions. For each subset of points, they are first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed for the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. The expected convex hull of the input points can finally be obtained by calculating the convex hull of the simple polygon. The library Thrust is utilized to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard the interior points; and (2) CudaChain achieves 5×-6× speedups over the famous Qhull implementation for 20M points.
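
    The first discarding round can be sketched on the CPU as the extreme-point quadrilateral filter below (an illustration of the idea only; the paper carries out this step, and the later sorting-based round, in parallel on the GPU with Thrust).

        import numpy as np

        def discard_interior(pts):
            """Drop points strictly inside the quadrilateral of the four extreme points."""
            quad = np.array([pts[np.argmin(pts[:, 0])],   # leftmost
                             pts[np.argmin(pts[:, 1])],   # bottom
                             pts[np.argmax(pts[:, 0])],   # rightmost
                             pts[np.argmax(pts[:, 1])]])  # top (counter-clockwise order)
            keep = np.zeros(len(pts), dtype=bool)
            for a, b in zip(quad, np.roll(quad, -1, axis=0)):
                edge, rel = b - a, pts - a
                cross = edge[0] * rel[:, 1] - edge[1] * rel[:, 0]
                keep |= cross <= 0        # on or outside this edge -> cannot be strictly interior
            return pts[keep]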

  10. A 1 log N parallel algorithm for detecting convex hulls on image boards.

    PubMed

    Lin, J C; Lin, J Y

    1998-01-01

    By finding the maximum and minimum of {y_i − mx_i | 1 ≤ i ≤ N}, we present a parallel algorithm to obtain the convex hull of N arbitrarily given points on an image board. The mathematical theory needed is included, and the computation time is 1 log N.

  11. [3-D endocardial surface modelling based on the convex hull algorithm].

    PubMed

    Lu, Ying; Xi, Ri-hui; Shen, Hai-dong; Ye, You-li; Zhang, Yong

    2006-11-01

    In this paper, a method based on the convex hull algorithm is presented for extracting modelling data from the locations of catheter electrodes within a cardiac chamber, so as to create a 3-D model of the heart chamber during diastole and to obtain a good result in the 3-D reconstruction of the chamber based on VTK.

  12. Normal Vector Projection Method used for Convex Optimization of Chan-Vese Model for Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wei, W. B.; Tan, L.; Jia, M. Q.; Pan, Z. K.

    2017-01-01

    The variational level set method is one of the main methods of image segmentation. Because signed distance functions used as level sets must be maintained through numerical remedies or additional techniques during the evolution process, the method is not very efficient. In this paper, a normal vector projection method for image segmentation using the Chan-Vese model is proposed. An equivalent formulation of the Chan-Vese model is used by taking advantage of the properties of binary level set functions and combining them with the concept of convex relaxation. A thresholding method and a projection formula are applied in the implementation. This avoids the above problems and obtains a global optimal solution. Experimental results on both synthetic and real images validate the effectiveness of the proposed normal vector projection method, and show advantages over traditional algorithms in terms of computational efficiency.

  13. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    PubMed Central

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-01-01

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, these studies do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability. PMID:27428970

  14. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks.

    PubMed

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-07-14

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, these studies do not discuss full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability.

  15. A tractable approximation of non-convex chance constrained optimization with non-Gaussian uncertainties

    NASA Astrophysics Data System (ADS)

    Geletu, Abebe; Klöppel, Michael; Hoffmann, Armin; Li, Pu

    2015-04-01

    Chance constrained optimization problems in engineering applications possess highly nonlinear process models and non-convex structures. As a result, solving a nonlinear non-convex chance constrained optimization (CCOPT) problem remains a challenging task. The major difficulty lies in the evaluation of probability values and gradients of inequality constraints which are nonlinear functions of stochastic variables. This article proposes a novel analytic approximation to improve the tractability of smooth non-convex chance constraints. The approximation uses a smooth parametric function to define a sequence of smooth nonlinear programs (NLPs). The sequence of optimal solutions of these NLPs remains always feasible and converges to the solution set of the CCOPT problem. Furthermore, Karush-Kuhn-Tucker (KKT) points of the approximating problems converge to a subset of KKT points of the CCOPT problem. Another feature of this approach is that it can handle uncertainties with both Gaussian and non-Gaussian distributions.

  16. Perceptual convexity

    NASA Astrophysics Data System (ADS)

    Kupeev, Konstantin Y.; Wolfson, Haim J.

    1995-08-01

    Often objects which are not convex in the mathematical sense are treated as `perceptually convex'. We present an algorithm for recognition of the perceptual convexity of a 2D contour. We start by reducing the notion of `a contour is perceptually convex' to the notion of `a contour is Y-convex'. The latter reflects an absence of large concavities in the OY direction of an XOY frame. Then we represent a contour by a G-graph and modify the slowest descent -- the small-leaf trimming procedure recently introduced for the estimation of shape similarity. We prove that executing the slowest descent down to a G-graph consisting of 3 vertices allows us to detect large concavities in the OY direction. This allows us to recognize the perceptual convexity of an input contour.

  17. Maximizing protein translation rate in the non-homogeneous ribosome flow model: a convex optimization approach.

    PubMed

    Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2014-11-06

    Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so, then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, re-engineering heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation-elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems, even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics.
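
    As a concrete illustration, the RFM can be integrated numerically and the steady-state rate read off; the equations below are the commonly stated form of the model (an assumption drawn from general background on the RFM, not quoted from this paper).

        import numpy as np
        from scipy.integrate import solve_ivp

        def rfm_rhs(t, x, lam):
            """dx_i/dt = lam_{i-1} x_{i-1}(1 - x_i) - lam_i x_i (1 - x_{i+1}), with x_0 = 1, x_{n+1} = 0."""
            xe = np.concatenate(([1.0], x, [0.0]))      # virtual boundary occupancies
            flow = lam * xe[:-1] * (1.0 - xe[1:])       # flow out of site i-1 into site i, i = 1..n, plus exit flow
            return flow[:-1] - flow[1:]

        lam = np.array([1.0, 0.8, 1.2, 0.9, 1.1])       # lam_0 (initiation) .. lam_n for n = 4 (illustrative)
        sol = solve_ivp(rfm_rhs, (0, 200), 0.5 * np.ones(4), args=(lam,), rtol=1e-8)
        x_ss = sol.y[:, -1]                             # approximate steady-state occupancies
        print("steady-state translation rate R =", lam[-1] * x_ss[-1])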

  18. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration

    NASA Astrophysics Data System (ADS)

    Chen, Peijun; Huang, Jianguo; Zhang, Xiaoqun

    2013-02-01

    Recently, the minimization of a sum of two convex functions has received considerable interest in a variational image restoration model. In this paper, we propose a general algorithmic framework for solving a separable convex minimization problem from the point of view of fixed point algorithms based on proximity operators (Moreau 1962 C. R. Acad. Sci., Paris I 255 2897-99). Motivated by proximal forward-backward splitting proposed in Combettes and Wajs (2005 Multiscale Model. Simul. 4 1168-200) and fixed point algorithms based on the proximity operator (FP2O) for image denoising (Micchelli et al 2011 Inverse Problems 27 45009-38), we design a primal-dual fixed point algorithm based on the proximity operator (PDFP2Oκ for κ ∈ [0, 1)) and obtain a scheme with a closed-form solution for each iteration. Using the firmly nonexpansive properties of the proximity operator and with the help of a special norm over a product space, we achieve the convergence of the proposed PDFP2Oκ algorithm. Moreover, under some stronger assumptions, we can prove the global linear convergence of the proposed algorithm. We also give the connection of the proposed algorithm with other existing first-order methods. Finally, we illustrate the efficiency of PDFP2Oκ through some numerical examples on image super-resolution, computerized tomographic reconstruction and parallel magnetic resonance imaging. Generally speaking, our method PDFP2O (κ = 0) is comparable with other state-of-the-art methods in numerical performance, while it has some advantages on parameter selection in real applications.
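
    The sketch below is not the PDFP2Oκ iteration itself, but the plain proximal forward-backward (forward gradient, backward proximal) splitting that motivates it, applied to a toy ℓ1-regularized least-squares problem of our own choosing.

        import numpy as np

        def soft_threshold(v, t):
            """Proximity operator of t*||.||_1."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def forward_backward(A, b, mu, iters=200):
            """Solve min_x 0.5*||Ax - b||^2 + mu*||x||_1 by proximal forward-backward splitting."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L, L = Lipschitz constant of the gradient
            for _ in range(iters):
                grad = A.T @ (A @ x - b)                        # forward (gradient) step on the smooth term
                x = soft_threshold(x - step * grad, step * mu)  # backward (proximal) step on the l1 term
            return x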

  19. Convex Optimization of Coincidence Time Resolution for a High-Resolution PET System

    PubMed Central

    Reynolds, Paul D.; Olcott, Peter D.; Pratx, Guillem; Lau, Frances W. Y.

    2013-01-01

    We are developing a dual panel breast-dedicated positron emission tomography (PET) system using LSO scintillators coupled to position sensitive avalanche photodiodes (PSAPD). The charge output is amplified and read using NOVA RENA-3 ASICs. This paper shows that the coincidence timing resolution of the RENA-3 ASIC can be improved using certain list-mode calibrations. We treat the calibration problem as a convex optimization problem and use the RENA-3’s analog-based timing system to correct the measured data for time dispersion effects from correlated noise, PSAPD signal delays and varying signal amplitudes. The direct solution to the optimization problem involves a matrix inversion that grows as O(n^3) with the number of parameters. An iterative method using single-coordinate descent to approximate the inversion grows as O(n). The inversion does not need to run to convergence, since any gains at high iteration number will be low compared to noise amplification. The system calibration method is demonstrated with measured pulser data as well as with two LSO-PSAPD detectors in electronic coincidence. After applying the algorithm, the 511 keV photopeak paired coincidence time resolution from the LSO-PSAPD detectors under study improved by 57%, from the raw value of 16.3 ± 0.07 ns full-width at half-maximum (FWHM) to 6.92 ± 0.02 ns FWHM (11.52 ± 0.05 ns to 4.89 ± 0.02 ns for unpaired photons). PMID:20876008

  20. Convex Optimization Methods for Graphs and Statistical Modeling

    DTIC Science & Technology

    2011-06-01

    extraneous polylogarithmic factors. In the next section we describe a new mechanism for estimating Gaussian widths, which provides near-optimal guarantees...so-called Quadratic Assignment Problem (QAP) [32]. Solving QAP is hard in general, because it includes as a special case the Hamiltonian cycle problem...only if the graph contains a Hamiltonian cycle. However there are well-studied spectral and semidefinite relaxations for QAP, which we discuss next

  1. Exact Convex Relaxation of Optimal Power Flow in Radial Networks

    SciTech Connect

    Gan, LW; Li, N; Topcu, U; Low, SH

    2015-01-01

    The optimal power flow (OPF) problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. It is nonconvex. We prove that a global optimum of OPF can be obtained by solving a second-order cone program, under a mild condition after shrinking the OPF feasible set slightly, for radial power networks. The condition can be checked a priori, and holds for the IEEE 13, 34, 37, 123-bus networks and two real-world networks.

  2. Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization

    DTIC Science & Technology

    2014-12-01

    It has been shown in [14] that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method...introduce two new variants of level methods, i.e., the fast APL (FAPL) method and the fast USL (FUSL) method, for solving large scale black-box and...structured convex programming problems respectively. Both FAPL and FUSL enjoy the same optimal iteration complexity as APL and USL, while the number of

  3. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    PubMed

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.

  4. Bi-convex Optimization to Learn Classifiers from Multiple Biomedical Annotations.

    PubMed

    Wang, Xin; Bi, Jinbo

    2016-06-07

    The problem of constructing classifiers from multiple annotators who provide inconsistent training labels is important and occurs in many application domains. Many existing methods focus on the understanding and learning of the crowd behaviors. Several probabilistic algorithms consider the construction of classifiers for specific tasks using consensus of multiple labelers' annotations. These methods impose a prior on the consensus and develop an expectation-maximization algorithm based on logistic regression loss. We extend the discussion to the hinge loss commonly used by support vector machines. Our formulations form bi-convex programs that construct classifiers and estimate the reliability of each labeler simultaneously. Each labeler is associated with a reliability parameter, which can be a constant, or class-dependent, or varies for different examples. The hinge loss is modified by replacing the true labels by the weighted combination of labelers' labels with reliabilities as weights. Statistical justification is discussed to motivate the use of a linear combination of labels. In parallel to the expectation-maximization algorithm for logistic based methods, efficient alternating algorithms are developed to solve the proposed bi-convex programs. Experimental results on benchmark datasets and three real-world biomedical problems demonstrate that the proposed methods either outperform or are competitive with the state of the art.
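
    A hedged sketch of the alternating structure follows: with the reliabilities fixed, the modified hinge loss is convex in the classifier weights, and with the weights fixed it is convex in the reliabilities. The combination rule, regularization, and use of a general-purpose optimizer here are illustrative placeholders, not the authors' formulation.

        import numpy as np
        from scipy.optimize import minimize

        def hinge_obj(w, r, X, Y, lam=1e-2):
            """Hinge loss with the true labels replaced by a reliability-weighted label combination."""
            y_tilde = Y @ r                                    # weighted combination of labelers' labels (n,)
            margins = 1.0 - y_tilde * (X @ w)
            return np.mean(np.maximum(margins, 0.0)) + lam * np.dot(w, w)

        def alternate(X, Y, iters=10):
            """Alternate between the two convex subproblems of the bi-convex objective."""
            n, d = X.shape
            m = Y.shape[1]
            w = np.zeros(d)
            r = np.ones(m) / m                                 # start with equal reliabilities
            for _ in range(iters):
                w = minimize(lambda w_: hinge_obj(w_, r, X, Y), w).x              # convex in w
                r = minimize(lambda r_: hinge_obj(w, r_, X, Y), r,
                             bounds=[(0.0, 1.0)] * m).x                           # convex in r (boxed)
                r = r / r.sum() if r.sum() > 0 else np.ones(m) / m
            return w, r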

  5. Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process

    PubMed Central

    Yin, Chuancun; Yuen, Kam Chuen; Shen, Ying

    2015-01-01

    We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655

  6. Gradient vs. approximation design optimization techniques in low-dimensional convex problems

    NASA Astrophysics Data System (ADS)

    Fedorik, Filip

    2013-10-01

    Design optimization methods' application in structural design represents a suitable manner for efficiently designing practical problems. The implementation of optimization techniques into multi-physics software permits designers to utilize them in a wide range of engineering problems. These methods are usually based on modified mathematical programming techniques and/or their combinations to improve universality and robustness for various human and technical problems. The presented paper deals with the analysis of optimization methods and tools within the frame of one- to three-dimensional strictly convex optimization problems, which represent a component of the Design Optimization module in the Ansys program. The First Order method, based on a combination of the steepest descent and conjugate gradient methods, and the Subproblem Approximation method, which uses approximation of dependent variables' functions, accompanied by the Random, Sweep, Factorial and Gradient tools, are analyzed, wherein different characteristics of the methods are observed.

  7. Finding and proving the exact ground state of a generalized Ising model by convex optimization and MAX-SAT

    NASA Astrophysics Data System (ADS)

    Huang, Wenxuan; Kitchaev, Daniil A.; Dacek, Stephen T.; Rong, Ziqin; Urban, Alexander; Cao, Shan; Luo, Chuan; Ceder, Gerbrand

    2016-10-01

    Lattice models, also known as generalized Ising models or cluster expansions, are widely used in many areas of science and are routinely applied to the study of alloy thermodynamics, solid-solid phase transitions, magnetic and thermal properties of solids, fluid mechanics, and others. However, the problem of finding and proving the global ground state of a lattice model, which is essential for all of the aforementioned applications, has remained unresolved for relatively complex practical systems, with only a limited number of results for highly simplified systems known. In this paper, we present a practical and general algorithm that provides a provable periodically constrained ground state of a complex lattice model up to a given unit cell size and in many cases is able to prove global optimality over all other choices of unit cell. We transform the infinite-discrete-optimization problem into a pair of combinatorial optimization (MAX-SAT) and nonsmooth convex optimization (MAX-MIN) problems, which provide upper and lower bounds on the ground state energy, respectively. By systematically converging these bounds to each other, we may find and prove the exact ground state of realistic Hamiltonians whose exact solutions are difficult, if not impossible, to obtain via traditional methods. Considering that currently such practical Hamiltonians are solved using simulated annealing and genetic algorithms that are often unable to find the true global energy minimum and inherently cannot prove the optimality of their result, our paper opens the door to resolving longstanding uncertainties in lattice models of physical phenomena. An implementation of the algorithm is available at https://github.com/dkitch/maxsat-ising.

  8. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn x n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 x n.

  9. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial, making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  10. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.

  11. Electro-Fenton oxidation of coking wastewater: optimization using the combination of central composite design and convex optimization method.

    PubMed

    Zhang, Bo; Sun, Jiwei; Wang, Qin; Fan, Niansi; Ni, Jialing; Li, Weicheng; Gao, Yingxin; Li, Yu-You; Xu, Changyou

    2017-01-12

    The electro-Fenton treatment of coking wastewater was evaluated experimentally in a batch electrochemical reactor. Based on central composite design coupled with response surface methodology, a regression quadratic equation was developed to model the total organic carbon (TOC) removal efficiency. This model was further proved to accurately predict the optimization of process variables by means of analysis of variance. With the aid of the convex optimization method, which is a global optimization method, the optimal parameters were determined as current density of 30.9 mA/cm(2), Fe(2+) concentration of 0.35 mg/L, and pH of 4.05. Under the optimized conditions, the corresponding TOC removal efficiency was up to 73.8%. The maximum TOC removal efficiency achieved can be further confirmed by the results of gas chromatography-mass spectrum analysis.

  12. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  13. Convex hull: a new method to determine the separation space used and to optimize operating conditions for comprehensive two-dimensional gas chromatography.

    PubMed

    Semard, Gaëlle; Peulon-Agasse, Valerie; Bruchet, Auguste; Bouillon, Jean-Philippe; Cardinaël, Pascal

    2010-08-13

    It is important to develop methods of optimizing the selection of column sets and operating conditions for comprehensive two-dimensional gas chromatography. A new method for the calculation of the percentage of separation space used was developed using Delaunay's triangulation algorithms (convex hull). This approach was compared with an existing method and showed better precision and accuracy. It was successfully applied to the selection of the most convenient column set and the geometrical parameters of the second column for the analysis of 49 target compounds in wastewater.
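
    A compact sketch of such a used-space calculation is shown below; SciPy's Qhull-based hull stands in for the Delaunay-triangulation implementation described in the paper, and the retention-time ranges are placeholders.

        import numpy as np
        from scipy.spatial import ConvexHull

        def used_space_fraction(rt1, rt2, rt1_range, rt2_range):
            """Percentage of the 2D separation space covered by the hull of the peak coordinates."""
            pts = np.column_stack([rt1, rt2])          # (1st-dimension, 2nd-dimension) retention times
            hull_area = ConvexHull(pts).volume         # in 2D, .volume is the enclosed area
            total_area = (rt1_range[1] - rt1_range[0]) * (rt2_range[1] - rt2_range[0])
            return 100.0 * hull_area / total_area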

  14. User-guided segmentation of preterm neonate ventricular system from 3-D ultrasound images using convex optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; McLeod, Jonathan; Chen, Yimin; de Ribaupierre, Sandrine; Fenster, Aaron

    2015-02-01

    A three-dimensional (3-D) ultrasound (US) system has been developed to monitor the intracranial ventricular system of preterm neonates with intraventricular hemorrhage (IVH) and the resultant dilation of the ventricles (ventriculomegaly). To measure ventricular volume from 3-D US images, a semi-automatic convex optimization-based approach is proposed for segmentation of the cerebral ventricular system in preterm neonates with IVH from 3-D US images. The proposed semi-automatic segmentation method makes use of the convex optimization technique supervised by user-initialized information. Experiments using 58 patient 3-D US images reveal that our proposed approach yielded a mean Dice similarity coefficient of 78.2% compared with the surfaces that were manually contoured, suggesting good agreement between these two segmentations. Additional metrics, the mean absolute distance of 0.65 mm and the maximum absolute distance of 3.2 mm, indicated small distance errors for a voxel spacing of 0.22 × 0.22 × 0.22 mm(3). The Pearson correlation coefficient (r = 0.97, p < 0.001) indicated a significant correlation of algorithm-generated ventricular system volume (VSV) with the manually generated VSV. The calculated minimal detectable difference in ventricular volume change indicated that the proposed segmentation approach with 3-D US images is capable of detecting a VSV difference of 6.5 cm(3) with 95% confidence, suggesting that this approach might be used for monitoring IVH patients' ventricular changes using 3-D US imaging. The mean segmentation times of the graphics processing unit (GPU)- and central processing unit-implemented algorithms were 50 ± 2 and 205 ± 5 s for one 3-D US image, respectively, in addition to 120 ± 10 s for initialization, less than the approximately 35 min required by manual segmentation. In addition, repeatability experiments indicated that the intra-observer variability ranges from 6.5% to 7.5%, and the inter-observer variability is 8.5% in terms

  15. Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games

    DTIC Science & Technology

    2016-05-01

    subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere. Mathematics...differential dynamic games, control theory problems, and dynamical systems coming from the physical world, e.g. [11]. An important application is to...rapid numerical solutions for a non-convex Hamiltonian in high dimensions, and this method is expected to have wide applications in optimal control

  16. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  17. Algorithms for optimal redundancy allocation

    SciTech Connect

    Vandenkieboom, J.; Youngblood, R.

    1993-01-01

    Heuristic and exact methods for solving the redundancy allocation problem are compared to an approach based on genetic algorithms. The various methods are applied to the bridge problem, which has been used as a benchmark in earlier work on optimization methods. Comparisons are presented in terms of the best configuration found by each method, and the computation effort which was necessary in order to find it.

  18. On the optimality of the neighbor-joining algorithm

    PubMed Central

    Eickmeyer, Kord; Huggins, Peter; Pachter, Lior; Yoshida, Ruriko

    2008-01-01

    The popular neighbor-joining (NJ) algorithm used in phylogenetics is a greedy algorithm for finding the balanced minimum evolution (BME) tree associated to a dissimilarity map. From this point of view, NJ is "optimal" when the algorithm outputs the tree which minimizes the balanced minimum evolution criterion. We use the fact that the NJ tree topology and the BME tree topology are determined by polyhedral subdivisions of the space of dissimilarity maps ℝ_+^(n(n−1)/2) to study the optimality of the neighbor-joining algorithm. In particular, we investigate and compare the polyhedral subdivisions for n ≤ 8. This requires the measurement of volumes of spherical polytopes in high dimension, which we obtain using a combination of Monte Carlo methods and polyhedral algorithms. Our results include a demonstration that highly unrelated trees can be co-optimal in BME reconstruction, and that NJ regions are not convex. We obtain the ℓ2 radius for neighbor-joining for n = 5 and we conjecture that the ability of the neighbor-joining algorithm to recover the BME tree depends on the diameter of the BME tree. PMID:18447942
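
    For reference, the greedy selection step that NJ applies at each iteration uses the standard Q-criterion, sketched below from general background on neighbor-joining rather than from the paper itself.

        import numpy as np

        def nj_join_pair(D):
            """Pair (i, j) that neighbor-joining greedily merges next for distance matrix D."""
            n = D.shape[0]
            row_sums = D.sum(axis=1)
            Q = (n - 2) * D - row_sums[:, None] - row_sums[None, :]
            np.fill_diagonal(Q, np.inf)          # never pair a taxon with itself
            return np.unravel_index(np.argmin(Q), Q.shape)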

  19. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    SciTech Connect

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  20. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    PubMed

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to the intensity inhomogeneity, which is also commonly known as bias field. Recently active contour models with geometric information constraint have been applied, however, most of them deal with the bias field by using a necessary pre-processing step before segmentation of MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even when the images exhibit high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of the noise, the local intensity variations are described by the Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstruct the energy function to be convex and solve it by using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate the bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on variety of imaging modalities with promising results.

  1. Resistive Network Optimal Power Flow: Uniqueness and Algorithms

    SciTech Connect

    Tan, CW; Cai, DWH; Lou, X

    2015-01-01

    The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincare-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.

  2. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  3. Online estimation of lower and upper bounds for heart sound boundaries in chest sound using Convex-hull algorithm.

    PubMed

    Çağlar, F; Ozbek, I Y

    2012-01-01

    Heart sound localization in chest sound is an essential part of many heart sound cancellation algorithms. The main difficulty for heart sound localization methods is the precise determination of the onset and offset boundaries of the heart sound segment. This paper presents a novel method to estimate lower and upper bounds for the onset and offset of the heart sound segment, which can be used as anchor points for more precise estimation. For this purpose, the chest sound is first divided into frames, and then entropy and smoothed-entropy features of these frames are extracted and used in the Convex-hull algorithm to estimate the upper and lower bounds for the heart sound boundaries. The Convex-hull algorithm constructs a special type of envelope function for the entropy features, and if the maximal difference between the envelope function and the entropy is larger than a certain threshold, this point is considered a heart sound bound. The results of the proposed method are compared with a baseline method, which is a modified version of a well-known heart sound localization method. The results show that the proposed method outperforms the baseline method in terms of accuracy and detection error rate. Also, the experimental results show that smoothing the entropy features significantly improves the performance of both the baseline and proposed methods.
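
    A minimal sketch of the envelope-and-threshold idea is given below, assuming the envelope is the upper convex hull of the entropy sequence over the frame index; the paper's exact envelope construction and thresholding details may differ.

        import numpy as np

        def upper_hull_envelope(e):
            """Upper convex envelope of the sequence e over frame indices 0..n-1 (monotone chain)."""
            e = np.asarray(e, dtype=float)
            n = len(e)
            hull = []                                  # indices of upper-hull vertices, left to right
            for i in range(n):
                while len(hull) >= 2:
                    a, b = hull[-2], hull[-1]
                    # drop b if it lies on or below the chord from a to i
                    if (e[b] - e[a]) * (i - a) <= (e[i] - e[a]) * (b - a):
                        hull.pop()
                    else:
                        break
                hull.append(i)
            return np.interp(np.arange(n), hull, e[hull])

        def candidate_bounds(entropy, threshold):
            """Frames where the envelope exceeds the entropy by more than the threshold."""
            env = upper_hull_envelope(entropy)
            return np.where(env - np.asarray(entropy, dtype=float) > threshold)[0]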

  4. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    PubMed

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with the convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, the Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.

  5. Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2015-09-01

    Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient’s anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant’s RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B1+ field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient’s anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.

  6. Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2015-09-21

    Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient's anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant's RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B(1)(+) field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient's anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.

  7. Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming.

    DTIC Science & Technology

    1982-12-21

    Operations Research, Vol. 18, No. 1, pp. 107-118. FENCHEL, W. (1953). Convex Cones, Sets and Functions. Lecture Notes, Princeton University Press... is the set conv(A) = {λx_1 + (1−λ)x_2 : x_1, x_2 ∈ A, λ ∈ [0,1]}. The set K ⊆ E^r is a cone if x ∈ K implies λx ∈ K for all λ > 0, and K is a convex

  8. A new algorithm for the robust optimization of rotor-bearing systems

    NASA Astrophysics Data System (ADS)

    Lopez, R. H.; Ritto, T. G.; Sampaio, Rubens; Souza de Cursi, J. E.

    2014-08-01

    This article presents a new algorithm for the robust optimization of rotor-bearing systems. The goal of the optimization problem is to find the values of a set of parameters for which the natural frequencies of the system are as far away as possible from the rotational speeds of the machine. To accomplish this, the penalization proposed by Ritto, Lopez, Sampaio, and Souza de Cursi in 2011 is employed. Since the rotor-bearing system is subject to uncertainties, such a penalization is modelled as a random variable. The robust optimization is performed by minimizing the expected value and variance of the penalization, resulting in a multi-objective optimization problem (MOP). The objective function of this MOP is known to be non-convex and it is shown that its resulting Pareto front (PF) is also non-convex. Thus, a new algorithm is proposed for solving the MOP: the normal boundary intersection (NBI) is employed to discretize the PF handling its non-convexity, and a global optimization algorithm based on a restart procedure and local searches are employed to minimize the NBI subproblems tackling the non-convexity of the objective function. A numerical analysis section shows the advantage of using the proposed algorithm over the weighted-sum (WS) and NSGA-II approaches. In comparison with the WS, the proposed approach obtains a much more even and useful set of Pareto points. Compared with the NSGA-II approach, the proposed algorithm provides a better approximation of the PF requiring much lower computational cost.

  9. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low thrust vehicles.

  10. Optimal Multistage Algorithm for Adjoint Computation

    SciTech Connect

    Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves

    2016-01-01

    We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.
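
    The binomial checkpointing result of Griewank and Walther mentioned above has a simple combinatorial form: with c checkpoints and at most r forward sweeps, a chain of C(c + r, c) steps can be reversed. The sketch below computes that bound and the minimal number of sweeps for a given chain length; it is a single-level (memory-only) illustration, not the two-level memory/disk algorithm of this record.

```python
from math import comb

def max_steps(checkpoints: int, sweeps: int) -> int:
    # Griewank/Walther bound: with c checkpoints and at most r forward sweeps
    # (recomputations), a chain of beta(c, r) = C(c + r, c) steps can be reversed.
    return comb(checkpoints + sweeps, checkpoints)

def min_sweeps(chain_length: int, checkpoints: int) -> int:
    # Smallest number of forward sweeps needed for a chain of the given length
    # with a fixed number of in-memory checkpoints.
    r = 0
    while max_steps(checkpoints, r) < chain_length:
        r += 1
    return r

print(max_steps(checkpoints=3, sweeps=4))              # C(7, 3) = 35
print(min_sweeps(chain_length=1000, checkpoints=10))   # 4 sweeps suffice
```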

  11. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  12. Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang

    2016-12-01

    Bearing failure, which generates harmful vibrations, is one of the most frequent reasons for machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most previous studies of rolling bearing fault diagnosis fall into two main families: kurtosis-based filter methods and wavelet-based shrinkage methods. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands, resulting in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates fault information in the information band, especially under heavily noisy circumstances; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints, and most of the feature information energy is thus eliminated unreasonably. Therefore, exploiting two pieces of prior information (i.e., that the coefficient sequence of the fault information in the wavelet basis is sparse, and that the kurtosis of the envelope spectrum accurately evaluates the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis, coined KurWSD, are proposed in this paper. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is sequentially employed to solve this problem and the fault information is extracted through the estimated wavelet coefficients. Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and utilizes the advantages of both family
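
    The weighted sparse regularization described above leads, in its simplest form, to a weighted soft-thresholding (proximal) step on the wavelet coefficients. The sketch below shows only that elementary step on synthetic coefficients; it is not the KurWSD framework, its kurtosis-driven weighting, or the ADMM iteration, and all signals and constants are invented for illustration. Requires numpy.

```python
import numpy as np

def weighted_soft_threshold(y, weights, lam):
    # Proximal operator of the weighted l1 penalty lam * sum_i w_i * |x_i|:
    # the closed-form minimizer of 0.5 * (x - y)**2 + lam * w * |x| per coefficient.
    return np.sign(y) * np.maximum(np.abs(y) - lam * weights, 0.0)

rng = np.random.default_rng(1)
clean = np.zeros(200)
clean[[20, 75, 150]] = [3.0, -2.5, 4.0]            # sparse "fault impulses"
noisy = clean + 0.3 * rng.standard_normal(200)      # observed coefficients

# Larger weights shrink coefficients believed to be noise more aggressively;
# uniform weights are used here, whereas KurWSD derives them from kurtosis.
weights = np.ones_like(noisy)
recovered = weighted_soft_threshold(noisy, weights, lam=0.5)
print(np.nonzero(recovered)[0])                     # indices of surviving coefficients
```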

  13. Optimization Algorithms and Equilibrium Analysis for Dynamic Resource Allocation

    DTIC Science & Technology

    2012-01-31

    to derive necessary and sufficient conditions for many desirable properties of a prediction market mechanism such as proper scoring, truthful... set can be non-convex or non-connected. Our method is based on approximating a quadratic social utility optimization problem (QP) and showing that... In [2], we present a convex optimization framework that unifies these seemingly unrelated models for centrally organizing contingent claims

  14. An Optimal Class Association Rule Algorithm

    NASA Astrophysics Data System (ADS)

    Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie

    Classification and association rule mining algorithms are two important aspects of data mining. Class association rule mining is a promising approach, since it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results on eight UCI data sets show that OCARA outperforms C4.5, CBA, and RMR.

  15. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
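
    As a generic illustration of the projection-based idea mentioned above (repeatedly projecting onto convex constraint sets), the sketch below alternates projections onto a box constraint and a mean-value constraint. These constraints are toy stand-ins, not the IMPT dose model or the authors' solver; numpy is assumed to be available.

```python
import numpy as np

def project_box(x, lower, upper):
    # Euclidean projection onto the box constraints lower <= x <= upper.
    return np.clip(x, lower, upper)

def project_mean(x, target_mean):
    # Euclidean projection onto the hyperplane mean(x) == target_mean.
    return x + (target_mean - x.mean())

def alternating_projections(x0, lower, upper, target_mean, iters=200):
    # Toy projections-onto-convex-sets loop; real projection-based IMPT solvers
    # project onto clinically meaningful dose constraints instead.
    x = x0.copy()
    for _ in range(iters):
        x = project_mean(project_box(x, lower, upper), target_mean)
    return x

x0 = np.random.default_rng(2).uniform(0.0, 10.0, size=50)
x = alternating_projections(x0, lower=0.0, upper=5.0, target_mean=2.0)
print(round(x.min(), 3), round(x.max(), 3), round(x.mean(), 3))
```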

  16. Intelligent perturbation algorithms to space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.

  17. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member sizes and layout of a truss are predicted, given the joint locations and loads.

  18. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field that is increasingly encountered in many application areas. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially for problems with more than two objectives. In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  19. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significance of the performance differences. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655

  20. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significance of the performance differences. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches.

  1. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. This algorithm can be applied to computational problems that involve path finding. The implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful for solving NP-hard problems that are related to path discovery, as well as many practical optimization problems. Extensions of the algorithm can be applied to further shortest path problems.

  2. Exact and Approximate Sizes of Convex Datacubes

    NASA Astrophysics Data System (ADS)

    Nedjar, Sébastien

    In various approaches, data cubes are pre-computed in order to efficiently answer Olap queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes or emerging cubes. Previously, we have introduced the concept of convex cube which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Actually, knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage and the data warehouse administration can be improved. However, the main interest of knowing this size is to choose the best constraints to apply in order to get a workable result. To aid in calibrating the constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of the convex cube, as well as an upper bound which can be computed very quickly. Moreover we adapt the nearly optimal algorithm HyperLogLog in order to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is really close to their exact size and can be computed quasi immediately.
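
    The exact-size characterization mentioned above rests on the inclusion-exclusion principle. The sketch below applies that principle to count the size of a union of finite sets and checks it against a direct union; it is a generic illustration, not the paper's characterization of convex cube size or its HyperLogLog-based approximation.

```python
from itertools import combinations

def union_size(sets):
    # Inclusion-exclusion: |A1 ∪ ... ∪ An| equals the alternating sum of the
    # sizes of all nonempty intersections.
    total = 0
    for k in range(1, len(sets) + 1):
        for subset in combinations(sets, k):
            total += (-1) ** (k + 1) * len(set.intersection(*subset))
    return total

A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
print(union_size([A, B, C]), len(A | B | C))   # both print 7
```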

  3. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  4. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, in both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  5. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  6. Hierarchical particle swarm optimizer for minimizing the non-convex potential energy of molecular structure.

    PubMed

    Cheung, Ngaam J; Shen, Hong-Bin

    2014-11-01

    The stable conformation of a molecule is greatly important for uncovering its properties and functions. Generally, the conformation of a molecule is most stable when its potential energy is at a minimum. Accordingly, determining the conformation can be cast as an optimization problem. It is, however, not an easy task to find the single conformation with the lowest energy among all potential ones, because of the high complexity of the energy landscape and the exponential growth of computation with molecular size. In this paper, we develop a hierarchical and heterogeneous particle swarm optimizer (HHPSO) to deal with the minimization of the potential energy. The proposed method is evaluated on a scalable simplified molecular potential energy function with up to 200 degrees of freedom and on a realistic energy function of the pseudo-ethane molecule. The experimental results are compared with six other PSO variants and four genetic algorithms. The results show HHPSO is significantly better than the compared PSOs, with p-values less than 0.01277 on the molecular potential energy function.
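
    As a minimal, generic illustration of particle swarm optimization on a multimodal landscape (not the hierarchical, heterogeneous HHPSO of this record, and not the molecular potential energy function used in the paper), the sketch below runs a standard global-best PSO on the Rastrigin test function. Parameter values follow common defaults and are assumptions; numpy is required.

```python
import numpy as np

def rastrigin(x):
    # Generic multimodal test function standing in for a potential-energy surface.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso(f, dim=10, swarm=30, iters=300, bounds=(-5.12, 5.12), seed=3):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm, dim))        # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.72, 1.49, 1.49                 # commonly used inertia/acceleration
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(rastrigin)
print(best_val)
```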

  7. Numerical Optimization of Synergetic Maneuvers

    DTIC Science & Technology

    1994-06-01

    optimality conditions is termed the Karush-Kuhn-Tucker (KKT) conditions. These necessary... point x. A point x* is called the optimal solution to the problem. ... B. Algorithms and Convergence: 1. Summary of Convexity and Optimality Conditions

  8. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971

  9. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases.

  10. Angelic Hierarchical Planning: Optimal and Online Algorithms

    DTIC Science & Technology

    2008-12-06

    describe an alternative "satisficing" algorithm, AHSS. 4.1 Abstract Lookahead Trees: Our ALT data structures support our search algorithms by efficiently... Angelic Hierarchical Satisficing Search (AHSS), which attempts to find a plan that reaches the goal with at most some pre-specified cost α. AHSS can be... much more efficient than AHA*, since it can commit to a plan without first proving its optimality. At each step, AHSS (see Algorithm 3) begins by

  11. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was first estimated using replica analysis by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they did not develop an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.
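
    For orientation only, the sketch below computes a classical global minimum-variance portfolio in closed form under a budget constraint; it is a convex mean-variance baseline, not the belief propagation algorithm, the absolute deviation model, or the expected shortfall model studied in this record. The return data are synthetic and numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(0.001, 0.02, size=(250, 5))   # synthetic daily returns
cov = np.cov(returns, rowvar=False)

# Global minimum-variance portfolio under the budget constraint sum(w) = 1:
# w* = inv(Sigma) @ 1 / (1' @ inv(Sigma) @ 1); short selling is allowed here.
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= ones @ w
print(w, w @ cov @ w)
```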

  12. Optimizing connected component labeling algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2005-04-01

    This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among them to reduce the number of neighbors examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer-based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of the pointer-based rooted trees speeds up the connected component labeling algorithms by a factor of 5 to 100 in our tests on random binary images.
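
    The second strategy described above stores label equivalences in a flat array rather than pointer-based trees. The sketch below shows a minimal array-based union-find with path compression of the kind such a scheme relies on; the specific merge rule (keep the smaller label as the root) is an assumption for illustration, not necessarily the authors' implementation.

```python
def find(parent, i):
    # Follow parent links to the representative label, compressing the path.
    root = i
    while parent[root] != root:
        root = parent[root]
    while parent[i] != root:
        parent[i], i = root, parent[i]
    return root

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)   # keep the smaller label as the root

# Equivalences discovered during a raster scan, stored in a flat array
# instead of pointer-based rooted trees.
parent = list(range(6))
for a, b in [(1, 2), (3, 4), (2, 4)]:
    union(parent, a, b)
print([find(parent, i) for i in range(6)])  # [0, 1, 1, 1, 1, 5]
```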

  13. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  14. An algorithm for online optimization of accelerators

    SciTech Connect

    Huang, Xiaobiao; Corbett, Jeff; Safranek, James; Wu, Juhao

    2013-10-01

    We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses a parabolic fit of data points that uniformly sample the bracketed zone. It is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
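
    A rough sketch of the noise-robust line-minimization idea described above (uniformly sampling the bracketed zone and fitting a parabola) is given below; it is not the RCDS implementation, and the noisy objective is invented for illustration. Requires numpy.

```python
import numpy as np

def parabolic_minimum(xs, ys):
    # Least-squares fit y ≈ a*x^2 + b*x + c over the bracketed zone, then
    # return the vertex -b / (2a) as the estimated minimizer.
    a, b, _ = np.polyfit(xs, ys, deg=2)
    if a <= 0:
        raise ValueError("no interior minimum in the bracketed data")
    return -b / (2 * a)

rng = np.random.default_rng(5)
xs = np.linspace(-1.0, 1.0, 21)                                # uniform samples
ys = (xs - 0.3) ** 2 + 0.01 * rng.standard_normal(xs.size)     # noisy objective
print(parabolic_minimum(xs, ys))                               # close to 0.3
```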

  15. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don; Porter, Reid

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  16. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  17. Optimal Hops-Based Adaptive Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

    This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the cluster forms so that the nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load. Optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the life of the network, improves the utilization rate and transmits more data because of energy balance.

  18. A global minimization algorithm for Tikhonov functionals with p-convex (p ≥ 2) penalty terms in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Wang, Wei

    2016-10-01

    We extend the globally convergent TIGRA method in Ramlau (2003 Inverse Prob. 19 433-65) for the computation of a minimizer of the Tikhonov-type functional with p-convex (p ≥ 2) penalty terms Θ for nonlinear forward operators in Banach spaces. The Θ are allowed to be non-smooth to include Lᵖ-L¹ or Lᵖ-TV (total variation) functionals, which are significant in reconstructing special features of solutions such as sparsity and discontinuities. The proposed TIGRA-Θ method uses a dual gradient descent method in the inner iteration and linearly decreases the regularization parameter in the outer iteration. We present the global convergence analysis for the algorithm under suitable parameter selections, and the convergence rate results are provided under both a priori and a posteriori stopping rules. Two numerical examples, an auto-convolution problem and a parameter identification problem, are presented to illustrate the theoretic analysis and verify the effectiveness of the method.

  19. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and its two extensions for improving its performance are presented. The BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces different approaches such as repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to maintain the balance between exploration and exploitation efficiently, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms which are based on the intelligent behavior of honey bees, on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust; they produce excellent results and outperform the other algorithms investigated in this study.

  20. An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.

    PubMed

    Bechikh, Slim; Chaabani, Abir; Ben Said, Lamjed

    2015-10-01

    Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This fact has made it, nowadays, one of the most powerful search algorithms in solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quasi-linear average time complexity quick nondominated sorting algorithm; thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front.

  1. BMI optimization by using parallel UNDX real-coded genetic algorithm with Beowulf cluster

    NASA Astrophysics Data System (ADS)

    Handa, Masaya; Kawanishi, Michihiro; Kanki, Hiroshi

    2007-12-01

    This paper deals with a global optimization algorithm for Bilinear Matrix Inequalities (BMIs) based on the Unimodal Normal Distribution Crossover (UNDX) GA. First, by analyzing the structure of the BMIs, the existence of typical difficult structures is confirmed. Then, in order to improve the performance of the algorithm, based on the results of the problem structure analysis and consideration of the characteristic properties of BMIs, we propose an algorithm that uses a primary search direction with a relaxed Linear Matrix Inequality (LMI) convex estimation. Moreover, within these algorithms, we propose two types of evaluation methods for GA individuals based on LMI calculations that further exploit the characteristic properties of BMIs. In addition, in order to reduce computational time, we propose a parallelization of the RCGA algorithm using the master-worker paradigm with cluster computing.

  2. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration. PMID:25147850

  3. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration.

  4. The effects of initial conditions and control time on optimal actuator placement via a max-min Genetic Algorithm

    SciTech Connect

    Redmond, J.; Parker, G.

    1993-07-01

    This paper examines the role of the control objective and the control time in determining fuel-optimal actuator placement for structural vibration suppression. A general theory is developed that can be easily extended to include alternative performance metrics such as energy and time-optimal control. The performance metric defines a convex admissible control set which leads to a max-min optimization problem expressing optimal location as a function of initial conditions and control time. A solution procedure based on a nested Genetic Algorithm is presented and applied to an example problem. Results indicate that the optimal locations vary widely as a function of control time and initial conditions.

  5. Protein structure optimization with a "Lamarckian" ant colony algorithm.

    PubMed

    Oakley, Mark T; Richardson, E Grace; Carr, Harriet; Johnston, Roy L

    2013-01-01

    We describe the LamarckiAnt algorithm: a search algorithm that combines the features of a "Lamarckian" genetic algorithm and ant colony optimization. We have implemented this algorithm for the optimization of BLN model proteins, which have frustrated energy landscapes and represent a challenge for global optimization algorithms. We demonstrate that LamarckiAnt performs competitively with other state-of-the-art optimization algorithms.

  6. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.

  7. Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.

    2011-01-01

    An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce 10s
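
    As a simplified illustration of posing powered descent as a convex program (assuming the cvxpy package is available), the sketch below discretizes double-integrator dynamics under a thrust-magnitude limit and minimizes total thrust. It omits mass depletion, the non-zero lower thrust bound handled by lossless convexification, glide-slope constraints, and the Mars-specific details of this record; all numerical values are invented.

```python
import cvxpy as cp
import numpy as np

N, dt, m, Tmax = 40, 1.0, 1500.0, 8000.0       # steps, step [s], mass [kg], max thrust [N]
g = np.array([0.0, 0.0, -1.62])                # uniform lunar-like gravity [m/s^2]
r = cp.Variable((3, N + 1))                    # position [m]
v = cp.Variable((3, N + 1))                    # velocity [m/s]
u = cp.Variable((3, N))                        # thrust [N]

cons = [r[:, 0] == np.array([400.0, 150.0, 1000.0]),
        v[:, 0] == np.array([-10.0, 5.0, -20.0]),
        r[:, N] == 0, v[:, N] == 0, r[2, :] >= 0]
for k in range(N):
    cons += [v[:, k + 1] == v[:, k] + dt * (u[:, k] / m + g),   # forward-Euler dynamics
             r[:, k + 1] == r[:, k] + dt * v[:, k],
             cp.norm(u[:, k]) <= Tmax]                          # second-order cone constraint

# Fuel use is roughly proportional to the integral of the thrust magnitude.
prob = cp.Problem(cp.Minimize(sum(cp.norm(u[:, k]) for k in range(N))), cons)
prob.solve()
print(prob.status, prob.value)
```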

  8. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue

    2016-01-01

    A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.

  9. Combinatorial Multiobjective Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Martin, Eric T.

    2002-01-01

    The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete / continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50-seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.

  10. Optimized TRIAD Algorithm for Attitude Determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    1996-01-01

    TRIAD is a well known simple algorithm that generates the attitude matrix between two coordinate systems when the components of two abstract vectors are given in the two systems. TRIAD, however, is sensitive to the order in which the algorithm handles the vectors, such that the resulting attitude matrix is influenced more by the vector processed first. In this work we present a new algorithm, which we call Optimized TRIAD, that blends in a specified manner the two matrices generated by TRIAD when processing one vector first, and then when processing the other vector first. On average, Optimized TRIAD yields a matrix which is better than either one of the two matrices in that it is the closest to the correct matrix. This result is demonstrated through simulation.
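
    For reference, the sketch below implements the classical TRIAD construction that processes the first vector preferentially; the Optimized TRIAD blending of the two orderings described in this record is not reproduced. The example rotation and observation vectors are arbitrary, and numpy is assumed.

```python
import numpy as np

def triad(v1_body, v2_body, v1_ref, v2_ref):
    # Build an orthonormal triad from each vector pair; the attitude matrix
    # maps reference-frame coordinates into body-frame coordinates.
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(v1_body, v2_body) @ frame(v1_ref, v2_ref).T

# Recover a known rotation from two noise-free vector observations.
theta = np.deg2rad(30.0)
A_true = np.array([[np.cos(theta), np.sin(theta), 0.0],
                   [-np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
v1_ref, v2_ref = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.2])
print(np.allclose(triad(A_true @ v1_ref, A_true @ v2_ref, v1_ref, v2_ref), A_true))
```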

  11. Algorithm for fixed-range optimal trajectories

    NASA Technical Reports Server (NTRS)

    Lee, H. Q.; Erzberger, H.

    1980-01-01

    An algorithm for synthesizing optimal aircraft trajectories for specified range was developed and implemented in a computer program written in FORTRAN IV. The algorithm, its computer implementation, and a set of example optimum trajectories for the Boeing 727-100 aircraft are described. The algorithm optimizes trajectories with respect to a cost function that is the weighted sum of fuel cost and time cost. The optimum trajectory consists of at most three segments: climb, cruise, and descent. The climb and descent profiles are generated by integrating a simplified set of kinematic and dynamic equations wherein the total energy of the aircraft is the independent, or time-like, variable. At each energy level the optimum airspeeds and thrust settings are obtained as the values that minimize the variational Hamiltonian. Although the emphasis is on an off-line, open-loop computation, eventually the most important application will be in an on-board flight management system.

  12. An efficient algorithm for numerical airfoil optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1979-01-01

    A new optimization algorithm is presented. The method is based on sequential application of a second-order Taylor's series approximation to the airfoil characteristics. Compared to previous methods, design efficiency improvements of more than a factor of 2 are demonstrated. If multiple optimizations are performed, the efficiency improvements are more dramatic due to the ability of the technique to utilize existing data. The method is demonstrated by application to subsonic and transonic airfoil design but is a general optimization technique and is not limited to a particular application or aerodynamic analysis.

  13. Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.

    PubMed

    Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A

    2016-04-12

    The machine learning method kriging is an attractive tool to construct next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely, particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that with the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum with whichever precision is desired.
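
    A rough sketch of the two-stage strategy described above (a global search over the hyperparameter box followed by L-BFGS-B refinement) is given below, using differential evolution from scipy; the objective is a hypothetical stand-in, not a kriging concentrated log-likelihood. Requires numpy and scipy.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def neg_log_likelihood(theta):
    # Hypothetical stand-in for a concentrated log-likelihood; the real kriging
    # objective depends on the training data and the kernel hyperparameters.
    return np.sum((np.log(theta) - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * theta) ** 2)

bounds = [(1e-3, 1e3)] * 4

# Global stage: differential evolution explores the hyperparameter box.
coarse = differential_evolution(neg_log_likelihood, bounds, seed=6, maxiter=50)

# Local stage: quasi-Newton L-BFGS-B polishes the solution within the bounds.
refined = minimize(neg_log_likelihood, coarse.x, method="L-BFGS-B", bounds=bounds)
print(coarse.fun, refined.fun, refined.x)
```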

  14. Convex Modeling of Interactions with Strong Heredity

    PubMed Central

    Haris, Asad; Witten, Daniela; Simon, Noah

    2015-01-01

    We consider the task of fitting a regression model involving interactions among a potentially large set of covariates, in which we wish to enforce strong heredity. We propose FAMILY, a very general framework for this task. Our proposal is a generalization of several existing methods, such as VANISH [Radchenko and James, 2010], hierNet [Bien et al., 2013], the all-pairs lasso, and the lasso using only main effects. It can be formulated as the solution to a convex optimization problem, which we solve using an efficient alternating directions method of multipliers (ADMM) algorithm. This algorithm has guaranteed convergence to the global optimum, can be easily specialized to any convex penalty function of interest, and allows for a straightforward extension to the setting of generalized linear models. We derive an unbiased estimator of the degrees of freedom of FAMILY, and explore its performance in a simulation study and on an HIV sequence data set.

  15. Global Binary Optimization on Graphs for Classification of High Dimensional Data

    DTIC Science & Technology

    2014-09-01

    convex because the binary side constraints (16) are non-convex. We show that the binary constraints can be replaced by their convex hull [0, 1] to... high-dimensional data into two classes. It combines recent convex optimization methods from imaging with recent graph-based variational models for data... segmentation. Two convex splitting algorithms are proposed, where graph-based PDE techniques are used to solve some of the subproblems. It is shown

  16. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.

  17. Sequential unconstrained minimization algorithms for constrained optimization

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2008-02-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊆ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = cl(D), the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}) for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
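
    One of the classical special cases of SUMMA mentioned above is the barrier-function method. The sketch below runs a one-dimensional log-barrier sequence for minimizing x² subject to x ≥ 1, shrinking the barrier weight at each outer step; the specific problem and schedule are illustrative assumptions, not an example from the record. Requires numpy and scipy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def barrier_objective(x, mu):
    # G(x) = f(x) + mu * b(x): quadratic objective plus a log-barrier that keeps
    # iterates strictly inside the feasible region D = (1, infinity).
    return x**2 - mu * np.log(x - 1.0)

mu = 1.0
for _ in range(20):
    res = minimize_scalar(barrier_objective, args=(mu,),
                          bounds=(1.0 + 1e-9, 10.0), method="bounded")
    mu *= 0.5                 # shrink the barrier weight at each outer step
print(res.x)                  # approaches the constrained minimizer x = 1
```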

  18. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and, in determining whether a project is economically feasible or not. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) A modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind-farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions and, (iii) A hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe their proper behaviors to account for the stochastic comportment of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  19. Optimized dynamical decoupling via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory; Lidar, Daniel A.

    2013-11-01

    We utilize genetic algorithms aided by simulated annealing to find optimal dynamical decoupling (DD) sequences for a single-qubit system subjected to a general decoherence model under a variety of control pulse conditions. We focus on the case of sequences with equal pulse intervals and perform the optimization with respect to pulse type and order. In this manner, we obtain robust DD sequences, first in the limit of ideal pulses, then when including pulse imperfections such as finite-pulse duration and qubit rotation (flip-angle) errors. Although our optimization is numerical, we identify a deterministic structure that underlies the top-performing sequences. We use this structure to devise DD sequences which outperform previously designed concatenated DD (CDD) and quadratic DD (QDD) sequences in the presence of pulse errors. We explain our findings using time-dependent perturbation theory and provide a detailed scaling analysis of the optimal sequences.

  20. Magnetic resonance image reconstruction using trained geometric directions in 2D redundant wavelets domain and non-convex optimization.

    PubMed

    Ning, Bende; Qu, Xiaobo; Guo, Di; Hu, Changwei; Chen, Zhong

    2013-11-01

    Reducing scanning time is important for MRI. Compressed sensing has shown promising results by undersampling the k-space data to speed up imaging. The sparsity of an image plays an important role in compressed sensing MRI for reducing image artifacts. Recently, the method of patch-based directional wavelets (PBDW), which trains geometric directions from undersampled data, has been proposed. It preserves image edges better than conventional sparsifying transforms. However, obvious artifacts appear in smooth regions when the data are highly undersampled. In addition, the original PBDW-based method shows no obvious improvement for radial and fully 2D random sampling patterns. In this paper, PBDW-based MRI reconstruction is improved in two respects: 1) an efficient non-convex minimization algorithm is modified to enhance image quality; 2) PBDW is extended into the shift-invariant discrete wavelet domain to strengthen the transform's ability to sparsify piecewise-smooth image features. Numerical simulation results on in vivo magnetic resonance images demonstrate that the proposed method outperforms the original PBDW in removing artifacts and preserving edges.

  1. Polynomial Local Improvement Algorithms in Combinatorial Optimization.

    DTIC Science & Technology

    1981-11-01

    Technical Report SOL 81-21, Polynomial Local Improvement Algorithms in Combinatorial Optimization. Stanford, CA 94305. Controlling office: Office of Naval Research, Dept. of the Navy, November 1981. [Abstract fragment] ... corresponds to a node of the tree. (ii) The father of a vertex is its optimal adjacent vertex; if a vertex is a local optimum, it has no father. The tree is ...

  2. FOGSAA: Fast Optimal Global Sequence Alignment Algorithm

    NASA Astrophysics Data System (ADS)

    Chakraborty, Angana; Bandyopadhyay, Sanghamitra

    2013-04-01

    In this article we propose a Fast Optimal Global Sequence Alignment Algorithm, FOGSAA, which aligns a pair of nucleotide/protein sequences faster than any optimal global alignment method including the widely used Needleman-Wunsch (NW) algorithm. FOGSAA is applicable for all types of sequences, with any scoring scheme, and with or without affine gap penalty. Compared to NW, FOGSAA achieves a time gain of (70-90)% for highly similar nucleotide sequences (> 80% similarity), and (54-70)% for sequences having (30-80)% similarity. For other sequences, it terminates with an approximate score. For protein sequences, the average time gain is between (25-40)%. Compared to three heuristic global alignment methods, the quality of alignment is improved by about 23%-53%. FOGSAA is, in general, suitable for aligning any two sequences defined over a finite alphabet set, where the quality of the global alignment is of supreme importance.
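
    For orientation, the sketch below implements the classical Needleman-Wunsch global-alignment recurrence that FOGSAA is compared against; it is not FOGSAA itself (which adds branch-and-bound-style pruning of the alignment tree), and the match/mismatch/gap scores are illustrative.

```python
# Baseline Needleman-Wunsch global alignment score (the comparison point for
# FOGSAA). Linear gap penalty; the scoring values are illustrative.
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```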

  3. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1990-01-01

    The optimization of space operations is examined in the light of optimization heuristics for computer algorithms and iterative search techniques. Specific attention is given to the search concepts known collectively as intelligent perturbation algorithms (IPAs) and their application to crew/resource allocation problems. IPAs iteratively examine successive schedules which become progressively more efficient, and the characteristics of good perturbation operators are listed. IPAs can be applied to aerospace systems to efficiently utilize crews, payloads, and resources in the context of systems such as Space-Station scheduling. A program is presented called the MFIVE Space Station Scheduling Worksheet which generates task assignments and resource usage structures. The IPAs can be used to develop flexible manifesting and scheduling for the Industrial Space Facility.

  4. Optical flow optimization using parallel genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe

    2011-06-01

    A new approach to optimize the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters which conform the number of channels, the orientations required, the length and shape of the kernel functions used in the convolution stage, among many more. The GA is used to find a set of parameters which improve the accuracy of the optical flow on inputs where the ground-truth data is available. This set of parameters helps to understand which of them are better suited for each type of inputs and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speedup the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.

  5. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
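
    As a concrete illustration of the GA mechanics described above (fitness-based selection, crossover, mutation) on a problem with discrete design variables, here is a minimal sketch; the toy objective, gene encoding, and GA settings are illustrative assumptions, not the launch-vehicle MDO problem itself.

```python
# Minimal genetic algorithm over discrete design variables (e.g. engine count,
# material index). The toy fitness and parameters are illustrative only.
import random

random.seed(1)
N_ENGINES = range(1, 9)           # discrete design choices
MATERIALS = range(4)

def fitness(ind):                 # higher is better (toy objective)
    engines, material = ind
    return -((engines - 5) ** 2) - ((material - 2) ** 2)

def tournament(pop):              # selection: best of a random sample
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):              # swap one of the two genes
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def mutate(ind, rate=0.1):        # re-draw a gene from its discrete domain
    engines, material = ind
    if random.random() < rate:
        engines = random.choice(N_ENGINES)
    if random.random() < rate:
        material = random.choice(MATERIALS)
    return (engines, material)

pop = [(random.choice(N_ENGINES), random.choice(MATERIALS)) for _ in range(20)]
for gen in range(30):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]
best = max(pop, key=fitness)
print(best, fitness(best))
```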

  6. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, alternatives that remain in the evolutionary-algorithm category but use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by its use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line; a sketch of this construct is given below. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal-distribution parameters and the geometrical-construct variables.
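
    A minimal sketch of the child-generation step described above, assuming a standard Gram-Schmidt construction for the orthogonal direction and illustrative values for the weighting and spread parameters (none of these settings come from the record).

```python
# Sketch of Bell-Curve Based (BCB) offspring generation: a child is placed at a
# weighted point on the line joining two parents, then perturbed by normal
# deviations parallel and orthogonal to that line. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_ort=0.1):
    d = p2 - p1
    norm = np.linalg.norm(d)
    if norm == 0:
        return p1 + rng.normal(scale=sigma_ort, size=p1.shape)
    u = d / norm                                   # unit vector along parent line
    base = p1 + w * d                              # weighted point on the line
    # random direction orthogonal to u (Gram-Schmidt on a random vector)
    r = rng.normal(size=p1.shape)
    r -= np.dot(r, u) * u
    r /= np.linalg.norm(r)
    return (base
            + rng.normal(scale=sigma_par * norm) * u    # parallel deviation
            + rng.normal(scale=sigma_ort * norm) * r)   # orthogonal deviation

child = bcb_child(np.array([0.0, 0.0, 1.0]), np.array([1.0, 2.0, 0.0]))
print(child)
```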

  7. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).

  8. Accurate quantification of local changes for carotid arteries in 3D ultrasound images using convex optimization-based deformable registration

    NASA Astrophysics Data System (ADS)

    Cheng, Jieyu; Qiu, Wu; Yuan, Jing; Fenster, Aaron; Chiu, Bernard

    2016-03-01

    Registration of longitudinally acquired 3D ultrasound (US) images plays an important role in monitoring and quantifying progression/regression of carotid atherosclerosis. We introduce an image-based non-rigid registration algorithm to align the baseline 3D carotid US with longitudinal images acquired over several follow-up time points. This algorithm minimizes the sum of absolute intensity differences (SAD) under a variational optical-flow perspective within a multi-scale optimization framework to capture local and global deformations. Outer wall and lumen were segmented manually on each image, and the performance of the registration algorithm was quantified by Dice similarity coefficient (DSC) and mean absolute distance (MAD) of the outer wall and lumen surfaces after registration. In this study, images for 5 subjects were registered initially by rigid registration, followed by the proposed algorithm. Mean DSC generated by the proposed algorithm was 79.3 ± 3.8% for lumen and 85.9 ± 4.0% for outer wall, compared to 73.9 ± 3.4% and 84.7 ± 3.2% generated by rigid registration. Mean MAD of 0.46 ± 0.08 mm and 0.52 ± 0.13 mm were generated for lumen and outer wall respectively by the proposed algorithm, compared to 0.55 ± 0.08 mm and 0.54 ± 0.11 mm generated by rigid registration. The mean registration time of our method per image pair was 143 ± 23 s.

  9. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy.

    PubMed

    Lee, Chae Young; Song, Hankyeol; Park, Chan Woo; Chung, Yong Hyun; Kim, Jin Sung; Park, Justin C

    2016-01-01

    The purposes of this study were to optimize a proton computed tomography system (pCT) for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors for pCT. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a CS-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were found to be of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS based proton CT reconstruction algorithm. Further, we make our optimized detector system and CS-based proton CT reconstruction algorithm potentially useful in on-line proton therapy.

  10. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy

    PubMed Central

    Lee, Chae Young; Song, Hankyeol; Park, Chan Woo; Chung, Yong Hyun; Park, Justin C.

    2016-01-01

    The purposes of this study were to optimize a proton computed tomography system (pCT) for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors for pCT. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a CS-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were found to be of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS based proton CT reconstruction algorithm. Further, we make our optimized detector system and CS-based proton CT reconstruction algorithm potentially useful in on-line proton therapy. PMID:27243822

  11. Unification of algorithms for minimum mode optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Yi; Xiao, Penghao; Henkelman, Graeme

    2014-01-01

    Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.
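
    The key computational primitive shared by these minimum-mode methods is estimating the lowest-curvature mode from Hessian-vector products only. A minimal sketch under that assumption is shown below: Hessian-vector products are approximated by finite differences of an assumed gradient function and fed to a Krylov-subspace eigensolver. The test potential is illustrative, not a chemical system.

```python
# Find the lowest-curvature mode without forming the Hessian: Hessian-vector
# products are approximated by finite differences of the gradient and passed to
# a Krylov-subspace eigensolver. The toy potential is illustrative.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def grad(x):                               # gradient of f = x0^2 + 2*x1^2 - x2^2
    return np.array([2.0 * x[0], 4.0 * x[1], -2.0 * x[2]])

x0 = np.array([0.5, 0.3, 0.2])             # point at which curvature is probed
eps = 1e-5

def hessvec(v):                            # H v ~= (g(x0 + eps v) - g(x0)) / eps
    v = np.asarray(v).ravel()
    return (grad(x0 + eps * v) - grad(x0)) / eps

H = LinearOperator((3, 3), matvec=hessvec)
eigval, eigvec = eigsh(H, k=1, which='SA')  # 'SA': smallest algebraic eigenvalue
print(eigval[0], eigvec[:, 0])              # lowest curvature is -2 along x2
```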

  12. Unification of algorithms for minimum mode optimization.

    PubMed

    Zeng, Yi; Xiao, Penghao; Henkelman, Graeme

    2014-01-28

    Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.

  13. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space during the global optimization process. An approach is proposed that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, the search begins with randomly created interval vectors whose widths span the whole domain. Before the evolutionary process starts, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization step returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, reducing the interval widths over time and using a selection approach similar to simulated annealing yields reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in the search for the global optimum.
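
    A minimal sketch of the interval-bounding ingredient described above: a tiny interval type with just the arithmetic needed to enclose a polynomial objective over a box, which is the kind of guaranteed bound that the hybrid method refines as interval widths shrink. The objective and the box are illustrative assumptions.

```python
# Tiny interval arithmetic: enclose the range of an objective over a box, the
# basic operation combined with evolutionary search in the approach above.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)
    __radd__ = __add__

    def __mul__(self, other):
        other = _as_interval(other)
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    __rmul__ = __mul__

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def _as_interval(v):
    return v if isinstance(v, Interval) else Interval(v, v)

def objective(x, y):                 # illustrative polynomial objective
    return x * x + (-2.0) * x * y + y * y + 3.0

# Guaranteed (if not tight) bounds of the objective over the box [-1, 2] x [0, 1]
print(objective(Interval(-1.0, 2.0), Interval(0.0, 1.0)))
```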

  14. OPC recipe optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asthana, Abhishek; Wilkinson, Bill; Power, Dave

    2016-03-01

    Optimization of OPC recipes is not trivial due to multiple parameters that need tuning and their correlation. Usually, no standard methodologies exist for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen either based on previous learning, vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of parameters on other or possible new designs, and to an extent are based on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, which make it more complex. The present work demonstrates the application of Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. In this case, GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters and, (b) application of GA to improve printing and via coverage at line end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b) line end for various features showed significant printing and filling improvement.

  15. Lunar Habitat Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against each of the mentioned hazards. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.

  16. Multi-criteria optimal pole assignment robust controller design for uncertainty systems using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko

    2016-09-01

    This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm - differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, the properties of which are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time performance indexes, controller structures, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
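
    Differential evolution is available off the shelf in SciPy; the minimal sketch below shows how the free parameters of a fixed-structure controller could be tuned against a scalar cost by DE. The first-order plant, PI controller structure, cost weights, and parameter bounds are illustrative stand-ins, not the multi-criteria quasi-convex objectives of the record.

```python
# Tuning a fixed-structure (PI) controller with differential evolution, as an
# illustration of evolutionary search over controller parameters. The plant,
# cost weights, and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

def cost(gains, dt=0.01, t_end=5.0):
    kp, ki = gains
    x, integ, J = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - x                          # unit step reference
        integ += e * dt
        u = kp * e + ki * integ              # PI control law
        x += (-x + u) * dt                   # plant dx/dt = -x + u (Euler step)
        J += (e ** 2 + 0.01 * u ** 2) * dt   # tracking error + control effort
    return J

result = differential_evolution(cost, bounds=[(0.0, 10.0), (0.0, 10.0)],
                                seed=1, maxiter=60)
print(result.x, result.fun)
```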

  17. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  18. Convex Hull Aided Registration Method (CHARM).

    PubMed

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2016-08-31

    Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. Firstly, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve nonrigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.

  19. Expedite Particle Swarm Optimization Algorithm (EPSO) for Optimization of MSA

    NASA Astrophysics Data System (ADS)

    Rathi, Amit; Vijay, Ritu

    This paper presents a new design method for a rectangular-patch microstrip antenna (MSA) using an artificial search algorithm under some constraints. The design requires two stages. In the first stage, the bandwidth of the MSA is modeled using a benchmark function. In the second stage, the output of the first stage is given as input to a modified artificial search algorithm, particle swarm optimization (PSO), whose output consists of five parameters: dimensional width, frequency range, dielectric loss tangent, length over a ground plane with substrate thickness, and electrical thickness. In PSO, the cognition factor and the social-learning factor strongly affect the balance between local and global search. Building on a modification of these two factors, this paper presents a strategy in which the cognition-learning factor has more influence at the start of the process than the social-learning factor, and the social-learning factor gradually gains more influence later in the run to locate the global best. The aim is to determine whether, under these circumstances, such modifications to PSO give better results for the optimization of the microstrip antenna.
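
    A minimal sketch of the modification described above: particle swarm optimization in which the cognitive factor starts high and decays while the social factor grows, shifting the search from personal-best to global-best emphasis over the run. The benchmark objective, bounds, and coefficient schedules are illustrative assumptions, not the antenna design model.

```python
# PSO with a time-varying cognitive factor (c1, decreasing) and social factor
# (c2, increasing), following the strategy described above. The objective and
# settings are illustrative; a real run would map particles to antenna parameters.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                        # sphere benchmark (minimize)
    return np.sum(x ** 2, axis=1)

n_particles, dim, iters = 30, 5, 100
lo, hi = -5.0, 5.0
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for t in range(iters):
    frac = t / (iters - 1)
    c1 = 2.5 - 1.5 * frac                # cognition dominates early ...
    c2 = 1.0 + 1.5 * frac                # ... social learning dominates late
    w = 0.7                              # inertia weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest, pbest_val.min())
```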

  20. A Modified BFGS Formula Using a Trust Region Model for Nonsmooth Convex Minimizations

    PubMed Central

    Cui, Zengru; Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie; Wang, Xiaoliang; Duan, Xiabin

    2015-01-01

    This paper proposes a modified BFGS formula using a trust region model for solving nonsmooth convex minimizations by using the Moreau-Yosida regularization (smoothing) approach and a new secant equation with a BFGS update formula. Our algorithm uses the function value information and gradient value information to compute the Hessian. The Hessian matrix is updated by the BFGS formula rather than using second-order information of the function, thus decreasing the workload and time involved in the computation. Under suitable conditions, the algorithm converges globally to an optimal solution. Numerical results show that this algorithm can successfully solve nonsmooth unconstrained convex problems. PMID:26501775
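
    For reference, here is a minimal sketch of the standard BFGS Hessian-approximation update that the paper modifies; the modified secant equation, Moreau-Yosida smoothing, and trust-region machinery of the record are not reproduced, and the quadratic test problem is illustrative.

```python
# Standard BFGS update of an approximate Hessian B from one step s and gradient
# change y; the paper builds a modified version of this rank-two formula.
import numpy as np

def bfgs_update(B, s, y):
    # B_{k+1} = B - (B s s^T B) / (s^T B s) + (y y^T) / (y^T s)
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Illustrative smooth quadratic f(x) = 1/2 x^T A x with gradient A x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x

B = np.eye(2)                          # initial Hessian approximation
x = np.array([1.0, 1.0])
for _ in range(5):
    p = -np.linalg.solve(B, grad(x))   # quasi-Newton step
    x_new = x + p
    s, y = x_new - x, grad(x_new) - grad(x)
    if np.linalg.norm(s) < 1e-12:      # converged; avoid a degenerate update
        break
    B = bfgs_update(B, s, y)
    x = x_new

print(x, B)                            # x approaches 0 and B approximates A
```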

  1. PDE Nozzle Optimization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Billings, Dana; Turner, James E. (Technical Monitor)

    2000-01-01

    Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimal, fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation-wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method had identified a nozzle profile that is a strong candidate for the optimal solution. The constraints on the generality of this possible solution remain to be clarified.

  2. Optimizing doped libraries by using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Tomandl, Dirk; Schober, Andreas; Schwienhorst, Andreas

    1997-01-01

    The insertion of random sequences into protein-encoding genes in combination with biological selection techniques has become a valuable tool in the design of molecules that have useful and possibly novel properties. By employing highly effective screening protocols, a functional and unique structure that had not been anticipated can be distinguished among a huge collection of inactive molecules that together represent all possible amino acid combinations. This technique is severely limited by its restriction to a library of manageable size. One approach for limiting the size of a mutant library relies on 'doping schemes', where subsets of amino acids are generated that reveal only certain combinations of amino acids in a protein sequence. Three mononucleotide mixtures for each codon concerned must be designed, such that the resulting codons that are assembled during chemical gene synthesis represent the desired amino acid mixture on the level of the translated protein. In this paper we present a doping algorithm that 'reverse translates' a desired mixture of certain amino acids into three mixtures of mononucleotides. The algorithm is designed to optimally bias these mixtures towards the codons of choice. This approach combines a genetic algorithm with local optimization strategies based on the downhill simplex method. Disparate relative representations of all amino acids (and stop codons) within a target set can be generated. Optional weighing factors are employed to emphasize the frequencies of certain amino acids and their codon usage, and to compensate for reaction rates of different mononucleotide building blocks (synthons) during chemical DNA synthesis. The effect of statistical errors that accompany an experimental realization of calculated nucleotide mixtures on the generated mixtures of amino acids is simulated. These simulations show that the robustness of different optima with respect to small deviations from calculated values depends on their concomitant fitness. Furthermore ...

  3. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
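
    The brute-force baseline mentioned above (enumerating candidate ARMA orders and scoring each maximum-likelihood fit by an information criterion) can be sketched with statsmodels as below. This is only the enumeration comparison point, not the MINLP or mesh-adaptive-search formulation of the record, and the simulated series and order ranges are illustrative assumptions.

```python
# Brute-force ARMA structure selection by AIC: enumerate (p, q), fit each model
# by maximum likelihood (Kalman-filter based in statsmodels), keep the lowest AIC.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Illustrative ARMA(1,1)-like series
y = np.zeros(300)
e = rng.normal(size=300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

best = None
for p in range(0, 4):
    for q in range(0, 4):
        try:
            fit = ARIMA(y, order=(p, 0, q)).fit()
        except Exception:
            continue                      # skip failed / non-invertible fits
        if best is None or fit.aic < best[0]:
            best = (fit.aic, p, q)

print("best (AIC, p, q):", best)
```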

  4. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and local exploration abilities of the proposed algorithm. This helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the ant colony algorithm, and the conventional approach. Comparisons of the speed of convergence confirm that the proposed algorithm converges faster and in less computation time to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is used efficiently to design an optimal PID controller.

  5. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, we adopt several improved strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm while exploiting the advantages of each. The method is validated on widely used standard sequences, namely Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, which demonstrates that it is an effective way to predict the structure of proteins.

  6. Crystal-structure prediction via the Floppy-Box Monte Carlo algorithm: Method and application to hard (non)convex particles

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
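
    The method of separating axes mentioned above can be illustrated in two dimensions: two convex polygons do not overlap if and only if some edge normal of either polygon separates their projections. The sketch below implements that 2D test; the FBMC code works with 3D polyhedra and also uses a triangulation-based test, neither of which is reproduced here, and the example shapes are illustrative.

```python
# 2D separating-axis overlap test for convex polygons (vertices given in order).
# Two convex shapes are disjoint iff their projections onto some edge normal
# are disjoint.
import numpy as np

def project(poly, axis):
    d = poly @ axis
    return d.min(), d.max()

def convex_polygons_overlap(poly_a, poly_b):
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            edge = poly[(i + 1) % n] - poly[i]
            axis = np.array([-edge[1], edge[0]])      # normal to this edge
            a_lo, a_hi = project(poly_a, axis)
            b_lo, b_hi = project(poly_b, axis)
            if a_hi < b_lo or b_hi < a_lo:            # separating axis found
                return False
    return True                                        # no separating axis exists

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangle = np.array([[0.5, 0.5], [2.0, 0.5], [0.5, 2.0]])
print(convex_polygons_overlap(square, triangle))        # True (they intersect)
print(convex_polygons_overlap(square, triangle + 3.0))  # False (separated)
```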

  7. Crystal-structure prediction via the floppy-box Monte Carlo algorithm: method and application to hard (non)convex particles.

    PubMed

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-07

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009)] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011)] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.

  8. GENERALIZED CONVEXITY CONES.

    DTIC Science & Technology

    Contents: Introduction; The dual cone of C(ψ1, ..., ψn); Extreme rays; The cone dual to an intersection of generalized convexity cones; Generalized difference quotients and multivariate convexity; Miscellaneous applications of generalized convexity.

  9. Algorithm for correcting optimization convergence errors in Eclipse.

    PubMed

    Zacarias, Albert S; Mills, Michael D

    2009-10-14

    IMRT plans generated in Eclipse use a fast algorithm to evaluate dose for optimization and a more accurate algorithm for a final dose calculation, the Analytical Anisotropic Algorithm. The use of a fast optimization algorithm introduces optimization convergence errors into an IMRT plan. Eclipse has a feature where optimization may be performed on top of an existing base plan. This feature allows for the possibility of arriving at a recursive solution to optimization that relies on the accuracy of the final dose calculation algorithm and not the optimizer algorithm. When an IMRT plan is used as a base plan for a second optimization, the second optimization can compensate for heterogeneity and modulator errors in the original base plan. Plans with the same field arrangement as the initial base plan may be added together by adding the initial plan optimal fluence to the dose correcting plan optimal fluence. A simple procedure to correct for optimization errors is presented that may be implemented in the Eclipse treatment planning system, along with an Excel spreadsheet to add optimized fluence maps together.

  10. Convex Models of Malfunction Diagnosis in High Performance Aircraft

    DTIC Science & Technology

    1989-05-01

    [Abstract fragments] ... initiated as in the open-loop mode: with one fixed non-zero control function. The time-dependent controller is actuated as soon as any of the state ... (controllers) the diagnosis algorithm is designed by solving a sequence of linear optimization problems. ...

  11. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear array so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking along with results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance.
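
    The quantities being optimized above (side lobe level and null placement for a given set of element positions) can be evaluated directly from the array factor; a minimal sketch of that evaluation follows. The element positions, wavelength, uniform excitation, and main-beam mask are illustrative assumptions, and the FPA search loop itself is not reproduced.

```python
# Evaluate the array factor of a linear array of isotropic elements and report
# the peak side lobe level (dB) -- the kind of objective an FPA run would minimize.
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
positions = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # element positions (illustrative)

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)    # angle from broadside
af = np.abs(np.exp(1j * k * np.outer(np.sin(theta), positions)).sum(axis=1))
af_db = 20 * np.log10(af / af.max())

# Peak side lobe: highest value outside the main beam around broadside
# (the 0.45 rad mask sits just beyond the first null of this particular array)
main_beam = np.abs(theta) < 0.45
sll_db = af_db[~main_beam].max()
print("peak side lobe level:", round(sll_db, 2), "dB")
```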

  12. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures

    PubMed Central

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-01-01

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called “TVDS”) is proposed. It can extract displacement data without targets, which then serve as feature points in the image of the structure. The TVDS can extract and track the feature points without the target in the image through image convex hull optimization, which is done to adjust the threshold values and to optimize them so that they can have the same convex hull in every image frame and so that the center of the convex hull is the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated based on the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data, and were compared with the numerical analysis results. TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract the displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image. PMID:27941635
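
    A minimal sketch of the feature-extraction idea described above: threshold a grayscale frame, take the convex hull of the foreground pixels, and use the hull's centroid as the tracked feature point. The synthetic frame and fixed threshold are illustrative; the actual TVDS additionally adapts the threshold so the hull stays consistent across frames and converts pixels to physical units through the scaling-factor map.

```python
# Convex-hull feature point from one image frame: threshold, hull of foreground
# pixel coordinates, centroid of the hull vertices as the tracked point.
import numpy as np
from scipy.spatial import ConvexHull

# Illustrative synthetic frame: dark background with one bright blob
frame = np.zeros((120, 160))
yy, xx = np.mgrid[0:120, 0:160]
frame[(yy - 60) ** 2 + (xx - 90) ** 2 < 20 ** 2] = 1.0

threshold = 0.5                                    # fixed threshold (illustrative)
ys, xs = np.nonzero(frame > threshold)
points = np.column_stack([xs, ys]).astype(float)   # (x, y) pixel coordinates

hull = ConvexHull(points)
hull_vertices = points[hull.vertices]
feature_point = hull_vertices.mean(axis=0)         # centroid of hull vertices
print("feature point (pixels):", feature_point)
```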

  13. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called "TVDS") is proposed. It can extract displacement data without targets, which then serve as feature points in the image of the structure. The TVDS can extract and track the feature points without the target in the image through image convex hull optimization, which is done to adjust the threshold values and to optimize them so that they can have the same convex hull in every image frame and so that the center of the convex hull is the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated based on the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data, and were compared with the numerical analysis results. TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract the displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.

  14. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  15. Applying fuzzy clustering optimization algorithm to extracting traffic spatial pattern

    NASA Astrophysics Data System (ADS)

    Hu, Chunchun; Shi, Wenzhong; Meng, Lingkui; Liu, Min

    2009-10-01

    Traditional analytical methods for traffic information cannot meet the needs of intelligent traffic systems; mining value-added information can address more traffic problems. This paper develops a new clustering optimization algorithm to extract useful spatial clustered patterns for predicting long-term traffic flow from a macroscopic view. Because the FCM algorithm is sensitive to its initial parameters and easily falls into local extrema, the new algorithm applies the Particle Swarm Optimization method, which can discover the globally optimal result, to the FCM algorithm, and it uses a combination of a clustering validity index and the FCM objective function as the fitness function of the PSO. The experimental results indicate that the method is effective and efficient. For fuzzy clustering of road traffic data, it can produce useful spatial clustered patterns, and the cluster centers represent locations with heavy traffic flow. Moreover, the parameters of the patterns can provide intelligent traffic systems with decision support.
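
    A minimal sketch of the FCM core that the PSO layer is wrapped around: alternating updates of the membership matrix and cluster centers for fuzzifier m = 2. The synthetic 2-D data, cluster count, and iteration budget are illustrative; the PSO initialization and validity-index fitness of the record are not reproduced.

```python
# Plain fuzzy c-means (the clustering core that the PSO-enhanced algorithm
# initializes and tunes). Data, c, and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.5, (50, 2)),
                  rng.normal(4, 0.5, (50, 2))])        # two synthetic clusters

c, m, n_iter = 2, 2.0, 50
u = rng.random((len(data), c))
u /= u.sum(axis=1, keepdims=True)                      # memberships, rows sum to 1

for _ in range(n_iter):
    um = u ** m
    centers = (um.T @ data) / um.sum(axis=0)[:, None]  # weighted cluster means
    dist = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-10
    # standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
    u = 1.0 / ratio.sum(axis=2)

print("cluster centers:\n", centers)
```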

  16. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  17. Transonic Wing Shape Optimization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.

  18. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  19. Genetic-Algorithm Tool For Search And Optimization

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven

    1995-01-01

    SPLICER computer program used to solve search and optimization problems. Genetic algorithms adaptive search procedures (i.e., problem-solving methods) based loosely on processes of natural selection and Darwinian "survival of fittest." Algorithms apply genetically inspired operators to populations of potential solutions in iterative fashion, creating new populations while searching for optimal or nearly optimal solution to problem at hand. Written in Think C.

  20. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    PubMed Central

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and it is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) that adds a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416

  1. Final Technical Report: Sparse Grid Scenario Generation and Interior Algorithms for Stochastic Optimization in a Parallel Computing Environment

    SciTech Connect

    Mehrotra, Sanjay

    2016-09-07

    The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44] and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new ‘central’ cutting surface algorithm developed for solving large-scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.

  2. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
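
    For reference, a minimal sketch of the plain Gerchberg-Saxton loop that both modified algorithms build on: alternate between the object and Fourier domains, keeping the estimated phase and replacing the amplitude with the measured one in each domain. The synthetic amplitudes and iteration count are illustrative; the spatial phase perturbation and HIO hybridization of the record are not included.

```python
# Plain Gerchberg-Saxton phase retrieval: enforce the known amplitude in the
# object domain and in the Fourier domain on alternating half-steps.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ground truth: known object amplitude, unknown phase to recover
obj_amp = np.ones((64, 64))
true_phase = rng.uniform(-np.pi, np.pi, (64, 64))
fourier_amp = np.abs(np.fft.fft2(obj_amp * np.exp(1j * true_phase)))  # "measured"

field = obj_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (64, 64)))   # random start
for _ in range(200):
    F = np.fft.fft2(field)
    F = fourier_amp * np.exp(1j * np.angle(F))        # impose Fourier amplitude
    field = np.fft.ifft2(F)
    field = obj_amp * np.exp(1j * np.angle(field))    # impose object amplitude

# Fourier-domain amplitude error of the reconstruction
err = (np.linalg.norm(np.abs(np.fft.fft2(field)) - fourier_amp)
       / np.linalg.norm(fourier_amp))
print("relative Fourier amplitude error:", err)
```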

  3. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  4. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms exhibit a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated by changes in the environment guide different levels of immune response, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through their own danger signals and then triggers immune responses of self-regulation, so the population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions that meet the required accuracies within the specified number of function evaluations. PMID:23483853

  5. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques allow a 3-dimensional hyperspectral scene to be captured from 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance, and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions; exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis. For this purpose, an optimization problem that seeks to minimize a joint l2-l1 norm is solved to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that seeks to minimize the l2-norm, penalized by the l1-norm to force the solution to be sparse, and penalized by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting the low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of the peak signal-to-noise ratio (PSNR).
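
    The two penalties in the formulation above are usually handled through their proximal operators: entrywise soft-thresholding for the l1 term and singular value thresholding for the nuclear norm. The sketch below is a hedged illustration of those two building blocks (variable names and thresholds are assumptions, not the paper's implementation).

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (entrywise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of tau * ||X||_* (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Usage on a small matrix unfolding of a hyperspectral cube (toy data).
X = np.random.randn(16, 8)
sparse_step = soft_threshold(X, 0.1)
low_rank_step = singular_value_threshold(X, 0.5)
```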

  6. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
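
    As a point of reference, the sketch below shows the classical PRP+ nonlinear conjugate gradient iteration with a simple Armijo backtracking line search; it only illustrates the βk ≥ 0 idea, since the paper's modified methods are designed to work without any line search.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Classical PRP+ nonlinear conjugate gradient with backtracking (illustrative only;
    the paper's modified PRP methods avoid line searches and use different direction updates)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search (assumed parameters).
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g @ d:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+ keeps beta_k >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on a simple quadratic test function.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
print(prp_plus_cg(f, grad, np.array([3.0, -4.0])))
```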

  7. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
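
    For context, the sketch below shows one generation of the classic DE/rand/1/bin scheme (mutation, binomial crossover, greedy selection); the unified mutation equation and the self-adaptive parameter control proposed in the paper are not reproduced here, and F and CR are assumed fixed values.

```python
import numpy as np

def de_rand_1_bin(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin (illustrative; the paper's unified
    mutation equation and self-adaptive parameter control are not reproduced here)."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])        # mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                   # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])           # binomial crossover
        f_trial = f_obj(trial)
        if f_trial <= fitness[i]:                         # greedy selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

# Usage on the sphere function.
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))
fit = np.array([np.sum(x ** 2) for x in pop])
pop, fit = de_rand_1_bin(pop, fit, lambda x: np.sum(x ** 2), rng=rng)
```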

  8. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. The two methods both possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409

  9. Evaluation of a particle swarm algorithm for biomechanical optimization.

    PubMed

    Schutte, Jaco F; Koh, Byung-Il; Reinbolt, Jeffrey A; Haftka, Raphael T; George, Alan D; Fregly, Benjamin J

    2005-06-01

    Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm's global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms--a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units.
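
    As background, the sketch below shows the canonical global-best PSO velocity and position updates; the inertia weight and acceleration coefficients are assumed values, and the specific scale-independent PSO variant evaluated in the study is not reproduced.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO (illustrative; the evaluated algorithm is a specific
    variant whose scale-independence properties are not reproduced here)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # position update
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Usage on the 2-D Rosenbrock function.
rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
best_x, best_f = pso_minimize(rosen, (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
```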

  10. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC's local search process and its bee-movement (solution improvement) equation still have some weaknesses: the ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that the HPABC algorithm outperforms the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
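
    For reference, the sketch below shows the standard ABC solution-improvement step that the paper identifies as a weakness, v_ij = x_ij + φ(x_ij − x_kj); the HPABC variant replaces this with a PSO-like particle movement, which is not reproduced here.

```python
import numpy as np

def abc_neighbor(x, population, i, rng):
    """Standard ABC solution-improvement step v_ij = x_ij + phi * (x_ij - x_kj)
    (illustrative; HPABC substitutes a PSO-like movement for this update)."""
    n, dim = population.shape
    k = rng.choice([j for j in range(n) if j != i])   # random neighbor
    j = rng.integers(dim)                             # random dimension
    phi = rng.uniform(-1.0, 1.0)
    v = x.copy()
    v[j] = x[j] + phi * (x[j] - population[k, j])
    return v

# Usage: improve the first food source of a small random population.
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(10, 4))
trial = abc_neighbor(pop[0], pop, 0, rng)
```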

  11. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  12. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both functions that have been empirically shown to perform best and new functions that may be worth trying.

  13. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.

  14. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  15. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  16. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    SciTech Connect

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.

  17. Kidney-inspired algorithm for optimization problems

    NASA Astrophysics Data System (ADS)

    Jaddi, Najmeh Sadat; Alvankarian, Jafar; Abdullah, Salwani

    2017-01-01

    In this paper, a population-based algorithm inspired by the kidney process in the human body is proposed. In this algorithm the solutions are filtered at a rate calculated from the mean of the objective values of all solutions in the current population at each iteration. The filtered solutions, as the better solutions, are moved to the filtered blood, and the rest are transferred to the waste, representing the worse solutions; this simulates the glomerular filtration process in the kidney. A waste solution is reconsidered in later iterations if, after a defined movement operator is applied, it satisfies the filtration rate; otherwise it is expelled from the waste, simulating the reabsorption and excretion functions of the kidney. In addition, a solution assigned to the better solutions is secreted if it is not better than the worst solutions, simulating the secretion process of blood in the kidney. After placement of all the solutions, the best of them is ranked, the waste and the filtered blood are merged to become a new population, and the filtration rate is updated. Filtration provides the required exploitation while generating a new solution, and reabsorption gives the necessary exploration for the algorithm. The algorithm is assessed by applying it to eight well-known benchmark test functions, and the results are compared with other algorithms in the literature. The performance of the proposed algorithm is better on seven out of eight test functions when compared with the most recent studies in the literature. The proposed kidney-inspired algorithm is able to find the global optimum with fewer function evaluations on six out of eight test functions. A statistical analysis further confirms the ability of this algorithm to produce good-quality results.

  18. Fast-convergence superpixel algorithm via an approximate optimization

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2016-09-01

    We propose an optimization scheme that achieves fast yet accurate computation of superpixels from an image. Our optimization is designed to improve the efficiency and robustness of the minimization of a composite energy functional in the expectation-minimization (EM) framework, where we restrict the update of an estimate to avoid redundant computations. We consider a superpixel energy formulation that uses the L2-norm for the spatial regularity and the L1-norm for the data fidelity in demonstrating the robustness of the proposed algorithm. The quantitative and qualitative evaluations indicate that our superpixel algorithm outperforms the SLIC and SEEDS algorithms. It is also demonstrated that our algorithm guarantees convergence with, on average, up to 89% less computational cost than the SLIC algorithm while preserving accuracy. Our optimization scheme can be easily extended to other applications in which alternating minimization is applicable in the EM framework.

  19. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  20. Path Optimization for Single and Multiple Searchers: Models and Algorithms

    DTIC Science & Technology

    2008-09-01

    Search-result excerpt (fragmentary): at the k-th iteration of Algorithm 11, the master problem MP4(k) is solved; its optimal value and optimal solution are denoted z(k) and y(k), respectively. In each iteration of Algorithm 11, U cuts are generated at once. The excerpt then outlines the steps of solving MP4(k) to obtain z(k) and y(k), setting q = z(k) whenever z(k) > q, and calculating fu(y(k)).

  1. An algorithm for the systematic disturbance of optimal rotational solutions

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kaiser, Mary K.

    1989-01-01

    An algorithm for introducing a systematic rotational disturbance into an optimal (i.e., single axis) rotational trajectory is described. This disturbance introduces a motion vector orthogonal to the quaternion-defined optimal rotation axis. By altering the magnitude of this vector, the degree of non-optimality can be controlled. The metric properties of the distortion parameter are described, with analogies to two-dimensional translational motion. This algorithm was implemented in a motion-control program on a three-dimensional graphic workstation. It supports a series of human performance studies on the detectability of rotational trajectory optimality by naive observers.

  2. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.

  3. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  4. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness and the optimization problem is considered as a whole; error back-propagation gradient descent is used to train the BP neural network, and each particle updates its velocity and position according to its individual best and the global best. Particles are made to learn more from the social (global) best and less from their individual best, which helps them avoid local optima, while the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and remains close to it, and that it achieves faster convergence and better search performance in the same running time, improving convergence speed and, especially, the efficiency of the later stages of the search.

  5. PCB Drill Path Optimization by Combinatorial Cuckoo Search Algorithm

    PubMed Central

    Lim, Wei Chen Esmonde; Kanagaraj, G.; Ponnambalam, S. G.

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process. PMID:24707198

  6. PCB drill path optimization by combinatorial cuckoo search algorithm.

    PubMed

    Lim, Wei Chen Esmonde; Kanagaraj, G; Ponnambalam, S G

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process.

  7. Shape Optimization of Cochlear Implant Electrode Array Using Genetic Algorithms

    DTIC Science & Technology

    2007-11-02

    Search-result excerpt (fragmentary): Shape Optimization of Cochlear Implant Electrode Array using Genetic Algorithms, by Charles T.M. Choi (c.t.choi@ieee.org). Abstract: finite element analysis is used to compute the current distribution of the human cochlea during cochlear implant electrical stimulation; genetic algorithms are then applied in conjunction with the finite element analysis to optimize the shape of the cochlear implant electrode array.

  8. Superscattering of light optimized by a genetic algorithm

    SciTech Connect

    Mirzaei, Ali Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.

    2014-07-07

    We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.

  9. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in the proper order. We have shown that a genetic algorithm has good properties for this sorting scheme. In this paper, this optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The results show that the genetic algorithm is superior to other algorithms.

  10. Differential evolution algorithm for global optimizations in nuclear physics

    NASA Astrophysics Data System (ADS)

    Qi, Chong

    2017-04-01

    We explore the applicability of the differential evolution algorithm in finding the global minima of three typical nuclear structure physics problems: the global deformation minimum in the nuclear potential energy surface, the optimization of mass model parameters and the lowest eigenvalue of a nuclear Hamiltonian. The algorithm works very effectively and efficiently in identifying the minima in all problems we have tested. We also show that the algorithm can be parallelized in a straightforward way.

  11. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  12. Relaxed controls and the convergence of optimal control algorithms

    NASA Technical Reports Server (NTRS)

    Williamson, L. J.; Polak, E.

    1976-01-01

    This paper presents a framework for the study of the convergence properties of optimal control algorithms and illustrates its use by means of two examples. The framework consists of an algorithm prototype with a convergence theorem, together with some results in relaxed controls theory.

  13. Applying new optimization algorithms to more predictive control

    SciTech Connect

    Wright, S.J.

    1996-03-01

    The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.

  14. Genetic algorithm for neural networks optimization

    NASA Astrophysics Data System (ADS)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and the Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  15. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distributions algorithms (EDA) that build a statistical model of promising regions of the design space based on sets of good points, and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.

  16. Uniformly convex and strictly convex Orlicz spaces

    NASA Astrophysics Data System (ADS)

    Masta, Al Azhary

    2016-02-01

    In this paper we define a new norm on Orlicz spaces over ℝn through a multiplication operator acting on a classical Orlicz space. We obtain some necessary and sufficient conditions for the space equipped with the new norm to be uniformly convex and strictly convex.

  17. Multi-class DTI Segmentation: A Convex Approach.

    PubMed

    Xie, Yuchen; Chen, Ting; Ho, Jeffrey; Vemuri, Baba C

    2012-10-01

    In this paper, we propose a novel variational framework for multi-class DTI segmentation based on global convex optimization. The existing variational approaches to the DTI segmentation problem have mainly used gradient-descent type optimization techniques which are slow in convergence and sensitive to the initialization. This paper on the other hand provides a new perspective on the often difficult optimization problem in DTI segmentation by providing a reasonably tight convex approximation (relaxation) of the original problem, and the relaxed convex problem can then be efficiently solved using various methods such as primal-dual type algorithms. To the best of our knowledge, such a DTI segmentation technique has never been reported in literature. We also show that a variety of tensor metrics (similarity measures) can be easily incorporated in the proposed framework. Experimental results on both synthetic and real diffusion tensor images clearly demonstrate the advantages of our method in terms of segmentation accuracy and robustness. In particular, when compared with existing state-of-the-art methods, our results demonstrate convincingly the importance as well as the benefit of using more refined and elaborated optimization method in diffusion tensor MR image segmentation.

  18. Global search algorithm for optimal control

    NASA Technical Reports Server (NTRS)

    Brocker, D. H.; Kavanaugh, W. P.; Stewart, E. C.

    1970-01-01

    Random-search algorithm employs local and global properties to solve two-point boundary value problem in Pontryagin maximum principle for either fixed or variable end-time problems. Mixed boundary value problem is transformed to an initial value problem. Mapping between initial and terminal values utilizes hybrid computer.

  19. Optimization of deep learning algorithms for object classification

    NASA Astrophysics Data System (ADS)

    Horváth, András.

    2017-02-01

    Deep learning is currently the state-of-the-art algorithm for image classification. The complexity of these feedforward neural networks has passed a critical point, resulting in algorithmic breakthroughs in various fields. On the other hand, their complexity means they are only executable in tasks where high-throughput computing power is available. The optimization of these networks, considering computational complexity and applicability on embedded systems, has not yet been studied and investigated in detail. In this paper I show some examples of how these algorithms can be optimized and accelerated on embedded systems.

  20. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.

  1. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  2. Improving the accuracy of convexity splitting methods for gradient flow equations

    NASA Astrophysics Data System (ADS)

    Glasner, Karl; Orizaga, Saulo

    2016-06-01

    This paper introduces numerical time discretization methods which significantly improve the accuracy of the convexity-splitting approach of Eyre (1998) [7], while retaining the same numerical cost and stability properties. A first order method is constructed by iteration of a semi-implicit method based upon decomposing the energy into convex and concave parts. A second order method is also presented based on backwards differentiation formulas. Several extrapolation procedures for iteration initialization are proposed. We show that, under broad circumstances, these methods have an energy decreasing property, leading to good numerical stability. The new schemes are tested using two evolution equations commonly used in materials science: the Cahn-Hilliard equation and the phase field crystal equation. We find that our methods can increase accuracy by many orders of magnitude in comparison to the original convexity-splitting algorithm. In addition, the optimal methods require little or no iteration, making their computation cost similar to the original algorithm.
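
    To make the idea concrete, the sketch below applies a first-order, linearly stabilized convex-splitting-type step to the 1-D Cahn-Hilliard equation using a pseudo-spectral discretisation; the stabilization constant, interface parameter, and time step are assumptions, and the paper's iterated and BDF2 schemes are not reproduced.

```python
import numpy as np

def cahn_hilliard_step(u, dt, eps=0.05, A=2.0, L=2 * np.pi):
    """One first-order, linearly stabilized convex-splitting-type step for the 1-D
    Cahn-Hilliard equation u_t = (u^3 - u - eps^2 u_xx)_xx, solved pseudo-spectrally.
    (Illustrative sketch; the paper's iterated and BDF2 schemes are not reproduced.)"""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    k2, k4 = k ** 2, k ** 4
    phi_hat = np.fft.fft(u ** 3 - u)          # explicit (concave-split) part
    u_hat = np.fft.fft(u)
    u_hat_new = (u_hat * (1 + dt * A * k2) - dt * k2 * phi_hat) / (
        1 + dt * A * k2 + dt * eps ** 2 * k4)  # implicit stabilized + surface terms
    return np.real(np.fft.ifft(u_hat_new))

# Usage: evolve a random initial condition for a few steps.
rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal(256)
for _ in range(100):
    u = cahn_hilliard_step(u, dt=1e-3)
```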

  3. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    Pulse Coupled Neural Networks (PCNN) are widely used in the field of image processing, but it is difficult to set the relevant parameters properly in PCNN applications. So far, determining the model's parameters has required a lot of experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm and adopts the bacterial foraging optimization algorithm to search for the optimal parameters, eliminating the need to set the experimental parameters manually. Experimental results show that the proposed algorithm can effectively perform document segmentation and that its segmentation results are better than those of the comparison algorithms.

  4. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869

  5. A Discrete Lagrangian Algorithm for Optimal Routing Problems

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.

    2008-11-06

    The ideas of discrete Lagrangian methods for conservative systems are exploited for the construction of algorithms applicable to optimal ship routing problems. The algorithm presented here is based on the discretisation of Hamilton's principle of stationary action, and specifically on the direct discretisation of the Lagrange-Hamilton principle for a conservative system. Since, in contrast to the differential equations, the discrete Euler-Lagrange equations serve as constraints for the optimization of a given cost functional, in the present work we utilize this feature in order to minimize the cost function for optimal ship routing.
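
    For readers new to discrete mechanics, a generic form of the discrete action and the discrete Euler-Lagrange equations referred to above is sketched here; the authors' specific discretisation of the ship-routing Lagrangian may differ.

```latex
% Generic discrete action and discrete Euler-Lagrange equations (illustrative form).
\begin{align*}
S_d &= \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}),
\qquad
L_d(q_k, q_{k+1}) \approx \int_{t_k}^{t_{k+1}} L\bigl(q(t), \dot q(t)\bigr)\, dt, \\
0 &= D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}),
\qquad k = 1, \dots, N-1.
\end{align*}
```

    In the routing context these discrete equations act as constraints while the route cost functional is minimized, as described above.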

  6. Optimal Configuration of a Square Array Group Testing Algorithm

    PubMed Central

    Hudgens, Michael G.; Kim, Hae-Young

    2009-01-01

    We consider the optimal configuration of a square array group testing algorithm (denoted A2) to minimize the expected number of tests per specimen. For prevalence greater than 0.2498, individual testing is shown to be more efficient than A2. For prevalence less than 0.2498, closed form lower and upper bounds on the optimal group sizes for A2 are given. Arrays of dimension 2 × 2, 3 × 3, and 4 × 4 are shown to never be optimal. The results are illustrated by considering the design of a specimen pooling algorithm for detection of recent HIV infections in Malawi. PMID:21218195
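
    A quick way to see how the expected number of tests per specimen behaves is a small Monte Carlo simulation of a square array without a master pool: test every row and column pool, then retest the specimens whose row and column are both positive. The counting rule and the perfect-assay assumption below are simplifications, so the numbers are only indicative and the paper's bounds on the optimal group size should be consulted for design decisions.

```python
import numpy as np

def expected_tests_per_specimen(n, p, reps=10000, seed=0):
    """Monte Carlo estimate of tests per specimen for an n x n square-array procedure:
    2n pool tests plus individual retests of specimens in a positive row and positive
    column (assumes perfect assays; this counting rule is an assumption and may differ
    in detail from the paper's A2 definition)."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(reps):
        status = rng.random((n, n)) < p              # True = positive specimen
        row_pos = status.any(axis=1)
        col_pos = status.any(axis=0)
        retests = np.outer(row_pos, col_pos).sum()   # individual follow-up tests
        total += 2 * n + retests
    return total / (reps * n * n)

# Usage: compare a few array sizes at 5% prevalence.
for n in (5, 8, 10, 12):
    print(n, round(expected_tests_per_specimen(n, 0.05), 3))
```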

  7. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to the results obtained by conventional gradient search method.

  8. Multidisciplinary Optimization of Airborne Radome Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Tang, Xinggang; Zhang, Weihong; Zhu, Jihong

    A multidisciplinary optimization scheme of airborne radome is proposed. The optimization procedure takes into account the structural and the electromagnetic responses simultaneously. The structural analysis is performed with the finite element method using Patran/Nastran, while the electromagnetic analysis is carried out using the Plane Wave Spectrum and Surface Integration technique. The genetic algorithm is employed for the multidisciplinary optimization process. The thicknesses of multilayer radome wall are optimized to maximize the overall transmission coefficient of the antenna-radome system under the constraint of the structural failure criteria. The proposed scheme and the optimization approach are successfully assessed with an illustrative numerical example.

  9. OPTIMIZATION OF LONG RURAL FEEDERS USING A GENETIC ALGORITHM

    SciTech Connect

    Wishart, Michael; Ledwich, Gerard; Ghosh, Arindam; Ivanovich, Grujica

    2010-06-15

    This paper describes the optimization of conductor size and the voltage regulator location and magnitude of long rural distribution lines. The optimization minimizes the lifetime cost of the lines, including capital costs and losses while observing voltage drop and operational constraints using a Genetic Algorithm (GA). The GA optimization is applied to a real Single Wire Earth Return (SWER) network in regional Queensland and results are presented.

  10. Sequential Quadratic Programming Algorithms for Optimization

    DTIC Science & Technology

    1989-08-01

    Search-result excerpt (fragmentary): the report gives a brief history of the evolution of SQP algorithms and points to published surveys of the area; the remainder of the indexed excerpt concerns the adjustment of the slack variables in one step of the algorithm.

  11. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring that can result from a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  12. Comparison of evolutionary algorithms for LPDA antenna optimization

    NASA Astrophysics Data System (ADS)

    Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.

    2016-08-01

    A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are the Differential Evolution (DE), Particle Swarm (PSO), Taguchi, Invasive Weed (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.

  13. A Hybrid Ant Colony Algorithm for Loading Pattern Optimization

    NASA Astrophysics Data System (ADS)

    Hoareau, F.

    2014-06-01

    Electricité de France (EDF) operates 58 nuclear power plants (NPP) of the Pressurized Water Reactor (PWR) type. The loading pattern (LP) optimization of these NPP is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts. The latter can resort, for instance, to loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm based on an ant colony and a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.

  14. Approximating convex Pareto surfaces in multiobjective radiotherapy planning

    SciTech Connect

    Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.

    2006-09-15

    Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-to-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing.
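
    For intuition, a convex bi-objective tradeoff can be traced by sweeping the weight in a scalarized objective, as in the hedged sketch below; this is only an illustration, since the article's algorithm is specifically designed to place points so that they are well distributed on the Pareto surface, which plain weight sweeps do not guarantee. The two quadratic objectives and their labels are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_sum_pareto(f1, f2, x0, n_points=11):
    """Trace a convex bi-objective Pareto front by sweeping the weight of a scalarized
    objective (illustrative; the article's method chooses points to be well distributed,
    which simple weight sweeps do not guarantee)."""
    front = []
    for w in np.linspace(0.0, 1.0, n_points):
        res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0)
        front.append((f1(res.x), f2(res.x)))
    return front

# Usage: two convex quadratics standing in for competing planning objectives (toy surrogates).
f1 = lambda x: np.sum((x - 1.0) ** 2)   # assumed surrogate for target coverage
f2 = lambda x: np.sum((x + 1.0) ** 2)   # assumed surrogate for normal tissue sparing
print(weighted_sum_pareto(f1, f2, np.zeros(2)))
```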

  15. Splitting Methods for Convex Clustering

    PubMed Central

    Chi, Eric C.; Lange, Kenneth

    2016-01-01

    Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplemental materials available online. PMID:27087770
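
    The objective typically minimized in convex clustering is a quadratic fidelity term plus a norm-based fusion penalty on pairs of centroids. The sketch below only evaluates that objective (uniform pair weights are an assumption); the ADMM and AMA splitting solvers developed in the paper are not reproduced.

```python
import numpy as np

def convex_clustering_objective(X, U, gamma, weights=None):
    """Evaluate the convex clustering objective
       0.5 * sum_i ||x_i - u_i||^2 + gamma * sum_{i<j} w_ij ||u_i - u_j||
    (illustrative; the paper minimizes this with ADMM and AMA splitting methods,
    which are not reproduced here; uniform weights are an assumption)."""
    n = X.shape[0]
    fit = 0.5 * np.sum((X - U) ** 2)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = 1.0 if weights is None else weights[i, j]
            penalty += w * np.linalg.norm(U[i] - U[j])
    return fit + gamma * penalty

# Usage: centroids initialized at the data points.
X = np.random.default_rng(0).normal(size=(10, 2))
print(convex_clustering_objective(X, X.copy(), gamma=0.5))
```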

  16. A Solution Quality Assessment Method for Swarm Intelligence Optimization Algorithms

    PubMed Central

    Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, so there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems; this greatly limits the application of such algorithms to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of “value performance,” “ordinal performance” is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts; then the solution space and the “good enough” set can be decomposed based on the clustering results. Finally, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, several intelligent algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845

  17. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, so there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems; this greatly limits the application of such algorithms to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts; then the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, several intelligent algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  18. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.

  19. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998

  20. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm.

    PubMed

    Wang, Jiaxi; Lin, Boliang; Jin, Junchen

    2016-01-01

    The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality.
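
    The EPSO details above are specific to the SSED model, but the baseline particle swarm update that such enhanced variants build on is compact. The following is a minimal, generic PSO sketch for a continuous test problem; the sphere objective, inertia weight, and acceleration coefficients are illustrative assumptions and not the EPSO or the SSED objective of the paper.

        import random

        def pso(objective, dim, bounds, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimizer (illustrative; not the paper's EPSO)."""
            lo, hi = bounds
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            pbest_val = [objective(p) for p in pos]
            g = min(range(n_particles), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[g][:], pbest_val[g]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                    val = objective(pos[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest, gbest_val

        # Usage with a placeholder objective (sphere function):
        best, best_val = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))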

  1. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.

  2. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.

  3. Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.

    2006-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.

  4. Optimization of computer-generated binary holograms using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Cojoc, Dan; Alexandrescu, Adrian

    1999-11-01

    The aim of this paper is to compare genetic algorithms with direct point-oriented coding in the design of computer-generated binary phase Fourier holograms. These are used as fan-out elements for free-space optical interconnection. Genetic algorithms are optimization methods that model the natural process of genetic evolution. The configuration of the hologram is encoded to form a chromosome. To start the optimization, a randomly generated population of different chromosomes is considered. The chromosomes compete, mate, and mutate until the best chromosome is obtained according to a cost function. After explaining the operators used by genetic algorithms, this paper presents two examples with 32 X 32 genes in a chromosome. The crossover type and the number of mutations are shown to be important factors that influence the convergence of the algorithm. The GA is demonstrated to be a useful tool for designing binary phase holograms with complicated structures.
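
    As a rough illustration of the encoding just described, the sketch below evolves a small binary chromosome with tournament selection, one-point crossover, and bit-flip mutation. The fitness function is a stand-in that simply counts ones; the real cost function for a fan-out hologram would score the diffraction pattern of the phase mask, which is not reproduced here, and the population sizes are arbitrary assumptions.

        import random

        def evolve(fitness, n_genes=64, pop_size=40, generations=100,
                   crossover_rate=0.8, mutations_per_child=2):
            """Toy binary GA: tournament selection, one-point crossover, bit-flip mutation."""
            pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
            def tournament():
                a, b = random.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b
            for _ in range(generations):
                children = []
                while len(children) < pop_size:
                    p1, p2 = tournament(), tournament()
                    if random.random() < crossover_rate:
                        cut = random.randint(1, n_genes - 1)
                        child = p1[:cut] + p2[cut:]
                    else:
                        child = p1[:]
                    for _ in range(mutations_per_child):
                        j = random.randrange(n_genes)
                        child[j] ^= 1           # flip one binary "phase pixel"
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        # Placeholder fitness: number of ones (a real hologram cost would evaluate the far-field pattern).
        best = evolve(lambda chrom: sum(chrom))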

  5. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches out-perform it in terms of both solution quality and execution time.
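
    A hedged sketch of the simulated-annealing half of such a hybrid is given below. It perturbs circle centres to reduce a weighted-imbalance measure only; the container-radius objective, overlap handling, and compaction step of the paper are omitted, and the cooling schedule is an arbitrary assumption.

        import math, random

        def imbalance(centres, weights):
            """Distance of the weighted centre of mass from the container centre (origin)."""
            total = sum(weights)
            cx = sum(w * x for (x, _), w in zip(centres, weights)) / total
            cy = sum(w * y for (_, y), w in zip(centres, weights)) / total
            return math.hypot(cx, cy)

        def anneal(centres, weights, t0=1.0, t_end=1e-3, cooling=0.95, moves_per_temp=50):
            """Toy simulated annealing on circle centres (no overlap constraints here)."""
            current = [tuple(c) for c in centres]
            best, best_cost = current[:], imbalance(current, weights)
            cost, t = best_cost, t0
            while t > t_end:
                for _ in range(moves_per_temp):
                    i = random.randrange(len(current))
                    cand = current[:]
                    x, y = cand[i]
                    cand[i] = (x + random.gauss(0, t), y + random.gauss(0, t))
                    cand_cost = imbalance(cand, weights)
                    if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / t):
                        current, cost = cand, cand_cost
                        if cost < best_cost:
                            best, best_cost = current[:], cost
                t *= cooling
            return best, best_cost

        # Usage: three weighted circles placed arbitrarily.
        layout, residual = anneal([(1.0, 0.0), (-0.5, 0.8), (0.2, -1.1)], weights=[2.0, 1.0, 1.5])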

  6. Multi-objective Optimization on Helium Liquefier Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, H. R.; Xiong, L. Y.; Peng, N.; Meng, Y. R.; Liu, L. Q.

    2017-02-01

    Research on the optimization of helium liquefiers is limited, and most existing work is single-objective optimization based on the Collins cycle. In this paper, a multi-objective optimization is conducted using a genetic algorithm (GA) on the 40 L/h helium liquefier developed by the Technical Institute of Physics and Chemistry of the Chinese Academy of Sciences (TIPC, CAS), and steady solutions are obtained. In addition, the exergy loss of the optimized system is studied with and without liquid nitrogen pre-cooling. The results have guiding significance for the future design of large helium liquefiers.

  7. Swarm algorithms with chaotic jumps for optimization of multimodal functions

    NASA Astrophysics Data System (ADS)

    Krohling, Renato A.; Mendel, Eduardo; Campos, Mauro

    2011-11-01

    In this article, the use of some well-known versions of particle swarm optimization (PSO), namely the canonical PSO, the bare bones PSO (BBPSO), and the fully informed particle swarm (FIPS), is investigated on multimodal optimization problems. A hybrid approach, consisting of swarm algorithms combined with a jump strategy to escape from local optima, is developed and tested. The jump strategy is based on the chaotic logistic map. The hybrid algorithm was tested for all three versions of PSO, and simulation results show that the addition of the jump strategy improves the performance of the swarm algorithms for most of the investigated optimization problems. A comparison with the off-the-shelf PSO with local topology (lbest model) has also been performed and indicates the superior performance of the PSO with chaotic jumps over the standard PSO, both using the local topology (lbest model).
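
    The chaotic jump component is small enough to sketch directly. Under the usual reading of such schemes, a logistic-map sequence drives an occasional perturbation of a stalled best position; the jump scale and stagnation trigger used below are assumptions for illustration, not the exact rule of the article.

        class ChaoticJump:
            """Logistic-map sequence z <- 4 z (1 - z), rescaled to a perturbation in [-1, 1]."""
            def __init__(self, seed=0.37):
                self.z = seed                      # any value in (0, 1) off the map's fixed points
            def step(self):
                self.z = 4.0 * self.z * (1.0 - self.z)
                return 2.0 * self.z - 1.0

        def jump_if_stagnant(position, stagnation_counter, chaos, scale=0.5, threshold=10):
            """Kick a stalled best position with a chaotic perturbation (illustrative trigger rule)."""
            if stagnation_counter < threshold:
                return position
            return [x + scale * chaos.step() for x in position]

        # Example: a 3-dimensional best position that has not improved for 12 iterations gets a kick.
        chaos = ChaoticJump()
        new_best = jump_if_stagnant([0.1, -0.4, 0.9], stagnation_counter=12, chaos=chaos)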

  8. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.

  9. An algorithm for optimal structural design with frequency constraints

    NASA Technical Reports Server (NTRS)

    Kiusalaas, J.; Shaw, R. C. J.

    1978-01-01

    The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) a small number of design iterations are needed to reach optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm will work for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.

  10. Benchmarking derivative-free optimization algorithms.

    SciTech Connect

    More', J. J.; Wild, S. M.; Mathematics and Computer Science; Cornell Univ.

    2009-01-01

    We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested.
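
    The data-profile idea can be illustrated compactly: for each solver and problem, record the number of function evaluations needed to satisfy the convergence test, then report the fraction of problems solved within a budget measured in units of simplex gradients (n_p + 1 evaluations). The sketch below assumes those evaluation counts are already available and uses hypothetical numbers; it is not the authors' code.

        def data_profile(evals_to_solve, dims, budgets):
            """
            evals_to_solve[solver][problem]: evaluations needed to pass the convergence
            test, or None if the problem was never solved.
            dims[problem]: number of variables n_p of each problem.
            Returns, per solver, the fraction of problems solved within
            alpha * (n_p + 1) evaluations for each alpha in `budgets`.
            """
            profiles = {}
            n_problems = len(dims)
            for solver, counts in evals_to_solve.items():
                profiles[solver] = []
                for alpha in budgets:
                    solved = sum(
                        1 for p, t in enumerate(counts)
                        if t is not None and t <= alpha * (dims[p] + 1)
                    )
                    profiles[solver].append(solved / n_problems)
            return profiles

        # Hypothetical evaluation counts for two solvers on three problems of dimension 2, 5, 10.
        counts = {"model_based": [12, 40, 90], "direct_search": [30, None, 200]}
        prof = data_profile(counts, dims=[2, 5, 10], budgets=[5, 10, 20])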

  11. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    SciTech Connect

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

    This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It settles the BOA into the framework of problem decomposition used frequently in order to model and understand the behavior of simple genetic algorithms. The growth of the population size and the number of generations until convergence with respect to the size of a problem is theoretically analyzed. The theoretical results are supported by a number of experiments.

  12. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J.

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
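
    The algorithm described above underlies the widely used L-BFGS-B code; one common way to try it today is through SciPy's interface, as sketched below on a small bound-constrained Rosenbrock problem. The test problem, bounds, and option values are illustrative choices, not part of the original report.

        import numpy as np
        from scipy.optimize import minimize

        def rosenbrock(x):
            return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

        def rosenbrock_grad(x):
            g = np.zeros_like(x)
            g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
            g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
            return g

        x0 = np.array([-1.2, 1.0, -1.2, 1.0])
        bounds = [(-2.0, 2.0)] * len(x0)           # simple bounds, as the algorithm assumes

        result = minimize(rosenbrock, x0, jac=rosenbrock_grad,
                          method="L-BFGS-B", bounds=bounds,
                          options={"maxcor": 10})  # maxcor = number of limited-memory corrections
        print(result.x, result.fun)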

  13. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
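
    The non-dominated sorting step mentioned above is a generic component that is easy to sketch: each candidate is ranked by how many Pareto fronts must be peeled away before it appears. The two-objective example below (flight time vs. propellant mass) uses made-up numbers purely for illustration and is not the Q-law tool itself.

        def dominates(a, b):
            """True if objective vector a dominates b (minimisation in every objective)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def non_dominated_fronts(objectives):
            """Return a list of fronts; front 0 is the current Pareto-optimal set."""
            remaining = list(range(len(objectives)))
            fronts = []
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(objectives[j], objectives[i])
                                    for j in remaining if j != i)]
                fronts.append(front)
                remaining = [i for i in remaining if i not in front]
            return fronts

        # Hypothetical (flight time [days], propellant mass [kg]) pairs for five candidate parameter sets.
        candidates = [(120, 310), (150, 250), (130, 290), (200, 240), (140, 320)]
        print(non_dominated_fronts(candidates))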

  14. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multidisciplinary optimization of a typical transport aircraft wing as an example.
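
    The asynchronous idea above, updating the search as soon as any analysis returns rather than waiting for the whole design iteration, can be sketched with a standard worker pool. The code below is a schematic of that dispatch pattern only, under assumed stand-in functions for the analysis and the update; the actual PSO update rules and the aircraft-wing analysis are not reproduced.

        from concurrent.futures import ProcessPoolExecutor, as_completed
        import random, time

        def expensive_analysis(design):
            """Stand-in for a costly, variable-time analysis of one design point."""
            time.sleep(random.uniform(0.05, 0.2))
            return sum(v * v for v in design)

        def propose_design(best):
            """Stand-in for the swarm update: perturb the incumbent best design."""
            return [b + random.gauss(0.0, 0.2) for b in best]

        def asynchronous_search(dim=4, workers=4, total_evaluations=40):
            best = [random.uniform(-1.0, 1.0) for _ in range(dim)]
            best_val = float("inf")
            with ProcessPoolExecutor(max_workers=workers) as pool:
                pending = {}
                for _ in range(workers):                     # prime the pool
                    d = propose_design(best)
                    pending[pool.submit(expensive_analysis, d)] = d
                submitted = workers
                while pending:
                    future = next(as_completed(pending))     # react to whichever analysis finishes first
                    design = pending.pop(future)
                    val = future.result()
                    if val < best_val:                       # update immediately, no iteration barrier
                        best, best_val = design, val
                    if submitted < total_evaluations:
                        d = propose_design(best)
                        pending[pool.submit(expensive_analysis, d)] = d
                        submitted += 1
            return best, best_val

        if __name__ == "__main__":
            print(asynchronous_search())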

  15. Multifrequency and multidirection optimizations of antenna arrays using heuristic algorithms and the multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Önol, Can; Alkış, Sena; Gökçe, Özer; Ergül, Özgür

    2016-07-01

    We consider fast and efficient optimizations of arrays involving three-dimensional antennas with arbitrary shapes and geometries. Heuristic algorithms, particularly genetic algorithms, are used for optimizations, while the required solutions are carried out accurately and efficiently via the multilevel fast multipole algorithm (MLFMA). The superposition principle is employed to reduce the number of MLFMA solutions to the number of array elements per frequency. The developed mechanism is used to optimize arrays for multifrequency and/or multidirection operations, i.e., to find the most suitable set of antenna excitations for desired radiation characteristics simultaneously at different frequencies and/or directions. The capabilities of the optimization environment are demonstrated on arrays of bowtie and Vivaldi antennas.

  16. New near-optimal feedback guidance algorithms for space missions

    NASA Astrophysics Data System (ADS)

    Hawkins, Matthew Jay

    This dissertation describes several different spacecraft guidance algorithms, with applications including asteroid intercept and rendezvous, planetary landing, and orbital transfer. A comprehensive review of spacecraft guidance algorithms for asteroid intercept and rendezvous is presented. Zero-Effort-Miss/Zero-Effort-Velocity (ZEM/ZEV) guidance is introduced and applied to asteroid intercept and rendezvous, and to a wealth of different example problems, including missile intercept, planetary landing, and orbital transfer. It is seen that the ZEM/ZEV guidance law can be used in many different scenarios, and that it provides near-optimal performance where an analytical optimal guidance law does not exist, such as in a nonlinear gravity field.

  17. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced, and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
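
    For orientation, the step-generation core of cuckoo search is sketched below: a Lévy-flight step (Mantegna's algorithm) scaled by a step size that shrinks over the run. The geometric shrinking rule is an assumption standing in for the paper's adaptive adjustment, which is not reproduced here.

        import math, random

        def levy_step(beta=1.5):
            """Mantegna's algorithm for a symmetric Lévy-stable step of index beta."""
            sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                       / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = random.gauss(0.0, sigma_u)
            v = random.gauss(0.0, 1.0)
            return u / abs(v) ** (1 / beta)

        def adaptive_scale(generation, max_generations, start=1.0, end=0.01):
            """Assumed adaptive rule: shrink the step scale geometrically over the run."""
            return start * (end / start) ** (generation / max_generations)

        def new_nest(current, best, generation, max_generations):
            """Propose a candidate around the current nest, biased toward the best nest."""
            alpha = adaptive_scale(generation, max_generations)
            return [x + alpha * levy_step() * (x - b) for x, b in zip(current, best)]

        # Example: one proposal at generation 10 of 100.
        candidate = new_nest([0.3, -1.2, 0.7], best=[0.0, 0.0, 0.0],
                             generation=10, max_generations=100)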

  18. Effective and efficient algorithm for multiobjective optimization of hydrologic models

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Gupta, Hoshin V.; Bastidas, Luis A.; Bouten, Willem; Sorooshian, Soroosh

    2003-08-01

    Practical experience with the calibration of hydrologic models suggests that any single-objective function, no matter how carefully chosen, is often inadequate to properly measure all of the characteristics of the observed data deemed to be important. One strategy to circumvent this problem is to define several optimization criteria (objective functions) that measure different (complementary) aspects of the system behavior and to use multicriteria optimization to identify the set of nondominated, efficient, or Pareto optimal solutions. In this paper, we present an efficient and effective Markov Chain Monte Carlo sampler, entitled the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm, which is capable of solving the multiobjective optimization problem for hydrologic models. MOSCEM is an improvement over the Shuffled Complex Evolution Metropolis (SCEM-UA) global optimization algorithm, using the concept of Pareto dominance (rather than direct single-objective function evaluation) to evolve the initial population of points toward a set of solutions stemming from a stable distribution (Pareto set). The efficacy of the MOSCEM-UA algorithm is compared with the original MOCOM-UA algorithm for three hydrologic modeling case studies of increasing complexity.

  19. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization on the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in scene set. By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low rank Hessian matrix. This facilitates the use of large scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.

  20. Optimization Algorithm for the Generation of ONCV Pseudopotentials

    NASA Astrophysics Data System (ADS)

    Schlipf, Martin; Gygi, Francois

    2015-03-01

    We present an optimization algorithm to construct pseudopotentials and use it to generate a set of Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials for elements up to Z=83 (Bi) (excluding Lanthanides). We introduce a quality function that assesses the agreement of a pseudopotential calculation with all-electron FLAPW results, and the necessary plane-wave energy cutoff. This quality function allows us to use a Nelder-Mead optimization algorithm on a training set of materials to optimize the input parameters of the pseudopotential construction for most of the periodic table. We control the accuracy of the resulting pseudopotentials on a test set of materials independent of the training set. We find that the automatically constructed pseudopotentials provide a good agreement with the all-electron results obtained using the FLEUR code with a plane-wave energy cutoff of approximately 60 Ry. Supported by DOE/BES Grant DE-SC0008938.

  1. Optimization algorithm for the generation of ONCV pseudopotentials

    NASA Astrophysics Data System (ADS)

    Schlipf, Martin; Gygi, François

    2015-11-01

    We present an optimization algorithm to construct pseudopotentials and use it to generate a set of Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials for elements up to Z = 83 (Bi) (excluding Lanthanides). We introduce a quality function that assesses the agreement of a pseudopotential calculation with all-electron FLAPW results, and the necessary plane-wave energy cutoff. This quality function allows us to use a Nelder-Mead optimization algorithm on a training set of materials to optimize the input parameters of the pseudopotential construction for most of the periodic table. We control the accuracy of the resulting pseudopotentials on a test set of materials independent of the training set. We find that the automatically constructed pseudopotentials provide good agreement with the all-electron results obtained using the FLEUR code with a plane-wave energy cutoff of approximately 60 Ry.

  2. Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction

    NASA Astrophysics Data System (ADS)

    Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.

    2017-04-01

    We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to the optimal solution of the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.

  3. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  4. A Global Optimization Algorithm Using Stochastic Differential Equations.

    DTIC Science & Technology

    1985-02-01

    Affiliations: Dipartimento di Matematica, Università di Bari, 70125 Bari (Italy); Istituto di Fisica, Università di Roma "Tor Vergata", Via Orazio Raimondo, 00173 (La Romanina) Roma (Italy). Keywords: Global Optimization, Stochastic Differential Equations.

  5. A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2014-12-01

    Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper, Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First, we frame an objective function that satisfies the conditions of obstacle avoidance and target-seeking behavior of the robot in partially or completely unknown environments. Depending on the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.

  6. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    NASA Astrophysics Data System (ADS)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

    Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets, given a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
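
    A hedged sketch of the asexual scheme described above follows: a real-valued (unencoded) population in which only the fittest members survive, each producing mutated copies of itself. The chi-square objective is a stand-in straight-line fit to made-up data, not the exoplanet or SED models of the paper, and the population sizes and shrink factor are assumptions.

        import random

        # Made-up data to fit y = a*x + b (stand-in for the paper's radial-velocity or SED fits).
        xs = [0.0, 1.0, 2.0, 3.0, 4.0]
        ys = [1.1, 2.9, 5.2, 6.8, 9.1]
        sigma = 0.3

        def chi_square(params):
            a, b = params
            return sum(((y - (a * x + b)) / sigma) ** 2 for x, y in zip(xs, ys))

        def aga(objective, bounds, pop_size=40, n_parents=5, generations=200,
                spread=0.5, shrink=0.97):
            """Asexual GA sketch: keep the best `n_parents`, refill the population with mutated copies."""
            pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=objective)
                parents = pop[:n_parents]
                pop = parents[:]
                while len(pop) < pop_size:
                    parent = random.choice(parents)
                    child = [min(hi, max(lo, p + random.gauss(0.0, spread * (hi - lo))))
                             for p, (lo, hi) in zip(parent, bounds)]
                    pop.append(child)
                spread *= shrink                 # tighten the search around the survivors
            pop.sort(key=objective)
            return pop[0], objective(pop[0])

        best_params, best_chi2 = aga(chi_square, bounds=[(-5.0, 5.0), (-5.0, 5.0)])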

  7. Optimization of Power Coefficient of Wind Turbine Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Rajakumar, Sappani; Ravindran, Durairaj; Sivakumar, Mahalingam; Venkatachalam, Gopalan; Muthukumar, Shunmugavelu

    2016-06-01

    In the design of a wind turbine, the goal is to attain the highest possible power output under specified atmospheric conditions. The optimization of the power coefficient of a horizontal-axis wind turbine has been carried out by integrating the blade element momentum method with a genetic algorithm (GA). The design variables considered are wind velocity, angle of attack, and tip speed ratio. The objective function is the power coefficient of the wind turbine. Different combinations of design variables are optimized using the GA, and the power coefficient is then optimized. The optimized design variables are validated against the experimental results available in the literature. Through this optimization work, the optimum design variables of a wind turbine can be found more economically than through experimental work. NACA44XX-series airfoils are considered in this optimization work.

  8. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  9. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  10. CONVEX mini manual

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from those of the CRAY supercomputers are covered, including: FORTRAN, C, the architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.

  11. Environmental Optimization Using the WAste Reduction Algorithm (WAR)

    EPA Science Inventory

    Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...

  12. Attitude determination using vector observations - A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. L.

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.

  13. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
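
    Neither report's exact matrix algorithm is reproduced here, but the standard SVD solution of Wahba's problem gives a compact reference point for what is being computed; the sketch below uses it on a made-up two-observation example (the true rotation is an assumption introduced only to generate the test vectors).

        import numpy as np

        def wahba_svd(body_vecs, ref_vecs, weights):
            """
            Attitude matrix minimizing Wahba's loss
                L(A) = 1/2 * sum_i w_i * ||b_i - A r_i||^2
            via the SVD of the attitude profile matrix B = sum_i w_i b_i r_i^T.
            """
            B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
            U, _, Vt = np.linalg.svd(B)
            d = np.linalg.det(U) * np.linalg.det(Vt)   # enforce a proper rotation (det = +1)
            return U @ np.diag([1.0, 1.0, d]) @ Vt

        # Two unit-vector observations (e.g. sun sensor and magnetometer directions), equal weights.
        r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
        true_A = np.array([[0.0, -1.0, 0.0],
                           [1.0,  0.0, 0.0],
                           [0.0,  0.0, 1.0]])          # 90-degree rotation about z (assumed test case)
        b1, b2 = true_A @ r1, true_A @ r2
        A_est = wahba_svd([b1, b2], [r1, r2], weights=[1.0, 1.0])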

  14. Optimal pulse shaping for coherent control by the penalty algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Hai; Dussault, Jean-Pièrre; Bandrauk, André D.

    1994-04-01

    We use penalty methods coupled with unitary exponential operator methods to solve the optimal control problem for molecular time-dependent Schrödinger equations involving laser pulse excitations. A stable numerical algorithm is presented which propagates directly from initial states to given final states. Results are reported for an analytically solvable model for the complete inversion of a three-state system.

  15. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  16. Experimental implementation of an adiabatic quantum optimization algorithm

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias; van Dam, Wim; Hogg, Tad; Breyta, Greg; Chuang, Isaac

    2003-03-01

    A novel quantum algorithm using adiabatic evolution was recently presented by Ed Farhi [1] and Tad Hogg [2]. This algorithm represents a remarkable discovery because it offers new insights into the usefulness of quantum resources. An experimental demonstration of an adiabatic algorithm has remained beyond reach because it requires an experimentally accessible Hamiltonian which encodes the problem and which must also be smoothly varied over time. We present tools to overcome these difficulties by discretizing the algorithm and extending average Hamiltonian techniques [3]. We used these techniques in the first experimental demonstration of an adiabatic optimization algorithm: solving an instance of the MAXCUT problem using three qubits and nuclear magnetic resonance techniques. We show that there exists an optimal run-time of the algorithm which can be predicted using a previously developed decoherence model. [1] E. Farhi et al., quant-ph/0001106 (2000) [2] T. Hogg, PRA, 61, 052311 (2000) [3] W. Rhim, A. Pines, J. Waugh, PRL, 24,218 (1970)

  17. Optimization algorithms for large-scale multireservoir hydropower systems

    SciTech Connect

    Hiew, K.L.

    1987-01-01

    Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT), and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of the resulting storage and release trajectories, computer time and memory requirements, robustness, and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. The computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.

  18. Model updating based on an affine scaling interior optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Y. X.; Jia, C. X.; Li, Jian; Spencer, B. F.

    2013-11-01

    Finite element model updating is usually considered as an optimization process. Affine scaling interior algorithms are powerful optimization algorithms that have been developed over the past few years. A new finite element model updating method based on an affine scaling interior algorithm and a minimization of modal residuals is proposed in this article, and a general finite element model updating program is developed based on the proposed method. The performance of the proposed method is studied through numerical simulation and experimental investigation using the developed program. The results of the numerical simulation verified the validity of the method. Subsequently, the natural frequencies obtained experimentally from a three-dimensional truss model were used to update a finite element model using the developed program. After updating, the natural frequencies of the truss and finite element model matched well.

  19. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed in this paper to solve reliability problems. The IPSO uses two position updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from being trapped in a local optimum, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than the other four particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature.

  20. Endgame implementations for the Efficient Global Optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Kaanta, Bryan

    2009-05-01

    Efficient Global Optimization (EGO) is a competent evolutionary algorithm which can be useful for problems with expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither algorithm always drills down to the absolute minimum, so the addition of a final local search technique is indicated. In this paper, we introduce three "endgame" techniques. The techniques can improve optimization efficiency (fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We also report results using a different cost function than the one previously used [2,3].
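
    The "endgame" idea of finishing the run with a cheap local refinement from the incumbent best point can be sketched independently of the paper's specific techniques. The snippet below simply hands EGO's best point to a Nelder-Mead polish with a small extra budget; the cost function, bounds, and settings are illustrative assumptions, not the authors' endgame methods.

        import numpy as np
        from scipy.optimize import minimize

        def expensive_cost(x):
            """Stand-in for the expensive cost function the global search was minimising."""
            return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.cos(5.0 * x)))

        def endgame_polish(best_x, bounds, budget=50):
            """Local 'endgame' refinement from the incumbent best point (one simple option)."""
            res = minimize(expensive_cost, np.asarray(best_x), method="Nelder-Mead",
                           options={"maxfev": budget, "xatol": 1e-6, "fatol": 1e-8})
            x = np.clip(res.x, [b[0] for b in bounds], [b[1] for b in bounds])
            return x, expensive_cost(x)

        # Suppose the global search stopped here; polish locally with a small extra budget.
        x_refined, f_refined = endgame_polish([0.25, 0.40], bounds=[(0.0, 1.0), (0.0, 1.0)])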

  1. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.

  2. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: Si[001] symmetric tilted grain boundaries and the Ag/Au-induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding lowest-energy structures in both cases. Not only can existing structures from the experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. Thus it is shown that the Genetic Algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seem astounding and confusing, yet the theoretical models in the paper reveal the physical insight behind the phenomena and can reproduce the experimental results well.

  3. Optimal brushless DC motor design using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.

    2010-11-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.

  4. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so called "dual curse" which prevents them to be used in reasonably complex water systems. The first one is the "curse of dimensionality" that denotes an exponential growth of the computational complexity with the state - decision space dimension. The second one is the "curse of modelling" that requires an explicit model of each component of the water system to anticipate the effect of each system's transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required releases discretization for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested" that is implemented in DP, SDP and reinforcement learning (RL) and correspondingly three new algorithms are developed named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed from two algorithms: 1) DP, SDP or RL and 2) nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) quadratic Knapsack method in the case of nonlinear problems. The novel idea is to include the nested

  5. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has at the same time become an NP problem. This paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can find the optimal thresholds efficiently and precisely, and that these are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305

  6. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding.

    PubMed

    Li, Linguo; Sun, Lijuan; Guo, Jian; Qi, Jin; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has at the same time become an NP problem. This paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can find the optimal thresholds efficiently and precisely, and that these are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability.
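
    The Kapur's entropy objective that MDGWO maximizes is standard enough to sketch: for a candidate set of thresholds, sum the Shannon entropies of the histogram segments they induce. The routine below evaluates that objective for a grey-level histogram on made-up numbers; the wolf-position update of MDGWO itself is not reproduced.

        import math

        def kapur_entropy(histogram, thresholds):
            """
            Kapur's objective for multilevel thresholding: the sum of the entropies of the
            probability distributions within each segment defined by the thresholds.
            `histogram` is a list of pixel counts per grey level; `thresholds` are grey levels.
            """
            total = float(sum(histogram))
            probs = [h / total for h in histogram]
            edges = [0] + sorted(thresholds) + [len(histogram)]
            entropy = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                seg = probs[lo:hi]
                w = sum(seg)
                if w <= 0.0:
                    continue
                entropy += -sum((p / w) * math.log(p / w) for p in seg if p > 0.0)
            return entropy

        # Toy 8-level histogram with two candidate thresholds at grey levels 3 and 6.
        hist = [12, 30, 44, 8, 5, 20, 35, 10]
        print(kapur_entropy(hist, thresholds=[3, 6]))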

  7. Left ventricle segmentation in MRI via convex relaxed distribution matching.

    PubMed

    Nambakhsh, Cyrus M S; Yuan, Jing; Punithakumar, Kumaradevan; Goela, Aashish; Rajchl, Martin; Peters, Terry M; Ayed, Ismail Ben

    2013-12-01

    A fundamental step in the diagnosis of cardiovascular diseases, automatic left ventricle (LV) segmentation in cardiac magnetic resonance images (MRIs) is still acknowledged to be a difficult problem. Most of the existing algorithms require either extensive training or intensive user inputs. This study investigates fast detection of the left ventricle (LV) endo- and epicardium surfaces in cardiac MRI via convex relaxation and distribution matching. The algorithm requires a single subject for training and a very simple user input, which amounts to a single point (mouse click) per target region (cavity or myocardium). It seeks cavity and myocardium regions within each 3D phase by optimizing two functionals, each containing two distribution-matching constraints: (1) a distance-based shape prior and (2) an intensity prior. Based on a global measure of similarity between distributions, the shape prior is intrinsically invariant with respect to translation and rotation. We further introduce a scale variable from which we derive a fixed-point equation (FPE), thereby achieving scale-invariance with only few fast computations. The proposed algorithm relaxes the need for costly pose estimation (or registration) procedures and large training sets, and can tolerate shape deformations, unlike template (or atlas) based priors. Our formulation leads to a challenging problem, which is not directly amenable to convex-optimization techniques. For each functional, we split the problem into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Unlike related graph-cut approaches, the proposed convex-relaxation solution can be parallelized to reduce substantially the computational time for 3D domains (or higher), extends directly to high dimensions, and does not have the grid-bias problem. Our parallelized implementation on a graphics processing unit (GPU) demonstrates that the proposed algorithm

  8. Automatic Treatment Planning with Convex Imputing

    NASA Astrophysics Data System (ADS)

    Sayre, G. A.; Ruan, D.

    2014-03-01

    Current inverse optimization-based treatment planning for radiotherapy requires a set of complex DVH objectives to be simultaneously minimized. This process, known as multi-objective optimization, is challenging due to non-convexity in individual objectives and insufficient knowledge of the tradeoffs among the objective set. As such, clinical practice involves numerous iterations of human intervention that are costly and often inconsistent. In this work, we propose to address treatment planning with convex imputing, a new data-mining technique that explores the existence of a latent convex objective whose optimizer reflects the DVH and dose-shaping properties of previously optimized cases. Using ten clinical prostate cases as the basis for comparison, we imputed a simple least-squares problem from the optimized solutions of the prostate cases, and show that the imputed plans are more consistent than their clinical counterparts in achieving planning goals.

  9. One algorithm for branch and bound method for solving concave optimization problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.; Korepanova, A. A.; Halilova, I. F.

    2016-11-01

    The article describes a branch and bound algorithm for solving the concave programming problem, based on the similarity between the necessary and sufficient optimality conditions of the original problem and those of a convex programming problem with a different feasible set and the reversed sign of the objective function. To find the feasible set of the equivalent convex programming problem, we construct an algorithm using the idea of the branch and bound method. We formulate various branching techniques and discuss the construction of lower bounds of the objective function for the nodes of the decision tree. The article discusses experimental results of this algorithm on some well-known test problems of a particular form.
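    For orientation, here is a generic best-first branch-and-bound skeleton of the kind the abstract describes. The region representation, the convex-relaxation lower bound, and the branching rule are left as user-supplied callables; they are assumptions for illustration and not the specific constructions of the article.

```python
import heapq

def branch_and_bound(root, lower_bound, evaluate, branch, tol=1e-6):
    """Generic best-first branch-and-bound skeleton (minimization).

    root        -- initial region (problem-specific description)
    lower_bound -- region -> lower bound on the objective over the region
                   (e.g. from a convex relaxation of the concave problem)
    evaluate    -- region -> (objective value, feasible point) for some
                   feasible point in the region (gives an upper bound)
    branch      -- region -> list of sub-regions covering the region
    """
    best_val, best_x = evaluate(root)
    heap = [(lower_bound(root), 0, root)]
    counter = 1                              # tie-breaker for the heap
    while heap:
        lb, _, region = heapq.heappop(heap)
        if lb >= best_val - tol:             # cannot improve: prune
            continue
        for child in branch(region):
            val, x = evaluate(child)
            if val < best_val:               # better incumbent found
                best_val, best_x = val, x
            clb = lower_bound(child)
            if clb < best_val - tol:
                heapq.heappush(heap, (clb, counter, child))
                counter += 1
    return best_val, best_x
```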

  10. An active set algorithm for treatment planning optimization.

    PubMed

    Hristov, D H; Fallone, B G

    1997-09-01

    An active set algorithm for optimization of radiation therapy dose planning by intensity modulated beams has been developed. The algorithm employs a conjugate-gradient routine for subspace minimization in order to achieve a higher rate of convergence than the widely used constrained steepest-descent method at the expense of a negligible amount of overhead calculations. The performance of the new algorithm has been compared to that of the constrained steepest-descent method for various treatment geometries and two different objectives. The active set algorithm is found to be superior to the constrained steepest descent, both in terms of its convergence properties and the residual value of the cost functions at termination. Its use can significantly accelerate the design of conformal plans with intensity modulated beams by decreasing the number of time-consuming dose calculations.

  11. Optimization of circuits using a constructive learning algorithm

    SciTech Connect

    Beiu, V.

    1997-05-01

    The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F_{n,m} class of functions (n inputs and m groups of ones in their truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to estimate more closely the area and the delay of VLSI implementations). Conclusions and further directions of research conclude the paper.

  12. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
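    The meanline evaluation mentioned above rests on the Euler turbine (pump) equation. A minimal sketch of that relation is given below, with a single empirical efficiency factor standing in for the rotor-efficiency correlations of the paper; the numerical values are purely illustrative.

```python
G = 9.81  # gravitational acceleration, m/s^2

def euler_head(u2, c_theta2, u1=0.0, c_theta1=0.0):
    """Ideal (Euler) head rise of a pump stage in meters.

    u1, u2       -- blade speeds at inlet/outlet [m/s]
    c_theta1/2   -- tangential components of the absolute flow velocity [m/s]
    """
    return (u2 * c_theta2 - u1 * c_theta1) / G

def stage_performance(u2, c_theta2, eta_hydraulic):
    """Actual head after applying an empirical hydraulic efficiency."""
    return eta_hydraulic * euler_head(u2, c_theta2)

# Example design candidate of the kind an evolutionary loop would evaluate.
print(stage_performance(u2=300.0, c_theta2=180.0, eta_hydraulic=0.85))
```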

  13. A Convex Geometry-Based Blind Source Separation Method for Separating Nonnegative Sources.

    PubMed

    Yang, Zuyuan; Xiang, Yong; Rong, Yue; Xie, Kan

    2015-08-01

    This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Considering these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. Based on these, an algorithm is presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelatedness assumption. Compared with the BSS methods that are specifically designed to distinguish between nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results are provided to illustrate the performance of our method.

  14. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
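    For readers unfamiliar with the benchmark kernel, correlation-based template matching of the kind timed in this work can be expressed in a few lines with OpenCV's Python bindings. This is a generic sketch, not the ARM/DSP-partitioned implementation, and the file names are placeholders.

```python
import cv2

def find_template(image_path, template_path):
    """Locate a template in an image via normalized correlation."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    tpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    # matchTemplate slides the template over the image; the normalized
    # correlation coefficient peaks at the best match location.
    result = cv2.matchTemplate(img, tpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val  # top-left corner of the best match and its score

# loc, score = find_template("scene.png", "face_template.png")
```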

  15. Neural networks for convex hull computation.

    PubMed

    Leung, Y; Zhang, J S; Xu, Z B

    1997-01-01

    Computing the convex hull is one of the central problems in various applications of computational geometry. In this paper, a convex hull computing neural network (CHCNN) is developed to solve the related problems in N-dimensional space. The algorithm is based on a two-layered neural network, topologically similar to ART, with a newly developed adaptive training strategy called excited learning. The CHCNN provides parallel online and real-time processing of data which, after training, yields two closely related approximations, one from within and one from outside, of the desired convex hull. It is shown that the accuracy of the approximate convex hulls obtained is around O(K^(-1/(N-1))), where K is the number of neurons in the output layer of the CHCNN. When K is taken to be sufficiently large, the CHCNN can generate an arbitrarily accurate approximate convex hull. We also show that an upper bound exists such that the CHCNN will yield the precise convex hull when K is larger than or equal to this bound. A series of simulations and applications is provided to demonstrate the feasibility, effectiveness, and high efficiency of the proposed algorithm.

  16. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy.
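    A toy illustration of the two-dimensional scoring idea (detection efficacy traded against estimated hardware cost) might look like the following. The feature names, weights, and numbers are hypothetical and stand in for the measured sensitivities and circuit-model power estimates of the study.

```python
def feature_score(sensitivity, specificity, power_uw, w=0.5):
    """Toy design score: detection efficacy per unit estimated power cost.
    Weights and units are illustrative, not the paper's scoring rule."""
    efficacy = w * sensitivity + (1 - w) * specificity
    return efficacy / power_uw

# Hypothetical features: (sensitivity, specificity, estimated power in uW).
features = {
    "line_length": (0.92, 0.85, 4.0),
    "wavelet_energy": (0.95, 0.90, 12.0),
}
ranked = sorted(features.items(), key=lambda kv: feature_score(*kv[1]), reverse=True)
print(ranked)  # features ordered by efficacy per unit hardware cost
```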

  17. Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng

    2009-10-01

    The optimal control problem for switched linear systems with internally forced switching has more constraints than that with externally forced switching. Heavy computation and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach for solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which deterministic optimization algorithms and stochastic search methods are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.
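    As background, a minimal canonical particle swarm optimizer is sketched below. The migrant mechanism that distinguishes Migrant PSO is not shown, and the inertia and acceleration coefficients are common textbook defaults rather than the paper's settings.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimal canonical particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy usage: minimize a 4-dimensional sphere function.
best_x, best_f = pso(lambda z: np.sum(z**2), dim=4)
print(best_x, best_f)
```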

  18. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    PubMed

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.

  19. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    EPA Pesticide Factsheets

    With the development of Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, the traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the analysis capability of capturing the impact of the ADAS on driving behaviors, and measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable th

  20. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  1. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 IIC polypeptide 9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51, warfarin resistant: 0.96 vs 0.77 and warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose.

  2. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm, with the first limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second problem for its part concerns the multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, and in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results of the HSA showed that this algorithm could provide very good solutions when compared to those obtained through other approaches.
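    To illustrate the mechanics referred to above (harmony memory consideration, pitch adjustment, random selection), here is a minimal continuous-variable harmony search sketch. The redundancy allocation problems in the article are combinatorial, so this is an illustration of the metaheuristic only, with hypothetical parameter values.

```python
import numpy as np

def harmony_search(objective, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal continuous harmony search (minimization)."""
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, (hms, dim))                 # harmony memory
    fit = np.apply_along_axis(objective, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                      # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                   # pitch adjustment
                    new[j] += bw * (hi - lo) * rng.uniform(-1, 1)
            else:                                        # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        f = objective(new)
        worst = fit.argmax()
        if f < fit[worst]:                               # replace worst harmony
            hm[worst], fit[worst] = new, f
    best = fit.argmin()
    return hm[best], fit[best]

# Toy usage: minimize a 5-dimensional sphere function.
print(harmony_search(lambda z: np.sum(z**2), dim=5, lo=-10, hi=10))
```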

  3. Study of sequential optimal control algorithm smart isolation structure based on Simulink-S function

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohuan; Liu, Yanhui

    2017-01-01

    This study focuses on smart isolation structures; a method for realizing structural vibration control in Simulink simulation is proposed based on the proposed sequential optimal control algorithm. In the Simulink simulation environment, a smart isolation structure is used to compare the control effect of three algorithms, i.e., the classical optimal control algorithm, the linear quadratic Gaussian control algorithm, and the sequential optimal control algorithm, under the condition of sensors contaminated with noise. Simulation results show that this method can be applied to the simulation of the sequential optimal control algorithm and that the proposed sequential optimal control algorithm has a good ability to resist noise and better control efficiency.

  4. Gerrymandering and Convexity

    ERIC Educational Resources Information Center

    Hodge, Jonathan K.; Marshall, Emily; Patterson, Geoff

    2010-01-01

    Convexity-based measures of shape compactness provide an effective way to identify irregularities in congressional district boundaries. A low convexity coefficient may suggest that a district has been gerrymandered, or it may simply reflect irregularities in the corresponding state boundary. Furthermore, the distribution of population within a…

  5. Optimization of Circular Ring Microstrip Antenna Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Sathi, V.; Ghobadi, Ch.; Nourinia, J.

    2008-10-01

    Circular ring microstrip antennas have several interesting properties that make them attractive in wireless applications. Although several analysis techniques, such as the cavity model, the generalized transmission line model, the Fourier-Hankel transform domain, and the method of matched asymptotic expansion, have been studied by researchers, there is no efficient design tool that has been incorporated with a suitable optimization algorithm. In this paper, the cavity model analysis along with the genetic optimization algorithm is presented for the design of circular ring microstrip antennas. The method studied here is based on the well-known cavity model, and the optimization of the dimensions and feed point location of the circular ring antenna is performed via the genetic optimization algorithm to achieve an acceptable antenna operation around a desired resonance frequency. The antennas designed by this efficient design procedure were realized experimentally, and the results are compared. In addition, these results are also compared to the results obtained by the commercial electromagnetic simulation tool, the FEM-based software HFSS by Ansoft.

  6. Facial Skin Segmentation Using Bacterial Foraging Optimization Algorithm

    PubMed Central

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-01-01

    Nowadays, analyzing human facial images has gained an ever-increasing importance due to its various applications. Image segmentation is required as a very important and fundamental operation for significant analysis and interpretation of images. Among the segmentation methods, the image thresholding technique is one of the most well-known methods due to its simplicity, robustness, and high precision. Thresholding based on optimization of the objective function is among the best methods. Numerous methods exist for the optimization process, and bacterial foraging optimization (BFO) is among the most efficient and novel ones. Using this method, the optimal threshold is extracted and segmentation of facial skin is then performed. In the proposed method, first, the color facial image is converted from the RGB color space to the Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS provides a good mapping of the skin color. To perform thresholding, the entropy-based method is applied. In order to find the optimum threshold, BFO is used. In order to analyze the proposed algorithm, color images from the database of Sahand University of Technology of Tabriz, Iran, were used. Then, thresholding was also performed using the Otsu and Kapur methods. In order to provide a better understanding of the proposed algorithm, a genetic algorithm (GA) is also used for finding the optimum threshold. The proposed method shows better results than the other thresholding methods. These results include misclassification error accuracy (88%), non-uniformity accuracy (89%), and the accuracy of the region's area error (89%). PMID:23724370

  7. Facial skin segmentation using bacterial foraging optimization algorithm.

    PubMed

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-10-01

    Nowadays, analyzing human facial images has gained an ever-increasing importance due to its various applications. Image segmentation is required as a very important and fundamental operation for significant analysis and interpretation of images. Among the segmentation methods, the image thresholding technique is one of the most well-known methods due to its simplicity, robustness, and high precision. Thresholding based on optimization of the objective function is among the best methods. Numerous methods exist for the optimization process, and bacterial foraging optimization (BFO) is among the most efficient and novel ones. Using this method, the optimal threshold is extracted and segmentation of facial skin is then performed. In the proposed method, first, the color facial image is converted from the RGB color space to the Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS provides a good mapping of the skin color. To perform thresholding, the entropy-based method is applied. In order to find the optimum threshold, BFO is used. In order to analyze the proposed algorithm, color images from the database of Sahand University of Technology of Tabriz, Iran, were used. Then, thresholding was also performed using the Otsu and Kapur methods. In order to provide a better understanding of the proposed algorithm, a genetic algorithm (GA) is also used for finding the optimum threshold. The proposed method shows better results than the other thresholding methods. These results include misclassification error accuracy (88%), non-uniformity accuracy (89%), and the accuracy of the region's area error (89%).

  8. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution is optimized according to the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. The proposed algorithm is therefore designed for the image transmission situation. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with an LT code with the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images with the same overhead.
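    For context, degree distributions for LT codes are usually built from Luby's robust soliton distribution. The sketch below implements the commonly cited form of that distribution (not the optimized distribution proposed in the paper), with illustrative parameters c and delta.

```python
import math
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code with k input symbols.

    Returns an array mu where mu[d-1] is the probability of degree d.
    """
    # Ideal soliton distribution rho
    rho = np.zeros(k)
    rho[0] = 1.0 / k
    for d in range(2, k + 1):
        rho[d - 1] = 1.0 / (d * (d - 1))
    # Extra term tau with a spike near k/S
    S = c * math.log(k / delta) * math.sqrt(k)
    tau = np.zeros(k)
    pivot = max(1, min(int(round(k / S)), k))
    for d in range(1, pivot):
        tau[d - 1] = S / (d * k)
    tau[pivot - 1] += S * math.log(S / delta) / k
    mu = rho + tau
    return mu / mu.sum()                     # normalize to a distribution

# Sample a few encoding-symbol degrees for k = 1000 input symbols.
mu = robust_soliton(k=1000)
degrees = np.random.default_rng(0).choice(np.arange(1, 1001), size=5, p=mu)
print(degrees)
```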

  9. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations at the lower level. At the bottom level, each subpopulation employing the canonical ABC method searches for the partial-dimension optimum in parallel, and these partial solutions can be assembled into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness.

  10. Optimization of an antenna array using genetic algorithms

    SciTech Connect

    Kiehbadroudinezhad, Shahideh; Noordin, Nor Kamariah; Sali, A.; Abidin, Zamri Zainal

    2014-06-01

    An array of antennas is usually used in long distance communication. The observation of celestial objects necessitates a large array of antennas, such as the Giant Metrewave Radio Telescope (GMRT). Optimizing this kind of array is very important for achieving a high-performance system. The genetic algorithm (GA) is an optimization solution for these kinds of problems that reconfigures the positions of antennas to increase the u-v coverage plane or decrease the sidelobe levels (SLLs). This paper presents how to optimize a correlator antenna array using the GA. A brief explanation of the GA and the operators used in this paper (mutation and crossover) is provided. Then, the results of the optimization are discussed. The results show that the GA provides efficient and optimum solutions among a pool of candidate solutions in order to achieve the desired array performance for the purposes of radio astronomy. The proposed algorithm is able to distribute the u-v plane more efficiently than GMRT, with a more than 95% distribution ratio at snapshot, and to fill the u-v plane from a 20% to more than a 68% filling ratio as the number of generations increases in the hour-tracking observations. Finally, the algorithm is able to reduce the SLL to –21.75 dB.

  11. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW1128-engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust-specific fuel consumption reduction of 1 percent, decreases in FTIT of up to 100 °R, and increases of as much as 12 percent in maximum thrust. PSC technology promises to be of value in next-generation tactical and transport aircraft.

  12. Acceleration of quantum optimal control theory algorithms with mixing strategies.

    PubMed

    Castro, Alberto; Gross, E K U

    2009-05-01

    We propose the use of mixing strategies to accelerate the convergence of the common iterative algorithms utilized in quantum optimal control theory (QOCT). We show how the nonlinear equations of QOCT can be viewed as a "fixed-point" nonlinear problem. The iterative algorithms for this class of problems may benefit from mixing strategies, as it happens, e.g., in the quest for the ground-state density in Kohn-Sham density-functional theory. We demonstrate, with some numerical examples, how the same mixing schemes utilized in this latter nonlinear problem may significantly accelerate the QOCT iterative procedures.
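    The mixing idea can be illustrated on a generic fixed-point problem x = F(x): a damped (linearly mixed) iteration of the kind used in Kohn-Sham self-consistency loops is sketched below. This is only an analogy to the QOCT update equations discussed in the paper, and the mixing parameter is illustrative.

```python
import numpy as np

def mixed_fixed_point(F, x0, alpha=0.3, tol=1e-10, max_iter=1000):
    """Solve x = F(x) by simple linear mixing:
    x_{n+1} = (1 - alpha) * x_n + alpha * F(x_n).
    Small alpha damps oscillations at the price of slower convergence."""
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        x_new = (1.0 - alpha) * x + alpha * F(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, i
        x = x_new
    return x, max_iter

# Example: x = cos(x) has a unique fixed point near 0.739.
sol, iters = mixed_fixed_point(np.cos, x0=[0.0])
print(sol, iters)
```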

  13. Optimization of multicast optical networks with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng

    2007-11-01

    In this letter, aiming to obtain the best multicast performance of an optical network in which video conference information is carried on a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problems of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing the novel numerical solution designed with an effective genetic algorithm. The result shows that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.

  14. Global structural optimizations of surface systems with a genetic algorithm

    SciTech Connect

    Chuang, Feng-Chuan

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Aln were performed. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully. The results for Si(105) and Si(114) are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  15. A proof of convergence of the concave-convex procedure using Zangwill's theory.

    PubMed

    Sriperumbudur, Bharath K; Lanckriet, Gert R G

    2012-06-01

    The concave-convex procedure (CCCP) is an iterative algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, including sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though CCCP is widely used in many applications, its convergence behavior has not gotten a lot of specific attention. Yuille and Rangarajan analyzed its convergence in their original paper; however, we believe the analysis is not complete. The convergence of CCCP can be derived from the convergence of the d.c. algorithm (DCA), proposed in the global optimization literature to solve general d.c. programs, whose proof relies on d.c. duality. In this note, we follow a different reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP. This underlines Zangwill's theory as a powerful and general framework to deal with the convergence issues of iterative algorithms, after also being used to prove the convergence of algorithms like expectation-maximization and generalized alternating minimization. In this note, we provide a rigorous analysis of the convergence of CCCP by addressing two questions: When does CCCP find a local minimum or a stationary point of the d.c. program under consideration? and when does the sequence generated by CCCP converge? We also present an open problem on the issue of local convergence of CCCP.
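    A minimal sketch of the CCCP iteration itself may help: for f(x) = u(x) - v(x) with u and v convex, each step linearizes v at the current iterate and solves the resulting convex subproblem. The toy objective below is an assumption for illustration, and the inner solver is a generic unconstrained routine rather than a dedicated convex solver.

```python
import numpy as np
from scipy.optimize import minimize

def cccp(grad_v, u, x0, iters=50, tol=1e-8):
    """Concave-convex procedure for minimizing f(x) = u(x) - v(x),
    with u and v convex. Each step linearizes v at the current iterate
    and solves the resulting convex problem."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_v(x)                              # gradient of the subtracted convex part
        res = minimize(lambda z: u(z) - z @ g, x)  # convex subproblem (unconstrained here)
        if np.linalg.norm(res.x - x) < tol:
            return res.x
        x = res.x
    return x

# Toy d.c. example: f(x) = ||x||^4 - 2||x - a||^2  (both parts convex).
a = np.array([1.0, -2.0])
u = lambda z: np.sum(z**2) ** 2
grad_v = lambda z: 4.0 * (z - a)
print(cccp(grad_v, u, x0=np.zeros(2)))
```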

  16. Multi-objective nested algorithms for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Solomatine, Dimitri

    2016-04-01

    The optimal reservoir operation is in general a multi-objective problem, meaning that multiple objectives are to be considered at the same time. For solving multi-objective optimization problems there exists a large number of optimization algorithms, which result in the generation of a Pareto set of optimal solutions (typically containing a large number of them), or more precisely, its approximation. At the same time, due to the complexity and computational cost of solving full-fledged multi-objective optimization problems, some authors use a simplified approach which is generically called "scalarization". Scalarization transforms the multi-objective optimization problem into a single-objective optimization problem (or several of them), for example by (a) single-objective aggregated weighted functions, or (b) formulating some objectives as constraints. We are using approach (a). A user can decide how many single-objective search solutions will be generated, depending on the practical problem at hand, by choosing a particular number of weight vectors that are used to weigh the objectives. It is not guaranteed that these solutions are Pareto optimal, but they can be treated as a reasonably good and practically useful approximation of a Pareto set, albeit a small one. It has to be mentioned that the weighted-sum approach has its known shortcomings, because the linear scalar weights will fail to find Pareto-optimal policies that lie in the concave region of the Pareto front. In this context the considered approach is implemented as follows: there are m sets of weights {w1i, …wni} (i goes from 1 to m), and n objectives applied to single-objective aggregated weighted-sum functions of nested dynamic programming (nDP), nested stochastic dynamic programming (nSDP) and nested reinforcement learning (nRL). By employing the approach of multi-objective optimization via a sequence of single-objective optimization searches, these algorithms acquire the multi-objective properties
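    The scalarization approach (a) mentioned above reduces, in its simplest form, to solving one single-objective problem per weight vector. A minimal sketch follows, with toy objectives chosen so that the weighted sum can actually trace the (convex) front, consistent with the caveat about concave regions; it does not reproduce the nested DP/SDP/RL algorithms of the abstract.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_sum_front(objectives, weights_list, x0):
    """Approximate a Pareto set by solving one single-objective problem
    per weight vector: minimize sum_i w_i * f_i(x)."""
    solutions = []
    for w in weights_list:
        res = minimize(lambda x: sum(wi * f(x) for wi, f in zip(w, objectives)), x0)
        solutions.append((w, res.x, [f(res.x) for f in objectives]))
    return solutions

# Two toy objectives with a convex trade-off between their minimizers.
f1 = lambda x: (x[0] - 1.0) ** 2
f2 = lambda x: (x[0] + 1.0) ** 2
for w, x, fvals in weighted_sum_front([f1, f2],
                                      [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)],
                                      x0=[0.0]):
    print(w, x, fvals)
```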

  17. Integer programming model for optimizing bus timetable using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    A bus timetable gives passengers information that ensures the availability of bus services. The optimal timetable condition occurs when the bus trip frequency can adapt to and suit passenger demand. At peak times, the number of bus trips will be larger than at off-peak times. If the number of bus trips is more frequent than the optimal condition, it creates a high operating cost for the bus operator. Conversely, if the number of trips is less than the optimal condition, it results in poor service quality for passengers. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. Modifications are placed in the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The result of this model gives the optimal solution with an accuracy of 99.1%.

  18. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report an all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize the pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of a graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
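    For comparison with the optical implementation, a conventional software ant colony optimization for the shortest-path problem on a small graph is sketched below. The graph, parameters, and pheromone-update rule are generic textbook choices and hypothetical values, not a model of the photonic network.

```python
import random

# Tiny weighted graph: adjacency dict with edge lengths (hypothetical).
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk(src, dst, alpha=1.0, beta=2.0, rng=random):
    """One ant builds a path, preferring short, pheromone-rich edges."""
    path, node = [src], src
    while node != dst:
        choices = [(v, (pheromone[(node, v)] ** alpha) * ((1.0 / w) ** beta))
                   for v, w in graph[node].items() if v not in path]
        if not choices:
            return None                           # dead end, discard this ant
        total = sum(p for _, p in choices)
        r, acc = rng.random() * total, 0.0
        for v, p in choices:                      # roulette-wheel selection
            acc += p
            if r <= acc:
                node = v
                break
        path.append(node)
    return path

def path_length(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

def aco(src, dst, ants=20, iters=50, rho=0.1, q=1.0):
    best = None
    for _ in range(iters):
        paths = [p for p in (walk(src, dst) for _ in range(ants)) if p]
        for e in pheromone:                       # evaporation
            pheromone[e] *= (1.0 - rho)
        for p in paths:                           # deposit, stronger on short paths
            for e in zip(p, p[1:]):
                pheromone[e] += q / path_length(p)
        for p in paths:
            if best is None or path_length(p) < path_length(best):
                best = p
    return best

print(aco("A", "D"))   # expected shortest route: A -> B -> C -> D (length 3)
```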

  19. All-Optical Implementation of the Ant Colony Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-05-01

    We report an all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize the pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of a graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems.

  20. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET

    PubMed Central

    Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper a clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction, and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  1. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.

    PubMed

    Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper a clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction, and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO.

  2. Optimization of broadband semiconductor chirped mirrors with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Dems, Maciej; Wnuk, Paweł; Wasylczyk, Piotr; Zinkiewicz, Łukasz; Wójcik-Jedlińska, Anna; Regiński, Kazimierz; Hejduk, Krzysztof; Jasik, Agata

    2016-10-01

    A genetic algorithm was applied for the optimization of dispersion properties in semiconductor Bragg reflectors for applications in femtosecond lasers. Broadband, large negative group-delay dispersion was achieved in the optimized design: a group-delay dispersion (GDD) as large as -3500 fs² was theoretically obtained over a 10-nm bandwidth. The designed structure was manufactured and tested, providing a GDD of -3320 fs² over a 7-nm bandwidth. The mirror performance was verified in semiconductor structures grown with molecular beam epitaxy. The mirror was tested in a passively mode-locked Yb:KYW laser.

  3. An adaptive penalty method for DIRECT algorithm in engineering optimization

    NASA Astrophysics Data System (ADS)

    Vilaça, Rita; Rocha, Ana Maria A. C.

    2012-09-01

    The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method within the DIRECT algorithm, in which the constraints that are more difficult to satisfy receive relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
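    The penalty transformation described above can be sketched as follows. For simplicity the inner unconstrained solver here is a generic local method rather than DIRECT, and the adaptation rule (growing the weight of each still-violated constraint) is an illustrative stand-in for the paper's scheme.

```python
import numpy as np
from scipy.optimize import minimize

def adaptive_penalty_solve(f, constraints, x0, outer_iters=10, mu0=1.0, growth=5.0):
    """Solve min f(x) s.t. g_i(x) <= 0 via a sequence of unconstrained problems
    min f(x) + sum_i mu_i * max(0, g_i(x))**2, where each constraint's penalty
    weight grows according to whether it is still violated."""
    x = np.asarray(x0, dtype=float)
    mu = np.full(len(constraints), mu0)
    for _ in range(outer_iters):
        def penalized(z):
            viol = np.array([max(0.0, g(z)) for g in constraints])
            return f(z) + np.sum(mu * viol**2)
        x = minimize(penalized, x).x
        viol = np.array([max(0.0, g(x)) for g in constraints])
        if np.all(viol < 1e-8):
            break
        mu = np.where(viol > 1e-8, mu * growth, mu)   # penalize violated constraints harder
    return x

# Toy problem: minimize (x+1)^2 subject to x >= 0, i.e. g(x) = -x <= 0.
sol = adaptive_penalty_solve(lambda z: (z[0] + 1.0) ** 2,
                             [lambda z: -z[0]], x0=[2.0])
print(sol)    # approaches the constrained optimum x = 0
```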

  4. A unified approach via convexity for optimal energy decay rates of finite and infinite dimensional vibrating damped systems with applications to semi-discretized vibrating damped systems

    NASA Astrophysics Data System (ADS)

    Alabau-Boussouira, Fatiha

    The Liapunov method is celebrated for its strength in establishing strong decay of solutions of damped equations. Extensions to infinite dimensional settings have been studied by several authors (see e.g. Haraux, 1991 [11], and Komornik and Zuazua, 1990 [17] and references therein). Results on optimal energy decay rates under general conditions on the feedback are far from complete. The purpose of this paper is to show that general dissipative vibrating systems have structural properties due to dissipation. We present a general approach based on convexity arguments to establish sharp optimal or quasi-optimal upper energy decay rates for these systems, and on comparison principles based on the dissipation property, and interpolation inequalities (in the infinite dimensional case) for lower bounds of the energy. We stress the fact that this method works for finite as well as infinite dimensional vibrating systems, as well as for applications to semi-discretized nonlinear damped vibrating PDEs. A part of this approach was introduced in Alabau-Boussouira (2004, 2005) [1,2]. In the present paper, we identify a new, simple and explicit criterion to select a class of nonlinear feedbacks, for which we prove a simplified explicit energy decay formula compared to the more general but also more complex formula we give in Alabau-Boussouira (2004, 2005) [1,2]. Moreover, we prove optimality of the decay rates for this class in the finite dimensional case. This class includes a wide range of feedbacks, ranging from very weak nonlinear dissipation (exponentially decaying in a neighborhood of zero), to polynomial, or polynomial-logarithmic decaying feedbacks at the origin. In the infinite dimensional case, we establish a comparison principle on the energy of sufficiently smooth solutions through the dissipation relation. This principle relies on suitable interpolation inequalities. It allows us to give lower bounds for the energy of smooth initial data for the one

  5. Genetic Algorithm Application in Optimization of Wireless Sensor Networks

    PubMed Central

    Norouzi, Ali; Zaim, A. Halim

    2014-01-01

    There are several applications known for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are the lifetime of the network and the energy consumption of routing, which play a key role in every application. The genetic algorithm is one of the nonlinear optimization methods and a relatively better option thanks to its efficiency for large-scale applications and the fact that the final formula can be modified by operators. The present survey aims at a comprehensive improvement of all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSN. Using the genetic algorithm and based on the results of simulations in NS, a specific fitness function was achieved, optimized, and customized for all the operational stages of WSNs. PMID:24693235

  6. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    NASA Astrophysics Data System (ADS)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

    In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on embedded GPU using OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantage when compared to embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as our example algorithms. Performance is evaluated in terms of the execution time and speed-up achieved in comparison with the implementation on embedded CPU.

  7. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  8. Quantum algorithm for molecular properties and geometry optimization.

    PubMed

    Kassal, Ivan; Aspuru-Guzik, Alán

    2009-12-14

    Quantum computers, if available, could substantially accelerate quantum simulations. We extend this result to show that the computation of molecular properties (energy derivatives) could also be sped up using quantum computers. We provide a quantum algorithm for the numerical evaluation of molecular properties, whose time cost is a constant multiple of the time needed to compute the molecular energy, regardless of the size of the system. Molecular properties computed with the proposed approach could also be used for the optimization of molecular geometries or other properties. For that purpose, we discuss the benefits of quantum techniques for Newton's method and Householder methods. Finally, global minima for the proposed optimizations can be found using the quantum basin hopper algorithm, which offers an additional quadratic reduction in cost over classical multi-start techniques.

  9. A hierarchical evolutionary algorithm for multiobjective optimization in IMRT

    PubMed Central

    Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark H.

    2010-01-01

    Purpose: The current inverse planning methods for intensity modulated radiation therapy (IMRT) are limited because they are not designed to explore the trade-offs between the competing objectives of tumor and normal tissues. The goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: A hierarchical evolutionary multiobjective algorithm designed to quickly generate a small diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the optimal trade-offs in any radiation therapy plan was developed. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. The population size is not fixed, but a specialized niche effect, domination advantage, is used to control the population and plan diversity. The number of fitness objectives is kept to a minimum for greater selective pressure, but the number of genes is expanded for flexibility that allows a better approximation of the Pareto front. Results: The MOEA improvements were evaluated for two example prostate cases with one target and two organs at risk (OARs). The population of plans generated by the modified MOEA was closer to the Pareto front than populations of plans generated using a standard genetic algorithm package. Statistical significance of the method was established by compiling the results of 25 multiobjective optimizations using each method. From these sets of 12–15 plans, any random plan selected from a MOEA
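    The Pareto-optimality test used for selection in such MOEAs is simple to state in code. The sketch below checks dominance between objective vectors and filters a set of hypothetical plan scores down to its non-dominated front; it is an illustration of the selection criterion only, not of the hierarchical algorithm.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (target-coverage penalty, OAR penalty) scores for four plans.
plans = [(62.0, 18.0), (60.0, 21.0), (65.0, 17.0), (61.0, 25.0)]
print(pareto_front(plans))   # the fourth plan is dominated and dropped
```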

  10. Optimizing phase-estimation algorithms for diamond spin magnetometry

    NASA Astrophysics Data System (ADS)

    Nusran, N. M.; Dutt, M. V. Gurudev

    2014-07-01

    We present a detailed theoretical and numerical study discussing the application and optimization of phase-estimation algorithms (PEAs) to diamond spin magnetometry. We compare standard Ramsey magnetometry, the nonadaptive PEA (NAPEA), and quantum PEA (QPEA) incorporating error checking. Our results show that the NAPEA requires lower measurement fidelity, has better dynamic range, and greater consistency in sensitivity. We elucidate the importance of dynamic range to Ramsey magnetic imaging with diamond spins, and introduce the application of PEAs to time-dependent magnetometry.

  11. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    SciTech Connect

    O'Leary, Dianne P.; Tits, Andre

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  12. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  13. Optimizing remediation of an unconfined aquifer using a hybrid algorithm.

    PubMed

    Hsiao, Chin-Tsai; Chang, Liang-Cheng

    2005-01-01

    We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. GA determines the primary structure of the proposed algorithm, and a chromosome therein implemented by a series of binary digits represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which is embedded a numerical transport model. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to alleviate computational loading. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation.

  14. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in a very inefficient execution of those rules since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be incrementally developed and then automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequence clause in a knowledge base to optimally order the rules and minimize inference cycles. The algorithm orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.

  15. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
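
    For readers unfamiliar with threshold-matrix halftoning, the sketch below shows how a tiled threshold matrix binarizes a grayscale image. The 4x4 Bayer matrix is only a placeholder (the paper evolves its matrices with a genetic algorithm), and all names here are illustrative rather than taken from the paper.

    ```python
    # Hedged sketch: halftoning with a tiled threshold matrix.
    import numpy as np

    def halftone(image, threshold_matrix):
        """Binarize `image` (values in [0, 1]) by tiling `threshold_matrix` over it."""
        h, w = image.shape
        th, tw = threshold_matrix.shape
        tiled = np.tile(threshold_matrix, (h // th + 1, w // tw + 1))[:h, :w]
        return (image > tiled).astype(np.uint8)

    # Placeholder ordered-dither (Bayer) matrix, normalized to [0, 1); a GA-evolved
    # matrix would simply replace this array.
    bayer4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    gradient = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # simple test ramp
    dots = halftone(gradient, bayer4)
    ```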

  16. Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework

    SciTech Connect

    Alicia Hofler, Pavel Evtushenko, Frank Marhauser

    2009-09-01

    Automation of DC photoinjector designs using a genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimal and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation for these two methods for generating field profiles for SRF/RF guns in a GA based injector optimization scheme and provide preliminary results.

  17. Quantum-based algorithm for optimizing artificial neural networks.

    PubMed

    Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang

    2013-08-01

    This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region by region exploration, and evolves gradually to find promising subspaces for further exploitation. This is helpful to provide a set of appropriate weights when evolving the network structure and to alleviate the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely breast cancer and iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.

  18. Optimization of Optical Systems Using Genetic Algorithms: a Comparison Among Different Implementations of The Algorithm

    NASA Astrophysics Data System (ADS)

    López-Medina, Mario E.; Vázquez-Montiel, Sergio; Herrera-Vázquez, Joel

    2008-04-01

    Genetic Algorithms (GAs) are a global optimization method that we use in the optimization stage of optical system design. In the case of optical design and optimization, the efficiency and convergence speed of GAs are related to the merit function, the crossover operator, and the mutation operator. In this study we present a comparison between several genetic algorithm implementations using different optical systems, such as an achromatic cemented doublet, an air-spaced doublet, and telescopes. We do the comparison varying the type of design parameters and the number of parameters to be optimized. We also implement the GAs using discrete parameters with binary chains and continuous parameters using real numbers in the chromosome, analyzing the differences in the time taken to find the solution and the precision of the results between discrete and continuous parameters. Additionally, we use different merit functions to optimize the same optical system. We present the obtained results in tables, graphics and a detailed example, and from the comparison we conclude which is the best way to implement GAs for optical system design and optimization. The programs developed for this work were written in the C programming language, with OSLO used for the simulation of the optical systems.

  19. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaos time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the trends of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, it is compared with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best according to three error measures, namely normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  20. Optimal robust motion controller design using multiobjective genetic algorithm.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution.

  1. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and show low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the critical paths that are preserved while the Physarum-inspired mathematical model (PMM) evolves its adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
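
    A minimal sketch of the pheromone-update idea, assuming a standard evaporation/deposit ACO step plus an extra deposit on edges flagged as critical. In the PMACO papers the critical-path set comes from the Physarum network model; here it is simply passed in, and rho, Q and epsilon are illustrative parameters rather than the authors' values.

    ```python
    # Hedged sketch of a pheromone update with extra reinforcement on critical edges.
    import numpy as np

    def update_pheromone(tau, ant_tours, tour_lengths, critical_edges,
                         rho=0.1, Q=1.0, epsilon=0.5):
        """tau: (n, n) pheromone matrix; ant_tours: list of city index lists."""
        tau *= (1.0 - rho)                                   # evaporation
        for tour, length in zip(ant_tours, tour_lengths):
            deposit = Q / length
            for i, j in zip(tour, tour[1:] + tour[:1]):      # edges of the closed tour
                bonus = epsilon * deposit if (i, j) in critical_edges \
                        or (j, i) in critical_edges else 0.0
                tau[i, j] += deposit + bonus
                tau[j, i] += deposit + bonus
        return tau
    ```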

  2. Coil optimization for electromagnetic levitation using a genetic like algorithm

    NASA Astrophysics Data System (ADS)

    Royer, Z. L.; Tackes, C.; LeSar, R.; Napolitano, R. E.

    2013-06-01

    The technique of electromagnetic levitation (EML) provides a means for thermally processing an electrically conductive specimen in a containerless manner. For the investigation of metallic liquids and related melting or freezing transformations, the elimination of substrate-induced nucleation affords access to much higher undercooling than otherwise attainable. With heating and levitation both arising from the currents induced by the coil, the performance of any EML system depends on controlling the balance between lifting forces and heating effects, as influenced by the levitation coil geometry. In this work, a genetic algorithm is developed and utilized to optimize the design of electromagnetic levitation coils. The optimization is targeted specifically to reduce the steady-state temperature of the stably levitated metallic specimen. Reductions in temperature of nominally 70 K relative to that obtained with the initial design are achieved through coil optimization, and the results are compared with experiments for aluminum. Additionally, the optimization method is shown to be robust, generating a small range of converged results from a variety of initial starting conditions. While our optimization criterion was set to achieve the lowest possible sample temperature, the method is general and can be used to optimize for other criteria as well.

  3. A heterogeneous algorithm for PDT dose optimization for prostate

    NASA Astrophysics Data System (ADS)

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S. B.; Hahn, Stephen M.

    2009-02-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT.
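
    As a rough illustration of the Cimmino-style simultaneous-projection step referenced above, the sketch below drives a vector of nonnegative source strengths toward feasibility of minimum-dose constraints written as A @ x >= b (maximum-dose limits could be added as extra rows with flipped signs). The variable names, the weights (positive, summing to one) and the nonnegativity clip are assumptions, not the paper's code.

    ```python
    # Hedged sketch of a Cimmino simultaneous-projection iteration.
    import numpy as np

    def cimmino_step(x, A, b, weights, relaxation=1.0):
        residual = b - A @ x                       # > 0 where a constraint is violated
        violation = np.maximum(residual, 0.0)
        row_norms2 = np.sum(A * A, axis=1)
        # Weighted average of the projections onto every violated half-space.
        correction = (weights * violation / row_norms2) @ A
        return np.maximum(x + relaxation * correction, 0.0)   # keep strengths nonnegative

    def cimmino(x0, A, b, weights, iters=500):
        x = np.asarray(x0, float).copy()
        for _ in range(iters):
            x = cimmino_step(x, A, b, weights)
        return x
    ```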

  4. A heterogeneous algorithm for PDT dose optimization for prostate

    PubMed Central

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S.B.; Hahn, Stephen M.

    2015-01-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT. PMID:25914793

  5. [Research on and application of hybrid optimization algorithm in Brillouin scattering spectrum parameter extraction problem].

    PubMed

    Zhang, Yan-jun; Zhang, Shu-guo; Fu, Guang-wei; Li, Da; Liu, Yin; Bi, Wei-hong

    2012-04-01

    This paper presents a novel algorithm that blends the particle swarm optimization (PSO) algorithm and the Levenberg-Marquardt (LM) algorithm according to a switching probability. The algorithm can be used to fit the Pseudo-Voigt profile of the Brillouin scattering spectrum, improving the goodness of fit and the precision of frequency-shift extraction. PSO serves as the main framework: it first performs a global search and, after a fixed number of iterations, a random number rand(0, 1) is generated. If rand(0, 1) is less than or equal to a predetermined probability P, the best solution found by PSO is used as the initial value for the LM algorithm, which then performs a deep local search; the LM solution replaces the previous PSO optimum, and PSO resumes the global search. If rand(0, 1) is greater than P, PSO simply continues searching until the next random draw. The two algorithms alternate in this way until a near-ideal global optimum is obtained. Simulation analysis and experimental results show that the new algorithm overcomes the shortcomings of either algorithm used alone and improves the goodness of fit and the precision of frequency-shift extraction from the Brillouin scattering spectrum, demonstrating that the new method is practical and feasible.
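
    The switching scheme described above can be sketched as follows, assuming a generic vector-valued residual function (a Pseudo-Voigt spectral model would be supplied as `residuals`). The PSO loop is a textbook global-best PSO, the LM refinement uses SciPy's least_squares, and every constant below is illustrative rather than the authors' setting.

    ```python
    # Illustrative sketch of probabilistic PSO/LM switching.
    import numpy as np
    from scipy.optimize import least_squares

    def hybrid_pso_lm(residuals, bounds, n_particles=30, n_blocks=20,
                      iters_per_block=10, switch_prob=0.3, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        dim = lo.size
        cost = lambda p: np.sum(residuals(p) ** 2)
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_cost)].copy()

        for _ in range(n_blocks):
            for _ in range(iters_per_block):                 # global PSO search
                r1, r2 = rng.random((2, n_particles, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                c = np.array([cost(p) for p in x])
                better = c < pbest_cost
                pbest[better], pbest_cost[better] = x[better], c[better]
                gbest = pbest[np.argmin(pbest_cost)].copy()
            if rng.random() <= switch_prob:                  # occasional LM refinement
                refined = least_squares(residuals, gbest, method="lm").x
                if cost(refined) < pbest_cost.min():
                    k = np.argmin(pbest_cost)
                    pbest[k], pbest_cost[k] = refined, cost(refined)
                    gbest = refined.copy()
        return gbest
    ```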

  6. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  7. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that this gives a large increase in efficiency of the DMSO algorithm.

  8. Stereotype locally convex spaces

    NASA Astrophysics Data System (ADS)

    Akbarov, S. S.

    2000-08-01

    We give complete proofs of some previously announced results in the theory of stereotype (that is, reflexive in the sense of Pontryagin duality) locally convex spaces. These spaces have important applications in topological algebra and functional analysis.

  9. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
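
    A toy sketch of the local-selection mechanism, assuming a user-supplied shared-resource energy source and mutation operator: individuals accumulate energy and reproduce or die by comparison with a fixed threshold only, never by comparison with one another. All names and constants are illustrative, not ELSA's actual implementation.

    ```python
    # Toy local-selection loop: threshold-based reproduction and survival.
    def local_selection(init_pop, energy_in, mutate, threshold=1.0,
                        living_cost=0.1, generations=200):
        pop = [{"genome": g, "energy": threshold / 2} for g in init_pop]
        for _ in range(generations):
            survivors = []
            for ind in pop:
                ind["energy"] += energy_in(ind["genome"], pop)   # shared, finite resource
                ind["energy"] -= living_cost                     # cost of living
                if ind["energy"] > threshold:                    # reproduce, split energy
                    survivors.append({"genome": mutate(ind["genome"]),
                                      "energy": ind["energy"] / 2})
                    ind["energy"] /= 2
                if ind["energy"] > 0:                            # survive
                    survivors.append(ind)
            pop = survivors
        return pop
    ```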

  10. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.

  11. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real world applications. Deterministic methods such as the gradient algorithms as well as the randomized methods such as the genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable at least from the quality (accuracy) of the results point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time; there are no serious differences in this regard, i.e., from the computational complexity point of view. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both the approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function that is known in tabular form, instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of several available methods, should be employed to obtain the best (least error) output.

  12. Effective multi-objective optimization with the coral reefs optimization algorithm

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Pastor-Sánchez, A.; Portilla-Figueras, J. A.; Prieto, L.

    2016-06-01

    In this article a new algorithm for multi-objective optimization is presented, the Multi-Objective Coral Reefs Optimization (MO-CRO) algorithm. The algorithm is based on the simulation of processes in coral reefs, such as corals' reproduction and fight for space in the reef. The adaptation to multi-objective problems is a process based on domination or non-domination during the process of fight for space in the reef. The final MO-CRO is an easily-implemented and fast algorithm, simple and robust, since it is able to keep diversity in the population of corals (solutions) in a natural way. The experimental evaluation of this new approach for multi-objective optimization problems is carried out on different multi-objective benchmark problems, where the MO-CRO has shown excellent performance in cases with limited computational resources, and in a real-world problem of wind speed prediction, where the MO-CRO algorithm is used to find the best set of features to predict the wind speed, taking into account two objective functions related to the performance of the prediction and the computation time of the regressor.

  13. Genetic Algorithm Optimized Triply Compensated Pulses in NMR Spectroscopy

    PubMed Central

    Manu, V. S.; Veglia, Gianluigi

    2015-01-01

    Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (of both external and RF), errors in pulse calibration, and offset effects due to finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), which consists in a global optimization method inspired by Nature’s evolutionary processes. The newly designed π and π/2 pulses belong to the ‘Type A’ (or general rotors) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as refocusing pulses in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U – 13C, 15N NAVL peptide as well as U – 13C, 15N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences. PMID:26473327

  14. Genetic algorithm optimized triply compensated pulses in NMR spectroscopy.

    PubMed

    Manu, V S; Veglia, Gianluigi

    2015-11-01

    Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (of both external and RF), errors in pulse calibration, and offset effects due to finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), which consists in a global optimization method inspired by Nature's evolutionary processes. The newly designed π and π/2 pulses belong to the 'type A' (or general rotors) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as refocusing pulses in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U-(13)C, (15)N NAVL peptide as well as U-(13)C, (15)N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences.

  15. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.

  16. An optimal algorithm for computing all subtree repeats in trees

    PubMed Central

    Flouri, T.; Kobert, K.; Pissis, S. P.; Stamatakis, A.

    2014-01-01

    Given a labelled tree T, our goal is to group repeating subtrees of T into equivalence classes with respect to their topologies and the node labels. We present an explicit, simple and time-optimal algorithm for solving this problem for unrooted unordered labelled trees and show that the running time of our method is linear with respect to the size of T. By unordered, we mean that the order of the adjacent nodes (children/neighbours) of any node of T is irrelevant. An unrooted tree T does not have a node that is designated as root and can also be referred to as an undirected tree. We show how the presented algorithm can easily be modified to operate on trees that do not satisfy some or any of the aforementioned assumptions on the tree structure; for instance, how it can be applied to rooted, ordered or unlabelled trees. PMID:24751873

  17. Convergence of the gradient projection method and Newton's method as applied to optimization problems constrained by intersection of a spherical surface and a convex closed set

    NASA Astrophysics Data System (ADS)

    Chernyaev, Yu. A.

    2016-10-01

    The gradient projection method and Newton's method are generalized to the case of nonconvex constraint sets representing the set-theoretic intersection of a spherical surface with a convex closed set. Necessary extremum conditions are examined, and the convergence of the methods is analyzed.
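
    For intuition, a toy gradient-projection step when the constraint is just the spherical surface ||x|| = R; the convex-set part of the intersection analyzed in the paper is omitted, so this shows only the simplest ingredient of the method, with illustrative names and step size.

    ```python
    # Toy gradient projection onto the sphere ||x|| = R (convex set omitted).
    import numpy as np

    def gradient_projection_on_sphere(grad_f, x0, R=1.0, step=0.1, iters=100):
        x = np.asarray(x0, float)
        x = R * x / np.linalg.norm(x)              # start on the sphere
        for _ in range(iters):
            y = x - step * grad_f(x)               # unconstrained gradient step
            x = R * y / np.linalg.norm(y)          # project back onto ||x|| = R
        return x
    ```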

  18. Computational and statistical tradeoffs via convex relaxation

    PubMed Central

    Chandrasekaran, Venkat; Jordan, Michael I.

    2013-01-01

    Modern massive datasets create a fundamental problem at the intersection of the computational and statistical sciences: how to provide guarantees on the quality of statistical inference given bounds on computational resources, such as time or space. Our approach to this problem is to define a notion of “algorithmic weakening,” in which a hierarchy of algorithms is ordered by both computational efficiency and statistical efficiency, allowing the growing strength of the data at scale to be traded off against the need for sophisticated processing. We illustrate this approach in the setting of denoising problems, using convex relaxation as the core inferential tool. Hierarchies of convex relaxations have been widely used in theoretical computer science to yield tractable approximation algorithms to many computationally intractable tasks. In the current paper, we show how to endow such hierarchies with a statistical characterization and thereby obtain concrete tradeoffs relating algorithmic runtime to amount of data. PMID:23479655

  19. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  20. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    Athans, M. (Editor); Willsky, A. S. (Editor)

    1982-01-01

    The analysis and design of complex multivariable reliable control systems are considered. High performance and fault tolerant aircraft systems are the objectives. A preliminary feasibility study of the design of a lateral control system for a VTOL aircraft that is to land on a DD963 class destroyer under high sea state conditions is provided. Progress in the following areas is summarized: (1) VTOL control system design studies; (2) robust multivariable control system synthesis; (3) adaptive control systems; (4) failure detection algorithms; and (5) fault tolerant optimal control theory.

  1. Population Induced Instabilities in Genetic Algorithms for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Vlachos, D. S.; Parousis-Orthodoxou, K. J.

    2013-02-01

    Evolutionary computation techniques, like genetic algorithms, have received a lot of attention as optimization techniques, but although they exhibit very promising potential, they have not produced a significant breakthrough in the systematic treatment of constraints. There are two main ways of handling constraints: the first is to produce an infeasibility measure and add it to the general cost function (the well-known penalty methods), and the other is to modify the mutation and crossover operations so that they only produce feasible members. Both methods have their drawbacks and are strongly dependent on the problem to which they are applied. In this work, we propose a different treatment of the constraints: we induce instabilities in the evolving population, in such a way that infeasible solutions cannot survive as they are. Preliminary results are presented on a set of constrained optimization problems that are well known from the literature.

  2. Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel

    2011-09-01

    Tuning the parameters of nonlinear dynamical systems so that the attained results are good is a relevant problem. This article describes the development of a gait optimization system that allows a fast but stable quadruped robot crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movements to perform the required walking gait. The GA finds CPG parameterizations that attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques based on tournament selection and a repair mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, obtained on a simulated Aibo robot, demonstrate that our approach achieves low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.

  3. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2014-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and later on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids vs the existing launch propulsion systems. This paper will review the known state of the art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.

  4. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2015-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.

  5. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is preferable where accuracy matters more than convergence speed.
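
    One TLBO generation (teacher phase followed by learner phase) can be sketched as below for a generic minimization cost f, such as the error between a candidate IIR filter's response and the unknown plant's response. This is a textbook TLBO step, not the paper's MATLAB code, and all names are illustrative.

    ```python
    # Textbook sketch of a single TLBO generation.
    import numpy as np

    def tlbo_generation(pop, f, rng):
        n, d = pop.shape
        costs = np.array([f(x) for x in pop])
        # Teacher phase: move every learner toward the current best solution.
        teacher, mean = pop[np.argmin(costs)], pop.mean(axis=0)
        for i in range(n):
            TF = rng.integers(1, 3)                          # teaching factor in {1, 2}
            cand = pop[i] + rng.random(d) * (teacher - TF * mean)
            c = f(cand)
            if c < costs[i]:
                pop[i], costs[i] = cand, c
        # Learner phase: learn pairwise from a randomly chosen classmate.
        for i in range(n):
            j = rng.choice([k for k in range(n) if k != i])
            direction = pop[i] - pop[j] if costs[i] < costs[j] else pop[j] - pop[i]
            cand = pop[i] + rng.random(d) * direction
            c = f(cand)
            if c < costs[i]:
                pop[i], costs[i] = cand, c
        return pop, costs
    ```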

  6. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.

    PubMed

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-04-17

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors.

  7. Optimizing quantum gas production by an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Lausch, T.; Hohmann, M.; Kindermann, F.; Mayer, D.; Schmidt, F.; Widera, A.

    2016-05-01

    We report on the application of an evolutionary algorithm (EA) to enhance performance of an ultra-cold quantum gas experiment. The production of a rubidium-87 Bose-Einstein condensate (BEC) can be divided into fundamental cooling steps, specifically magneto-optical trapping of cold atoms, loading of atoms to a far-detuned crossed dipole trap, and finally the process of evaporative cooling. The EA is applied separately for each of these steps with a particular definition for the feedback, the so-called fitness. We discuss the principles of an EA and implement an enhancement called differential evolution. Analyzing the reasons for the EA to improve, e.g., the atomic loading rates and increase the BEC phase-space density, yields an optimal parameter set for the BEC production and enables us to reduce the BEC production time significantly. Furthermore, we focus on how additional information about the experiment and optimization possibilities can be extracted and how the correlations revealed allow for further improvement. Our results illustrate that EAs are powerful optimization tools for complex experiments and exemplify that the application yields useful information on the dependence of these experiments on the optimized parameters.

  8. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500

  9. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  10. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    SciTech Connect

    Rogers, Adam; Fiege, Jason D.

    2011-02-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.

  11. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints.

    PubMed

    Liu, Ryan Wen; Shi, Lin; Yu, Simon Chun Ho; Xiong, Naixue; Wang, Defeng

    2017-03-03

    Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization, however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to directly solve through the commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or could be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee the superior imaging performance in terms of quantitative and visual image quality assessments.
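
    For orientation, the two sub-problems that appear in low-rank-plus-sparse splitting schemes have closed-form proximal solutions in the convex (nuclear-norm / l1) case, shown below. The paper itself uses non-convex surrogates of both penalties, so these operators are reference points rather than the proposed method.

    ```python
    # Baseline proximal operators for the convex counterparts of the two penalties.
    import numpy as np

    def soft_threshold(X, tau):
        """Prox of tau * ||X||_1: elementwise shrinkage (the sparsity sub-problem)."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def singular_value_threshold(X, tau):
        """Prox of tau * ||X||_*: shrink the singular values (the low-rank sub-problem)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt
    ```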

  12. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints

    PubMed Central

    Liu, Ryan Wen; Shi, Lin; Yu, Simon Chun Ho; Xiong, Naixue; Wang, Defeng

    2017-01-01

    Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization, however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to directly solve through the commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or could be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee the superior imaging performance in terms of quantitative and visual image quality assessments. PMID:28273827

  13. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background: Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results: In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions: Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
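
    The comparison-with-a-reference idea can be caricatured as below: each concentration is nudged away from, or back toward, a slowly adapting reference cocktail depending on whether the move improves the measured response. This is an illustrative reading of the abstract, not the published ARU pseudocode, and every name, constant and update rule here is hypothetical.

    ```python
    # Hypothetical reference-guided search loop (not the published ARU algorithm).
    import numpy as np

    def reference_guided_search(response, x0, step=0.1, alpha=0.2, iters=100, seed=0):
        """response(x) -> scalar to maximize; x holds concentrations in [0, 1]."""
        rng = np.random.default_rng(seed)
        x = np.clip(np.asarray(x0, float), 0.0, 1.0)
        ref = x.copy()
        best_resp = response(x)
        for _ in range(iters):
            direction = np.sign(x - ref)
            zero = direction == 0
            direction[zero] = rng.choice([-1.0, 1.0], size=zero.sum())
            cand = np.clip(x + step * direction, 0.0, 1.0)
            if response(cand) >= response(x):          # keep moving away from the reference
                x = cand
            else:                                      # otherwise step back toward it
                x = np.clip(x - step * direction, 0.0, 1.0)
            if response(x) > best_resp:                # reference adapts to good cocktails
                ref = (1.0 - alpha) * ref + alpha * x
                best_resp = response(x)
        return x
    ```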

  14. An adaptive N-body algorithm of optimal order

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Rudmin, Joseph W.; Lacy, Justin M.

    2003-05-01

    Picard iteration is normally considered a theoretical tool whose primary utility is to establish the existence and uniqueness of solutions to first-order systems of ordinary differential equations (ODEs). However, in 1996, Parker and Sochacki [Neural, Parallel, Sci. Comput. 4 (1996)] published a practical numerical method for a certain class of ODEs, based upon modified Picard iteration, that generates the Maclaurin series of the solution to arbitrarily high order. The applicable class of ODEs consists of first-order, autonomous systems whose right-hand side functions (generators) are projectively polynomial; that is, they can be written as polynomials in the unknowns. The class is wider than might be expected. The method is ideally suited to the classical N-body problem, which is projectively polynomial. Here, we recast the N-body problem in polynomial form and develop a Picard-based algorithm for its solution. The algorithm is highly accurate, parameter-free, and simultaneously adaptive in time and order. Test cases for both benign and chaotic N-body systems reveal that optimal order is dynamic. That is, in addition to dependency upon N and the desired accuracy, optimal order depends upon the configuration of the bodies at any instant.
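
    A minimal sketch of the Parker-Sochacki / modified-Picard idea for a projectively polynomial ODE: the Maclaurin coefficients of the solution follow from a simple Cauchy-product recurrence. The scalar example y' = y^2, y(0) = y0 (exact solution y0/(1 - y0 t)) stands in for the much longer polynomial recast of the N-body equations.

```python
# Maclaurin coefficients of the solution of y' = y^2, y(0) = y0, generated
# term by term: if y = sum a_k t^k, then a_{k+1} = (coeff of t^k in y^2)/(k+1).
import numpy as np

def maclaurin_coeffs(y0, order):
    a = np.zeros(order + 1)
    a[0] = y0
    for k in range(order):
        cauchy = sum(a[j] * a[k - j] for j in range(k + 1))  # coeff of t^k in y^2
        a[k + 1] = cauchy / (k + 1)                          # from y' = y^2
    return a

def horner(a, t):
    v = 0.0
    for c in a[::-1]:
        v = v * t + c
    return v

a = maclaurin_coeffs(1.0, 20)
print(horner(a, 0.5), 1.0 / (1.0 - 0.5))   # truncated series vs. exact solution
```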

  15. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
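
    For contrast with the constant-complexity formulation described above, the sketch below implements the baseline Gillespie direct method for a toy birth-death system; each step costs time proportional to the number of reaction channels, which is precisely the linear scaling the binned, table-based variant avoids. The rate constants and system are illustrative assumptions.

```python
# Baseline Gillespie direct-method SSA for a birth-death process.
import numpy as np

def ssa_birth_death(k_birth=1.0, k_death=0.1, x0=0, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])      # propensities of each channel
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)             # time to next reaction
        r = rng.uniform(0.0, a0)                   # choose the reaction channel
        x += 1 if r < a[0] else -1
        times.append(t)
        states.append(x)
    return times, states

times, states = ssa_birth_death()
print(len(times), states[-1])   # copy number fluctuates near k_birth/k_death = 10
```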

  16. Quantum-inspired immune clonal algorithm for global optimization.

    PubMed

    Jiao, Licheng; Li, Yangyang; Gong, Maoguo; Zhang, Xiangrong

    2008-10-01

    Based on the concepts and principles of quantum computing, a novel immune clonal algorithm, called a quantum-inspired immune clonal algorithm (QICA), is proposed to deal with the problem of global optimization. In QICA, the antibody is proliferated and divided into a set of subpopulation groups. The antibodies in a subpopulation group are represented by multistate gene quantum bits. In the antibody's updating, the general quantum rotation gate strategy and the dynamic adjusting angle mechanism are applied to accelerate convergence. The quantum not gate is used to realize quantum mutation to avoid premature convergences. The proposed quantum recombination realizes the information communication between subpopulation groups to improve the search efficiency. Theoretical analysis proves that QICA converges to the global optimum. In the first part of the experiments, 10 unconstrained and 13 constrained benchmark functions are used to test the performance of QICA. The results show that QICA performs much better than the other improved genetic algorithms in terms of the quality of solution and computational cost. In the second part of the experiments, QICA is applied to a practical problem (i.e., multiuser detection in direct-sequence code-division multiple-access systems) with a satisfying result.
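
    The qubit representation and rotation-gate update at the heart of quantum-inspired evolutionary algorithms can be sketched compactly. The code below is a generic illustration in that spirit (observation by sampling sin^2(theta), rotation toward the best observed bit string on a OneMax toy problem); it omits the clonal proliferation, quantum-not mutation and recombination operators that define QICA itself.

```python
# Generic quantum-inspired representation and rotation-gate update (not the
# full QICA machinery). Each gene is a qubit angle; observation collapses it
# to a bit with probability sin^2(theta).
import numpy as np

def onemax(bits):                       # toy fitness: count of ones
    return int(bits.sum())

def qiea(n_bits=30, pop=10, gens=100, delta=0.05 * np.pi, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.full((pop, n_bits), np.pi / 4)         # equal superposition
    best_bits, best_fit = None, -1
    for _ in range(gens):
        bits = (rng.random((pop, n_bits)) < np.sin(theta) ** 2).astype(int)
        fits = np.array([onemax(b) for b in bits])
        if fits.max() > best_fit:
            best_fit = int(fits.max())
            best_bits = bits[fits.argmax()].copy()
        # Rotate each qubit toward the corresponding bit of the best solution.
        direction = np.where(best_bits == 1, 1.0, -1.0)
        theta = np.clip(theta + delta * direction, 0.01, np.pi / 2 - 0.01)
    return best_bits, best_fit

print(qiea()[1])    # approaches 30 on the OneMax toy problem
```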

  17. Constant-complexity stochastic simulation algorithm with optimal binning.

    PubMed

    Sanft, Kevin R; Othmer, Hans G

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie's Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  18. GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.

    SciTech Connect

    D'Helon, CD

    2004-08-18

    The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be set as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.

  19. In-Space Radiator Shape Optimization using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael

    2006-01-01

    Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient and lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass. A small mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer, thus the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis) and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two design parameter schemes. This tool easily allows the user to manipulate the driving temperature regions thus permitting detailed design of in

  20. Optimal sliding guidance algorithm for Mars powered descent phase

    NASA Astrophysics Data System (ADS)

    Wibben, Daniel R.; Furfaro, Roberto

    2016-02-01

    Landing on large planetary bodies (e.g. Mars) with pinpoint accuracy presents a set of new challenges that must be addressed. One such challenge is the development of new guidance algorithms that exhibit a higher degree of robustness and flexibility. In this paper, the Zero-Effort-Miss/Zero-Effort-Velocity (ZEM/ZEV) optimal sliding guidance (OSG) scheme is applied to the Mars powered descent phase. This guidance algorithm has been specifically designed to combine techniques from both optimal and sliding control theories to generate an acceleration command based purely on the current estimated spacecraft state and desired final target state. Consequently, OSG yields closed-loop trajectories that do not need a reference trajectory. The guidance algorithm has its roots in the generalized ZEM/ZEV feedback guidance and its mathematical equations are naturally derived by defining a non-linear sliding surface as a function of the terms Zero-Effort-Miss and Zero-Effort-Velocity. With the addition of the sliding mode and using Lyapunov theory for non-autonomous systems, one can formally prove that the developed OSG law is globally finite-time stable to unknown but bounded perturbations. Here, the focus is on comparing the generalized ZEM/ZEV feedback guidance with the OSG law to explicitly demonstrate the benefits of the sliding mode augmentation. Results show that the sliding guidance provides a more robust solution in off-nominal scenarios while providing similar fuel consumption when compared to the non-sliding guidance command. Further, a Monte Carlo analysis is performed to examine the performance of the OSG law under perturbed conditions.
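
    The underlying ZEM/ZEV feedback terms are straightforward to write down for a constant-gravity powered-descent model: the classical energy-optimal command is a = 6*ZEM/t_go^2 - 2*ZEV/t_go. The sketch below computes one such command for an illustrative Mars-like state; the sliding-mode augmentation that distinguishes OSG, and any thrust limits, are not reproduced.

```python
# Classical ZEM/ZEV feedback guidance command under constant gravity.
import numpy as np

def zem_zev_accel(r, v, r_f, v_f, t_go, g=np.array([0.0, 0.0, -3.71])):
    """Acceleration command from current state (r, v) to target (r_f, v_f)."""
    zem = r_f - (r + v * t_go + 0.5 * g * t_go**2)   # zero-effort miss
    zev = v_f - (v + g * t_go)                        # zero-effort velocity error
    return 6.0 * zem / t_go**2 - 2.0 * zev / t_go

# One command for an illustrative descent state (positions in m, velocities in m/s).
r = np.array([1500.0, 0.0, 2000.0])
v = np.array([-75.0, 0.0, -100.0])
print(zem_zev_accel(r, v, r_f=np.zeros(3), v_f=np.zeros(3), t_go=40.0))
```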

  1. A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems.

    PubMed

    Duan, Hai-Bin; Xu, Chun-Fang; Xing, Zhi-Hui

    2010-02-01

    In this paper, a novel hybrid Artificial Bee Colony (ABC) and Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can jump out of the premature convergence and find the optimal value. To show the performance of our proposed hybrid QEA with ABC, a number of experiments are carried out on a set of well-known Benchmark continuous optimization problems and the related results are compared with two other QEAs: the QEA with classical crossover operation, and the QEA with 2-crossover strategy. The experimental comparison results demonstrate that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems.

  2. Parallel global optimization with the particle swarm algorithm.

    PubMed

    Schutte, J F; Reinbolt, J A; Fregly, B J; Haftka, R T; George, A D

    2004-12-07

    Present-day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima: large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available.
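
    A minimal synchronous parallel-PSO skeleton illustrating the coarse-grained scheme discussed above: at every iteration all particle fitness evaluations are farmed out to a worker pool, and the swarm update waits for the slowest evaluation (the synchronization cost identified by the authors). The Rastrigin function stands in for an expensive objective; swarm size, coefficients and worker count are illustrative assumptions.

```python
# Synchronous coarse-grained parallel PSO sketch using a process pool.
import numpy as np
from multiprocessing import Pool

def rastrigin(x):
    return float(10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def pso(dim=12, n_particles=32, iters=50, seed=0, workers=4):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
    gbest, gbest_f = None, np.inf
    with Pool(workers) as pool:
        for _ in range(iters):
            f = np.array(pool.map(rastrigin, x))      # parallel fitness evaluations
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            if pbest_f.min() < gbest_f:
                gbest_f = pbest_f.min()
                gbest = pbest[pbest_f.argmin()].copy()
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (gbest - x)
            x = x + v
    return gbest_f

if __name__ == "__main__":
    print(pso())
```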

  3. Robust Utility Maximization Under Convex Portfolio Constraints

    SciTech Connect

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-04-15

    We study a robust maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  4. A comparison of three optimization algorithms for intensity modulated radiation therapy.

    PubMed

    Pflugfelder, Daniel; Wilkens, Jan J; Nill, Simeon; Oelfke, Uwe

    2008-01-01

    In intensity modulated treatment techniques, the modulation of each treatment field is obtained using an optimization algorithm. Multiple optimization algorithms have been proposed in the literature, e.g. steepest descent, conjugate gradient, and quasi-Newton methods, to name a few. The standard optimization algorithm in our in-house inverse planning tool KonRad is a quasi-Newton algorithm. Although this algorithm yields good results, it also has some drawbacks. We therefore implemented an improved optimization algorithm based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) routine. In this paper the improved optimization algorithm is described. To compare the two algorithms, several treatment plans were optimized using both algorithms, including photon (IMRT) as well as proton (IMPT) intensity modulated therapy treatment plans. To present the results in a larger context, the widely used conjugate gradient algorithm was also included in this comparison. On average, the improved optimization algorithm was six times faster to reach the same objective function value. However, the benefit is not limited to an acceleration of the optimization: due to the faster convergence, the improved optimization algorithm usually terminates the optimization process at a lower objective function value. The average observed improvement in the objective function value was 37%. This improvement is clearly visible in the corresponding dose-volume histograms. The benefit of the improved optimization algorithm is particularly pronounced in proton therapy plans. The conjugate gradient algorithm ranked between the other two algorithms, with an average speedup factor of two and an average improvement of the objective function value of 30%.
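
    As a hedged illustration of the optimizer class compared above, the sketch below applies SciPy's limited-memory BFGS to a toy quadratic "dose" objective of the general fluence-map form minimize ||Dx - d||^2 with x >= 0. The influence matrix D and prescription d are synthetic stand-ins; this is not the KonRad planning system.

```python
# L-BFGS-B on a toy least-squares fluence-map objective with nonnegativity bounds.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
D = rng.random((200, 50))            # dose-influence matrix (voxels x beamlets)
d_presc = rng.random(200)            # prescribed dose per voxel

def objective(x):
    r = D @ x - d_presc
    return float(r @ r), 2.0 * D.T @ r    # objective value and gradient

res = minimize(objective, x0=np.zeros(50), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * 50)
print(res.fun, res.nit)
```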

  5. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human

  6. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  7. Optimization of heterogeneous Bin packing using adaptive genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sridhar, R.; Chandrasekaran, M.; Sriramya, C.; Page, Tom

    2017-03-01

    This research concentrates on a very interesting problem, bin packing using a hybrid genetic approach. The optimal and feasible packing of goods for transportation and distribution to various locations, while satisfying the practical constraints, is the key point of this work, since the number of boxes to be packed cannot be predicted in advance and the boxes are not always of the same category. Many practical constraints are involved, which is why optimal packing is of great importance to industry. This work presents a combination heuristic Genetic Algorithm (HGA) for solving the Three Dimensional (3D) single-container, arbitrary-sized, rectangular prismatic bin packing optimization problem while considering most of the practical constraints faced in logistics industries. This goal was achieved by minimizing the empty volume inside the container using a genetic approach. A feasible packing pattern was achieved by satisfying various practical constraints such as box orientation, stack priority, container stability, weight constraint, overlapping constraint and shipment placement constraint. The 3D bin packing problem consists of ‘n’ boxes to be packed into a container of standard dimensions in such a way as to maximize the volume utilization and, in turn, profit. Furthermore, the boxes to be packed may be of arbitrary sizes. The user input data are the number of bins, their size, shape, weight, and constraints if any, along with the standard container dimensions. These user inputs were stored in the database and encoded to string (chromosome) format normally acceptable to a GA. GA operators were then allowed to act on these encoded strings to find the best solution.

  8. GenMin: An enhanced genetic algorithm for global optimization

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Lagaris, I. E.

    2008-06-01

    A new method that employs grammatical evolution and a stopping rule for finding the global minimum of a continuous multidimensional, multimodal function is considered. The genetic algorithm used is a hybrid genetic algorithm in conjunction with a local search procedure. We list results from numerical experiments with a series of test functions and we compare with other established global optimization methods. The accompanying software accepts objective functions coded either in Fortran 77 or in C++. Program summary: Program title: GenMin. Catalogue identifier: AEAR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 35 810. No. of bytes in distributed program, including test data, etc.: 436 613. Distribution format: tar.gz. Programming language: GNU-C++, GNU-C, GNU Fortran 77. Computer: The tool is designed to be portable in all systems running the GNU C++ compiler. Operating system: The tool is designed to be portable in all systems running the GNU C++ compiler. RAM: 200 KB. Word size: 32 bits. Classification: 4.9. Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a nonlinear system of equations via optimization, employing a least squares type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero). Solution method: Grammatical evolution and a stopping rule. Running time: Depending on the

  9. An optimization-based iterative algorithm for recovering fluorophore location

    NASA Astrophysics Data System (ADS)

    Yi, Huangjian; Peng, Jinye; Jin, Chen; He, Xiaowei

    2015-10-01

    Fluorescence molecular tomography (FMT) is a non-invasive technique that allows three-dimensional visualization of fluorophores in vivo in small animals. In practical applications of FMT, however, image reconstruction is challenging since it is a highly ill-posed problem due to the diffusive behaviour of light transport in tissue and the limited measurement data. In this paper, we present an iterative algorithm based on an optimization problem for three-dimensional reconstruction of a fluorescent target. The method alternates the weighted algebraic reconstruction technique (WART) with the steepest descent method (SDM) for image reconstruction. Numerical simulation experiments and a physical phantom experiment were performed to validate our method. Furthermore, compared to the conjugate gradient method, the proposed method provides better three-dimensional (3D) localization of the fluorescent target.

  10. Chiral metamaterial design using optimized pixelated inclusions with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Akturk, Cemal; Karaaslan, Muharrem; Ozdemir, Ersin; Ozkaner, Vedat; Dincer, Furkan; Bakir, Mehmet; Ozer, Zafer

    2015-03-01

    Chiral metamaterials have attracted considerable research interest due to their ability to rotate the polarization of electromagnetic waves. However, most of the proposed chiral metamaterials are designed based on experience or on time-consuming, inefficient simulations. A method is investigated for designing a chiral metamaterial with a strong and natural chirality admittance by optimizing a grid of metallic pixels on both sides of a dielectric sheet placed perpendicular to the incident wave, using a genetic algorithm (GA) technique based on a finite element method solver. The effective medium parameters are obtained by using constitutive equations and S parameters. The proposed methodology is very efficient for designing a chiral metamaterial with the desired effective medium parameters. By using the GA-based topology, it is proven that a chiral metamaterial can be designed and manufactured more easily and at lower cost.

  11. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    PubMed

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization for continuous domains is a major research direction for ant colony optimization algorithms. In this paper, we propose a position distribution model of ant colony foraging, derived from an analysis of the relationship between the position distribution and the food source during ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules for the ant colony positions, and the method for handling constraint conditions. The algorithm's performance was evaluated on a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared and analyzed against those of other algorithms to verify the correctness and effectiveness of the proposed algorithm.
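
    One common way to run ant colony optimization on continuous domains is to sample new solutions from Gaussian kernels centred on an archive of good solutions (in the spirit of ACO_R). The sketch below illustrates that general mechanism on the sphere test function; it does not reproduce the authors' position distribution model, pheromone model or constraint handling.

```python
# Continuous-domain ACO sketch: sample around an elitist solution archive.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def continuous_aco(dim=5, archive=10, ants=20, iters=200, q=0.3, seed=0):
    rng = np.random.default_rng(seed)
    sols = rng.uniform(-5, 5, (archive, dim))
    fits = np.array([sphere(s) for s in sols])
    for _ in range(iters):
        order = np.argsort(fits)
        sols, fits = sols[order], fits[order]
        sigma = np.std(sols, axis=0) + 1e-12          # spread of the archive
        new = []
        for _ in range(ants):
            k = rng.integers(0, max(1, int(q * archive)))  # bias toward best rows
            new.append(rng.normal(sols[k], sigma))         # sample around it
        new = np.array(new)
        new_fits = np.array([sphere(s) for s in new])
        both, both_f = np.vstack([sols, new]), np.concatenate([fits, new_fits])
        keep = np.argsort(both_f)[:archive]
        sols, fits = both[keep], both_f[keep]
    return fits[0]

print(continuous_aco())   # approaches 0 on the sphere test function
```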

  12. Convex weighting criteria for speaking rate estimation

    PubMed Central

    Jiao, Yishan; Berisha, Visar; Tu, Ming; Liss, Julie

    2015-01-01

    Speaking rate estimation directly from the speech waveform is a long-standing problem in speech signal processing. In this paper, we pose the speaking rate estimation problem as that of estimating a temporal density function whose integral over a given interval yields the speaking rate within that interval. In contrast to many existing methods, we avoid the more difficult task of detecting individual phonemes within the speech signal and we avoid heuristics such as thresholding the temporal envelope to estimate the number of vowels. Rather, the proposed method aims to learn an optimal weighting function that can be directly applied to time-frequency features in a speech signal to yield a temporal density function. We propose two convex cost functions for learning the weighting functions and an adaptation strategy to customize the approach to a particular speaker using minimal training. The algorithms are evaluated on the TIMIT corpus, on a dysarthric speech corpus, and on the ICSI Switchboard spontaneous speech corpus. Results show that the proposed methods outperform three competing methods on both healthy and dysarthric speech. In addition, for spontaneous speech rate estimation, the results show a high correlation between the estimated speaking rate and ground truth values. PMID:26167516

  13. Convex recoloring as an evolutionary marker.

    PubMed

    Frenkel, Zeev; Kiat, Yosef; Izhaki, Ido; Snir, Sagi

    2017-02-01

    With the availability of enormous quantities of genetic data it has become common to construct very accurate trees describing the evolutionary history of the species under study, as well as of every single gene of these species. These trees allow us to examine the evolutionary compliance of given markers (characters). A marker compliant with the history of the species investigated has undergone mutations along the species tree branches, such that every subtree of that tree exhibits a different state. Convex recoloring (CR) uses a combinatorial representation to measure the adequacy of a taxonomic classifier to a given tree. Despite its biological origins, research on CR has been almost exclusively dedicated to mathematical properties of the problem, or to variants of it with little, if any, relationship to taxonomy. In this work we return to the origins of CR. We put CR in a statistical framework and introduce and learn the notion of the statistical significance of a character. We apply this measure to two data sets - Passerine birds and prokaryotes - and four examples. These examples demonstrate various applications of CR, from evolutionary relatedness, through lateral evolution, to supertree construction. The above study was done with new software that we provide, containing algorithmic improvements and graphical output of an (optimally) recolored tree.

  14. A comparison of various optimization algorithms of protein-ligand docking programs by fitness accuracy.

    PubMed

    Guo, Liyong; Yan, Zhiqiang; Zheng, Xiliang; Hu, Liang; Yang, Yongliang; Wang, Jin

    2014-07-01

    In protein-ligand docking, an optimization algorithm is used to find the best binding pose of a ligand against a protein target. This algorithm plays a vital role in determining the docking accuracy. To evaluate the relative performance of different optimization algorithms and provide guidance for real applications, we performed a comparative study on six efficient optimization algorithms, comprising two evolutionary algorithm (EA)-based optimizers (LGA, DockDE) and four particle swarm optimization (PSO)-based optimizers (SODock, varCPSO, varCPSO-ls, FIPSDock), which were implemented in the protein-ligand docking program AutoDock. We unified the objective functions by applying the same scoring function, and built a new fitness accuracy as the evaluation criterion that incorporates optimization accuracy, robustness, and efficiency. The varCPSO and varCPSO-ls algorithms show high efficiency with fast convergence speed. However, their accuracy is not optimal, as they cannot reach very low energies. SODock has the highest accuracy and robustness. In addition, SODock shows good performance in efficiency when optimizing drug-like ligands with less than ten rotatable bonds. FIPSDock shows excellent robustness and is close to SODock in accuracy and efficiency. In general, the four PSO-based algorithms show superior performance to the two EA-based algorithms, especially for highly flexible ligands. Our method can be regarded as a reference for the validation of new optimization algorithms in protein-ligand docking.

  15. Convex Formulations of Learning from Crowds

    NASA Astrophysics Data System (ADS)

    Kajino, Hiroshi; Kashima, Hisashi

    The use of crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since such services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning, that is, coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches, namely (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.

  16. Multi-label Moves for MRFs with Truncated Convex Priors

    NASA Astrophysics Data System (ADS)

    Veksler, Olga

    Optimization with graph cuts has become very popular in recent years. As more applications rely on graph cuts, different energy functions are being employed. A recent evaluation of optimization algorithms showed that the widely used swap and expansion graph cut algorithms have excellent performance for energies where the underlying MRF has a Potts prior. The Potts prior corresponds to assuming that the true labeling is piecewise constant. While surprisingly useful in practice, the Potts prior is clearly not appropriate in many circumstances. However, for more general priors, the swap and expansion algorithms do not perform as well. Both algorithms are based on moves that give each pixel a choice of only two labels; therefore such moves can be referred to as binary moves. Recently, range moves that act on multiple labels simultaneously were introduced. As opposed to swap and expansion, each pixel has a choice of more than two labels in a range move; therefore we call them multi-label moves. Range moves were shown to work better for problems with truncated convex priors, which imply a piecewise smooth labeling. Inspired by range moves, we develop several different variants of multi-label moves. We evaluate them on the problem of stereo correspondence and discuss their relative merits.

  17. Source mask optimization using real-coded genetic algorithms

    NASA Astrophysics Data System (ADS)

    Yang, Chaoxing; Wang, Xiangzhao; Li, Sikun; Erdmann, Andreas

    2013-04-01

    Source mask optimization (SMO) is considered to be one of the technologies to push conventional 193nm lithography to its ultimate limits. In comparison with other SMO methods that use an inverse problem formulation, SMO based on a genetic algorithm (GA) requires very little knowledge of the process and has the advantage of flexible problem formulation. Recent publications on SMO using a GA employ a binary-coded GA. In general, the performance of a GA depends not only on the merit or fitness function, but also on the parameters, operators and their algorithmic implementation. In this paper, we propose an SMO method using a real-coded GA where the source and mask solutions are represented by floating point strings instead of bit strings. In addition, the selection, crossover, and mutation operators are replaced by corresponding floating-point versions. Both binary-coded and real-coded genetic algorithms were implemented in two versions of SMO and compared in numerical experiments, where the target patterns are staggered contact holes and a logic pattern with critical dimensions of 100 nm, respectively. The results demonstrate the performance improvement of the real-coded GA in comparison to the binary-coded version. Specifically, these improvements can be seen in a better convergence behavior. For example, the numerical experiments for the logic pattern showed that the average number of generations to converge to a proper fitness of 6.0 using the real-coded method is 61.8% (100 generations) less than that using the binary-coded method.

  18. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.

    PubMed

    Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
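
    A generic PSO loop with an exponentially decaying inertia weight, included to make the role of the inertia-weight schedule concrete. The schedule w(t) = w_start * (w_end/w_start)^(t/T) and the values of w_start and w_end below are illustrative assumptions, not the paper's FEIW parameterization.

```python
# PSO with an exponential inertia-weight schedule on the sphere function.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def pso_exp_inertia(dim=10, n=30, iters=300, w_start=0.9, w_end=0.4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        w = w_start * (w_end / w_start) ** (t / (iters - 1))  # exponential decay
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
        f = np.array([sphere(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return sphere(g)

print(pso_exp_inertia())
```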

  19. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm

    PubMed Central

    Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO’s parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945

  20. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  1. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    SciTech Connect

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  2. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

    Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize shallow and deep implicit constraint knowledge to guide the optimal search of the genetic algorithm in a cyclic manner. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more remarkably than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, which combines multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.

  3. An Ant Colony Optimization and Hybrid Metaheuristics Algorithm to Solve the Split Delivery Vehicle Routing Problem

    DTIC Science & Technology

    2015-01-01

    Optimization and 2) hybrid metaheuristics algorithm comprising a combination of ACO, Genetic Algorithm (GA) and heuristics are proposed and tested on...Optimization, Split Delivery Vehicle Routing Problem, Genetic Algorithm 1. Introduction The Vehicle Routing Problem (VRP) is a prominent problem in the areas...several heuristic methods have been applied to solve the SDVRP, such as a construction heuristic (Wilck and Cavalier, 2012a), a genetic algorithm (Wilck

  4. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.

  5. Aerodynamics Design and Genetic Algorithms for Optimization of Airship Bodies

    NASA Astrophysics Data System (ADS)

    Nejati, Vahid; Matsuuchi, Kazuo

    A special and effective aerodynamics calculation method has been applied to the flow field around a body of revolution to find the drag coefficient for a wide range of Reynolds numbers. The body profile is described by a first-order continuous axial singularity distribution. The solution of the direct problem then gives the radius and inviscid velocity distribution. Viscous effects are considered by means of an integral boundary layer procedure, and for determination of the transition location the forced transition criterion is applied. By avoiding those profiles which result in separation of the boundary layer, the drag can be calculated at the end of the body by using Young's formula. In this study, a powerful optimization procedure known as Genetic Algorithms (GA) is used for the first time in the shape optimization of airship hulls. GA represents a particular artificial intelligence technique for searching large spaces, striking a remarkable balance between exploration and exploitation of the search space. This method can reach the minimum of the objective function along a better path and can minimize the drag coefficient faster for different Reynolds number regimes. It was found that GA is a powerful method for such multi-dimensional, multi-modal and nonlinear objective functions.

  6. Stochastic optimization algorithm for inverse modeling of air pollution

    NASA Astrophysics Data System (ADS)

    Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant

    2016-11-01

    A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, a smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, the generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of the gPC basis and the Gaussian kernel provides hierarchical basis functions for a spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem. We propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data points (n), m/n > 50.

  7. Development and applications of various optimization algorithms for diesel engine combustion and emissions optimization

    NASA Astrophysics Data System (ADS)

    Ogren, Ryan M.

    In this work, hybrid PSO-GA and Artificial Bee Colony (ABC) optimization algorithms are applied to the optimization of experimental diesel engine performance, to meet Environmental Protection Agency off-road diesel engine standards. This work is the first to apply ABC optimization to experimental engine testing. All trials were conducted at partial load on a four-cylinder, turbocharged, John Deere engine using neat biodiesel for PSO-GA and regular pump diesel for ABC. Key variables were altered throughout the experiments, including fuel pressure, intake gas temperature, exhaust gas recirculation flow, fuel injection quantity for two injections, pilot injection timing and main injection timing. Both forms of optimization proved effective for optimizing engine operation. The PSO-GA hybrid was able to find a superior solution to that of ABC within fewer engine runs. Both solutions call for high exhaust gas recirculation to reduce oxides of nitrogen (NOx) emissions while also moving pilot and main fuel injections to near top dead center for improved tradeoffs between NOx and particulate matter.

  8. Optimizations Of Coat-Hanger Die, Using Constraint Optimization Algorithm And Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lebaal, Nadhir; Schmidt, Fabrice; Puissant, Stephan

    2007-05-01

    Polymer extrusion is one of the most important manufacturing methods used today. A flat die is commonly used to extrude thin thermoplastic sheets. If the channel geometry in a flat die is not designed properly, the velocity at the die exit may be perturbed, which can affect the thickness across the width of the die. The ultimate goal of this work is to optimize the die channel geometry in such a way that a uniform velocity distribution is obtained at the die exit. While optimizing the exit velocity distribution, we have coupled the three-dimensional extrusion simulation software Rem3D® with an automatic constrained optimization algorithm to control the maximum allowable pressure drop in the die; according to this constraint we can control the pressure in the die (decrease the pressure while minimizing the velocity dispersion across the die exit). For this purpose, we investigate the effect of the design variables on the objective and constraint functions by using the Taguchi method. In the second study we use the global response surface method with Kriging interpolation to optimize the flat die geometry. Two optimization results are presented according to the imposed constraint on the pressure. The optimum is obtained with very fast convergence (2 iterations). To respect the constraint while ensuring a homogeneous distribution of velocity, the results with a less severe constraint offer the best minimum.

  9. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure; yet currently there is no common and universal algorithm for effective dose estimation. In this work, flexible and adaptive algorithm building methodology was developed and some specific algorithm applicable for typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative compared to other known algorithms.

  10. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  11. Optimized Uncertainty Quantification Algorithm Within a Dynamic Event Tree Framework

    SciTech Connect

    J. W. Nielsen; Akira Tokuhiro; Robert Hiromoto

    2014-06-01

    Methods for developing Phenomenological Identification and Ranking Tables (PIRT) for nuclear power plants have been a useful tool in providing insight into modelling aspects that are important to safety. These methods have involved expert knowledge with regard to reactor plant transients and thermal-hydraulic codes to identify areas of highest importance. Quantified PIRT provides a rigorous method for quantifying the phenomena that can have the greatest impact. The transients that are evaluated and the timing of those events are typically developed in collaboration with the Probabilistic Risk Analysis. Though quite effective in evaluating risk, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of the transition time from physical state to physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. A limitation of DPRA is its potential for state or combinatorial explosion, which grows as a function of the number of components as well as the sampling of transition times from state to state of the entire system. This paper presents a method for performing QPIRT within a dynamic event tree framework such that timing events which result in the highest probabilities of failure are captured and a QPIRT is performed simultaneously while performing a discrete dynamic event tree evaluation. The resulting simulation yields a formal QPIRT for each end state. The use of dynamic event trees results in state explosion as the number of possible component states increases. This paper utilizes a branch-and-bound algorithm to optimize the solution of the dynamic event trees. The paper summarizes the methods used to implement the branch-and-bound algorithm in solving the discrete dynamic event trees.

  12. The optimal extraction of feature algorithm based on KAZE

    NASA Astrophysics Data System (ADS)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    As a novel 2D feature extraction algorithm operating over a nonlinear scale space, KAZE provides a distinctive approach. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than those of SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through the efficient Additive Operator Splitting (AOS) technique and variable conductance diffusion. Changing the parameter can improve the construction of the nonlinear scale space and simplify the image conductivities for each dimension of the space, with reduced computation. The detection of points of interest then exhibits a maximum of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the detection of feature vectors is optimized by the wavelet transform method, which avoids the second Gaussian smoothing of the KAZE features and distinctly cuts down the complexity of the algorithm in the vector building and description steps. In this way, the dominant orientation is obtained, similarly to SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ with a sampling step of size σ. Finally, the extraction of the multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing the description dimensions, just as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, compared to SURF the results reveal a step forward in performance in detection, description and application over previous approaches, as shown by the contrast experiments.
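
    OpenCV ships an implementation of KAZE, so the baseline detection/description pipeline discussed above can be exercised in a few lines. The example below runs cv2.KAZE_create on a synthetic grayscale image; the wavelet-based descriptor optimization and PCA-style dimensionality reduction proposed in this work are not part of that call.

```python
# Baseline KAZE detection and description with OpenCV.
import cv2
import numpy as np

# Synthetic grayscale test image (any 8-bit single-channel image works).
img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (128, 128), 60, 255, -1)
cv2.rectangle(img, (30, 30), (90, 90), 180, -1)

kaze = cv2.KAZE_create()
keypoints, descriptors = kaze.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```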

  13. Convex Graph Invariants

    DTIC Science & Technology

    2010-12-02

    evaluating the function Θ_P(A) for any fixed A, P is equivalent to solving the so-called Quadratic Assignment Problem (QAP), and thus we can employ various...tractable linear programming, spectral, and SDP relaxations of QAP [40, 11, 33]. In particular we discuss recent work [14] on exploiting group...symmetry in SDP relaxations of QAP, which is useful for approximately computing elementary convex graph invariants in many interesting cases. Finally in

  14. SOPRA: Scaffolding algorithm for paired reads via statistical optimization

    PubMed Central

    2010-01-01

    Background: High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. Results: We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs, with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generations of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on an equal footing when solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing constraints is iterated until a core set of consistent constraints is reached. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space. For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various

  15. Genetics algorithm optimization of DWT-DCT based image Watermarking

    NASA Astrophysics Data System (ADS)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

    Data hiding in image content is mandatory for establishing ownership of the image. Two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, 2D-DWT transforms the selected layer, obtaining 4 subbands, of which only one is selected; block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after a zigzag scan and range-based coefficient selection. A delta parameter replacing the coefficients in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters to be optimized by the Genetic Algorithm (GA) are the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. The simulation results show that GA is able to determine the exact parameters giving optimum imperceptibility and robustness for any watermarked image condition, whether attacked or not. The DWT stage in DCT-based image watermarking, optimized by GA, has improved the performance of image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER = 0, and the watermarked image quality measured by PSNR is also increased by about 5 dB over the previous method.
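    A much-simplified sketch of the +/-delta idea is given below: one watermark bit per 8x8 block of a DWT subband, written into a single mid-frequency DCT coefficient. The zigzag scan, range-based selection, color-space choice and GA tuning are omitted, and the delta value and coefficient index are arbitrary; PyWavelets and SciPy are assumed.

        # Simplified +/-delta embedding: one bit per 8x8 block of the LH (horizontal detail)
        # subband, written into an arbitrary mid-frequency DCT coefficient.
        import numpy as np
        import pywt
        from scipy.fft import dctn, idctn

        def embed(gray, bits, delta=8.0):
            cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'haar')
            k = 0
            for i in range(0, cH.shape[0] - 7, 8):
                for j in range(0, cH.shape[1] - 7, 8):
                    if k >= len(bits):
                        break
                    block = dctn(cH[i:i+8, j:j+8], norm='ortho')
                    block[2, 1] = delta if bits[k] else -delta   # +delta -> bit "1", -delta -> bit "0"
                    cH[i:i+8, j:j+8] = idctn(block, norm='ortho')
                    k += 1
            return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

        watermarked = embed(np.random.randint(0, 256, (64, 64)), [1, 0, 1, 1])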

  16. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  17. A hybrid approach using chaotic dynamics and global search algorithms for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Igeta, Hideki; Hasegawa, Mikio

    Chaotic dynamics have been effectively applied to improve various heuristic algorithms for combinatorial optimization problems in many studies. Currently, the most widely used chaotic optimization scheme drives heuristic solution search algorithms applicable to large-scale problems by chaotic neurodynamics that include the tabu effect of tabu search. Alternatively, meta-heuristic algorithms are used for combinatorial optimization by combining a neighboring solution search algorithm, such as tabu search, gradient descent, or another local search method, with a global search algorithm, such as genetic algorithms (GA), ant colony optimization (ACO), or others. Among these hybrid approaches, ACO has effectively optimized the solutions of many benchmark problems in the quadratic assignment problem library. In this paper, we propose a novel hybrid method that combines a chaotic search algorithm, which performs better than tabu search, with global search algorithms such as ACO and GA. Our results show that the proposed chaotic hybrid algorithm has better performance than the conventional chaotic search and conventional hybrid algorithms. In addition, we show that the chaotic search algorithm combined with ACO performs better than when combined with GA.

  18. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the breeding criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected as the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of global search capability and convergence rate.
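    The canonical QPSO position update underlying algorithms of this family is sketched below (without the elitist breeding or transposon operators of EB-QPSO); the contraction-expansion coefficient value of 0.75 is an arbitrary illustration choice.

        # Canonical QPSO position update (no elitist breeding / transposon operators here).
        # pbest: (n, dim) personal bests, gbest: (dim,) global best, beta: contraction-expansion
        # coefficient (0.75 is an arbitrary value for illustration).
        import numpy as np

        def qpso_step(x, pbest, gbest, beta=0.75, rng=np.random.default_rng()):
            n, dim = x.shape
            phi = rng.random((n, dim))
            p = phi * pbest + (1.0 - phi) * gbest          # per-dimension local attractor
            mbest = pbest.mean(axis=0)                     # mean of the personal bests
            u = rng.random((n, dim))
            sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
            return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

        x = qpso_step(np.zeros((5, 2)), pbest=np.ones((5, 2)), gbest=np.ones(2))
        print(x)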

  19. Optimization on robot arm machining by using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Tung-Kuan; Chen, Chiu-Hung; Tsai, Shang-En

    2007-12-01

    In this study, an optimization problem on robot arm machining is formulated and solved using genetic algorithms (GAs). The proposed approach adopts a direct kinematics model and utilizes the GA's global search ability to find the optimum solution. The direct kinematics equations of the robot arm are formulated and can be used to compute the end-effector coordinates. Based on these, the fitness for machining along a set of points is evaluated evolutionarily from the distance between the machining points and the end-effector positions. In addition, a 3D CAD application, CATIA, is used to build 3D models of the robot arm, workpieces and their components. A simulated experiment in CATIA first verifies the computation results, and practical control of the robot arm through the RS232 port is also performed. The results show the approach to be robust and suitable for most machining needs when robot arms are adopted as machining tools.
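    A toy version of the fitness idea is shown below: planar three-link forward kinematics and a distance-based cost that a GA could minimize. The link lengths and machining target points are made-up illustration values, not the paper's CATIA-based robot model.

        # Simplified planar 3-link forward kinematics and a distance-based GA fitness.
        import numpy as np

        LINKS = np.array([0.4, 0.3, 0.2])                  # hypothetical link lengths [m]

        def end_effector(thetas):
            angles = np.cumsum(thetas)                     # absolute joint angles
            return np.array([np.sum(LINKS * np.cos(angles)),
                             np.sum(LINKS * np.sin(angles))])

        def fitness(joint_trajectory, targets):
            # Sum of distances between each machining point and the corresponding end-effector pose.
            return sum(np.linalg.norm(end_effector(q) - t)
                       for q, t in zip(joint_trajectory, targets))

        targets = np.array([[0.6, 0.3], [0.5, 0.4]])
        trajectory = np.radians([[10, 20, 30], [15, 25, 35]])
        print(fitness(trajectory, targets))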

  20. Optimizing the lithography model calibration algorithms for NTD process

    NASA Astrophysics Data System (ADS)

    Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2016-03-01

    As patterns shrink to the resolution limits of up-to-date ArF immersion lithography technology, the negative tone development (NTD) process has been an increasingly adopted technique for obtaining superior imaging quality, employing bright-field (BF) masks to print the critical dark-field (DF) metal and contact layers. However, from the fundamental materials and process interaction perspectives, several key differences inherently exist between the NTD process and the traditional positive tone development (PTD) system, especially the horizontal/vertical resist shrinkage and developer depletion effects; hence the traditional resist parameters developed for the typical PTD process no longer fit well in NTD process modeling. In order to cope with the inherent differences between the PTD and NTD processes and thereby improve NTD modeling accuracy, several NTD models with different combinations of complementary terms were built to account for the NTD-specific resist shrinkage, developer depletion and diffusion, and the wafer CD jump induced by sub-resolution assist feature (SRAF) effects. Each new complementary NTD term has a definite aim in dealing with the NTD-specific phenomena. In this study, the modeling accuracy is compared among the different models for the specific patterning characteristics of various feature types. Multiple complementary NTD terms were finally proposed to address all the NTD-specific behaviors simultaneously and further optimize the NTD modeling accuracy. The new algorithm with multiple complementary NTD terms, tested on our critical dark-field layers, demonstrates consistent model accuracy improvement for both calibration and verification.

  1. Optimal placement of active material actuators using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Johnson, Terrence; Frecker, Mary I.

    2004-07-01

    Actuators based on smart materials generally exhibit a tradeoff between force and stroke. Researchers have surrounded piezoelectric materials (PZTs) with compliant structures to magnify either their geometric or mechanical advantage. Most of these designs are literally built around a particular piezoelectric device, so the design space consists only of the compliant mechanism. Materials researchers have demonstrated the ability to pole a PZT in an arbitrary direction, and some engineers have taken advantage of this to build "shear mode" actuators. The goal of this work is to determine whether the performance of compliant mechanisms improves when the piezoelectric polarization is included as a design variable. The polarization vector is varied via transformation matrices, and the compliant actuator is modeled using SIMP (Solid Isotropic Material with Penalization), or the "power-law method." The concept of mutual potential energy is used to form an objective function measuring the piezoelectric actuator's performance. The optimal topology of the compliant mechanism and the orientation of the polarization vector are determined using a sequential linear programming algorithm. This paper presents a demonstration problem showing that small changes in the polarization vector have a marginal effect on the optimum topology of the mechanism but improve actuation.

  2. Design Genetic Algorithm Optimization Education Software Based Fuzzy Controller for a Tricopter Fly Path Planning

    ERIC Educational Resources Information Center

    Tran, Huu-Khoa; Chiou, Juing -Shian; Peng, Shou-Tao

    2016-01-01

    In this paper, Genetic Algorithm Optimization (GAO) education software based on a Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is designed and its feasibility examined. The generated flight trajectories integrate the Scaling Factor (SF) fuzzy controller gains optimized by the GAO algorithm. The…

  3. Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    DTIC Science & Technology

    2004-09-01

    optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is...provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide...

  4. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: the whole scheduling problem is divided into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each sub-scheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the presented discrete bat algorithm for the permutation flow shop scheduling problem.
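    The standard NEH constructive heuristic referenced above is short enough to sketch in full; the processing-time matrix below is a small made-up example, not one of the paper's benchmark instances.

        # Standard NEH heuristic for the permutation flow shop (makespan objective).
        import numpy as np

        def makespan(seq, p):                               # p[job][machine]
            m = p.shape[1]
            c = np.zeros(m)
            for job in seq:
                for k in range(m):
                    c[k] = max(c[k], c[k - 1] if k else 0.0) + p[job, k]
            return c[-1]

        def neh(p):
            order = np.argsort(-p.sum(axis=1))              # jobs by decreasing total processing time
            seq = [order[0]]
            for job in order[1:]:                           # insert each job at its best position
                candidates = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
                seq = min(candidates, key=lambda s: makespan(s, p))
            return seq, makespan(seq, p)

        p = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]], dtype=float)
        print(neh(p))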

  5. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of the elite solution pool and the block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions and is then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed improved ABC algorithm outperforms the original ABC algorithm on most of the tested problems.
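    For context, the baseline ABC neighbour-generation step that the paper's new search equation replaces is sketched below (this is the standard formulation, not the elite-pool / block-perturbation variant).

        # Standard ABC neighbour-generation step: v_ij = x_ij + phi * (x_ij - x_kj).
        import numpy as np

        def abc_candidate(i, X, rng=np.random.default_rng()):
            n, dim = X.shape
            k = rng.choice([idx for idx in range(n) if idx != i])   # a different food source
            j = rng.integers(dim)                                   # one randomly chosen dimension
            phi = rng.uniform(-1.0, 1.0)
            v = X[i].copy()
            v[j] = X[i, j] + phi * (X[i, j] - X[k, j])
            return v

        X = np.random.rand(10, 4)
        print(abc_candidate(0, X))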

  6. Convex relaxations for gas expansion planning

    SciTech Connect

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; Hijazi, Hassan; Van Hentenryck, Pascal

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
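    One ingredient of such relaxations, the McCormick envelope of a bilinear term, is easy to illustrate in isolation. The sketch below is a generic envelope for w = x*y over box bounds, not the gas-network model itself; the bounds and objective are arbitrary, and the cvxpy package is assumed.

        # Generic McCormick envelope for a bilinear term w = x*y over box bounds.
        import cvxpy as cp

        xl, xu, yl, yu = 0.0, 2.0, 1.0, 3.0
        x, y, w = cp.Variable(), cp.Variable(), cp.Variable()

        mccormick = [
            w >= xl * y + yl * x - xl * yl,     # under-estimators
            w >= xu * y + yu * x - xu * yu,
            w <= xu * y + yl * x - xu * yl,     # over-estimators
            w <= xl * y + yu * x - xl * yu,
            x >= xl, x <= xu, y >= yl, y <= yu,
        ]
        prob = cp.Problem(cp.Minimize(w), mccormick)
        prob.solve()
        print(w.value, x.value, y.value)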

  7. Convex relaxations for gas expansion planning

    DOE PAGES

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.

  8. Performance Analysis of Particle Swarm Optimization Based Routing Algorithm in Optical Burst Switching Networks

    NASA Astrophysics Data System (ADS)

    Hou, Rui; Yu, Junle

    2011-12-01

    Optical burst switching (OBS) has been regarded as the next generation optical switching technology. In this paper, the routing problem based on the particle swarm optimization (PSO) algorithm in OBS is studied and analyzed. Simulation results indicate that the PSO-based routing algorithm outperforms the conventional shortest path first algorithm in terms of space cost and computation cost. The conclusions have theoretical significance for the improvement of OBS routing protocols.

  9. Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation

    NASA Technical Reports Server (NTRS)

    Mook, D. J.; Lew, Jiann-Shiun

    1991-01-01

    Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.

  10. Bio-inspired optimization algorithms for optical parameter extraction of dielectric materials: A comparative study

    NASA Astrophysics Data System (ADS)

    Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul

    2016-10-01

    Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful to the optics and photonics research community. To the best of our knowledge, these two bio-inspired algorithms are used here for the first time in this particular field. The algorithms are used for modeling graphene oxide and the performances of the two are compared. Two objective functions are used for different boundary values. The root mean square (RMS) deviation is determined and compared.

  11. Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2001-01-01

    A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.

  12. Optimum design of antennas using metamaterials with the efficient global optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Derov, John S.

    2010-04-01

    EGO is an evolutionary, data-adaptive algorithm which can be useful for optimization problems with expensive cost functions. Many antenna design problems qualify since complex computational electromagnetics (CEM) simulations can take significant resources. This makes evolutionary algorithms such as genetic algorithms (GA) or particle swarm optimization (PSO) problematic since iterations of large populations are required. In this paper we discuss multiparameter optimization of a wideband, single-element antenna over a metamaterial ground plane and the interfacing of EGO (optimization) with a full-wave CEM simulation (cost function evaluation).
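    At the heart of EGO is the expected-improvement acquisition computed from a surrogate's predictive mean and standard deviation; the short sketch below shows that formula for minimization, with placeholder numbers standing in for an actual CEM-simulation surrogate.

        # Standard expected-improvement (EI) acquisition for minimization: given the surrogate's
        # predictive mean mu and standard deviation sigma at a candidate, and the best cost seen
        # so far, EI trades off exploitation against exploration.
        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_best):
            sigma = np.maximum(sigma, 1e-12)            # guard against zero predictive variance
            z = (f_best - mu) / sigma
            return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        print(expected_improvement(mu=np.array([0.8, 1.2]),
                                   sigma=np.array([0.3, 0.5]),
                                   f_best=1.0))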

  13. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm.

    PubMed

    Yoshimaru, Eriko S; Randtke, Edward A; Pagel, Mark D; Cárdenas-Rodríguez, Julio

    2016-02-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners.

  14. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm

    PubMed Central

    Yoshimaru, Eriko S.; Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio

    2016-01-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners. PMID:26778301

  15. Design and optimization of pulsed Chemical Exchange Saturation Transfer MRI using a multiobjective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yoshimaru, Eriko S.; Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio

    2016-02-01

    Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners.

  16. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    PubMed Central

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730

  17. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    PubMed

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  18. Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm

    NASA Astrophysics Data System (ADS)

    Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda

    2016-06-01

    The artificial bee colony (ABC) algorithm, which mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization and its performance is compared with the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. Results indicate that the rate of convergence of the ABC algorithm is better than that of GA and ACO. The ABC algorithm is then used to predict optimal cutting parameters such as cutting speed, feed rate, depth of cut and tool nose radius to achieve a good surface finish. Results indicate that the ABC algorithm estimates a surface finish comparable to that of the real-coded genetic algorithm and the differential evolution algorithm.

  19. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

  20. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior.

  1. Optimization and Improvement of FOA Corner Cube Algorithm

    SciTech Connect

    McClay, W A; Awwal, A S; Burkhart, S C; Candy, J V

    2004-10-01

    Alignment of laser beams based on video images is a crucial task necessary to automate operation of the 192 beams at the National Ignition Facility (NIF). The final optics assembly (FOA) is the optical element that aligns the beam into the target chamber. This work presents an algorithm for determining the position of a corner cube alignment image in the final optics assembly. The improved algorithm was compared to the existing FOA algorithm on 900 noise-simulated images. While the existing FOA algorithm based on correlation with a synthetic template has a radial standard deviation of 1 pixel, the new algorithm based on classical matched filtering (CMF) and polynomial fit to the correlation peak improves the radial standard deviation performance to less than 0.3 pixels. In the new algorithm the templates are designed from real data stored during a year of actual operation.
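    The general idea of matched-filter correlation followed by a polynomial fit to the correlation peak can be illustrated with a parabolic sub-pixel fit, as below. This is a generic sketch with a synthetic image and template, not the NIF FOA implementation.

        # Matched-filter localization with a parabolic sub-pixel fit to the correlation peak.
        import numpy as np
        from scipy.signal import correlate2d

        def subpixel_peak(corr):
            r, c = np.unravel_index(np.argmax(corr), corr.shape)
            def parabola_offset(m1, m0, p1):            # vertex of a parabola through 3 samples
                denom = m1 - 2.0 * m0 + p1
                return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom
            dr = parabola_offset(corr[r - 1, c], corr[r, c], corr[r + 1, c])
            dc = parabola_offset(corr[r, c - 1], corr[r, c], corr[r, c + 1])
            return r + dr, c + dc

        template = np.zeros((9, 9)); template[3:6, 3:6] = 1.0    # synthetic alignment template
        image = np.zeros((64, 64)); image[30:33, 40:43] = 1.0    # synthetic beam image
        corr = correlate2d(image, template, mode='same')
        print(subpixel_peak(corr))                               # near the true target centre (31, 41)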

  2. A new multiobjective performance criterion used in PID tuning optimization algorithms

    PubMed Central

    Sahib, Mouayad A.; Ahmed, Bestoun S.

    2015-01-01

    In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion which is defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator system (AVR) application using particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can highly improve the PID tuning optimization in comparison with traditional objective functions. PMID:26843978

  3. Compact and efficient large cross-section SOI rib waveguide taper optimized by a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yujin; Wang, Xi; Dong, Ying; Wang, Xiaohao

    2016-01-01

    A genetic algorithm is applied to optimize a taper between a large cross-section silicon-on-insulator (SOI) rib waveguide and a single-mode fiber to achieve an ultra-compact and highly efficient coupling structure. The coupling efficiency is taken as the objective function of the genetic algorithm in the taper optimization process. To apply the optimization algorithm, the taper is segmented into several sections. Three encoding forms and a two-step optimization strategy are adopted in the optimization process, resulting in a 10μm long taper with a coupling efficiency of 93.30% in quasi-TE mode at 1550nm. The characteristics of the optimized taper including the field profile, spectrum and fabrication tolerances in both horizontal and vertical directions are investigated via a three dimensional eigenmode expansion (EME) method, indicating that the optimized taper is compatible with the prevailing integrated circuit (IC) processing technology.

  4. A new multiobjective performance criterion used in PID tuning optimization algorithms.

    PubMed

    Sahib, Mouayad A; Ahmed, Bestoun S

    2016-01-01

    In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion which is defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator system (AVR) application using particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can highly improve the PID tuning optimization in comparison with traditional objective functions.

  5. Dynamic topology multi force particle swarm optimization algorithm and its application

    NASA Astrophysics Data System (ADS)

    Chen, Dongning; Zhang, Ruixing; Yao, Chengyu; Zhao, Zheyu

    2016-01-01

    The particle swarm optimization (PSO) algorithm is an effective bio-inspired algorithm, but it suffers from premature convergence. Researchers have made improvements, especially in force rules and population topologies. However, current algorithms consider only a single kind of force rule and lack comprehensive improvement in both multiple force rules and population topologies. In this paper, a dynamic topology multi force particle swarm optimization (DTMFPSO) algorithm is proposed in order to achieve better search performance. First of all, the principle of the presented multi force particle swarm optimization (MFPSO) algorithm is that different force rules are used in different search stages, which balances the ability of global and local search. Secondly, a fitness-driven edge-changing (FE) topology based on the probability selection mechanism of the roulette method is designed to cut and add edges between particles, and the DTMFPSO algorithm is proposed by combining the FE topology with the MFPSO algorithm through concurrent evolution of both the algorithm and its structure in order to further improve the search accuracy. Thirdly, benchmark functions are employed to evaluate the performance of the DTMFPSO algorithm, and the test results show that the proposed algorithm is better than well-known PSO algorithms such as the µPSO, MPSO, and EPSO algorithms. Finally, the proposed algorithm is applied to optimize the process parameters for ultrasonic vibration cutting on a SiC wafer, and the surface quality of the SiC wafer is improved by 12.8% compared with the PSO algorithm in Ref. [25]. This research proposes a DTMFPSO algorithm with multiple force rules and dynamic population topologies evolved simultaneously, and it has better search performance.

  6. Iterative optimization algorithm with parameter estimation for the ambulance location problem.

    PubMed

    Kim, Sun Hoon; Lee, Young Hoon

    2016-12-01

    The emergency vehicle location problem to determine the number of ambulance vehicles and their locations satisfying a required reliability level is investigated in this study. This is a complex nonlinear issue involving critical decision making that has inherent stochastic characteristics. This paper studies an iterative optimization algorithm with parameter estimation to solve the emergency vehicle location problem. In the suggested algorithm, a linear model determines the locations of ambulances, while a hypercube simulation is used to estimate and provide parameters regarding ambulance locations. First, we suggest an iterative hypercube optimization algorithm in which interaction parameters and rules for the hypercube and optimization are identified. The interaction rules employed in this study enable our algorithm to always find the locations of ambulances satisfying the reliability requirement. We also propose an iterative simulation optimization algorithm in which the hypercube method is replaced by a simulation, to achieve computational efficiency. The computational experiments show that the iterative simulation optimization algorithm performs equivalently to the iterative hypercube optimization. The suggested algorithms are found to outperform existing algorithms suggested in the literature.

  7. Optimization of meander line antennas for RFID applications by using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bucuci, Stefania C.; Anchidin, Liliana; Dumitrascu, Ana; Danisor, Alin; Berescu, Serban; Tamas, Razvan D.

    2015-02-01

    In this paper, we propose an approach to optimizing meander line antennas using a genetic algorithm. Such antennas are used in RFID applications. As opposed to other approaches for meander antennas, we propose the use of only two optimization objectives, i.e. gain and size. As an example, we have optimized a single meander dipole antenna resonating at 869 MHz.

  8. Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.

    PubMed

    Rani, R Ranjani; Ramyachitra, D

    2016-12-01

    Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with the minimum number of gaps between them, which points to the functional, evolutionary and structural relationships among the sequences. Still, the computation of MSA is a challenging task when it comes to providing efficient, accurate and statistically significant alignments. In this work, the Bacterial Foraging Optimization Algorithm (BFO) was employed to align the biological sequences, resulting in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage and conserved blocks, and minimization of the gap penalty. The BAliBASE 3.0 benchmark database was used to examine the proposed algorithm against other methods. In this paper, two algorithms are proposed: a Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and the Bacterial Foraging Optimization Algorithm. It was found that the Hybrid Genetic Algorithm with Artificial Bee Colony performed better than the existing optimization algorithms, but conserved blocks were still not obtained using GA-ABC. BFO was then used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and the Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC). The final results show that the proposed MO-BFO algorithm yields better alignments than most widely used methods.

  9. Optimized feed-forward neural-network algorithm trained for cyclotron-cavity modeling

    NASA Astrophysics Data System (ADS)

    Mohamadian, Masoumeh; Afarideh, Hossein; Ghergherehchi, Mitra

    2017-01-01

    The cyclotron cavity presented in this paper is modeled by a feed-forward neural network trained by the authors' optimized back-propagation (BP) algorithm. The training samples were obtained from simulation results for a number of defined situations and parameter sets, generated parametrically using MWS CST software; the conventional BP algorithm with different hidden-neuron numbers, structures, and other optimal parameters such as the learning rate was also used for comparison. The present study shows that an optimized feed-forward network can be used to estimate the cyclotron-model parameters with an acceptable error function. A neural network trained by the optimized algorithm therefore provides a proper approximation and an acceptable modeling ability for the proposed structure. The cyclotron-cavity parameter-modeling results demonstrate that a feed-forward network trained by the optimized algorithm could be a suitable method for estimating the design parameters in this case.

  10. Partially constrained ant colony optimization algorithm for the solution of constrained optimization problems: Application to storm water network design

    NASA Astrophysics Data System (ADS)

    Afshar, M. H.

    2007-04-01

    This paper exploits the unique feature of the Ant Colony Optimization Algorithm (ACOA), namely its incremental solution building mechanism, to develop partially constrained ACO algorithms for the solution of optimization problems with explicit constraints. The method is based on providing a tabu list for each ant at each decision point of the problem so that some constraints of the problem are satisfied. The application of the method to the problem of storm water network design is formulated and presented. The network nodes are considered the decision points and the nodal elevations of the network are used as the decision variables of the optimization problem. Two partially constrained ACO algorithms are formulated and applied to a benchmark example of storm water network design, and the results are compared with those of the original unconstrained algorithm and existing methods. In the first algorithm the positive slope constraints are satisfied explicitly and the rest are satisfied by using the penalty method, while in the second the constraints on the maximum ratio of flow depth to diameter are also satisfied explicitly via the tabu list. The method is shown to be very effective and efficient in locating optimal solutions and in the convergence characteristics of the resulting ACO algorithms. The proposed algorithms are also shown to be relatively insensitive to the initial colony used compared to the original algorithm. Furthermore, the method proves capable of finding an optimal or near-optimal solution independent of the discretisation level and the size of the colony used.

  11. MULTI-OBJECTIVE OPTIMAL DESIGN OF GROUNDWATER REMEDIATION SYSTEMS: APPLICATION OF THE NICHED PARETO GENETIC ALGORITHM (NPGA). (R826614)

    EPA Science Inventory

    A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...

  12. Revolute manipulator workspace optimization using a modified bacteria foraging algorithm: A comparative study

    NASA Astrophysics Data System (ADS)

    Panda, S.; Mishra, D.; Biswal, B. B.; Tripathy, M.

    2014-02-01

    Robotic manipulators with three-revolute (3R) motions to attain desired positional configurations are very common in industrial robots. The capability of these robots depends largely on the workspace of the manipulator in addition to other parameters. In this study, an evolutionary optimization algorithm based on the foraging behaviour of the Escherichia coli bacteria present in the human intestine is utilized to optimize the workspace volume of a 3R manipulator. The new optimization method is modified from the original algorithm for faster convergence. This method is also useful for optimization problems in a highly constrained environment, such as robot workspace optimization. The new approach for workspace optimization of 3R manipulators is tested using three cases. The test results are compared with standard results available using other optimization algorithms, i.e. the differential evolution algorithm, the genetic algorithm and the particle swarm optimization algorithm. The present method is found to be superior to the other methods in terms of computational efficiency.

  13. Spectral calibration for convex grating imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Chen, Xinhua; Ji, Yiqun; Chen, Yuheng; Shen, Weimin

    2013-12-01

    Spectral calibration of an imaging spectrometer plays an important role in acquiring accurate target spectra. There are essentially two types of spectral calibration: wavelength scanning and characteristic line sampling. In the wavelength scanning methods, only the calibrated pixel is used and its spectral response function (SRF) is constructed from that pixel itself, with the different wavelengths generated by a monochromator. In the characteristic line sampling methods, the SRF is constructed from the pixels adjacent to the calibrated one; the pixels are illuminated by a narrow spectral line whose center wavelength is exactly known. The calibration result from the scanning method is precise, but it is time-consuming and produces a large amount of data to process, and it cannot be used in field or space environments. The characteristic line sampling method is simple, but its calibration precision is not easy to confirm. A standard spectroscopic lamp is used to calibrate our manufactured convex grating imaging spectrometer, which has an Offner concentric structure and can supply a high-resolution and uniform spectral signal. A Gaussian fitting algorithm is used to determine the center position and the Full Width at Half Maximum (FWHM) of the characteristic spectral line. The central wavelengths and FWHMs of the spectral pixels are calibrated by cubic polynomial fitting. By setting a fitting error threshold and abandoning the maximum-deviation point, an optimized calculation is achieved. Integrated calibration equipment was developed to enhance calibration efficiency. The spectral calibration results from the spectral lamp method are verified by the monochromator wavelength scanning calibration technique. The results show that the spectral calibration uncertainties of the FWHM and the center wavelength are both less than 0.08 nm, or 5.2% of the spectral FWHM.
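    The Gaussian-fitting step described above amounts to fitting a four-parameter Gaussian to the sampled line and converting the width to FWHM; a sketch with synthetic data (a real run would use the pixel responses recorded under the standard lamp) is given below, assuming SciPy.

        # Gaussian fit of a sampled spectral line to estimate centre wavelength and FWHM.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, amp, mu, sigma, offset):
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

        wavelengths = np.linspace(545.0, 547.0, 41)                     # nm, synthetic sampling grid
        signal = gaussian(wavelengths, 1.0, 546.07, 0.12, 0.02)
        signal += np.random.default_rng(0).normal(0, 0.01, wavelengths.size)

        p0 = [signal.max(), wavelengths[np.argmax(signal)], 0.1, 0.0]   # initial guess
        (amp, mu, sigma, offset), _ = curve_fit(gaussian, wavelengths, signal, p0=p0)
        fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
        print(f"centre = {mu:.3f} nm, FWHM = {fwhm:.3f} nm")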

  14. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem, related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements on the segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments, confirming the practical value of the algorithm, are documented.

  15. Nonsmooth Optimization Algorithms, System Theory, and Software Tools

    DTIC Science & Technology

    1993-04-13

    ...D. Q. Mayne, "A Method of Centers Based on Barrier Functions for Solving Optimal Control Problems with Continuum State and Control Constraints", SIAM J. Control and Opt., Vol. 31, No. 1, pp...

  16. Performance Comparison of Cuckoo Search and Differential Evolution Algorithm for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Iwan Solihin, Mahmud; Fauzi Zanil, Mohd

    2016-11-01

    Cuckoo Search (CS) and Differential Evolution (DE) are considerably robust meta-heuristic algorithms for solving constrained optimization problems. In this study, the performance of CS and DE is compared on constrained optimization problems drawn from selected benchmark functions. Selection of the benchmark functions is based on active or inactive constraints and the dimensionality of the problem (i.e. the number of solution variables). In addition, a specific constraint handling and stopping criterion technique is adopted in the optimization algorithms. The results show that the CS approach outperforms DE in terms of repeatability and the quality of the optimum solutions.

  17. One-class classification based on the convex hull for bearing fault detection

    NASA Astrophysics Data System (ADS)

    Zeng, Ming; Yang, Yu; Luo, Songrong; Cheng, Junsheng

    2016-12-01

    Originating from a nearest point problem, a novel method called one-class classification based on the convex hull (OCCCH) is proposed for one-class classification problems. The basic goal of OCCCH is to find the nearest point to the origin from the reduced convex hull of training samples. A generalized Gilbert algorithm is proposed to solve the nearest point problem. It is a geometric algorithm with high computational efficiency. OCCCH has two different forms, i.e., OCCCH-1 and OCCCH-2. The relationships among OCCCH-1, OCCCH-2 and one-class support vector machine (OCSVM) are investigated theoretically. The classification accuracy and the computational efficiency of the three methods are compared through the experiments conducted on several benchmark datasets. Experimental results show that OCCCH (including OCCCH-1 and OCCCH-2) using the generalized Gilbert algorithm performs more efficiently than OCSVM using the well-known sequential minimal optimization (SMO) algorithm; at the same time, OCCCH-2 can always obtain comparable classification accuracies to OCSVM. Finally, these methods are applied to the monitoring model constructions for bearing fault detection. Compared with OCCCH-2 and OCSVM, OCCCH-1 can significantly decrease the false alarm ratio while detecting the bearing fault successfully.
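    A basic Gilbert-style iteration for the nearest point to the origin in the convex hull of the training samples is sketched below; the reduced-hull modification and the generalized variant used by OCCCH are not reproduced here, and the sample points are illustrative.

        # Basic Gilbert-style iteration for the minimum-norm point in the convex hull of samples.
        import numpy as np

        def gilbert_min_norm_point(samples, iters=200):
            x = samples[0].astype(float)
            for _ in range(iters):
                s = samples[np.argmin(samples @ x)]          # support point minimizing <x, v>
                d = s - x
                denom = d @ d
                if denom == 0:
                    break
                t = np.clip(-(x @ d) / denom, 0.0, 1.0)      # exact line search on the segment [x, s]
                x = x + t * d
            return x

        pts = np.array([[2.0, 1.0], [1.0, 3.0], [3.0, 2.0]])
        x_star = gilbert_min_norm_point(pts)
        print(x_star, np.linalg.norm(x_star))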

  18. Application of heuristic optimization techniques and algorithm tuning to multilayered sorptive barrier design.

    PubMed

    Matott, L Shawn; Bartelt-Hunt, Shannon L; Rabideau, Alan J; Fowler, K R

    2006-10-15

    Although heuristic optimization techniques are increasingly applied in environmental engineering applications, algorithm selection and configuration are often approached in an ad hoc fashion. In this study, the design of a multilayer sorptive barrier system served as a benchmark problem for evaluating several algorithm-tuning procedures, as applied to three global optimization techniques (genetic algorithms, simulated annealing, and particle swarm optimization). Each design problem was configured as a combinatorial optimization in which sorptive materials were selected for inclusion in a landfill liner to minimize the transport of three common organic contaminants. Relative to multilayer sorptive barrier design, study results indicate (i) the binary-coded genetic algorithm is highly efficient and requires minimal tuning, (ii) constraint violations must be carefully integrated to avoid poor algorithm convergence, and (iii) search algorithm performance is strongly influenced by the physical-chemical properties of the organic contaminants of concern. More generally, the results suggest that formal algorithm tuning, which has not been widely applied to environmental engineering optimization, can significantly improve algorithm performance and provide insight into the physical processes that control environmental systems.

  19. Active Batch Selection via Convex Relaxations with Guaranteed Solution Bounds.

    PubMed

    Chakraborty, Shayok; Balasubramanian, Vineeth; Sun, Qian; Panchanathan, Sethuraman; Ye, Jieping

    2015-10-01

    Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar instances for manual annotation. More recently, there have been attempts towards a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. In this paper, we propose two novel batch mode active learning (BMAL) algorithms: BatchRank and BatchRand. We first formulate the batch selection task as an NP-hard optimization problem; we then propose two convex relaxations, one based on linear programming and the other based on semi-definite programming to solve the batch selection problem. Finally, a deterministic bound is derived on the solution quality for the first relaxation and a probabilistic bound for the second. To the best of our knowledge, this is the first research effort to derive mathematical guarantees on the solution quality of the BMAL problem. Our extensive empirical studies on 15 binary, multi-class and multi-label challenging datasets corroborate that the proposed algorithms perform at par with the state-of-the-art techniques, deliver high quality solutions and are robust to real-world issues like label noise and class imbalance.

  20. Null testing convex optical surfaces.

    PubMed

    Szulc, A

    1997-09-01

    A new test for convex optical surfaces is presented. It makes use of an auxiliary ellipsoidal mirror that is of approximately the same diameter as the convex mirror tested. The test is a null test of excellent precision. The auxiliary ellipsoid used is also tested in a null fashion, permitting good precision to be obtained.

  1. Hybrid particle swarm global optimization algorithm for phase diversity phase retrieval.

    PubMed

    Zhang, P G; Yang, C L; Xu, Z H; Cao, Z L; Mu, Q Q; Xuan, L

    2016-10-31

    The core problem of phase diversity phase retrieval (PDPR) is to find suitable optimization algorithms for wavefront sensing of different scales, especially for large-scale wavefront sensing. When dealing with large-scale wavefront sensing, existing gradient-based local optimization algorithms used in PDPR are easily trapped in local minima near initial positions, and available global optimization algorithms possess low convergence efficiency. We construct a practicable optimization algorithm used in PDPR for large-scale wavefront sensing. This algorithm, named EPSO-BFGS, is a two-step hybrid global optimization algorithm based on the combination of evolutionary particle swarm optimization (EPSO) and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Firstly, EPSO provides global search and obtains a rough global minimum position in limited search steps. Then, BFGS initialized by the rough global minimum position approaches the global minimum with high accuracy and fast convergence speed. Numerical examples testify to the feasibility and reliability of EPSO-BFGS for wavefront sensing of different scales. Two numerical cases also validate the ability of EPSO-BFGS for large-scale wavefront sensing. The effectiveness of EPSO-BFGS is further affirmed by performing a verification experiment.
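
    The two-stage idea, a cheap global search whose best point seeds a fast local quasi-Newton refinement, can be sketched as follows. The global stage here is plain random sampling rather than the evolutionary PSO of the paper, and the Rosenbrock test function is an assumed example.

```python
import numpy as np
from scipy.optimize import minimize

def global_then_bfgs(f, bounds, n_samples=2000, seed=0):
    """Stage 1: cheap global exploration (plain random sampling, standing in for
    an evolutionary/PSO stage).  Stage 2: BFGS refinement from the rough global
    minimum position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    candidates = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    x0 = min(candidates, key=f)                    # best point of the global stage
    return minimize(f, x0, method="BFGS")          # fast, accurate local refinement

# Illustrative use on the Rosenbrock function.
rosen = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2
print(global_then_bfgs(rosen, [(-5, 5), (-5, 5)]).x)
```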

  2. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    PubMed

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    The cuckoo search algorithm is a promising population-based metaheuristic that has been applied to many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines cuckoo search with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations; the best solution obtained is then passed to the Nelder-Mead algorithm as an intensification step, accelerating the search and overcoming the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of cuckoo search with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, and compare it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time.

  3. A VLSI optimal constructive algorithm for classification problems

    SciTech Connect

    Beiu, V.; Draghici, S.; Sethi, I.K.

    1997-10-01

    If neural networks are to be used on a large scale, they have to be implemented in hardware. However, the cost of the hardware implementation is critically sensitive to factors like the precision used for the weights, the total number of bits of information and the maximum fan-in used in the network. This paper presents a version of the Constraint Based Decomposition training algorithm which is able to produce networks using limited precision integer weights and units with limited fan-in. The algorithm is tested on the 2-spiral problem and the results are compared with other existing algorithms.

  4. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning.

    PubMed

    Li, Yongjie; Yao, Dezhong; Yao, Jonathan; Chen, Wufan

    2005-08-07

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not fully satisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results of a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, the performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
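
    For reference, the canonical particle swarm update (velocity and position equations) that BASPSO builds on can be sketched as below; this is a generic PSO, not the alternating PSO/conjugate-gradient scheme of the paper, and all parameter values are illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm optimizer: each particle keeps a personal best and
    is attracted toward it and toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                   # position update
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, float(pbest_val.min())
```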

  5. Efficient Algorithm for Optimizing Adaptive Quantum Metrology Processes

    NASA Astrophysics Data System (ADS)

    Hentschel, Alexander; Sanders, Barry C.

    2011-12-01

    Quantum-enhanced metrology infers an unknown quantity with accuracy beyond the standard quantum limit (SQL). Feedback-based metrological techniques are promising for beating the SQL but devising the feedback procedures is difficult and inefficient. Here we introduce an efficient self-learning swarm-intelligence algorithm for devising feedback-based quantum metrological procedures. Our algorithm can be trained with simulated or real-world trials and accommodates experimental imperfections, losses, and decoherence.

  6. Evolving a Nelder-Mead Algorithm for Optimization with Genetic Programming.

    PubMed

    Fajfar, Iztok; Puhan, Janez; Bűrmen, Árpád

    2016-01-25

    We used genetic programming to evolve a direct search optimization algorithm, similar to that of the standard downhill simplex optimization method proposed by Nelder and Mead (1965). In the training process, we used several ten-dimensional quadratic functions with randomly displaced parameters and different randomly generated starting simplices. The genetically obtained optimization algorithm showed overall better performance than the original Nelder-Mead method on a standard set of test functions. We observed that many parts of the genetically produced algorithm were seldom or never executed, which allowed us to greatly simplify the algorithm by removing the redundant parts. The resulting algorithm turns out to be considerably simpler than the original Nelder-Mead method while still performing better than the original method.

  7. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared.
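
    The guiding idea, crossing every individual with the current global best and mutating offspring with a dynamic probability, might be sketched roughly as follows; the blend crossover, greedy replacement, and parameter names are assumptions for illustration, not the exact GEA operators.

```python
import numpy as np

def gea_generation(population, fitness, f, p_mut=0.1, sigma=0.1, rng=None):
    """One GEA-style generation: cross every individual with the global best,
    mutate offspring with probability p_mut, and keep improved offspring greedily."""
    rng = rng or np.random.default_rng()
    best = population[np.argmin(fitness)]
    alpha = rng.random((len(population), 1))
    offspring = alpha * population + (1.0 - alpha) * best       # cross with the guide
    mutate = rng.random(len(population)) < p_mut
    offspring[mutate] += rng.normal(0.0, sigma, offspring[mutate].shape)
    off_fit = np.array([f(x) for x in offspring])
    keep = off_fit < fitness                                     # greedy replacement
    population[keep], fitness[keep] = offspring[keep], off_fit[keep]
    return population, fitness
```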

  8. Structures vibration control via Tuned Mass Dampers using a co-evolution Coral Reefs Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Camacho-Gómez, C.; Magdaleno, A.; Pereira, E.; Lorenzana, A.

    2017-04-01

    In this paper we tackle a problem of optimal design and location of Tuned Mass Dampers (TMDs) for structures subjected to earthquake ground motions, using a novel meta-heuristic algorithm. Specifically, the Coral Reefs Optimization (CRO) with Substrate Layer (CRO-SL) is proposed as a competitive co-evolution algorithm with different exploration procedures within a single population of solutions. The proposed approach is able to solve the TMD design and location problem, by exploiting the combination of different types of searching mechanisms. This promotes a powerful evolutionary-like algorithm for optimization problems, which is shown to be very effective in this particular problem of TMDs tuning. The proposed algorithm's performance has been evaluated and compared with several reference algorithms in two building models with two and four floors, respectively.

  9. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421

  10. A global optimization algorithm for simulation-based problems via the extended DIRECT scheme

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Xu, Shengli; Wang, Xiaofang; Wu, Junnan; Song, Yang

    2015-11-01

    This article presents a global optimization algorithm via the extension of the DIviding RECTangles (DIRECT) scheme to handle problems with computationally expensive simulations efficiently. The new optimization strategy improves the regular partition scheme of DIRECT to a flexible irregular partition scheme in order to utilize information from irregular points. The metamodelling technique is introduced to work with the flexible partition scheme to speed up the convergence, which is meaningful for simulation-based problems. Comparative results on eight representative benchmark problems and an engineering application with some existing global optimization algorithms indicate that the proposed global optimization strategy is promising for simulation-based problems in terms of efficiency and accuracy.
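
    For comparison with the extended scheme described here, recent SciPy releases expose the classical DIRECT method as scipy.optimize.direct; the snippet below shows its typical use on an assumed test function and is only a baseline, not the article's irregular-partition algorithm.

```python
# Reference use of the classical DIviding RECTangles scheme via SciPy
# (scipy.optimize.direct, available in recent SciPy releases); the
# Styblinski-Tang function is an assumed test problem, not one of the
# article's benchmarks.
from scipy.optimize import direct

def styblinski_tang(x):
    return 0.5 * sum(xi**4 - 16.0 * xi**2 + 5.0 * xi for xi in x)

result = direct(styblinski_tang, bounds=[(-5.0, 5.0)] * 4, maxfun=5000)
print(result.x, result.fun)
```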

  11. An evolutionary algorithm for global optimization based on self-organizing maps

    NASA Astrophysics Data System (ADS)

    Barmada, Sami; Raugi, Marco; Tucci, Mauro

    2016-10-01

    In this article, a new population-based algorithm for real-parameter global optimization is presented, which is denoted as self-organizing centroids optimization (SOC-opt). The proposed method uses a stochastic approach which is based on the sequential learning paradigm for self-organizing maps (SOMs). A modified version of the SOM is proposed where each cell contains an individual, which performs a search for a locally optimal solution and it is affected by the search for a global optimum. The movement of the individuals in the search space is based on a discrete-time dynamic filter, and various choices of this filter are possible to obtain different dynamics of the centroids. In this way, a general framework is defined where well-known algorithms represent a particular case. The proposed algorithm is validated through a set of problems, which include non-separable problems, and compared with state-of-the-art algorithms for global optimization.

  12. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. To address the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429

  13. A near optimal guidance algorithm for aero-assisted orbit transfer

    NASA Astrophysics Data System (ADS)

    Calise, Anthony J.; Bae, Gyoung H.

    The paper presents a near optimal guidance algorithm for aero-assisted orbit plane change, based on minimizing the energy loss during the atmospheric portion of the maneuver. The guidance algorithm makes use of recent results obtained from energy state approximations and singular perturbation analysis of optimal heading change for a hypersonic gliding vehicle. This earlier work ignored the terminal constraint on altitude needed to ensure that the vehicle exits the atmosphere. Thus, the resulting guidance algorithm was only appropriate for maneuvering reentry vehicle guidance. In the context of singular perturbation theory, a constraint on final altitude gives rise to a difficult terminal boundary layer problem, which cannot be solved in closed form. This paper demonstrates the near optimality of a predictive/corrective guidance algorithm for the terminal maneuver. Comparisons are made to numerically optimized trajectories for a range of orbit plane angles.

  14. A near optimal guidance algorithm for aero-assisted orbit transfer

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1988-01-01

    The paper presents a near optimal guidance algorithm for aero-assisted orbit plane change, based on minimizing the energy loss during the atmospheric portion of the maneuver. The guidance algorithm makes use of recent results obtained from energy state approximations and singular perturbation analysis of optimal heading change for a hypersonic gliding vehicle. This earlier work ignored the terminal constraint on altitude needed to ensure that the vehicle exits the atmosphere. Thus, the resulting guidance algorithm was only appropriate for maneuvering reentry vehicle guidance. In the context of singular perturbation theory, a constraint on final altitude gives rise to a difficult terminal boundary layer problem, which cannot be solved in closed form. This paper demonstrates the near optimality of a predictive/corrective guidance algorithm for the terminal maneuver. Comparisons are made to numerically optimized trajectories for a range of orbit plane angles.

  15. Double global optimum genetic algorithm-particle swarm optimization-based welding robot path planning

    NASA Astrophysics Data System (ADS)

    Wang, Xuewu; Shi, Yingpan; Ding, Dongyan; Gu, Xingsheng

    2016-02-01

    Spot-welding robots have a wide range of applications in manufacturing industries. There are usually many weld joints in a welding task, and a reasonable welding path to traverse these weld joints has a significant impact on welding efficiency. Traditional manual path planning techniques can handle a few weld joints effectively, but when the number of weld joints is large, it is difficult to obtain the optimal path. The traditional manual path planning method is also time consuming and inefficient, and cannot guarantee optimality. Double global optimum genetic algorithm-particle swarm optimization (GA-PSO) based on the GA and PSO algorithms is proposed to solve the welding robot path planning problem, where the shortest collision-free paths are used as the criteria to optimize the welding path. Besides algorithm effectiveness analysis and verification, the simulation results indicate that the algorithm has strong searching ability and practicality, and is suitable for welding robot path planning.

  16. Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method

    NASA Astrophysics Data System (ADS)

    Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian

    2015-06-01

    We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave-boson approach which, by combining several classical optimization techniques, is applicable to arbitrary boundary constraints on high-dimensional objective functions. After constructing the calculation architecture of the rotationally invariant multi-orbital slave-boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of the present algorithm. Furthermore, we use it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital-selective Mott phase, and magnetism. These results demonstrate the rapid convergence and robust stability of our algorithm in searching for optimized solutions of strongly correlated electron systems.

  17. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  18. Comparison of several stochastic parallel optimization algorithms for adaptive optics system without a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Li, Xinyang

    2011-04-01

    Optimizing the system performance metric directly is an important method for correcting wavefront aberrations in an adaptive optics (AO) system where wavefront sensing methods are unavailable or ineffective. An appropriate deformable mirror control algorithm is the key to successful wavefront correction. Based on several stochastic parallel optimization control algorithms, an adaptive optics system with a 61-element Deformable Mirror (DM) is simulated. Genetic Algorithm (GA), Stochastic Parallel Gradient Descent (SPGD), Simulated Annealing (SA) and Algorithm Of Pattern Extraction (Alopex) are compared in convergence speed and correction capability. The results show that all these algorithms have the ability to correct for atmospheric turbulence. Compared with least-squares fitting, they nearly obtain the best correction achievable for the 61-element DM. Of these algorithms, SA is the fastest and GA is the slowest. The number of perturbations required by GA is almost 20 times larger than that of SA, 15 times larger than that of SPGD, and 9 times larger than that of Alopex.
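
    Of the compared methods, SPGD is particularly compact; a generic sketch of its two-sided perturbation update is shown below, with the gain, perturbation amplitude, and maximization sign convention as illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def spgd(metric, u0, gain=0.5, delta=0.05, iters=500, seed=0):
    """Stochastic parallel gradient descent: perturb every control channel at once
    and step along the measured metric change (metric assumed to be maximized)."""
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=u.shape)   # bipolar random perturbation
        dj = metric(u + du) - metric(u - du)                  # two-sided metric difference
        u += gain * dj * du                                   # parallel gradient estimate
    return u
```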

  19. Practical aspects of variable reduction formulations and reduced basis algorithms in multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1995-01-01

    This paper discusses certain connections between nonlinear programming algorithms and the formulation of optimization problems for systems governed by state constraints. The major points of this paper are the detailed calculation of the sensitivities associated with different formulations of optimization problems and the identification of some useful relationships between different formulations. These relationships have practical consequences; if one uses a reduced basis nonlinear programming algorithm, then the implementations for the different formulations need only differ in a single step.

  20. Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Benford, Andrew; Tinker, Michael L.

    2004-01-01

    The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.

  1. Optimized multilevel codebook searching algorithm for vector quantization in image coding

    NASA Astrophysics Data System (ADS)

    Cao, Hugh Q.; Li, Weiping

    1996-02-01

    An optimized multi-level codebook searching algorithm (MCS) for vector quantization is presented in this paper. Although it belongs to the category of fast nearest neighbor searching (FNNS) algorithms for vector quantization, the MCS algorithm is not a variation of any existing FNNS algorithm (such as the k-d tree, partial-distance, or triangle-inequality searching algorithms). A multi-level search theory is introduced, and the implementation problem it raises is solved by a specially defined irregular tree structure that can be built from a training set. This irregular tree structure differs from the tree structures used in TSVQ, pruned-tree VQ, and quad-tree VQ. Strictly speaking, it cannot be called a tree structure, since it allows a node to have more than one set of parents; it is actually a directed graph. This is the essential difference between the MCS algorithm and other TSVQ algorithms, and it accounts for the better performance of MCS. An efficient design procedure is given to find the optimized irregular tree for a practical source. Simulation results of applying the MCS algorithm to image VQ show that the algorithm can reduce searching complexity to less than 3% of exhaustive-search vector quantization (ESVQ) (4096 codevectors, dimension 16) while introducing negligible error (0.064 dB degradation from ESVQ). Simulation results also show that the searching complexity increases approximately linearly with bitrate.

  2. A novel neural network for nonlinear convex programming.

    PubMed

    Gao, Xing-Bao

    2004-05-01

    In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem; a dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Compared with existing neural networks for solving the nonlinear convex programming problem, the proposed neural network requires no Lipschitz condition, has no adjustable parameters, and has a simple structure. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
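
    A generic projection-type dynamic of the kind described, dx/dt = P(x - alpha * grad f(x)) - x integrated by an explicit Euler step, can be sketched for a box-constrained convex quadratic program as follows; the step sizes and the test problem are assumptions, and this is not the specific network of the paper.

```python
import numpy as np

def projection_network_qp(Q, c, lo, hi, alpha=0.1, dt=0.01, steps=20000):
    """Euler simulation of the projection dynamic dx/dt = P(x - alpha*grad) - x
    for the box-constrained convex QP  min 0.5*x'Qx + c'x,  lo <= x <= hi."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        grad = Q @ x + c
        proj = np.clip(x - alpha * grad, lo, hi)   # projection onto the feasible box
        x = x + dt * (proj - x)                    # explicit Euler step of the dynamics
    return x

# Illustrative 2-D problem (assumed data); the equilibrium is the QP's solution.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(projection_network_qp(Q, c, lo=0.0, hi=1.0))
```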

  3. A comparative study of three simulation optimization algorithms for solving high dimensional multi-objective optimization problems in water resources

    NASA Astrophysics Data System (ADS)

    Schütze, Niels; Wöhling, Thomas; de Play, Michael

    2010-05-01

    Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose, multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO when solving three real case Multi-objective Optimization Problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems considering their formulations ranging from 40 up to 120 decision variables and 2 to 4 objectives. The computational effort required by each algorithm in order to reach the true Pareto front is also analyzed.

  4. Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm

    PubMed Central

    Wang, Jinzhao

    2016-01-01

    We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchal structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order. PMID:27706234
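
    A standard way to combine importance ordering with structural prerequisites is a Kahn-style topological sort driven by a priority queue; the sketch below illustrates that general pattern (not the paper's exact algorithm), with a toy character set and frequencies as assumed data.

```python
import heapq

def priority_topological_order(importance, prerequisites):
    """Kahn-style topological sort that always emits the most important currently
    learnable node first.  `importance` maps node -> weight (higher = earlier);
    `prerequisites` maps node -> iterable of nodes it depends on."""
    indegree = {n: 0 for n in importance}
    dependents = {n: [] for n in importance}
    for node, deps in prerequisites.items():
        for dep in deps:
            indegree[node] += 1
            dependents[dep].append(node)
    ready = [(-importance[n], n) for n, deg in indegree.items() if deg == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, node = heapq.heappop(ready)
        order.append(node)
        for nxt in dependents[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                heapq.heappush(ready, (-importance[nxt], nxt))
    return order

# Toy example: characters with usage frequencies and structural prerequisites.
freq = {"木": 9, "林": 5, "森": 3, "口": 8, "回": 4}
deps = {"林": ["木"], "森": ["林"], "回": ["口"]}
print(priority_topological_order(freq, deps))   # 木, 口, 林, 回, 森
```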

  5. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…

  6. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  7. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    DTIC Science & Technology

    2009-03-10

    Orlando, FL, November 15-19, 2009. 2. Optimizing Concentrations of Alloying Elements and Tempering of Corrosion Resistant Aluminum Alloys (with...Optimization of Corrosion Resistant Aluminum Alloys ", M.Sc. degree in Mechanical Engineering, Florida International University, Miami, FL, expected...International Journal of Thermophysical Properties Research. 5. Evolutionary Wavelet Neural Network for Multidimensional Function Estimation in

  8. Annual Energy Production (AEP) optimization for tidal power plants based on Evolutionary Algorithms - Swansea Bay Tidal Power Plant AEP optimization

    NASA Astrophysics Data System (ADS)

    Kontoleontos, E.; Weissenberger, S.

    2016-11-01

    In order to be able to predict the maximum Annual Energy Production (AEP) for tidal power plants, an advanced AEP optimization procedure is required for solving the optimization problem which consists of a high number of design variables and constraints. This efficient AEP optimization procedure requires an advanced optimization tool (EASY software) and an AEP calculation tool that can simulate all different operating modes of the units (bidirectional turbine, pump and sluicing mode). The EASY optimization software is a metamodel-assisted Evolutionary Algorithm (MAEA) that can be used in both single- and multi-objective optimization problems. The AEP calculation tool, developed by ANDRITZ HYDRO, in combination with EASY is used to maximize the tidal annual energy produced by optimizing the plant operation throughout the year. For the Swansea Bay Tidal Power Plant project, the AEP optimization along with the hydraulic design optimization and the model testing was used to evaluate all different hydraulic and operating concepts and define the optimal concept that led to a significant increase of the AEP value. This new concept of a triple regulated “bi-directional bulb pump turbine” for Swansea Bay Tidal Power Plant (16 units, nominal power above 320 MW) along with its AEP optimization scheme will be presented in detail in the paper. Furthermore, the use of an online AEP optimization during operation of the power plant, that will provide the optimal operating points to the control system, will be also presented.

  9. An Adaptive Cauchy Differential Evolution Algorithm for Global Numerical Optimization

    PubMed Central

    Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

    2013-01-01

    Appropriately adapting control parameters, such as the scaling factor (F), crossover rate (CR), and population size (NP), is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, it is still a challenging task to adapt them properly for a given problem. In this paper, we present an adaptive parameter control DE algorithm. In the proposed algorithm, each individual has its own control parameters. The control parameters of each individual are adapted, using the Cauchy distribution, around the average parameter values of successfully evolved individuals. In this way, each individual's control parameters are assigned values either near the average or far from it, which may yield better parameter values for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems. PMID:23935445

  10. An adaptive Cauchy differential evolution algorithm for global numerical optimization.

    PubMed

    Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

    2013-01-01

    Appropriately adapting control parameters, such as the scaling factor (F), crossover rate (CR), and population size (NP), is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, it is still a challenging task to adapt them properly for a given problem. In this paper, we present an adaptive parameter control DE algorithm. In the proposed algorithm, each individual has its own control parameters. The control parameters of each individual are adapted, using the Cauchy distribution, around the average parameter values of successfully evolved individuals. In this way, each individual's control parameters are assigned values either near the average or far from it, which may yield better parameter values for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems.
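
    The adaptation idea, resampling each individual's F and CR from Cauchy distributions centred on the parameters of recently successful individuals, can be sketched on top of DE/rand/1/bin as follows; the Cauchy scale of 0.1, the clipping ranges, and the fallback means are illustrative assumptions rather than the authors' exact scheme.

```python
import numpy as np

def cauchy_adaptive_de(f, bounds, pop_size=40, generations=300, seed=0):
    """DE/rand/1/bin in which each individual's F and CR are resampled every
    generation from Cauchy distributions centred on the mean parameters of the
    individuals that improved in that generation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    for _ in range(generations):
        ok_F, ok_CR = [], []
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F[i] * (b - c), lo, hi)
            cross = rng.random(dim) < CR[i]
            cross[rng.integers(dim)] = True          # guarantee at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, f_trial
                ok_F.append(F[i]); ok_CR.append(CR[i])
        loc_F = np.mean(ok_F) if ok_F else 0.5       # fall back to defaults if nothing improved
        loc_CR = np.mean(ok_CR) if ok_CR else 0.9
        F = np.clip(loc_F + 0.1 * rng.standard_cauchy(pop_size), 0.1, 1.0)
        CR = np.clip(loc_CR + 0.1 * rng.standard_cauchy(pop_size), 0.0, 1.0)
    return pop[np.argmin(fit)], float(fit.min())
```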

  11. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    SciTech Connect

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-12-15

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by a standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.
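
    For orientation, a standard simulated annealing loop on a weighted-sum scalarization is sketched below; it shows only the generic Metropolis acceptance and geometric cooling, not the modified multiobjective algorithm of the paper, and the parameter defaults are assumptions.

```python
import math
import random

def anneal_weighted_sum(objectives, weights, neighbour, x0,
                        t0=1.0, cooling=0.999, steps=20000, seed=0):
    """Standard simulated annealing on a weighted-sum scalarization of several
    objectives: Metropolis acceptance with a geometrically cooled temperature."""
    random.seed(seed)
    scalar = lambda x: sum(w * obj(x) for w, obj in zip(weights, objectives))
    x = x0
    fx = scalar(x)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        fy = scalar(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):   # Metropolis rule
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling                                                  # geometric cooling
    return best, best_f
```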

  12. Convex Analysis of Mixtures for Separating Non-negative Well-grounded Sources

    PubMed Central

    Zhu, Yitan; Wang, Niya; Miller, David J.; Wang, Yue

    2016-01-01

    Blind Source Separation (BSS) is a powerful tool for analyzing composite data patterns in many areas, such as computational biology. We introduce a novel BSS method, Convex Analysis of Mixtures (CAM), for separating non-negative well-grounded sources, which learns the mixing matrix by identifying the lateral edges of the convex data scatter plot. We propose and prove a sufficient and necessary condition for identifying the mixing matrix through edge detection in the noise-free case, which enables CAM to identify the mixing matrix not only in the exact-determined and over-determined scenarios, but also in the under-determined scenario. We show the optimality of the edge detection strategy, even for cases where source well-groundedness is not strictly satisfied. The CAM algorithm integrates plug-in noise filtering using sector-based clustering, an efficient geometric convex analysis scheme, and stability-based model order selection. The superior performance of CAM against a panel of benchmark BSS techniques is demonstrated on numerically mixed gene expression data of ovarian cancer subtypes. We apply CAM to dissect dynamic contrast-enhanced magnetic resonance imaging data taken from breast tumors and time-course microarray gene expression data derived from in-vivo muscle regeneration in mice, both producing biologically plausible decomposition results. PMID:27922124

  13. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume and EUD based objective functions, Newton's method far
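
    The diagonal-Hessian approximation mentioned above amounts to an elementwise Newton step; a minimal generic sketch (not the paper's solver) is:

```python
import numpy as np

def diagonal_newton_step(x, grad, hess_diag, floor=1e-6):
    """Newton-like update with the Hessian replaced by its diagonal, applied
    elementwise; `floor` guards against division by tiny curvatures."""
    return x - grad / np.maximum(hess_diag, floor)

# For the separable quadratic f(x) = 0.5 * sum(d_i * x_i^2), one diagonal-Newton
# step reaches the exact minimizer (all zeros).
d = np.array([2.0, 10.0, 0.5])
x = np.array([1.0, -3.0, 4.0])
print(diagonal_newton_step(x, grad=d * x, hess_diag=d))
```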

  14. Algorithm to optimize transient hot-wire thermal property measurement.

    PubMed

    Bran-Anleu, Gabriela; Lavine, Adrienne S; Wirz, Richard E; Kavehpour, H Pirouz

    2014-04-01

    The transient hot-wire method has been widely used to measure the thermal conductivity of fluids. The ideal working equation is based on the solution of the transient heat conduction equation for an infinite linear heat source assuming no natural convection or thermal end effects. In practice, the assumptions inherent in the model are only valid for a portion of the measurement time. In this study, an algorithm was developed to automatically select the proper data range from a transient hot-wire experiment. Numerical simulations of the experiment were used in order to validate the algorithm. The experimental results show that the developed algorithm can be used to improve the accuracy of thermal conductivity measurements.

  15. Preconditioning 2D Integer Data for Fast Convex Hull Computations.

    PubMed

    Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
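
    A classical example of such preconditioning is the Akl-Toussaint heuristic, which discards points strictly inside the quadrilateral formed by the four axis-extreme points; the sketch below illustrates that generic filter, not the integer-grid method proposed in the paper, and the random test data are assumed.

```python
import numpy as np

def akl_toussaint_filter(points):
    """Discard points strictly inside the quadrilateral spanned by the points of
    minimum/maximum x and y; the survivors contain every convex hull vertex."""
    pts = np.asarray(points, dtype=float)
    corners = np.array([pts[pts[:, 0].argmin()],    # leftmost
                        pts[pts[:, 1].argmax()],    # topmost
                        pts[pts[:, 0].argmax()],    # rightmost
                        pts[pts[:, 1].argmin()]])   # bottommost (clockwise order)
    keep = np.zeros(len(pts), dtype=bool)
    for a, b in zip(corners, np.roll(corners, -1, axis=0)):
        # A point on or to the left of a clockwise edge lies on or outside the
        # quadrilateral for that edge, so it must be kept.
        cross = (b[0] - a[0]) * (pts[:, 1] - a[1]) - (b[1] - a[1]) * (pts[:, 0] - a[0])
        keep |= cross >= 0.0
    return pts[keep]

# Illustrative use on random integer points in a 1000 x 1000 box (assumed data).
rng = np.random.default_rng(1)
cloud = rng.integers(0, 1000, size=(100000, 2))
print(len(akl_toussaint_filter(cloud)), "of", len(cloud), "points kept")
```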

  16. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary that contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868

  17. Evolutionary algorithm for optimization of nonimaging Fresnel lens geometry.

    PubMed

    Yamada, N; Nishikawa, T

    2010-06-21

    In this study, an evolutionary algorithm (EA), which consists of genetic and immune algorithms, is introduced to design the optical geometry of a nonimaging Fresnel lens; this lens generates the uniform flux concentration required for a photovoltaic cell. Herein, a design procedure that incorporates a ray-tracing technique in the EA is described, and the validity of the design is demonstrated. The results show that the EA automatically generated a unique geometry of the Fresnel lens; the use of this geometry resulted in better uniform flux concentration with high optical efficiency.

  18. An optimization algorithm for multipath parallel allocation for service resource in the simulation task workflow.

    PubMed

    Wang, Zhiteng; Zhang, Hongjun; Zhang, Rui; Li, Yong; Zhang, Xuliang

    2014-01-01

    Service-oriented modeling and simulation are active research topics, and service resources must be invoked while a simulation task workflow is running. Optimizing the allocation of service resources so that tasks complete effectively is therefore an important issue in this area. In the military modeling and simulation field, it is important to improve the probability of success and the timeliness of simulation task workflows. This paper therefore proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath service resource parallel allocation model is built and a multiple-chain coding quantum optimization algorithm is used for its solution. The multiple-chain coding scheme extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates how the choice of optimization algorithm, service allocation strategy, and number of paths affects the probability of success in the simulation task workflow; the results show that the proposed multipath parallel allocation algorithm is an effective way to improve the probability of success and timeliness in simulation task workflows.

  19. SOS! An algorithm and software for the stochastic optimization of stimuli.

    PubMed

    Armstrong, Blair C; Watson, Christine E; Plaut, David C

    2012-09-01

    The characteristics of the stimuli used in an experiment critically determine the theoretical questions the experiment can address. Yet there is relatively little methodological support for selecting optimal sets of items, and most researchers still carry out this process by hand. In this research, we present SOS, an algorithm and software package for the stochastic optimization of stimuli. SOS takes its inspiration from a simple manual stimulus selection heuristic that has been formalized and refined as a stochastic relaxation search. The algorithm rapidly and reliably selects a subset of possible stimuli that optimally satisfy the constraints imposed by an experimenter. This allows the experimenter to focus on selecting an optimization problem that suits his or her theoretical question and to avoid the tedious task of manually selecting stimuli. We detail how this optimization algorithm, combined with a vocabulary of constraints that define optimal sets, allows for the quick and rigorous assessment and maximization of the internal and external validity of experimental items. In doing so, the algorithm facilitates research using factorial, multiple/mixed-effects regression, and other experimental designs. We demonstrate the use of SOS with a case study and discuss other research situations that could benefit from this tool. Support for the generality of the algorithm is demonstrated through Monte Carlo simulations on a range of optimization problems faced by psychologists. The software implementation of SOS and a user manual are provided free of charge for academic purposes as precompiled binaries and MATLAB source files at http://sos.cnbc.cmu.edu.

  20. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    SciTech Connect

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-06-15

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with the DVHs better than the reference ones. If the reference plan is too restrictive for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have DVH trade-offs similar to those of the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with DVH trade-offs similar to those of the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan with high efficiency. It is found that the new algorithm can significantly improve the plan quality and planning efficiency in ART replanning and automatic treatment