Phase retrieval using iterative Fourier transform and convex optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fen; Cheng, Hong; Zhang, Quanbing; Wei, Sui
2015-05-01
Phase is an inherent characteristic of any wave field. Statistics show that more than 25% of a field's information is encoded in the amplitude term and about 75% in the phase term. Phase retrieval is the recovery of phase by computation from magnitude measurements; it provides data for holographic display, 3D field reconstruction, X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Mathematically, phase retrieval is an inverse problem subject to physical and computational constraints. Some recent algorithms, such as PhaseLift, PhaseCut, and compressive phase retrieval, use the principles of compressive sensing: they formulate phase retrieval as finding the rank-one solution to a system of linear matrix equations, making the overall algorithm a convex program over n × n matrices. However, by "lifting" a vector problem to a matrix one, these methods incur a much higher computational cost. Furthermore, they use only intensity measurements and impose few physical constraints. In this paper, a new algorithm is proposed that combines the above convex optimization methods with the well-known iterative Fourier transform algorithm (IFTA). The IFTA iterates between the object domain and the spectral domain to enforce the physical constraints and converges quickly, as demonstrated in many applications such as computer-generated holography (CGH). Here the output phase of the IFTA is treated as the initial guess for the convex optimization methods, and the reconstructed phase is then computed numerically using a modified TFOCS. Simulation results show that the combined algorithm increases the likelihood of successful recovery and improves the precision of the solution.
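A minimal sketch of the error-reduction IFTA iteration described above (illustrative only; the paper's modified-TFOCS refinement stage is not reproduced, and the function name and toy object are ours):

```python
import numpy as np

def ifta_phase_retrieval(fourier_mag, support, n_iter=200, seed=0):
    """Minimal error-reduction IFTA: alternate between enforcing the
    measured Fourier magnitude and an object-domain support constraint."""
    rng = np.random.default_rng(seed)
    # random initial phase guess in the spectral domain
    init = fourier_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, fourier_mag.shape))
    g = np.fft.ifft2(init)
    for _ in range(n_iter):
        g = g * support                              # object-domain constraint
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))   # keep phase, replace magnitude
        g = np.fft.ifft2(G)
    return g * support

# toy usage: recover a flat square object from its Fourier magnitude
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 1.0
support = obj > 0
mag = np.abs(np.fft.fft2(obj))
rec = ifta_phase_retrieval(mag, support)
```

In the combined algorithm of the abstract, the phase of `rec` would then seed a convex solver rather than be used directly.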
Convex Optimization for Big Data
NASA Astrophysics Data System (ADS)
Cevher, Volkan; Becker, Stephen; Schmidt, Mark
2014-09-01
This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new Big Data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.
Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
NASA Astrophysics Data System (ADS)
Daneshmand, Amir; Facchinei, Francisco; Kungurtsev, Vyacheslav; Scutari, Gesualdo
2015-08-01
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable) convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated simultaneously by minimizing local convex approximations of the original nonconvex function. To tackle huge-scale problems, the (block) variables to be updated are chosen according to a mixed random and deterministic procedure, which captures the advantages of both purely deterministic and purely random update-based schemes. Almost-sure convergence of the proposed scheme is established. Numerical results show that on huge-scale problems the proposed hybrid random/deterministic algorithm outperforms both random and deterministic schemes.
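A toy sketch of the random-block-update idea, using a convex quadratic plus an l1 term as a stand-in for the paper's more general (possibly nonconvex) smooth part; the function names and block sizes are illustrative, not the authors' scheme:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def block_prox_gradient(A, b, lam, n_blocks=4, n_iter=500, seed=0):
    """Randomized block update for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    each iteration picks one random block and applies a proximal gradient
    step on a local convex approximation, in the spirit of parallel/block
    decomposition schemes."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    for _ in range(n_iter):
        blk = blocks[rng.integers(n_blocks)]
        grad = A.T @ (A @ x - b)
        x[blk] = soft_threshold(x[blk] - step * grad[blk], step * lam)
    return x

# toy usage: sparse least squares
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
x_true = np.zeros(12); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = block_prox_gradient(A, b, lam=0.1)
```

In the paper's hybrid scheme the block choice mixes this random selection with a deterministic (e.g., greedy) rule; here only the random half is shown.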
Implementation of an Interior-Point Algorithm for Real-Time Convex Optimization
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Motaghedi, Shui; Carson, John
2007-01-01
The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
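The key reduction above is that, once knots and data parameters are fixed, the fit is linear in the coefficients and can be solved by SVD. A minimal sketch of that convex subproblem, with a power basis standing in for the paper's fixed-knot B-spline basis (the function name and basis are ours):

```python
import numpy as np

def fit_linear_basis_svd(t, y, basis_funcs):
    """With knots/parameters fixed, curve fitting reduces to linear least
    squares in the coefficients; solve it via the SVD-based pseudo-inverse,
    which also handles rank deficiency gracefully."""
    B = np.column_stack([f(t) for f in basis_funcs])  # design matrix
    coef = np.linalg.pinv(B) @ y                       # SVD-based solve
    return coef, B @ coef

# toy usage: degree-5 polynomial basis standing in for fixed-knot B-splines
t = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * t)
basis = [lambda t, k=k: t**k for k in range(6)]
coef, fit = fit_linear_basis_svd(t, y, basis)
```

In the paper, the metaheuristic (firefly algorithm) searches over the parameterization while this inner linear solve is repeated for each candidate.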
Adaptive Algorithms for Planar Convex Hull Problems
NASA Astrophysics Data System (ADS)
Ahn, Hee-Kap; Okamoto, Yoshio
We study problems in computational geometry from the viewpoint of adaptive algorithms. Adaptive algorithms have been extensively studied for the sorting problem, and in this paper we generalize the framework to geometric problems. To this end, we think of geometric problems as permutation (or rearrangement) problems of arrays, and define the "presortedness" as a distance from the input array to the desired output array. We call an algorithm adaptive if it runs faster when a given input array is closer to the desired output, and furthermore it does not make use of any information of the presortedness. As a case study, we look into the planar convex hull problem for which we discover two natural formulations as permutation problems. An interesting phenomenon that we prove is that for one formulation the problem can be solved adaptively, but for the other formulation no adaptive algorithm can be better than an optimal output-sensitive algorithm for the planar convex hull problem.
A Convex Guidance Algorithm for Formation Reconfiguration
NASA Technical Reports Server (NTRS)
Acikmese, A. Behcet; Scharf, Daniel P.; Murray, Emmanuell A.; Hadaegh, Fred Y.
2006-01-01
In this paper, a reconfiguration guidance algorithm for formation flying spacecraft is presented. The formation reconfiguration guidance problem is first formulated as a continuous-time minimum-fuel or minimum-energy optimal control problem with collision avoidance and control constraints. The optimal control problem is then discretized to obtain a finite dimensional parameter optimization problem. In this formulation, the collision avoidance constraints are imposed via separating planes between each pair of spacecraft. A heuristic is introduced to choose these separating planes that leads to the convexification of the collision avoidance constraints. Additionally, convex constraints are imposed to guarantee that no collisions occur between discrete time samples. The resulting finite dimensional optimization problem is a second order cone program, for which standard algorithms can compute the global optimum with deterministic convergence and a prescribed level of accuracy. Consequently, the formation reconfiguration algorithm can be implemented onboard a spacecraft for real-time operations.
Hoai An, Le Thi; Tao, P. D.
1994-12-31
We present a new algorithm of d.c. optimization (DCA) for globally minimizing a (convex or nonconvex) quadratic form on a Euclidean ball or sphere: min{(1/2)x^T A x + b^T x : ||x|| <= r} (Q1) and min{(1/2)x^T A x + b^T x : ||x|| = r} (Q2), where A is an n x n real symmetric matrix, b is a vector in R^n, and r is a positive number. DCA is attractive because it is computationally inexpensive and quite reliable. For a "good" d.c. decomposition of the objective function, we propose a simple DCA to solve (Q1). This algorithm can also be applied to solving (Q2). Numerical simulations on a series of test problems are reported. They show the robustness, stability, and superiority of DCA with respect to known standard methods in the literature. The use of DCA in trust-region methods, in the constrained eigenvalue problem, and in least squares with quadratic constraints is consequently very interesting.
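A sketch of one standard d.c. decomposition for (Q1), assuming q = g - h with g(x) = (rho/2)||x||^2 and h(x) = (rho/2)||x||^2 - q(x), rho >= lambda_max(A) so h is convex; the DCA step then reduces to a projection onto the ball (this is a textbook-style decomposition, not necessarily the paper's exact one):

```python
import numpy as np

def dca_ball(A, b, r, n_iter=500, x0=None):
    """DCA for min {0.5*x'Ax + b'x : ||x|| <= r}. Each iteration takes a
    subgradient of the concave part and minimizes the remaining convex
    model, which here is a Euclidean projection onto the ball."""
    n = A.shape[0]
    rho = max(np.max(np.linalg.eigvalsh(A)), 1e-9)  # makes h convex
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        y = rho * x - (A @ x + b)          # gradient of h at x
        z = y / rho                        # minimizer of g(x) - <y, x>
        nz = np.linalg.norm(z)
        x = z if nz <= r else z * (r / nz) # project onto the ball
    return x
```

For convex A the iteration reaches the global minimizer; for indefinite A it converges to a d.c. critical point, which is why the paper pairs DCA with care in choosing the decomposition.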
A convex hull algorithm for neural networks
Wennmyr, E.
1989-11-01
A convex hull algorithm for neural networks is presented. It is applicable in both two and three dimensions, and has a time complexity of O(N) for the off-line case, O(log N) for the on-line case in two dimensions, and O(hN), O(N), respectively, for three dimensions (h is the number of faces in the convex hull). The constant bounding the complexity is expected to be very small.
Sparse recovery via convex optimization
NASA Astrophysics Data System (ADS)
Randall, Paige Alicia
This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z, where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from MRI to genomics to compressed sensing. We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message. We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of sqrt{s}, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence, where the coherence can only be as large as O(1/sqrt{m}) to allow sparsities as large as O(sqrt{m}). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition. We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property. Finally, we introduce the method of l_1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary.
We require that the combined measurement/dictionary matrix satisfy a uniform uncertainty principle, and we compare our results with the more standard l_1 synthesis approach. All our methods involve solving an l_1 minimization program, which can be written as either a linear program or a second-order cone program, and the well-established machinery of convex optimization can be used to solve it rapidly.
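The noiseless l_1 minimization program (basis pursuit) can indeed be written as a linear program, via the standard split x = u - v with u, v >= 0. A minimal sketch (the function name is ours; this is the generic reformulation, not the thesis code):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1  s.t.  Ax = y, as the LP
    min 1'(u + v)  s.t.  A(u - v) = y,  u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    uv = res.x
    return uv[:n] - uv[n:]
```

At an optimum u and v have disjoint supports, so 1'(u + v) equals the l_1 norm of the recovered x. The noisy variants of the thesis replace the equality by a norm constraint, giving a second-order cone program instead.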
Robust boosting via convex optimization
NASA Astrophysics Data System (ADS)
Rätsch, Gunnar
2001-12-01
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. 
o How to adapt boosting to regression problems? Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties. o Can boosting techniques be useful in practice? The presented theoretical results are guided by simulation results, either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis, and a drug discovery process. --- Note: the author received the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best dissertation of 2001/2002.
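The weighted linear combination of base hypotheses that the thesis analyzes can be illustrated with a minimal AdaBoost on decision stumps (a generic textbook variant, not the regularized soft-margin algorithms developed in the thesis; all names are ours):

```python
import numpy as np

def train_stump(X, y, w):
    """Best threshold stump on weighted data; labels y are in {-1, +1}."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.sign(X[:, j] - t + 1e-12)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def adaboost(X, y, n_rounds=20):
    """Iteratively reweight examples and combine stumps into a weighted
    vote -- the linear combination of base hypotheses whose margin the
    text studies."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        j, t, s, err = train_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # hypothesis weight
        pred = s * np.sign(X[:, j] - t + 1e-12)
        w *= np.exp(-alpha * y * pred)          # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    def predict(Xq):
        score = sum(a * s * np.sign(Xq[:, j] - t + 1e-12)
                    for a, j, t, s in ensemble)
        return np.sign(score)
    return predict
```

The thesis's soft-margin modification changes the example reweighting so that noisy points cannot accumulate unbounded weight; this sketch shows only the baseline behavior.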
Advances in dual algorithms and convex approximation methods
NASA Technical Reports Server (NTRS)
Smaoui, H.; Fleury, C.; Schmit, L. A.
1988-01-01
A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.
Algorithms for the Computation of Reduced Convex Hulls
NASA Astrophysics Data System (ADS)
Goodrich, Ben; Albrecht, David; Tischer, Peter
Geometric interpretations of Support Vector Machines (SVMs) have introduced the concept of a reduced convex hull. A reduced convex hull is the set of all convex combinations of a set of points where the weight any single point can be assigned is bounded from above by a constant. This paper decouples reduced convex hulls from their origins in SVMs and allows them to be constructed independently. Two algorithms for the computation of reduced convex hulls are presented - a simple recursive algorithm for points in the plane and an algorithm for points in an arbitrary dimensional space. Upper bounds on the number of vertices and facets in a reduced convex hull are used to analyze the worst-case complexity of the algorithms.
Qhull: Quickhull algorithm for computing the convex hull
NASA Astrophysics Data System (ADS)
Barber, C. Bradford; Dobkin, David P.; Huhdanpaa, Hannu
2013-04-01
Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. The source code runs in 2-d, 3-d, 4-d, and higher dimensions. Qhull implements the Quickhull algorithm for computing the convex hull. It handles roundoff errors from floating point arithmetic. It computes volumes, surface areas, and approximations to the convex hull.
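Qhull is accessible from Python through SciPy, which wraps the library; a minimal usage sketch:

```python
import numpy as np
from scipy.spatial import ConvexHull  # SciPy wraps the Qhull library

pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])  # last is interior
hull = ConvexHull(pts)
# hull.vertices: indices of hull points (counterclockwise in 2-D)
# In 2-D, Qhull's conventions are: hull.volume is the area, hull.area the perimeter
print(sorted(hull.vertices.tolist()))
```

The same call works in 3-D and higher, where `hull.volume` and `hull.area` take their usual meanings.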
First-order convex feasibility algorithms for x-ray CT
Sidky, Emil Y.; Pan, Xiaochuan; Jorgensen, Jakob S.
2013-03-15
Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144 degrees. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application.
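The convex feasibility idea can be illustrated with the simplest projection method: alternating projections onto the measurement hyperplanes (a Kaczmarz/ART sweep) and onto a nonnegativity constraint. This is a stand-in for intuition only; the paper uses an accelerated Chambolle-Pock algorithm, not this scheme:

```python
import numpy as np

def pocs_kaczmarz(A, b, n_sweeps=200):
    """Find a point in the intersection of the hyperplanes {x : a_i.x = b_i}
    and the nonnegative orthant by cyclic projections onto convex sets."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x = x + ai * (bi - ai @ x) / (ai @ ai)  # project onto hyperplane
        x = np.maximum(x, 0.0)                      # project onto x >= 0
    return x
```

In CT the hyperplanes come from the system matrix rows, and the feasibility sets of the paper additionally encode data-tolerance and regularity constraints.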
ERIC Educational Resources Information Center
Berger, Marcel
1990-01-01
Discussed are the idea, examples, problems, and applications of convexity. Topics include historical examples, definitions, the John-Loewner ellipsoid, convex functions, polytopes, the algebraic operation of duality and addition, and topology of convex bodies. (KR)
Nonlinear Rescaling and Proximal-Like Methods in Convex Optimization
NASA Technical Reports Server (NTRS)
Polyak, Roman; Teboulle, Marc
1997-01-01
The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their optimal sets of solutions coincide. A nonlinear transformation parameterized by a positive scalar parameter and based on a smooth scaling function is used to transform the constraints. The methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular, a new class of exponential penalty-modified barrier function methods is introduced.
NASA Astrophysics Data System (ADS)
Ju, Wenqi; Luo, Jun
Given a set of n equal-size, non-overlapping axis-aligned squares, we need to choose exactly one point in each square to make the area of the convex hull of the resulting point set as large as possible. A previous algorithm [10] for this problem is optimal and runs in O(n^3) time. In this paper, we propose an approximation algorithm which runs in O(n log n) time and gives a convex hull whose area is larger than the area of the optimal convex hull minus the area of one square.
Computable optimal value bounds for generalized convex programs
NASA Technical Reports Server (NTRS)
Fiacco, Anthony V.; Kyparisis, Jerzy
1987-01-01
It has been shown by Fiacco that convexity or concavity of the optimal value of a parametric nonlinear programming problem can readily be exploited to calculate global parametric upper and lower bounds on the optimal value function. The approach is attractive because it involves manipulation of information normally required to characterize solution optimality. A procedure is briefly described for calculating and improving the bounds as well as its extensions to generalized convex and concave functions. Several areas of applications are also indicated.
Multiband RF pulses with improved performance via convex optimization
NASA Astrophysics Data System (ADS)
Shang, Hong; Larson, Peder E. Z.; Kerr, Adam; Reed, Galen; Sukumar, Subramaniam; Elkhaled, Adam; Gordon, Jeremy W.; Ohliger, Michael A.; Pauly, John M.; Lustig, Michael; Vigneron, Daniel B.
2016-01-01
Selective RF pulses are commonly designed with the desired profile as a low pass filter frequency response. However, for many MRI and NMR applications, the spectrum is sparse with signals existing at a few discrete resonant frequencies. By specifying a multiband profile and releasing the constraint on "don't-care" regions, the RF pulse performance can be improved to enable a shorter duration, sharper transition, or lower peak B1 amplitude. In this project, a framework for designing multiband RF pulses with improved performance was developed based on the Shinnar-Le Roux (SLR) algorithm and convex optimization. It can create several types of RF pulses with multiband magnitude profiles, arbitrary phase profiles and generalized flip angles. The advantage of this framework with a convex optimization approach is the flexible trade-off of different pulse characteristics. Designs for specialized selective RF pulses for balanced SSFP hyperpolarized (HP) 13C MRI, a dualband saturation RF pulse for 1H MR spectroscopy, and a pre-saturation pulse for HP 13C study were developed and tested.
Algorithm for detecting human faces based on convex-hull
NASA Astrophysics Data System (ADS)
Park, Minsick; Park, Chang-Woo; Park, Mignon; Lee, Chang-Hoon
2002-03-01
In this paper, we propose a new method to detect faces in color images based on the convex hull. We detect two kinds of regions: skin-like and hair-like regions. After preprocessing, we apply the convex hull to these regions and find a face from their intersection relationship. The proposed algorithm can accomplish face detection in an image containing rotated and turned faces as well as several faces. To validate the effectiveness of the proposed method, we experiment with various cases.
Convex optimization under inequality constraints in rank-deficient systems
NASA Astrophysics Data System (ADS)
Roese-Koerner, Lutz; Schuh, Wolf-Dieter
2014-05-01
Many geodetic applications require the minimization of a convex objective function subject to some linear equality and/or inequality constraints. If a system is singular (e.g., a geodetic network without a defined datum) this results in a manifold of solutions. Most state-of-the-art algorithms for inequality constrained optimization (e.g., the Active-Set-Method or primal-dual Interior-Point-Methods) are either not able to deal with a rank-deficient objective function or yield only one of an infinite number of particular solutions. In this contribution, we develop a framework for the rigorous computation of a general solution of a rank-deficient problem with inequality constraints. We aim for the computation of a unique particular solution which fulfills predefined optimality criteria as well as for an adequate representation of the homogeneous solution including the constraints. Our theoretical findings are applied in a case study to determine optimal repetition numbers for a geodetic network to demonstrate the potential of the proposed framework.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
An Efficient Algorithm for the Convex Hull of Planar Scattered Point Set
NASA Astrophysics Data System (ADS)
Fu, Z.; Lu, Y.
2012-07-01
Computing the convex hull of a point set is a basic requirement in GIS applications. This paper studies the minimum convex hull problem and presents an improved algorithm for the minimum convex hull of a planar scattered point set. It first divides the point set into several subregions to obtain an initial convex hull boundary. Then the points on the boundary which cannot be vertices of the minimum convex hull are removed one by one. Finally, the concave points remaining on the boundary which cannot be vertices of the minimum convex hull are discarded. Experimental analysis shows the efficiency of the algorithm compared with other methods.
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements of the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum-fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second-order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; (vi) initial results on developing an onboard table lookup method to obtain almost fuel-optimal solutions in real time.
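A much-simplified illustration of discretized minimum-fuel guidance: a 1-D double integrator landing at rest, with the |u| objective handled by the standard split u = up - un so the problem becomes a linear program (the real algorithm is an SOCP with 3-D dynamics, thrust cones, and glide-slope constraints; everything here is a toy of our own construction):

```python
import numpy as np
from scipy.optimize import linprog

def min_fuel_1d(z0, v0, N=20, dt=1.0, umax=1.0):
    """min sum |u_k|*dt s.t. Euler dynamics z_{k+1}=z_k+dt*v_k,
    v_{k+1}=v_k+dt*u_k, |u_k|<=umax, and z_N = v_N = 0."""
    # decision vector: [up_0..up_{N-1}, un_0..un_{N-1}], u = up - un
    c = np.ones(2 * N) * dt
    # v_N = v0 + dt*sum u_k
    row_v = np.ones(N) * dt
    # z_N = z0 + N*dt*v0 + dt^2 * sum (N-1-k) u_k
    row_z = np.array([(N - 1 - k) * dt**2 for k in range(N)])
    A_eq = np.vstack([np.hstack([row_v, -row_v]),
                      np.hstack([row_z, -row_z])])
    b_eq = np.array([-v0, -(z0 + N * dt * v0)])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, umax)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]
```

Replacing |u| by the thrust magnitude ||u|| in 3-D turns each bound into a second-order cone constraint, which is exactly the step to the SOCP formulation in the paper.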
Application of convex optimization to acoustical array signal processing
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Chen, Ching-Cheng
2013-12-01
This paper demonstrates that optimal weighting coefficients and inverse filters for microphone arrays can be obtained with the aid of a systematic mathematical programming methodology. Both far-field and near-field array problems are formulated in terms of convex optimization. Three application examples, including data-independent far-field array design, near-field array design, and pressure field interpolation, are presented. In far-field array design, array coefficients are optimized to trade off the Directivity Index against White Noise Gain or the coefficient norm, while in near-field array design convex optimization is applied to Equivalent Source Method-based Nearfield Acoustical Holography. Numerical examples are given for designing a far-field two-dimensional random array comprising thirty microphones. For far-field arrays, five design approaches, including a Delay-And-Sum beamformer, a Super Directivity Array, and three optimal arrays designed using ℓ1-, ℓ2-, and ℓ∞-norms, are compared. Numerical and experimental results show that sufficiently high White Noise Gain is crucial to robust array performance against sensor mismatch and noise. For near-field arrays, inverse filters were designed in light of the Equivalent Source Method and convex optimization to reconstruct the velocity field on a baffled spherical piston source. The proposed near-field design is benchmarked against designs using Truncated Singular Value Decomposition and Tikhonov Regularization. Compressive Sampling and convex optimization are applied to pressure field reconstruction, source separation and modal analysis, with satisfactory performance for both near-field and far-field microphone arrays.
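The White Noise Gain that the abstract identifies as the robustness criterion has a simple closed form, WNG = |w^H d|² / (w^H w), where d is the look-direction steering vector; for matched delay-and-sum weights it equals the number of microphones. A small sketch (array geometry and frequency invented for illustration):

```python
import cmath
import math

# White Noise Gain of a beamformer: WNG = |w^H d|^2 / (w^H w).  For
# delay-and-sum weights w = d/N the WNG equals N, the number of microphones,
# which is the robustness ceiling the paper trades against directivity.
def steering_vector(n_mics, spacing, freq, angle, c=343.0):
    """Plane-wave steering vector for a uniform linear array (unit-modulus)."""
    k = 2 * math.pi * freq / c
    return [cmath.exp(-1j * k * spacing * m * math.sin(angle))
            for m in range(n_mics)]

def white_noise_gain(w, d):
    num = abs(sum(wi.conjugate() * di for wi, di in zip(w, d))) ** 2
    den = sum(abs(wi) ** 2 for wi in w)
    return num / den

N = 30                                       # the paper's thirty-microphone array
d = steering_vector(N, 0.05, 1000.0, 0.0)    # broadside look direction
w = [di / N for di in d]                     # delay-and-sum weights
wng = white_noise_gain(w, d)                 # equals N for matched weights
```

In decibels this is 10·log10(N) ≈ 14.8 dB for N = 30; optimized designs give up part of this gain in exchange for directivity.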
An algorithm for determining the convex hull of random points
Borgwardt, K.H.
1994-12-31
This talk presents an accelerated Gift-Wrapping algorithm for constructing the convex hull of random points in n-dimensional Euclidean space. The method under consideration combines Gift-Wrapping with an implicit and dynamic Throw-Away principle. Exploiting some advantages of the revised Simplex method, the algorithm carries out a walk on the surface of the desired polytope and visits all faces. For this algorithm we develop a probabilistic analysis. Let the random points be independently and identically distributed, with a distribution symmetric under rotations. We calculate the corresponding mean values of the computational effort for the parameterized family of distributions over the n-dimensional unit ball.
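The core wrapping step is easiest to see in the plane. A 2-D simplification of Gift-Wrapping (Jarvis march), without the talk's Throw-Away acceleration or n-dimensional simplex machinery, looks like this:

```python
# 2-D Gift-Wrapping (Jarvis march): starting from the leftmost point,
# repeatedly "wrap" to the next hull vertex until the hull closes.
# Runs in O(n*h) time for h hull vertices.
def jarvis_march(points):
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    start = min(points)                 # leftmost point is always on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if r == p:
                continue
            c = cross(p, q, r)
            # pick r if it lies to the right of the candidate edge p->q,
            # or is collinear with it but farther away
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:                  # wrapped all the way around
            return hull

wrap = jarvis_march([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The Throw-Away idea accelerates exactly the inner loop: points already known to be interior need not be re-examined when wrapping to the next facet.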
COMMIT: Convex optimization modeling for microstructure informed tractography.
Daducci, Alessandro; Dal Palù, Alessandro; Lemkaddem, Alia; Thiran, Jean-Philippe
2015-01-01
Tractography is a class of algorithms that aim to map in vivo the major neuronal pathways in the white matter from diffusion magnetic resonance imaging (MRI) data. These techniques offer a powerful tool to noninvasively investigate the architecture of the neuronal connections of the brain at the macroscopic scale. Unfortunately, the reconstructions recovered with existing tractography algorithms are not truly quantitative, even though diffusion MRI is a quantitative modality by nature. Indeed, several techniques have been proposed in recent years to estimate, at the voxel level, intrinsic microstructural features of the tissue, such as axonal density and diameter, by using multicompartment models. In this paper, we present a novel framework to reestablish the link between tractography and tissue microstructure. Starting from an input set of candidate fiber-tracts, estimated from the data using standard fiber-tracking techniques, we model the diffusion MRI signal in each voxel of the image as a linear combination of the restricted and hindered contributions generated in every location of the brain by these candidate tracts. Then, we seek the global weight of each of them, i.e., the effective contribution or volume, such that they best fit the measured signal globally. We demonstrate that these weights can be easily recovered by solving a global convex optimization problem with efficient algorithms. The effectiveness of our approach has been evaluated both on a realistic phantom with known ground truth and on in vivo brain data. Results clearly demonstrate the benefits of the proposed formulation, opening new perspectives for a more quantitative and biologically plausible assessment of the structural connectivity of the brain. PMID:25167548
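The convex fit at the heart of COMMIT can be sketched in miniature: the voxel signal y is modeled as a nonnegative combination A·w of per-tract contributions, and the weights are recovered by projected gradient descent on the least-squares objective. The tiny "dictionary" A below is invented for illustration and stands in for the restricted/hindered contribution operator.

```python
# Miniature analogue of the COMMIT fit: solve min ||A w - y||^2 s.t. w >= 0
# by projected gradient descent on the convex objective.  The matrix A here
# is a made-up stand-in for the per-tract signal contributions.
def nnls_projected_gradient(A, y, iters=500):
    m, n = len(A), len(A[0])
    w = [0.0] * n
    # step 1/L, with L = squared Frobenius norm >= largest eigenvalue of A^T A
    L = sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    step = 1.0 / L
    for _ in range(iters):
        r = [sum(A[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        w = [max(0.0, w[j] - step * g[j]) for j in range(n)]  # project onto w >= 0
    return w

A = [[1.0, 1.0], [0.0, 1.0]]   # two candidate "tracts", two measurements
y = [3.0, 1.0]                 # signal generated by true weights [2, 1]
w = nnls_projected_gradient(A, y)
```

Because the problem is convex, any such first-order scheme converges to the unique global weights, which is what makes the reconstruction quantitative.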
First and second order convex approximation strategies in structural optimization
NASA Technical Reports Server (NTRS)
Fleury, C.
1989-01-01
In this paper, various methods based on convex approximation schemes are discussed that have demonstrated strong potential for efficient solution of structural optimization problems. First, the convex linearization method (Conlin) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both Conlin and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semiempirical rules. Attention is next directed toward methods that use diagonal second derivatives in order to provide a sound basis for building up high-quality explicit approximations of the behavior constraints. In particular, it is shown how second-order information can be effectively used without demanding a prohibitive computational cost. Various first-order and second-order approaches are compared by applying them to simple problems that have a closed form solution.
Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization
NASA Technical Reports Server (NTRS)
Pinson, Robin; Lu, Ping
2015-01-01
This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging
NASA Astrophysics Data System (ADS)
Girard, J. N.; Garsden, H.; Starck, J. L.; Corbel, S.; Woiselle, A.; Tasse, C.; McKean, J. P.; Bobin, J.
2015-08-01
Compressed sensing theory is slowly making its way into more and more astronomical inverse problems. We address here the application of sparse representations, convex optimization and proximal theory to radio interferometric imaging. First, we expose the theory behind interferometric imaging, sparse representations and convex optimization; second, we illustrate their application with numerical tests of SASIR, an implementation of FISTA, a forward-backward splitting algorithm, hosted in a LOFAR imager. Various tests have been conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution by a factor of ≈2) on point sources as compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range, and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five times lower residuals) of the extended emission as compared to CLEAN. With the advent of large radio telescopes, there is scope for improving classical imaging methods with convex optimization methods combined with sparse representations.
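The FISTA scheme that SASIR implements is short enough to state in full. A minimal version for the generic sparse recovery problem min ½‖Ax − b‖² + λ‖x‖₁, run here on a toy problem rather than interferometric data:

```python
# Minimal FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 -- the same
# forward-backward scheme SASIR builds on, on a toy problem.
def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def fista(A, b, lam, L, iters=200):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    y, t = x[:], 1.0
    for _ in range(iters):
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x_new = soft([y[j] - g[j] / L for j in range(n)], lam / L)
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = [x_new[j] + (t - 1) / t_new * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new                   # Nesterov momentum update
    return x

A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity for a checkable case
b = [3.0, -0.5, 1.0]
x = fista(A, b, lam=1.0, L=1.0)   # for A = I the answer is soft(b, 1) = [2, 0, 0]
```

In the imaging setting A would be the measurement operator composed with a sparsifying dictionary, and L its squared operator norm.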
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
Convex hull based neuro-retinal optic cup ellipse optimization in glaucoma diagnosis.
Zhang, Zhuo; Liu, Jiang; Cherian, Neetu Sara; Sun, Ying; Lim, Joo Hwee; Wong, Wing Kee; Tan, Ngan Meng; Lu, Shijian; Li, Huiqi; Wong, Tien Ying
2009-01-01
Glaucoma is the second leading cause of blindness. It can be diagnosed through measurement of the neuro-retinal optic cup-to-disc ratio (CDR). Automatic calculation of the optic cup boundary is challenging due to the interweaving of blood vessels with the surrounding tissues around the cup. A Convex Hull based Neuro-Retinal Optic Cup Ellipse Optimization algorithm improves the accuracy of the boundary estimation. The algorithm's effectiveness is demonstrated on a data set of 70 clinical patients collected from the Singapore Eye Research Institute. The root mean squared error of the new algorithm is 43% better than that of the state-of-the-art ARGALI system. This has led to a large clinical evaluation of the algorithm involving 15 thousand patients from Australia and Singapore. PMID:19963748
Near-optimal deterministic algorithms for volume computation via M-ellipsoids
Dadush, Daniel; Vempala, Santosh S.
2013-01-01
We give a deterministic algorithm for computing an M-ellipsoid of a convex body, matching a known lower bound. This leads to a nearly optimal deterministic algorithm for estimating the volume of a convex body and improved deterministic algorithms for fundamental lattice problems under general norms.
Optimization-based mesh correction with volume and convexity constraints
D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail
2016-02-24
Here, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
A Localization Method for Multistatic SAR Based on Convex Optimization.
Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu
2015-01-01
In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031
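The geometric idea, each T/R pair constraining the target to an iso-range ellipse and gradient descent intersecting them, can be sketched with a simplified least-squares analogue. The geometry below is invented, and the paper's model functions and PCA acceleration are omitted:

```python
import math

# Simplified analogue of BRS-only localization: each T/R pair contributes a
# bistatic range sum |t - x| + |r - x|, and the target is recovered by
# gradient descent on the sum of squared range-sum residuals.
def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def localize(pairs, brs, x0=(0.0, 0.0), lr=0.02, iters=3000):
    x = list(x0)
    for _ in range(iters):
        gx = gy = 0.0
        for (t, r), s in zip(pairs, brs):
            dt, dr = dist(t, x), dist(r, x)
            res = dt + dr - s                     # range-sum residual
            gx += 2 * res * ((x[0] - t[0]) / dt + (x[0] - r[0]) / dr)
            gy += 2 * res * ((x[1] - t[1]) / dt + (x[1] - r[1]) / dr)
        x[0] -= lr * gx
        x[1] -= lr * gy
    return tuple(x)

pairs = [((0.0, 10.0), (10.0, 0.0)),
         ((-10.0, 0.0), (0.0, -10.0)),
         ((10.0, 10.0), (-10.0, 10.0))]          # invented T/R geometry
target = (3.0, 4.0)
brs = [dist(t, target) + dist(r, target) for t, r in pairs]  # noiseless BRSs
est = localize(pairs, brs)
```

With three well-spread pairs and noiseless range sums the descent recovers the target; in the paper the same principle is applied to BRSs extracted from focused multistatic images.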
Optimal transportation in for a distance cost with a convex constraint
NASA Astrophysics Data System (ADS)
Chen, Ping; Jiang, Feida; Yang, Xiao-Ping
2015-06-01
We prove existence of an optimal transportation map for the Monge-Kantorovich's problem associated with a cost function c( x, y) with a convex constraint in . The cost function coincides with the Euclidean distance | x - y| if the displacement y - x belongs to a given closed convex set C with at most countable flat parts and it is infinite otherwise.
SLOPE: Adaptive Variable Selection via Convex Optimization
Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.
2015-01-01
We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min_{b ∈ ℝ^p} (1/2)‖y − Xb‖²_{ℓ2} + λ1|b|(1) + λ2|b|(2) + ⋯ + λp|b|(p), where λ1 ≥ λ2 ≥ ⋯ ≥ λp ≥ 0 and |b|(1) ≥ |b|(2) ≥ ⋯ ≥ |b|(p) are the decreasing absolute values of the entries of b. This is a convex program, and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank, that is, the stronger the signal, the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λBH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α-quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
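Two ingredients of the abstract, the sorted-ℓ1 penalty and the BH critical values, are direct to compute (a sketch; the full SLOPE solver also needs the proximal operator of the sorted norm, not shown here):

```python
from statistics import NormalDist

# The sorted-l1 SLOPE penalty: lambda_1*|b|_(1) + ... + lambda_p*|b|_(p),
# pairing the largest |coefficient| with the largest weight.
def slope_penalty(b, lam):
    mags = sorted((abs(bi) for bi in b), reverse=True)
    return sum(l * m for l, m in zip(lam, mags))

# BH critical values lambda_BH(i) = z(1 - i*q/(2p)), with z the standard
# normal quantile, as suggested in the abstract.
def bh_lambdas(p, q):
    z = NormalDist().inv_cdf
    return [z(1 - (i + 1) * q / (2 * p)) for i in range(p)]

pen = slope_penalty([3.0, -1.0, 2.0], [3.0, 2.0, 1.0])  # 3*3 + 2*2 + 1*1 = 14
lams = bh_lambdas(p=3, q=0.1)   # a decreasing sequence of thresholds
```

The decreasing λ sequence is what makes the rank-dependent penalization work: the strongest signals must clear the most stringent thresholds.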
[3-D endocardial surface modelling based on the convex hull algorithm].
Lu, Ying; Xi, Ri-hui; Shen, Hai-dong; Ye, You-li; Zhang, Yong
2006-11-01
In this paper, a method based on the convex hull algorithm is presented for extracting modelling data from the locations of catheter electrodes within a cardiac chamber, so as to create a 3-D model of the heart chamber during diastole and to obtain a good result in the 3-D reconstruction of the chamber based on VTK. PMID:17300005
A scalable projective scaling algorithm for l(p) loss with convex penalizations.
Zhou, Hongbo; Cheng, Qiang
2015-02-01
This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions, which have an lp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but there are hardly any reports of explicit linear scalability with respect to the problem size. The proposed approach starts with developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions, which satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it only involves matrix-vector multiplication and avoids matrix inversion operations. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems. PMID:25608289
libCreme: An optimization library for evaluating convex-roof entanglement measures
NASA Astrophysics Data System (ADS)
Röthlisberger, Beat; Lehmann, Jörg; Loss, Daniel
2012-01-01
We present the software library libCreme which we have previously used to successfully calculate convex-roof entanglement measures of mixed quantum states appearing in realistic physical systems. Evaluating the amount of entanglement in such states is in general a non-trivial task requiring the solution of a highly non-linear complex optimization problem. The algorithms provided here are able to do this for a large and important class of entanglement measures. The library is mostly written in the MATLAB programming language, but is fully compatible with the free and open-source Octave platform. Some inefficient subroutines are written in C/C++ for better performance. This manuscript discusses the most important theoretical concepts and workings of the algorithms, focusing on the actual implementation and usage within the library. Detailed examples in the end should make it easy for the user to apply libCreme to specific problems. Program summary: Program title: libCreme. Catalogue identifier: AEKD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL version 3. No. of lines in distributed program, including test data, etc.: 4323. No. of bytes in distributed program, including test data, etc.: 70 542. Distribution format: tar.gz. Programming language: Matlab/Octave and C/C++. Computer: All systems running Matlab or Octave. Operating system: All systems running Matlab or Octave. Classification: 4.9, 4.15. Nature of problem: Evaluate convex-roof entanglement measures. This involves solving a non-linear (unitary) optimization problem. Solution method: Two algorithms are provided: a conjugate-gradient method using a differential-geometric approach, and a quasi-Newton method together with a mapping to Euclidean space.
Running time: Typically seconds to minutes for a density matrix of a few low-dimensional systems and a decent implementation of the pure-state entanglement measure.
The convex wrapping algorithm: a method for identifying muscle paths using the underlying bone mesh.
Desailly, Eric; Sardain, Philippe; Khouri, Nejib; Yepremian, Daniel; Lacouture, Patrick
2010-09-17
Associating musculoskeletal models to motion analysis data enables the determination of the muscular lengths, lengthening rates and moment arms of the muscles during the studied movement. Therefore, those models must be anatomically personalized and able to identify realistic muscular paths. Different kinds of algorithms exist to achieve this last issue, such as the wired models and the finite elements ones. After having studied the advantages and drawbacks of each one, we present the convex wrapping algorithm. Its purpose is to identify the shortest path from the origin to the insertion of a muscle wrapping over the underlying skeleton mesh while respecting possible non-sliding constraints. After the presentation of the algorithm, the results obtained are compared to a classically used wrapping surface algorithm (obstacle set method) by measuring the length and moment arm of the semitendinosus muscle during an asymptomatic gait. The convex wrapping algorithm gives an efficient and realistic way of identifying the muscular paths with respect to the underlying bones mesh without the need to define simplified geometric forms. It also enables the identification of the centroid path of the muscles if their thickness evolution function is known. All this presents a particular interest when studying populations presenting noticeable bone deformations, such as those observed in cerebral palsy or rheumatic pathologies. PMID:20627304
An Effective Branch-and-Bound Algorithm for Convex Quadratic Integer Programming
NASA Astrophysics Data System (ADS)
Buchheim, Christoph; Caprara, Alberto; Lodi, Andrea
We present a branch-and-bound algorithm for minimizing a convex quadratic objective function over integer variables subject to convex constraints. In a given node of the enumeration tree, corresponding to the fixing of a subset of the variables, a lower bound is given by the continuous minimum of the restricted objective function. We improve this bound by exploiting the integrality of the variables using suitably-defined lattice-free ellipsoids. Experiments show that our approach is very fast on both unconstrained problems and problems with box constraints. The main reason is that all expensive calculations can be done in a preprocessing phase, while a single node in the enumeration tree can be processed in linear time in the problem dimension.
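The node bound the abstract describes, the continuous minimum of the restricted objective, can be shown on a toy two-variable instance (invented here; the paper's lattice-free-ellipsoid improvement of the bound is not reproduced):

```python
import math

# Toy branch-and-bound for min f(x, y) = x^2 + x*y + y^2 - 3x - 5y over
# integer points in the box [-5, 5]^2.  Branching fixes x; the node lower
# bound is the continuous minimum over y with x fixed (df/dy = x + 2y - 5 = 0),
# echoing the "continuous minimum of the restricted objective" bound.
def f(x, y):
    return x * x + x * y + y * y - 3 * x - 5 * y

best, best_pt = math.inf, None
for x in range(-5, 6):                  # branch: fix the integer value of x
    y_cont = (5 - x) / 2                # continuous minimizer of f(x, .)
    if f(x, y_cont) >= best:            # node lower bound cannot beat the
        continue                        # incumbent -> prune this branch
    # a 1-D convex quadratic attains its integer minimum at floor or ceil
    # of the continuous minimizer
    for y in (math.floor(y_cont), math.ceil(y_cont)):
        y = max(-5, min(5, y))          # keep y inside the box
        if f(x, y) < best:
            best, best_pt = f(x, y), (x, y)
```

Most branches are pruned by the continuous bound alone; the paper's preprocessing makes the per-node work linear in the dimension for the general (non-separable) case.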
NASA Astrophysics Data System (ADS)
Panicker, Rahul Alex
Multimode fibers (MMF) are widely deployed in local-, campus-, and storage-area-networks. Achievable data rates and transmission distances are, however, limited by the phenomenon of modal dispersion. We propose a system to compensate for modal dispersion using adaptive optics. This leads to a 10- to 100-fold improvement in performance over current standards. We propose a provably optimal technique for minimizing inter-symbol interference (ISI) in MMF systems using adaptive optics via convex optimization. We use a spatial light modulator (SLM) to shape the spatial profile of light launched into an MMF. We derive an expression for the system impulse response in terms of the SLM reflectance and the field patterns of the MMF principal modes. Finding optimal SLM settings to minimize ISI, subject to physical constraints, is posed as an optimization problem. We observe that our problem can be cast as a second-order cone program, which is a convex optimization problem. Its global solution can, therefore, be found with minimal computational complexity. Simulations show that this technique opens up an eye pattern originally closed due to ISI. We then propose fast, low-complexity adaptive algorithms for optimizing the SLM settings. We show that some of these converge to the global optimum in the absence of noise. We also propose modified versions of these algorithms to improve resilience to noise and speed of convergence. Next, we experimentally compare the proposed adaptive algorithms in 50-μm graded-index (GRIN) MMFs using a liquid-crystal SLM. We show that continuous-phase sequential coordinate ascent (CPSCA) gives better bit-error-ratio performance than 2- or 4-phase sequential coordinate ascent, in concordance with simulations. We evaluate the bandwidth characteristics of CPSCA, and show that a single SLM is able to simultaneously compensate over up to 9 wavelength-division-multiplexed (WDM) 10-Gb/s channels, spaced by 50 GHz, over a total bandwidth of 450 GHz.
We also show that CPSCA is able to compensate for modal dispersion over up to 2.2 km, even in the presence of mid-span connector offsets up to 4 μm (simulated in experiment by offset splices). A known non-adaptive launching technique using a fusion-spliced single-mode-to-multimode patchcord is shown to fail under these conditions. Finally, we demonstrate 10 x 10 Gb/s dense WDM transmission over 2.2 km of 50-μm GRIN MMF. We combine transmitter-based adaptive optics and receiver-based single-mode filtering, and control the launched field pattern for ten 10-Gb/s non-return-to-zero channels, wavelength-division multiplexed on a 200-GHz grid in the C band. We achieve error-free transmission through 2.2 km of 50-μm GRIN MMF for launch offsets up to 10 μm and for worst-case launched polarization. We employ a ten-channel transceiver based on parallel integration of electronics and photonics.
NASA Astrophysics Data System (ADS)
Chen, Peijun; Huang, Jianguo; Zhang, Xiaoqun
2013-02-01
Recently, the minimization of a sum of two convex functions has received considerable interest in variational image restoration models. In this paper, we propose a general algorithmic framework for solving a separable convex minimization problem from the point of view of fixed point algorithms based on proximity operators (Moreau 1962 C. R. Acad. Sci., Paris I 255 2897-99). Motivated by proximal forward-backward splitting proposed in Combettes and Wajs (2005 Multiscale Model. Simul. 4 1168-200) and fixed point algorithms based on the proximity operator (FP2O) for image denoising (Micchelli et al 2011 Inverse Problems 27 45009-38), we design a primal-dual fixed point algorithm based on the proximity operator (PDFP2Oκ for κ ∈ [0, 1)) and obtain a scheme with a closed-form solution for each iteration. Using the firmly nonexpansive properties of the proximity operator and with the help of a special norm over a product space, we achieve the convergence of the proposed PDFP2Oκ algorithm. Moreover, under some stronger assumptions, we can prove the global linear convergence of the proposed algorithm. We also give the connection of the proposed algorithm with other existing first-order methods. Finally, we illustrate the efficiency of PDFP2Oκ through some numerical examples on image super-resolution, computerized tomographic reconstruction and parallel magnetic resonance imaging. Generally speaking, our method PDFP2O (κ = 0) is comparable with other state-of-the-art methods in numerical performance, while it has some advantages in parameter selection in real applications.
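The "closed-form solution for each iteration" comes from using only proximity operators. A generic primal-dual proximal iteration in Chambolle-Pock form (close in spirit to, but not identical with, the paper's PDFP2Oκ scheme) for 1-D total-variation denoising illustrates this:

```python
# Primal-dual proximal iteration (Chambolle-Pock form, NOT the paper's exact
# PDFP2O-kappa scheme) for 1-D TV denoising:
#   min_x 0.5*||x - b||^2 + lam*||Dx||_1,  D = forward differences.
# Every step uses only closed-form proximity operators.
def tv_denoise(b, lam, tau=0.25, sigma=0.25, iters=5000):
    n = len(b)
    x, y, x_bar = b[:], [0.0] * (n - 1), b[:]
    for _ in range(iters):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip to [-lam, lam]
        Dxb = [x_bar[i + 1] - x_bar[i] for i in range(n - 1)]
        y = [min(lam, max(-lam, y[i] + sigma * Dxb[i])) for i in range(n - 1)]
        # primal step: prox of 0.5*||. - b||^2 is (v + tau*b) / (1 + tau)
        Dty = [0.0] * n
        for i in range(n - 1):                     # apply D^T
            Dty[i] -= y[i]
            Dty[i + 1] += y[i]
        x_old = x[:]
        x = [(x[i] - tau * Dty[i] + tau * b[i]) / (1 + tau) for i in range(n)]
        x_bar = [2 * x[i] - x_old[i] for i in range(n)]  # extrapolation
    return x

b = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
x = tv_denoise(b, lam=1.0)   # the two blocks shrink toward each other by lam/3
```

For this piecewise-constant input the optimality conditions give the blocks at 4/3 and 14/3 exactly, so the iteration can be checked against a hand-derived solution.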
Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2014-01-01
Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, re-engineer heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation–elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics. 
PMID:25232050
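The RFM itself is a small ODE system that is easy to integrate to steady state. A sketch with uniform rates chosen for illustration (for unit rates and n = 3, solving the steady-state flux-balance equations by hand gives a translation rate of exactly 1/3):

```python
# Integrate the ribosome flow model (RFM) to steady state and read off the
# translation rate R = lam_n * x_n.  Rates here are uniform and illustrative.
#   dx_1/dt = lam_0*(1 - x_1) - lam_1*x_1*(1 - x_2)
#   dx_i/dt = lam_{i-1}*x_{i-1}*(1 - x_i) - lam_i*x_i*(1 - x_{i+1})
#   dx_n/dt = lam_{n-1}*x_{n-1}*(1 - x_n) - lam_n*x_n
def rfm_steady_rate(lams, n, dt=0.01, steps=50000):
    # lams has n+1 entries: initiation rate lams[0], then n elongation rates
    x = [0.0] * n
    for _ in range(steps):
        flow = [lams[0] * (1 - x[0])]               # initiation flow
        for i in range(1, n):
            flow.append(lams[i] * x[i - 1] * (1 - x[i]))
        flow.append(lams[n] * x[n - 1])             # exit flow = production
        x = [x[i] + dt * (flow[i] - flow[i + 1]) for i in range(n)]
    return lams[n] * x[n - 1]

n = 3
R = rfm_steady_rate([1.0] * (n + 1), n)   # steady-state rate; 1/3 for unit rates
```

The concavity result in the abstract is what guarantees that maximizing R over the n + 1 rates, under a resource constraint, is a convex problem with a unique optimum.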
Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2014-11-01
Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, re-engineer heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation-elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics. 
PMID:25232050
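The RFM described above can be sketched directly: n site occupancies coupled through n + 1 rates, with the steady-state translation rate read off the last site. A minimal forward-Euler integration sketch (the integrator, step size, and example rates below are illustrative choices, not from the paper):

```python
def rfm_derivatives(x, lam):
    """Right-hand side of the RFM: n first-order nonlinear ODEs.
    x   : n site occupancies in [0, 1]
    lam : n + 1 positive rates (lam[0] is the initiation rate)."""
    n = len(x)
    dx = []
    for i in range(n):
        inflow = lam[i] * (x[i - 1] if i > 0 else 1.0) * (1.0 - x[i])
        outflow = lam[i + 1] * x[i] * (1.0 - (x[i + 1] if i < n - 1 else 0.0))
        dx.append(inflow - outflow)
    return dx

def steady_state_rate(lam, dt=0.01, steps=100000):
    """Integrate to steady state; return the translation rate R = lam[n] * x_n."""
    n = len(lam) - 1
    x = [0.5] * n
    for _ in range(steps):
        dx = rfm_derivatives(x, lam)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return lam[-1] * x[-1]
```

For a single site with lam = [1, 1] the steady state is x = 0.5, so R = 0.5; maximizing R over the rates under a budget constraint is the concave problem studied in the paper.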
Exact Convex Relaxation of Optimal Power Flow in Radial Networks
Gan, LW; Li, N; Topcu, U; Low, SH
2015-01-01
The optimal power flow (OPF) problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss, and is nonconvex. We prove that, for radial power networks, a global optimum of OPF can be obtained by solving a second-order cone program, under a mild condition and after slightly shrinking the OPF feasible set. The condition can be checked a priori, and holds for the IEEE 13-, 34-, 37-, and 123-bus networks and two real-world networks.
NASA Technical Reports Server (NTRS)
Oakley, Celia M.; Barratt, Craig H.
1990-01-01
Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection, both in simulation and in real-time experiments.
A two-layer recurrent neural network for nonsmooth convex optimization problems.
Qin, Sitian; Xue, Xiaoping
2015-06-01
In this paper, a two-layer recurrent neural network is proposed to solve nonsmooth convex optimization problems subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov, and that from any initial point the state converges to an equilibrium point. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems. PMID:25051563
Convex array vector velocity imaging using transverse oscillation and its optimization.
Jensen, Jørgen Arendt; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann
2015-12-01
A method for obtaining vector flow images using the transverse oscillation (TO) approach on a convex array is presented. The paper presents optimization schemes for TO fields and evaluates their performance using simulations and measurements with an experimental scanner. A 3-MHz 192-element convex array probe (pitch 0.33 mm) is used in both simulations and measurements. A parabolic velocity profile is simulated at a beam-to-flow angle of 90°. The optimization routine changes the lateral oscillation period as a function of depth to yield the best possible estimates, based on the energy ratio between positive and negative spatial frequencies in the ultrasound field. The energy ratio is reduced from -17.1 dB to -22.1 dB. Parabolic profiles are estimated on simulated data using 16 emissions. The optimization gives a reduction in standard deviation from 8.81% to 7.4% for 16 emissions, with a reduction in lateral velocity bias from -15.93% to 0.78% at 90° (transverse flow) at a depth of 40 mm. Measurements have been performed using the experimental ultrasound scanner and a convex array transducer. A bias of -0.93% was obtained at 87° for a parabolic velocity profile, along with a standard deviation of 6.37%. The livers of two healthy volunteers were scanned using the experimental setup. The in vivo images demonstrate that the method yields realistic estimates with a consistent angle and mean velocity across three heart cycles. PMID:26670846
Craft, David
2009-01-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. PMID:20022275
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Yin, Chuancun; Yuen, Kam Chuen; Shen, Ying
2015-01-01
We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of the probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655
Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)
2002-01-01
We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on the developmental theory of learning. Because of these properties, setting up the additional population is trivial, making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization benchmark. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet it has poor coverage across the Pareto front.
Another hybrid conjugate gradient algorithm for unconstrained optimization
NASA Astrophysics Data System (ADS)
Andrei, Neculai
2008-02-01
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of the Hestenes-Stiefel and Dai-Yuan choices, i.e., β_k^C = (1 − θ_k) β_k^HS + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the direction of the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation ∇²f(x_{k+1}) s_k = y_k, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and Dai-Yuan conjugate gradient algorithms, as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
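The convex combination of the two beta formulas can be sketched as follows. Here theta is held at a fixed constant and a simple Armijo backtracking replaces the Wolfe line search, whereas the paper computes theta_k adaptively from the quasi-Newton condition:

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, iters=200, tol=1e-10):
    """Conjugate gradient method with beta a convex combination of the
    Hestenes-Stiefel and Dai-Yuan formulas:
        beta = (1 - theta) * beta_HS + theta * beta_DY.
    theta is fixed here; the paper chooses theta_k from a secant condition
    and uses Wolfe line searches instead of this Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0:                 # safeguard: restart along steepest descent
            d = -g
            slope = g @ d
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        if abs(denom) < 1e-12:
            d = -g_new                 # restart when the curvature pair degenerates
        else:
            beta_hs = (g_new @ y) / denom
            beta_dy = (g_new @ g_new) / denom
            beta = (1.0 - theta) * beta_hs + theta * beta_dy
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic, the iterates converge to the unique minimizer for any fixed theta in [0, 1].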
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in total computer time or accuracy.
NASA Technical Reports Server (NTRS)
Olariu, S.; Schwing, J.; Zhang, J.
1991-01-01
A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn × n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 × n.
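For contrast with the parallel O(log n/log m) bound, the standard sequential convex hull is easy to state. A monotone-chain sketch (Andrew's O(n log n) algorithm, not the PARBS algorithm of the paper):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points, O(n log n)
    sequentially; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```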
Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; Chen, Guo; He, Xing
2016-02-01
This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and the uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to the nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated by the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and on the four-tank benchmark for hierarchical and distributed model predictive control. PMID:26595931
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Cai, Ailong; Wang, Linyuan; Yan, Bin; Li, Lei; Zhang, Hanming; Hu, Guoen
2015-10-01
An efficient iterative algorithm, based on recent work in non-convex optimization and generalized p-shrinkage mappings, is proposed for volume image reconstruction from circular cone-beam scans. Conventional total variation regularization uses the L1 norm of the gradient magnitude image (GMI). This paper instead utilizes a generalized penalty function of the GMI, induced by p-shrinkage, which is proven to be a better measure of sparsity. The reconstruction model is formed using generalized total p-variation (TpV) minimization, which differs from state-of-the-art methods, with the constraints that the estimated projection data are within a specified tolerance of the available data and that the values of the volume image are non-negative. Theoretically, the proximal mapping for penalty functions induced by p-shrinkage has an exact and closed-form expression; thus, the constrained optimization can be stably and efficiently solved by the alternating direction minimization (ADM) scheme. Each sub-problem decoupled by variable splitting is minimized by explicit and easy-to-implement formulas developed by ADM. The proposed algorithm is efficiently implemented using a graphics processing unit and is referred to as "TpV-ADM." The method is robust and accurate even for very-few-view reconstruction datasets. Verifications and comparisons performed using various datasets (including ideal, noisy, and real projections) illustrate that the proposed method is effective and promising. PMID:26233922
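The closed-form proximal step that makes the ADM iterations cheap can be illustrated on a scalar. This uses the p-shrinkage form popularized by Chartrand, which reduces to ordinary soft thresholding at p = 1; the exact penalty and parameter choices in the paper may differ:

```python
import math

def p_shrink(x, lam, p):
    """Generalized p-shrinkage of a scalar (Chartrand):
        shrink_p(x) = sign(x) * max(|x| - lam^(2 - p) * |x|^(p - 1), 0).
    For p = 1 this is ordinary soft thresholding; for p < 1 large
    entries are shrunk less, promoting sparsity more aggressively."""
    if x == 0.0:
        return 0.0
    mag = max(abs(x) - lam ** (2.0 - p) * abs(x) ** (p - 1.0), 0.0)
    return math.copysign(mag, x)
```

Applied elementwise to the GMI, this mapping is what each ADM sub-problem evaluates in closed form.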
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We stress the need for such a preprocessor both for quality (error) and for cost (complexity) of the solution. As its first step, the preprocessor makes use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already available), and the physical environment, as well as information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution straightaway through a GA without using knowledge of the character of the system, one does a much better job of producing a solution by using the information generated in this first step. We therefore advocate the use of such a preprocessor for real-world optimization problems, including NP-complete ones, before applying the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
Parallel Selective Algorithms for Nonconvex Big Data Optimization
NASA Astrophysics Data System (ADS)
Facchinei, Francisco; Scutari, Gesualdo; Sagratella, Simone
2015-04-01
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities "in between" with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.
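The fully parallel (Jacobi) end of the spectrum can be illustrated on LASSO, where every coordinate is updated simultaneously from the same iterate. This is a plain proximal-gradient sketch, not the authors' selective scheme:

```python
import numpy as np

def parallel_lasso(A, b, lam, iters=500):
    """Fully parallel (Jacobi-style) proximal gradient for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    All coordinates are updated simultaneously from the same iterate,
    the simplest member of the Jacobi-to-Gauss-Seidel family."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 1/L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - tau * A.T @ (A @ x - b)     # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)  # soft threshold
    return x
```

A Gauss-Seidel variant would instead sweep the coordinates sequentially, using each fresh value immediately; the paper's framework interpolates between the two.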
Simulation of stochastic systems via polynomial chaos expansions and convex optimization
NASA Astrophysics Data System (ADS)
Fagiano, Lorenzo; Khammash, Mustafa
2012-09-01
Polynomial chaos expansions represent a powerful tool to simulate stochastic models of dynamical systems. Yet, deriving the expansion's coefficients for complex systems might require a significant and nontrivial manipulation of the model, or the computation of large numbers of simulation runs, rendering the approach too time-consuming and impractical for applications with more than a handful of random variables. We introduce a computationally tractable technique for computing the coefficients of polynomial chaos expansions. The approach exploits a regularization technique with a particular choice of weighting matrices, which makes it possible to take into account the specific features of polynomial chaos expansions. The method, completely based on convex optimization, can be applied to problems with a large number of random variables and uses a modest number of Monte Carlo simulations, while avoiding model manipulations. Additional information on the stochastic process, when available, can also be incorporated in the approach by means of convex constraints. We show the effectiveness of the proposed technique in three applications in diverse fields, including the analysis of a nonlinear electric circuit, a chaotic model of organizational behavior, and a chemical oscillator.
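A minimal version of the regression idea: sample the random input, evaluate a Hermite polynomial basis, and solve a regularized least-squares problem for the coefficients. A plain ridge term stands in here for the paper's structured weighting matrices:

```python
import numpy as np

def pce_coefficients(f, degree, n_samples=2000, reg=1e-6, seed=0):
    """Estimate polynomial chaos coefficients of f(xi), xi ~ N(0, 1), by
    regularized least squares on Monte Carlo samples."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)
    # Probabilists' Hermite polynomials He_0 .. He_degree via the recurrence
    # He_k(x) = x * He_{k-1}(x) - (k - 1) * He_{k-2}(x)
    H = np.zeros((n_samples, degree + 1))
    H[:, 0] = 1.0
    if degree >= 1:
        H[:, 1] = xi
    for k in range(2, degree + 1):
        H[:, k] = xi * H[:, k - 1] - (k - 1) * H[:, k - 2]
    y = f(xi)
    G = H.T @ H + reg * np.eye(degree + 1)  # ridge-regularized normal equations
    return np.linalg.solve(G, H.T @ y)
```

Since x² = He_0(x) + He_2(x), fitting f(x) = x² recovers coefficients close to [1, 0, 1, 0].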
Semard, Gaëlle; Peulon-Agasse, Valerie; Bruchet, Auguste; Bouillon, Jean-Philippe; Cardinaël, Pascal
2010-08-13
It is important to develop methods of optimizing the selection of column sets and operating conditions for comprehensive two-dimensional gas chromatography. A new method for the calculation of the percentage of separation space used was developed using Delaunay's triangulation algorithms (convex hull). This approach was compared with an existing method and showed better precision and accuracy. It was successfully applied to the selection of the most convenient column set and the geometrical parameters of second column for the analysis of 49 target compounds in wastewater. PMID:20633886
On the optimality of the neighbor-joining algorithm
Eickmeyer, Kord; Huggins, Peter; Pachter, Lior; Yoshida, Ruriko
2008-01-01
The popular neighbor-joining (NJ) algorithm used in phylogenetics is a greedy algorithm for finding the balanced minimum evolution (BME) tree associated to a dissimilarity map. From this point of view, NJ is "optimal" when the algorithm outputs the tree which minimizes the balanced minimum evolution criterion. We use the fact that the NJ tree topology and the BME tree topology are determined by polyhedral subdivisions of the space R_+^(n(n-1)/2) of dissimilarity maps to study the optimality of the neighbor-joining algorithm. In particular, we investigate and compare the polyhedral subdivisions for n ≤ 8. This requires the measurement of volumes of spherical polytopes in high dimension, which we obtain using a combination of Monte Carlo methods and polyhedral algorithms. Our results include a demonstration that highly unrelated trees can be co-optimal in BME reconstruction, and that NJ regions are not convex. We obtain the l2 radius for neighbor-joining for n = 5 and we conjecture that the ability of the neighbor-joining algorithm to recover the BME tree depends on the diameter of the BME tree. PMID:18447942
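The greedy step whose optimality the polyhedral subdivisions characterize is the Q-criterion pair selection; one such step can be sketched as:

```python
def nj_select_pair(d):
    """One greedy step of neighbor joining: form the Q-criterion
        Q(i, j) = (n - 2) * d(i, j) - r_i - r_j,  with r_i = sum_k d(i, k),
    and return the pair of taxa minimizing it (the pair NJ joins next)."""
    n = len(d)
    r = [sum(row) for row in d]
    best, best_q = None, float("inf")
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * d[i][j] - r[i] - r[j]
            if q < best_q:
                best, best_q = (i, j), q
    return best, best_q
```

On the classic 4-taxon example matrix below, taxa 0 and 1 are joined first (tied with the pair 2, 3); ties like this are exactly where the polyhedral regions studied in the paper meet.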
Developing learning algorithms via optimized discretization of continuous dynamical systems.
Tao, Qing; Sun, Zhengya; Kong, Kang
2012-02-01
Most of the existing numerical optimization methods are based upon a discretization of some ordinary differential equations. In order to solve some convex and smooth optimization problems coming from machine learning, in this paper we develop efficient batch and online algorithms based on a new principle, the optimized discretization of continuous dynamical systems (ODCDSs). First, a batch learning projected gradient dynamical system with Lyapunov stability and a monotonicity property is introduced, and its dynamical behavior guarantees the accuracy of the discretization-based optimizer and the applicability of a line search strategy. Furthermore, under fair assumptions, a new online learning algorithm achieving regret O(√T) or O(log T) is obtained. By using the line search strategy, the proposed batch learning ODCDS is insensitive to the step sizes and decreases the objective faster. With only a small number of line search steps, the proposed stochastic algorithm shows sufficient stability and approximate optimality. Experimental results demonstrate the correctness of our theoretical analysis and the efficiency of our algorithms. PMID:21880573
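The O(√T) regret quoted above is the classic guarantee for projected online gradient descent with step size proportional to 1/√t (the O(log T) rate additionally requires strong convexity). A sketch, with illustrative constants G and D:

```python
import math

def online_gradient_descent(grads, project, x0, G=1.0, D=1.0):
    """Projected online gradient descent with eta_t = D / (G * sqrt(t)),
    the Zinkevich-style scheme achieving O(sqrt(T)) regret for convex
    losses.  grads is the sequence of per-round gradient oracles."""
    x = x0
    iterates = []
    for t, g in enumerate(grads, start=1):
        x = project(x - (D / (G * math.sqrt(t))) * g(x))
        iterates.append(x)
    return iterates
```

With repeated rounds of the loss (x − 0.3)² and projection onto [0, 1], the iterates settle near 0.3.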
Optimizing the quantum adiabatic algorithm
NASA Astrophysics Data System (ADS)
Hu, Hongye; Wu, Biao
2016-01-01
In the quantum adiabatic algorithm, as the adiabatic parameter s(t) changes slowly from zero to one at a finite rate, a transition to excited states inevitably occurs, and this induces an intrinsic computational error. We show that this computational error depends not only on the total computation time T but also on the time derivatives of the adiabatic parameter s(t) at the beginning and the end of the evolution. Previous work [A. T. Rezakhani, A. K. Pimachev, and D. A. Lidar, Phys. Rev. A 82, 052305 (2010), 10.1103/PhysRevA.82.052305] also suggested this result. With six typical paths, we systematically demonstrate how to optimally design an adiabatic path to reduce the computational error. Our method has a clear physical picture and also explains the pattern of computational error. In this paper we focus on the quantum adiabatic search algorithm, although our results are general.
Nonconvex network optimization: Algorithms and software
Lamar, B.
1994-12-31
Although very efficient solution methods exist for linear and convex network optimization problems, minimum cost network flow problems with concave arc cost functions are challenging because determining the optimal solution requires, in the worst case, an evaluation of all the extreme points of the feasible region. Even more challenging are network flow problems whose arc costs are neither concave nor convex, as is the case for problems with price breaks or all-unit discounting. Yet such situations arise frequently in many real-world problems. In this talk, solution methods for concave cost network flow problems will be reviewed and a computer software package will be presented. In addition, a method for converting networks with arbitrary arc costs into a pure concave cost network will be described.
Resistive Network Optimal Power Flow: Uniqueness and Algorithms
Tan, CW; Cai, DWH; Lou, X
2015-01-01
The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincaré-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.
The Optimal Solution of a Non-Convex State-Dependent LQR Problem and Its Applications
Xu, Xudan; Zhu, J. Jim; Zhang, Ping
2014-01-01
This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR) problem, in which the control penalty weighting matrix in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established with a rigorous proof via the Euler-Lagrange equation. It is found that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE) simultaneously with the closed-loop system equation. A comparison theorem for the PDRE is given to facilitate solution methods. A linear time-variant system is employed as an example in simulation to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as an NSLQR problem, and two typical goal pursuit behaviors found in humans and animals are reproduced using different control weighting matrices. It is found that these two behaviors save control energy and cause less stress than the conventional control behavior typified by the LQR control with a constant control weighting matrix, in situations where only the goal discrepancy at the terminal time is of concern, such as in marathon races and target-hitting missions. PMID:24747417
Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J. (Department of Surgery, Weill Medical College, Cornell University, New York, New York 10065; Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011); Shen, Dinggang
2014-04-15
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods.
Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient CBCT dataset.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give near-optimum solutions to linear and non-linear problems in many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA successfully avoids the local optima problem. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).
NASA Astrophysics Data System (ADS)
Chen, Shibing; Wang, Xu-Jia
2016-01-01
In this paper we prove the strict c-convexity and the C^{1,α} regularity for potential functions in optimal transportation under condition (A3w). These results were obtained by Caffarelli [1,3,4] for the cost c(x, y) = |x − y|^2, and by Liu [11], Loeper [15], and Trudinger and Wang [20] for costs satisfying the condition (A3). For costs satisfying the condition (A3w), the results have also been proved by Figalli, Kim, and McCann [6], assuming that the initial and target domains are uniformly c-convex, see also [21]; and by Guillen and Kitagawa [8], assuming the cost function satisfies A3w in larger domains. In this paper we prove the strict c-convexity and the C^{1,α} regularity assuming either that the support of the source density is compactly contained in a larger domain where the cost function satisfies A3w, or that the dimension satisfies 2 ≤ n ≤ 4.
Continuous numerical algorithm for a class of combinatorial optimization problems
Cao, Jia-Ming
1994-12-31
It is well known that many optimization problems become very hard once discrete constraints on the variables are introduced. For combinatorial optimization problems, almost all existing algorithms search for the optimal solution in a discrete set and are usually complicated (with exponential time complexity). We consider a class of combinatorial optimization problems including the TSP, the max-cut problem, and the k-coloring problem (4-coloring problem), etc.; all these problems are known to be NP-complete. First, a unifying 0-1 quadratic programming model is constructed to formulate the above problems. This model's constraints are very special and separable. For this model we have obtained an equivalence between the discrete model and its relaxed problem in the sense of global or local minima; this equivalence guarantees that a 0-1 solution will be obtained by a simple constructive process from a continuous global or local minimum. Thus, these combinatorial optimization problems can be solved by converting them into a special non-convex quadratic program. Using some special properties of this model, a necessary and sufficient condition for a local minimum of this model is given. Second, the well-known linear programming approximation algorithm is adapted and verified to converge to a local minimum (for the general case this algorithm is known to converge only to a K-T point, not necessarily a local minimum). The corresponding linear programs are very easy because the constraints are separable. Finally, several classes of test problems are constructed to test the algorithm on all the above combinatorial optimization problems, and extensive computational tests, including comparisons with other well-known algorithms, are reported. The tests show the effectiveness of the algorithm. For example, a k-coloring (k ≥ 4) problem with 1000 nodes can be easily solved on a microcomputer (COMPAQ 386/25e) in 15 minutes.
Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui
2014-09-01
Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, also commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field in a necessary pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field in images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstructed the energy function to be convex and minimized it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate biases of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities, with promising results. PMID:24832358
Çağlar, F; Ozbek, I Y
2012-01-01
Heart sound localization in chest sound recordings is an essential part of many heart sound cancellation algorithms. The main difficulty for heart sound localization methods is the precise determination of the onset and offset boundaries of the heart sound segment. This paper presents a novel method to estimate lower and upper bounds for the onset and offset of the heart sound segment, which can be used as anchor points for more precise estimation. For this purpose, the chest sound is first divided into frames; entropy and smoothed-entropy features of these frames are then extracted and used in a convex-hull algorithm to estimate the upper and lower bounds of the heart sound boundaries. The convex-hull algorithm constructs a special type of envelope function for the entropy features, and if the maximal difference between the envelope function and the entropy is larger than a certain threshold, that point is considered a heart sound bound. The results of the proposed method are compared with a baseline method, a modified version of a well-known heart sound localization method. The results show that the proposed method outperforms the baseline method in terms of accuracy and detection error rate. The experimental results also show that smoothing the entropy features significantly improves the performance of both the baseline and proposed methods. PMID:23366867
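The envelope idea can be sketched as follows: build the upper convex hull of the frame-entropy curve and flag frames where the hull exceeds the entropy by more than a threshold. The entropy values and threshold below are synthetic illustrations, not the paper's data.

```python
# Hedged sketch of the envelope idea: build the upper convex hull of the
# frame-entropy curve, then flag frames where the hull exceeds the entropy
# by more than a threshold.  Values and threshold are illustrative.

def upper_hull(points):
    # Monotone-chain upper convex hull; points are (index, entropy), sorted by index.
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop while the turn is not clockwise (keeps the hull above the curve).
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def envelope_bounds(entropy, threshold):
    hull = upper_hull(list(enumerate(entropy)))
    flagged, h = [], 0
    for i, e in enumerate(entropy):
        while h + 1 < len(hull) and hull[h + 1][0] <= i:
            h += 1  # advance to the hull segment covering frame i
        if h + 1 < len(hull):
            (x1, y1), (x2, y2) = hull[h], hull[h + 1]
            env = y1 + (y2 - y1) * (i - x1) / (x2 - x1) if x2 > x1 else y1
        else:
            env = hull[h][1]
        if env - e > threshold:
            flagged.append(i)  # candidate heart-sound frame
    return flagged

# Flat background entropy with a dip where a heart sound suppresses disorder.
entropy = [1.0] * 10 + [0.2] * 6 + [1.0] * 10
print(envelope_bounds(entropy, 0.5))  # [10, 11, 12, 13, 14, 15]
```

The first and last flagged frames give the lower and upper bound anchors that the paper refines further.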
Automatic Segmentation of Neonatal Images Using Convex Optimization and Coupled Level Sets
Wang, Li; Shi, Feng; Lin, Weili; Gilmore, John H.; Shen, Dinggang
2011-01-01
Accurate segmentation of neonatal brain MR images remains challenging mainly due to their poor spatial resolution, inverted contrast between white matter and gray matter, and high intensity inhomogeneity. Most existing methods for neonatal brain segmentation are atlas-based and voxel-wise. Although active contour/surface models with geometric information constraint have been successfully applied to adult brain segmentation, they are not fully explored in the neonatal image segmentation. In this paper, we propose a novel neonatal image segmentation method by combining local intensity information, atlas spatial prior, and cortical thickness constraint in a single level-set framework. Besides, we also provide a robust and reliable tissue surface initialization for the proposed method by using a convex optimization technique. Thus, tissue segmentation, as well as inner and outer cortical surface reconstruction, can be obtained simultaneously. The proposed method has been tested on a large neonatal dataset, and the validation on 10 neonatal brain images (with manual segmentations) shows very promising results. PMID:21763443
Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants
NASA Astrophysics Data System (ADS)
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2015-09-01
Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient's anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant's RF-heating characteristics, for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B1+ field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient's anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.
Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization
NASA Technical Reports Server (NTRS)
Pinson, Robin; Lu, Ping
2015-01-01
Mission proposals to land spacecraft on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low-thrust vehicles.
An Optimal Class Association Rule Algorithm
NASA Astrophysics Data System (ADS)
Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie
Classification and association rule mining are two important aspects of data mining. Class association rule mining is a promising approach, since it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results on eight UCI data sets show that OCARA outperforms C4.5, CBA, and RMR.
Intelligent perturbation algorithms to space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.
NASA Astrophysics Data System (ADS)
Bredies, Kristian
2009-01-01
We consider the task of computing an approximate minimizer of the sum of a smooth and a non-smooth convex functional, respectively, in Banach space. Motivated by the classical forward-backward splitting method for the subgradients in Hilbert space, we propose a generalization which involves the iterative solution of simpler subproblems. Descent and convergence properties of this new algorithm are studied. Furthermore, the results are applied to the minimization of Tikhonov-functionals associated with linear inverse problems and semi-norm penalization in Banach spaces. With the help of Bregman-Taylor-distance estimates, rates of convergence for the forward-backward splitting procedure are obtained. Examples which demonstrate the applicability are given, in particular, a generalization of the iterative soft-thresholding method by Daubechies, Defrise and De Mol to Banach spaces as well as total-variation-based image restoration in higher dimensions are presented.
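A minimal Hilbert-space instance of this forward-backward scheme is iterative soft-thresholding (ISTA) for min over x of ½||Ax - b||² + λ||x||₁: a gradient (forward) step on the smooth term followed by the proximal (backward) step on the ℓ1 term. The matrix, data, and step size below are illustrative.

```python
# Minimal Hilbert-space instance of forward-backward splitting: iterative
# soft-thresholding (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

def soft_threshold(v, t):
    # Proximal map of t*||.||_1, applied componentwise (+0.0 avoids -0.0).
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) + 0.0 for vi in v]

def ista(A, b, lam, step, iters=200):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Forward step: gradient of the smooth term, A^T (A x - b).
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]
        r = [axi - bi for axi, bi in zip(Ax, b)]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # Backward step: proximal map of the non-smooth term.
        x = soft_threshold([xj - step * gj for xj, gj in zip(x, g)], step * lam)
    return x

# With A = I the minimizer is the componentwise soft-threshold of b.
I2 = [[1.0, 0.0], [0.0, 1.0]]
x = ista(I2, [2.0, -0.3], lam=0.5, step=1.0)
print([round(v, 4) for v in x])  # [1.5, 0.0]
```

The paper's contribution is the generalization of exactly this iteration from Hilbert to Banach spaces, with convergence rates via Bregman-Taylor-distance estimates.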
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field that is increasingly encountered in many domains worldwide. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. The paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
A comprehensive review of swarm optimization algorithms.
Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham
2015-01-01
Many swarm optimization algorithms have been introduced since the early 60's, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performance differences. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning
Chen, Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.
2010-09-15
Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
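The projection idea behind such a solver can be illustrated in a much-simplified form by cyclic orthogonal projections onto half-space constraints (e.g. maximum-dose limits on voxels). This is a generic feasibility sketch, not the authors' solver, and the two-beam example is invented for illustration.

```python
# Hedged stand-in for a projection-type solver: cyclic projection onto
# half-space constraints a . x <= b (e.g. maximum-dose limits), with
# nonnegativity of beam weights.  The paper's solver and its constraint
# sets are more elaborate; this only illustrates the projection idea.

def project_halfspace(x, a, b):
    # Orthogonal projection of x onto {y : a . y <= b}.
    viol = sum(ai * xi for ai, xi in zip(a, x)) - b
    if viol <= 0:
        return x
    nrm2 = sum(ai * ai for ai in a)
    return [xi - viol * ai / nrm2 for xi, ai in zip(x, a)]

def cyclic_projections(x, constraints, sweeps=100):
    for _ in range(sweeps):
        for a, b in constraints:
            x = project_halfspace(x, a, b)
        x = [max(0.0, xi) for xi in x]  # beam weights stay nonnegative
    return x

# Two beams, one over-dosed voxel: dose = x0 + x1 must drop to <= 1.
x = cyclic_projections([1.0, 1.0], [([1.0, 1.0], 1.0)])
print([round(v, 4) for v in x])  # [0.5, 0.5]
```

Each projection is a closed-form vector update, which is why projection methods carry almost no memory overhead compared with interior-point or simplex solvers.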
Models for optimal harvest with convex function of growth rate of a population
Lyashenko, O.I.
1995-12-10
Two models for growth of a population, which are described by a Cauchy problem for an ordinary differential equation with right-hand side depending on the population size and time, are investigated. The first model is time-discrete, i.e., the moments of harvest are fixed and discrete. The second model is time-continuous, i.e., a crop is harvested continuously in time. For autonomous systems, the second model is a particular case of the variational model for optimal control with constraints investigated in. However, the prerequisites and the method of investigation are somewhat different, for they are based on Lemma 1 presented below. In this paper, the existence and uniqueness theorem for the solution of the discrete and continuous problems of optimal harvest is proved, and the corresponding algorithms are presented. The results obtained are illustrated by a model for growth of the light-requiring green alga Chlorella.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
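A minimal sketch of a PSO variant in the spirit described above, where each particle is attracted toward its personal best, its best ring-topology neighbor, and the global best of the current iteration. The coefficients, topology, and sphere test function are illustrative assumptions; the abandonment mechanism and chaotic search are omitted.

```python
import random

# Minimal PSO sketch: velocity update mixes personal best, best ring
# neighbor, and global best.  All settings are illustrative.

def sphere(x):
    return sum(v * v for v in x)

def pso(dim=2, n=20, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [sphere(p) for p in pos]
    for _ in range(iters):
        g = min(range(n), key=lambda i: pval[i])  # global best index
        for i in range(n):
            # Best neighbor on a ring topology of personal bests.
            nb = min((i - 1) % n, (i + 1) % n, key=lambda j: pval[j])
            for d in range(dim):
                vel[i][d] = (0.6 * vel[i][d]
                             + 1.2 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.2 * rng.random() * (pbest[nb][d] - pos[i][d])
                             + 1.2 * rng.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = sphere(pos[i])
            if v < pval[i]:  # personal bests only improve
                pval[i], pbest[i] = v, pos[i][:]
    return min(pval)

best = pso()
print(round(best, 6))
```

On the 2-D sphere function this swarm contracts onto the origin within a few hundred iterations.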
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a coherence-optimized SAR imaging algorithm based on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing but introduces decoherence in the subsequent interferometric processing. In the algorithm proposed here, the SAR echoes adopt consistent imaging parameters during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446
Aerodynamic Shape Optimization using an Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, in both single- and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application, and extremely reliable.
Algorithms for optimizing hydropower system operation
Grygier, J.C.; Stedinger, J.R.
1985-01-01
Successive linear programming, an optimal control algorithm, and a combination of linear programming and dynamic programming (LP-DP) are employed to optimize the operation of multireservoir hydrosystems given a deterministic inflow forecast. The algorithms maximize the value of energy produced at on-peak rates, plus the estimated value of water remaining in storage at the end of the 12-month planning period. The LP-DP algorithm is clearly dominated: it takes longer to find a solution and produces significantly less hydropower than the other two procedures. Successive linear programming (SLP) appears to find the global maximum and is easily implemented. For simple systems the optimal control algorithm finds the optimum in about one fifth the time required by SLP but is harder to implement. Computing costs for a two-reservoir, 12-month deterministic problem averaged about seven cents per run using optimal control and 37 cents using successive linear programming.
Adaptive Cuckoo Search Algorithm for Unconstrained Optimization
2014-01-01
Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases. PMID:25298971
Generalized gradient algorithm for trajectory optimization
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Bryson, A. E.; Slattery, R.
1990-01-01
The generalized gradient algorithm presented and verified as a basis for the solution of trajectory optimization problems improves the performance index while reducing violations of path and terminal equality constraints. The algorithm is conveniently divided into two phases: the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.
Evolutionary Algorithm for Optimal Vaccination Scheme
NASA Astrophysics Data System (ADS)
Parousis-Orthodoxou, K. J.; Vlachos, D. S.
2014-03-01
This work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy in a user-specified network. The produced algorithm uses a basic genetic algorithm with crossover and mutation techniques to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spread of the disease.
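The scheme above can be sketched with the SIR outcome replaced by a common deterministic proxy: the size of the largest connected component left after removing the immunized nodes. The graph, GA parameters, and this proxy fitness are illustrative assumptions, not the paper's exact setup.

```python
import random

# Hedged sketch: a genetic algorithm picks k nodes to immunize (remove)
# so that the largest remaining connected component, a proxy for the
# worst-case SIR outbreak size, is minimized.

def largest_component(adj, removed):
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:  # depth-first flood fill of one component
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def ga_immunize(adj, k, pop_size=8, gens=30, seed=1):
    rng = random.Random(seed)
    nodes = list(adj)
    pop = [set(rng.sample(nodes, k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: largest_component(adj, s))
        survivors = pop[:pop_size // 2]  # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = sorted(a | b)                             # crossover: mix parents
            child = set(rng.sample(pool, min(k, len(pool))))
            if rng.random() < 0.5 and child:                 # mutation: drop one node
                child.discard(rng.choice(sorted(child)))
            while len(child) < k:                            # top up to exactly k nodes
                child.add(rng.choice(nodes))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: largest_component(adj, s))

# Star graph: immunizing the hub (node 0) shatters the network.
star = {0: list(range(1, 11)), **{i: [0] for i in range(1, 11)}}
best = ga_immunize(star, k=1)
print(largest_component(star, best))
```

On the star graph the optimum is the hub, whose removal leaves components of size 1, while removing any leaf still leaves a component of size 10.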
Solving ptychography with a convex relaxation
NASA Astrophysics Data System (ADS)
Horstmeyer, Roarke; Chen, Richard Y.; Ou, Xiaoze; Ames, Brendan; Tropp, Joel A.; Yang, Changhuei
2015-05-01
Ptychography is a powerful computational imaging technique that transforms a collection of low-resolution images into a high-resolution sample reconstruction. Unfortunately, algorithms that currently solve this reconstruction problem lack stability, robustness, and theoretical guarantees. Recently, convex optimization algorithms have improved the accuracy and reliability of several related reconstruction efforts. This paper proposes a convex formulation of the ptychography problem. This formulation has no local minima, it can be solved using a wide range of algorithms, it can incorporate appropriate noise models, and it can include multiple a priori constraints. The paper considers a specific algorithm, based on low-rank factorization, whose runtime and memory usage are near-linear in the size of the output image. Experiments demonstrate that this approach offers a 25% lower background variance on average than alternating projections, the ptychographic reconstruction algorithm that is currently in widespread use.
Belief Propagation Algorithm for Portfolio Optimization Problems
2015-01-01
The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was first estimated using replica analysis by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they did not develop an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462
Algorithms for optimal dyadic decision trees
Hush, Don; Porter, Reid
2009-01-01
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.
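The core recursion behind optimal dyadic trees can be sketched on 1-D data: each dyadic cell either becomes a leaf (misclassification cost plus a complexity charge lam) or is split at its midpoint, and dynamic programming takes the cheaper option. The data, lam, and depth cap below are illustrative.

```python
# Hedged sketch of the dyadic-tree recursion: a cell over [lo, hi) either
# stops as a majority-vote leaf (errors + lam) or splits at its midpoint.

def optimal_dyadic_cost(points, lo, hi, lam, depth):
    # points: list of (x, label) with lo <= x < hi and label in {0, 1}.
    n1 = sum(lbl for _, lbl in points)
    leaf = min(n1, len(points) - n1) + lam  # majority-vote leaf cost
    if depth == 0 or len(points) <= 1:
        return leaf
    mid = (lo + hi) / 2.0  # dyadic split: always the midpoint
    left = [p for p in points if p[0] < mid]
    right = [p for p in points if p[0] >= mid]
    split = (optimal_dyadic_cost(left, lo, mid, lam, depth - 1)
             + optimal_dyadic_cost(right, mid, hi, lam, depth - 1))
    return min(leaf, split)

data = ([(x / 10.0, 0) for x in (1, 2, 3, 4)]      # label 0 on [0, 0.5)
        + [(x / 10.0, 1) for x in (6, 7, 8, 9)])   # label 1 on [0.5, 1)
cost = optimal_dyadic_cost(data, 0.0, 1.0, lam=0.5, depth=4)
print(cost)  # 1.0: one split, two pure leaves, 2 * lam
```

Sweeping lam over a grid trades tree size against training error, which is the grid search the paper makes adaptive.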
Optimization with Fuzzy Data via Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Kosiński, Witold
2010-09-01
Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, exactly as with real numbers, have recently been defined by the author and his two coworkers. The set of OFN forms a normed space and a partially ordered ring. The case when the numbers are presented in the form of step functions with finite resolution simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated; its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite-dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve such problems; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
New algorithms for binary wavefront optimization
NASA Astrophysics Data System (ADS)
Zhang, Xiaolong; Kner, Peter
2015-03-01
Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference, a factor of π²/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments, and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments digital micromirror device. We report an enhancement of 152 with 1536 segments (9.9% × N) using a genetic algorithm with binary amplitude modulation and an enhancement of 136 with 1536 segments (8.9% × N) using an intensity-only transmission matrix algorithm.
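A toy version of the genetic-algorithm approach can be sketched with a simulated medium: a fixed random complex transmission vector t, focal intensity |Σ over on-segments of t_i|², and an elitist GA over binary masks. The medium model and GA settings are illustrative assumptions, not the experimental configuration.

```python
import random

# Hedged simulation of binary-amplitude focusing: switching segment i on
# adds the complex coefficient t[i] to the focal field; an elitist GA
# searches for the on/off mask maximizing focal intensity.

def intensity(mask, t):
    re = sum(tr for m, (tr, ti) in zip(mask, t) if m)
    im = sum(ti for m, (tr, ti) in zip(mask, t) if m)
    return re * re + im * im

def ga_focus(t, pop_size=20, gens=60, seed=3):
    rng = random.Random(seed)
    n = len(t)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    init_best = max(intensity(m, t) for m in pop)  # best mask before evolution
    for _ in range(gens):
        pop.sort(key=lambda m: -intensity(m, t))
        elite = pop[:pop_size // 2]  # elitism: the best masks always survive
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1    # single-bit mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda m: intensity(m, t))
    return best, init_best

rng = random.Random(0)
t = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(64)]
best, init_best = ga_focus(t)
print(intensity(best, t) >= init_best)  # True: elitism never loses the best mask
```

In the real experiment the fitness would be a camera measurement of the focal spot rather than a simulated field, which is where the GA's noise tolerance matters.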
Feature Selection via Modified Gravitational Optimization Algorithm
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel
2015-03-01
Feature selection is the process of selecting a subset of relevant and most informative features which efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which is based on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables called masses, defined from the extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images, covering two classes: normal and abnormal tissue. Extracted features are divided into groups of five features. The best feature in each group is selected using the n-dimensional gravitational optimization algorithm and a support vector machine classifier. The selected features from each group then form new groups of five features, and so on until the desired number of features is selected. An advantage of the NGOA is that the possibility of being trapped in a local optimum is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real data and simulated brain tumor data.
A Cuckoo Search Algorithm for Multimodal Optimization
2014-01-01
Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and more consistent performance than existing well-known multimodal algorithms on the majority of test problems, while avoiding any serious computational deterioration. PMID:25147850
A novel bee swarm optimization algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush
2010-10-01
Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which the bees use to adjust their flying trajectories. As the first extension, approaches such as a repulsion factor and fitness penalization (RP) are introduced into the BSO algorithm to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust; they produce excellent results and outperform the other algorithms investigated in this study.
BMI optimization by using parallel UNDX real-coded genetic algorithm with Beowulf cluster
NASA Astrophysics Data System (ADS)
Handa, Masaya; Kawanishi, Michihiro; Kanki, Hiroshi
2007-12-01
This paper deals with a global optimization algorithm for Bilinear Matrix Inequalities (BMIs) based on the Unimodal Normal Distribution Crossover (UNDX) genetic algorithm. First, by analyzing the structure of BMIs, the existence of typical difficult structures is confirmed. Then, to improve the performance of the algorithm, based on the results of the structural analysis and on characteristic properties of BMIs, we propose an algorithm that uses a primary search direction obtained from a relaxed Linear Matrix Inequality (LMI) convex estimation. Moreover, we propose two types of evaluation methods for GA individuals based on LMI calculations that further exploit the characteristic properties of BMIs. Finally, to reduce computational time, we parallelize the real-coded GA using a Master-Worker paradigm with cluster computing techniques.
Source optimization using particle swarm optimization algorithm in photolithography
NASA Astrophysics Data System (ADS)
Wang, Lei; Li, Sikun; Wang, Xiangzhao; Yan, Guanyong; Yang, Chaoxing
2015-03-01
In recent years, with the availability of freeform sources, source optimization has emerged as one of the key techniques for achieving higher resolution without increasing the complexity of mask design. In this paper, an efficient source optimization approach using particle swarm optimization algorithm is proposed. The sources are represented by pixels and encoded into particles. The pattern fidelity is adopted as the fitness function to evaluate these particles. The source optimization approach is implemented by updating the velocities and positions of these particles. The approach is demonstrated by using two typical mask patterns, including a periodic array of contact holes and a vertical line/space design. The pattern errors are reduced by 66.1% and 39.3% respectively. Compared with the source optimization approach using genetic algorithm, the proposed approach leads to faster convergence while improving the image quality at the same time. The robustness of the proposed approach to initial sources is also verified.
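A minimal sketch of the PSO update this approach relies on, with a toy fitness standing in for lithographic pattern fidelity (the paper's imaging model and mask patterns are not reproduced; the coefficients below are conventional PSO defaults, not the authors' values):

```python
import numpy as np

rng = np.random.default_rng(1)

def pattern_error(src):
    """Stand-in fitness: squared distance of a pixelated source to a target
    shape. (The paper scores aerial-image pattern fidelity via a lithography
    imaging model, which is not reproduced here.)"""
    target = np.linspace(0.0, 1.0, src.size)
    return float(np.sum((src - target) ** 2))

n_particles, n_pix = 20, 16
pos = rng.random((n_particles, n_pix))       # each particle encodes one source
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([pattern_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)       # pixel intensities stay in [0, 1]
    vals = np.array([pattern_error(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```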
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
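The core allocation step, least-squares matching of commanded to applied force/torque under actuator limits, can be sketched with a simple projected-gradient solve. The report's actual method poses semidefinite programs with linear matrix inequalities and additionally minimizes fuel consumption, neither of which is reproduced here; the matrix `B`, the true allocation, and the limits are invented toy values.

```python
import numpy as np

# Toy allocation: 3 commanded force/torque components, 5 actuators; B maps
# individual actuator outputs to the total applied force/torque.
rng = np.random.default_rng(2)
B = rng.normal(size=(3, 5))
f_true = np.array([0.5, -0.3, 0.2, 0.0, 0.4])   # a realizable allocation
c = B @ f_true                                  # command from the control system
lo, hi = -1.0, 1.0                              # per-actuator limits

# Projected gradient on ||B f - c||^2 subject to the box limits: a simple
# stand-in for the report's semidefinite-programming formulation.
f = np.zeros(5)
step = 1.0 / np.linalg.norm(B, 2) ** 2          # 1 / Lipschitz constant
for _ in range(5000):
    f = np.clip(f - step * B.T @ (B @ f - c), lo, hi)

residual = np.linalg.norm(B @ f - c)            # allocation error
```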
Protein structure optimization with a "Lamarckian" ant colony algorithm.
Oakley, Mark T; Richardson, E Grace; Carr, Harriet; Johnston, Roy L
2013-01-01
We describe the LamarckiAnt algorithm: a search algorithm that combines the features of a "Lamarckian" genetic algorithm and ant colony optimization. We have implemented this algorithm for the optimization of BLN model proteins, which have frustrated energy landscapes and represent a challenge for global optimization algorithms. We demonstrate that LamarckiAnt performs competitively with other state-of-the-art optimization algorithms. PMID:24407312
Combinatorial Multiobjective Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm; the N-branch GA was then compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete/continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time design to a heavy-weight, short trip-time design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered and turbofan-powered aircraft become more desirable for the 50-passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
Automated segmentation of CBCT image using spiral CT atlases and convex optimization.
Wang, Li; Chen, Ken Chung; Shi, Feng; Liao, Shu; Li, Gang; Gao, Yaozong; Shen, Steve G F; Yan, Jin; Lee, Philip K M; Chow, Ben; Liu, Nancy X; Xia, James J; Shen, Dinggang
2013-01-01
Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. CBCT scans have relatively low cost and low radiation dose in comparison to conventional spiral CT scans. However, a major limitation of CBCT scans is widespread image artifacts such as noise, beam hardening and inhomogeneity, which cause great difficulty for accurately segmenting bony structures from soft tissue and for separating the mandible from the maxilla. In this paper, we present a novel, fully automated method for CBCT image segmentation. In this method, we first estimate a patient-specific atlas using a sparse label fusion strategy from predefined spiral CT atlases. This patient-specific atlas is then integrated into a convex segmentation framework based on maximum a posteriori probability for accurate segmentation. Finally, the performance of our method is validated via comparisons with manual ground-truth segmentations. PMID:24505768
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center (ESTSC)
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
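A one-dimensional sketch of the grid-refinement idea (evaluate on a grid, keep cells around the best points, refine, repeat) under assumed parameters; the actual ESTSC code is interactive, handles up to four dimensions, and tracks extremal regions rather than single points:

```python
import numpy as np

def curdle_min(f, lo, hi, n_iter=12, n_grid=9, keep=3):
    """Derivative-free grid refinement in 1-D: evaluate f on a grid over each
    active interval, keep intervals around the best points, and refine.
    All parameter values here are illustrative assumptions."""
    intervals = [(lo, hi)]
    pts = []
    for _ in range(n_iter):
        pts = []
        for a, b in intervals:
            h = (b - a) / (n_grid - 1)            # grid spacing this round
            pts += [(f(a + k * h), a + k * h, h) for k in range(n_grid)]
        pts.sort(key=lambda p: p[0])              # best function values first
        intervals = [(x - h, x + h) for _, x, h in pts[:keep]]
    return pts[0][1]                              # best point found

x_star = curdle_min(lambda x: (x - 0.3) ** 2, -2.0, 2.0)
```

Each round shrinks the active interval width by a factor of four, so a dozen rounds already locate the minimizer to high precision without any gradient information.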
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. 
Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of targeting error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
Rminimax: An Optimally Randomized MINIMAX Algorithm.
Díez, Silvia García; Laforge, Jérôme; Saerens, Marco
2013-02-01
This paper proposes a simple extension of the celebrated MINIMAX algorithm used in zero-sum two-player games, called Rminimax. The Rminimax algorithm allows the strength of an artificial rival to be controlled by randomizing its strategy in an optimal way. In particular, the randomized shortest-path framework is applied to bias the artificial intelligence (AI) adversary toward worse or better solutions, thereby controlling its strength. In other words, our model aims at introducing bounded rationality into the MINIMAX algorithm. The framework takes into account all possible strategies by computing an optimal tradeoff between exploration (quantified by the entropy spread in the tree) and exploitation (quantified by the expected cost to an end game) of the game tree. As opposed to other tree-exploration techniques, this new algorithm considers complete paths of the tree (strategies) over which a given entropy is spread. The optimal randomized strategy is efficiently computed by means of a simple recurrence relation while keeping the same complexity as the original MINIMAX. As a result, Rminimax implements a nondeterministic, strength-adapted AI opponent for board games in a principled way, avoiding the assumption of complete rationality. Simulations on two common games show that Rminimax behaves as expected. PMID:22893439
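The idea of a tunable-strength adversary can be illustrated with a per-move Boltzmann softening of minimax values. Note this is a simplification: Rminimax instead fixes the entropy spread over complete strategies via randomized shortest paths, which the sketch below does not reproduce.

```python
import math
import random

def randomized_choice(move_values, theta, rng):
    """Choose a move with Boltzmann probabilities over its game-theoretic
    values: theta -> infinity recovers the deterministic MINIMAX choice,
    theta -> 0 plays uniformly at random. (A toy per-move analog of a
    tunable-strength opponent; not the paper's exact construction.)"""
    best = max(move_values.values())
    # subtract the max before exponentiating for numerical stability
    weights = {m: math.exp(theta * (v - best)) for m, v in move_values.items()}
    r = rng.random() * sum(weights.values())
    for move, w in weights.items():
        r -= w
        if r <= 0:
            return move
    return move

values = {"a": 1.0, "b": 0.9, "c": -1.0}   # minimax values of the root moves
rng = random.Random(0)
strong = [randomized_choice(values, 50.0, rng) for _ in range(200)]   # near-optimal play
weak = [randomized_choice(values, 0.0, rng) for _ in range(300)]      # uniform play
```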
A reliable algorithm for optimal control synthesis
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1992-01-01
In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
Model results of optimized convex shapes for a solar thermal rocket thruster
Cartier, S.L.
1995-11-01
A computational, 3-D model for evaluating the performance of solar thermal thrusters is under development. The model combines Monte-Carlo and ray-tracing techniques to follow the ray paths of concentrated solar radiation through an axially symmetric heat-exchanger surface for both convex and concave cavity shapes. The enthalpy of a propellant, typically hydrogen gas, increases as it flows over the outer surface of the absorber/exchanger cavity. Surface temperatures are determined by the requirement that the input radiant power to surface elements balance with the reradiated power and heat conducted to the propellant. The model uses tabulated forms of surface emissivity and gas enthalpy. Temperature profiles result by iteratively calculating surface and propellant temperatures until the solutions converge to stable values. The model provides a means to determine the effectiveness of incorporating a secondary concentrator into the heat-exchanger cavity. A secondary concentrator increases the amount of radiant energy entering the cavity. The model will be used to evaluate the data obtained from upcoming experiments. Characteristics of some absorber/exchanger cavity shapes combined with optionally attached conical secondary concentrators for various propellant flow rates are presented. In addition, shapes that recover some of the diffuse radiant energy which would otherwise not enter the secondary concentrator are considered.
Heuristic Kalman algorithm for solving optimization problems.
Toscano, Rosario; Lyonnet, Patrick
2009-10-01
The main objective of this paper is to present a new optimization approach, which we call heuristic Kalman algorithm (HKA). We propose it as a viable approach for solving continuous nonconvex optimization problems. The principle of the proposed approach is to consider explicitly the optimization problem as a measurement process designed to produce an estimate of the optimum. A specific procedure, based on the Kalman method, was developed to improve the quality of the estimate obtained through the measurement process. The efficiency of HKA is evaluated in detail through several nonconvex test problems, both in the unconstrained and constrained cases. The results are then compared to those obtained via other metaheuristics. These various numerical experiments show that the HKA has very interesting potentialities for solving nonconvex optimization problems, notably concerning the computation time and the success ratio. PMID:19336312
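The measurement-process view of optimization can be sketched as follows, with the caveat that this is a simplified reading of HKA; the published algorithm includes a slowdown coefficient and stopping rules not shown here.

```python
import numpy as np

def hka_min(f, dim, n_samples=50, n_best=10, n_iter=100, seed=0):
    """Heuristic-Kalman-style search (simplified): draw candidates from a
    Gaussian, treat the mean of the best candidates as a noisy measurement
    of the optimum, and blend it into the current estimate with a
    Kalman-like gain."""
    rng = np.random.default_rng(seed)
    mean, var = np.zeros(dim), np.full(dim, 4.0)
    for _ in range(n_iter):
        X = mean + np.sqrt(var) * rng.normal(size=(n_samples, dim))
        best = X[np.argsort([f(x) for x in X])[:n_best]]
        meas, meas_var = best.mean(axis=0), best.var(axis=0) + 1e-12
        gain = var / (var + meas_var)            # Kalman-like gain
        mean = mean + gain * (meas - mean)       # measurement update
        var = (1.0 - gain) * var                 # posterior variance
    return mean

# Toy nonconvex-free demo: quadratic bowl with optimum at (1.5, 1.5, 1.5).
x_opt = hka_min(lambda v: float(np.sum((v - 1.5) ** 2)), dim=3)
```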
FOGSAA: Fast Optimal Global Sequence Alignment Algorithm
NASA Astrophysics Data System (ADS)
Chakraborty, Angana; Bandyopadhyay, Sanghamitra
2013-04-01
In this article we propose a Fast Optimal Global Sequence Alignment Algorithm, FOGSAA, which aligns a pair of nucleotide/protein sequences faster than any optimal global alignment method including the widely used Needleman-Wunsch (NW) algorithm. FOGSAA is applicable for all types of sequences, with any scoring scheme, and with or without affine gap penalty. Compared to NW, FOGSAA achieves a time gain of (70-90)% for highly similar nucleotide sequences (> 80% similarity), and (54-70)% for sequences having (30-80)% similarity. For other sequences, it terminates with an approximate score. For protein sequences, the average time gain is between (25-40)%. Compared to three heuristic global alignment methods, the quality of alignment is improved by about 23%-53%. FOGSAA is, in general, suitable for aligning any two sequences defined over a finite alphabet set, where the quality of the global alignment is of supreme importance.
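For reference, the Needleman-Wunsch baseline that FOGSAA matches in optimality (while expanding the dynamic-programming matrix best-first with branch-and-bound pruning) is compact to state; the scoring values below are illustrative defaults.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Classic O(|a|*|b|) global-alignment score via dynamic programming:
    the optimal-score baseline that FOGSAA accelerates."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap            # prefix of `a` aligned against gaps
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]
```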
Genetic algorithm optimization of atomic clusters
Morris, J.R.; Deaven, D.M.; Ho, K.M.; Wang, C.Z.; Pan, B.C.; Wacker, J.G.; Turner, D.E.
1996-12-31
The authors have been using genetic algorithms to study the structures of atomic clusters and related problems. This is a problem where local minima are easy to locate, but barriers between the many minima are large, and the number of minima prohibits a systematic search. They use a novel mating algorithm that preserves some of the geometrical relationship between atoms, in order to ensure that the resultant structures are likely to inherit the best features of the parent clusters. Using this approach, they have been able to find lower energy structures than had previously been obtained. Most recently, they have been able to turn the building-block idea around, using optimized structures from the GA to learn about systematic structural trends. They believe that an effective GA can help provide such heuristic information, and, conversely, that such information can be introduced back into the algorithm to assist in the search process.
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. 
In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study of a launch-vehicle fuel tank and the structural design of a wing. The strengths and limitations of GA for launch vehicle design optimization are studied.
Multidisciplinary design optimization using genetic algorithms
NASA Astrophysics Data System (ADS)
Unal, Resit
1994-12-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient based optimizers is their need for gradient information. Therefore, design problems which include discrete variables can not be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from those gradient based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. 
In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GA is underway for a cost optimization study of a launch-vehicle fuel tank and the structural design of a wing.
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by its use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal-distribution parameters and the geometrical construct variables.
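The geometrical construct for generating a child can be sketched directly from the description above; the parameter names and deviation scales below are our assumptions, since the paper controls them through its normal-distribution parameters.

```python
import numpy as np

def bcb_child(p1, p2, w=0.5, s_par=0.25, s_orth=0.1, rng=None):
    """Create one BCB-style offspring from two parent design vectors: take
    the weighted point on the line joining the parents, then add a normally
    distributed deviation along the line and another orthogonal to it.
    (Parameter names and scales are illustrative assumptions.)"""
    rng = rng or np.random.default_rng(3)
    d = p2 - p1
    dist = np.linalg.norm(d)
    u = d / dist if dist > 0 else d          # unit vector along the line
    child = p1 + w * d                       # weighted point between parents
    child = child + rng.normal(0.0, s_par) * dist * u     # parallel deviation
    r = rng.normal(0.0, s_orth, size=p1.shape) * dist
    return child + (r - (r @ u) * u)         # orthogonal deviation only

child = bcb_child(np.array([0.0, 0.0]), np.array([2.0, 0.0]))
```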
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2010-12-01
Optimization of the fixture layout (locator and clamp locations) is critical to reducing geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2011-05-01
Optimization of the fixture layout (locator and clamp locations) is critical to reducing geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
Canonical Analysis of Two Convex Polyhedral Cones and Applications.
ERIC Educational Resources Information Center
Tenenhaus, Michel
1988-01-01
Canonical analysis of two convex polyhedral cones involves looking for two vectors whose squared cosine is a maximum. New results about the properties of the optimal solution to this problem are presented. The convergence of an alternating least squares algorithm and properties of limits of calculated sequences are discussed. (SLD)
Algorithms for optimizing CT fluence control
NASA Astrophysics Data System (ADS)
Hsieh, Scott S.; Pelc, Norbert J.
2014-03-01
The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
Regression model based on convex combinations best correlated with response
NASA Astrophysics Data System (ADS)
Dokukin, A. A.; Senko, O. V.
2015-03-01
A new regression method based on constructing optimal convex combinations of simple linear regressions of the least squares method (LSM regressions) built from original regressors is presented. It is shown that, in fact, this regression method is equivalent to a modification of the LSM including the additional requirement of the coincidence of the sign of the regression parameter with that of the correlation coefficient between the corresponding regressor and the response. A method for constructing optimal convex combinations based on the concept of nonexpandable irreducible ensembles is described. Results of experiments comparing the developed method with the known glmnet algorithm are presented, which confirm the efficiency of the former.
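The core construction, simple one-regressor least-squares fits combined convexly to best match the response, can be sketched with a projected-gradient solve over the simplex. This stands in for the paper's nonexpandable-irreducible-ensemble method, which is not reproduced; the data and solver settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, 1.0, 0.5]) + 0.1 * rng.normal(size=n)

# One simple least-squares regression of the response on each regressor.
betas = np.array([(x @ y) / (x @ x) for x in X.T])
preds = X * betas                    # column j: prediction of simple model j

def project_simplex(v):
    """Euclidean projection onto {a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Best convex combination of the simple models by projected gradient.
a = np.full(p, 1.0 / p)
step = 1.0 / np.linalg.norm(preds, 2) ** 2    # 1 / Lipschitz constant
for _ in range(2000):
    a = project_simplex(a - step * preds.T @ (preds @ a - y))
```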
NASA Astrophysics Data System (ADS)
Sidky, Emil Y.; Reiser, Ingrid; Nishikawa, Robert M.; Pan, Xiaochuan; Chartrand, Rick; Kopans, Daniel B.; Moore, Richard H.
2008-03-01
Digital breast tomosynthesis (DBT) is a rapidly developing imaging modality that gives some tomographic information for breast cancer screening. The effectiveness of standard mammography can be limited by the presence of overlapping structures in the breast. A DBT scan, consisting of a limited number of views covering a limited arc projecting the breast onto a fixed flat-panel detector, involves only a small modification of digital mammography, yet DBT yields breast image slices with reduced interference from overlapping breast tissues. We have recently developed an iterative image reconstruction algorithm for DBT based on image total variation (TV) minimization that improves on EM in that the resulting images have fewer artifacts and there is no need for additional regularization. In this abstract, we present the total p-norm variation (TpV) image reconstruction algorithm. TpV has the advantages of our previous TV algorithm, while improving substantially on the efficiency. Results for the TpV on clinical data are shown and compared with EM.
NASA Astrophysics Data System (ADS)
Kim, Bona; Jeong, Soocheol; Byun, Joongmoo
2015-07-01
In recent years, many studies have been performed to reconstruct traces missing from irregularly undersampled seismic data. In this paper, we introduce a new curvelet-transform-based projection onto convex sets (POCS) algorithm that applies the curvelet transform to 2D Fourier-transformed data in the f-k domain instead of data in the t-x domain for each iteration of the POCS algorithm. To verify the efficiency of the suggested method, it was applied to synthetic data generated using the Marmousi2 and Hess vertically transverse isotropy (VTI) models. The results clearly demonstrate that the presented algorithm, which applies the curvelet transform to data in the f-k domain, is superior to conventional POCS and curvelet-transform-based POCS in the t-x domain, especially for reconstructing events with diverse directions and various amplitudes.
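The POCS principle behind the method, alternating projections onto convex constraint sets, can be demonstrated on two toy sets, a hyperplane and a box, standing in for the paper's transform-domain and data-consistency constraints. A minimal sketch:

```python
def project_box(p, lo, hi):
    # Euclidean projection onto the box [lo, hi] (componentwise clip)
    return [min(hi[i], max(lo[i], p[i])) for i in range(len(p))]

def project_hyperplane(p, a, b):
    # Euclidean projection onto the hyperplane {x : a.x = b}
    r = (sum(ai * pi for ai, pi in zip(a, p)) - b) / sum(ai * ai for ai in a)
    return [pi - r * ai for ai, pi in zip(a, p)]

def pocs(p, iters=100):
    """Alternate projections; for convex sets with nonempty intersection
    the iterates converge to a point in the intersection."""
    for _ in range(iters):
        p = project_hyperplane(p, [1.0, 1.0], 2.0)
        p = project_box(p, [0.0, 0.0], [0.5, 3.0])
    return p
```

In the seismic setting, one projection reinserts the known traces and the other enforces sparsity in the transform domain, but the convergence mechanism is the same.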
Lahanas, M; Baltas, D; Zamboglou, N
2003-02-01
Multiple objectives must be considered in anatomy-based dose optimization for high-dose-rate brachytherapy, and a large number of parameters must be optimized to satisfy often competing objectives. For objectives expressed solely in terms of dose variances, deterministic gradient-based algorithms can be applied, and a weighted sum approach is able to produce a representative set of non-dominated solutions. As the number of objectives increases, or non-convex objectives are used, local minima can be present; deterministic algorithms then cannot be used, and stochastic algorithms such as simulated annealing are not efficient. In this case we employ a modified hybrid version of the multi-objective optimization algorithm NSGA-II. This, in combination with the deterministic optimization algorithm, produces a representative sample of the Pareto set. This algorithm can be used with any kind of objectives, including non-convex, and does not require artificial importance factors. A representation of the trade-off surface can be obtained with more than 1000 non-dominated solutions in 2-5 min. An analysis of the solutions provides information on the possibilities available using these objectives. Simple decision making tools allow the selection of a solution that best fits the clinical goals. We show an example with a prostate implant and compare results obtained by variance and dose-volume histogram (DVH) based objectives. PMID:12608615
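The notion of non-dominated solutions underlying NSGA-II can be sketched as a simple filter over objective vectors (minimization assumed; this is the generic definition, not the paper's hybrid code):

```python
def dominates(u, v):
    """u dominates v if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    # keep exactly the points no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

NSGA-II builds on this by sorting the whole population into successive non-dominated fronts and using crowding distance to spread solutions along each front.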
Urban drain layout optimization using PBIL algorithm
NASA Astrophysics Data System (ADS)
Wan, Shanshan; Hao, Ying; Qiu, Dongwei; Zhao, Xu
2008-10-01
Strengthening environmental protection is one of China's basic national policies. The optimization of urban drain layout plays an important role in protecting the water ecosystem and the urban environment. This paper puts forward a method for properly locating urban drains using the population-based incremental learning (PBIL) algorithm. The main factors, such as the regional sewage-containment capacity, the sewage disposal capacity, and the limit on the number of drains within a specific area, are treated as constraint conditions. The analytic hierarchy process is used to obtain the weight of each factor, and a spatial analysis of the environmental influencing factors is carried out based on GIS. A penalty-function method is used to model the problem, with an objective function that guarantees economic benefit. The algorithm is applied to the drain layout engineering of Nansha District, Guangzhou City, China. The drain layout obtained through the PBIL algorithm outperforms the traditional method: it protects the urban environment more efficiently and better ensures the healthy development of the water ecosystem. The results also show that the PBIL algorithm is well suited to this problem because of its robust performance and stability, providing strong technological support for the sustainable development of the environment.
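PBIL itself is compact: sample candidate solutions from a probability vector, then nudge the vector toward the best sample. A minimal sketch on the toy OneMax problem (the drain-layout objective, constraints, and penalty function are replaced by a trivial bit-counting fitness; all parameter values are illustrative):

```python
import random

def pbil_onemax(n_bits=20, pop=30, lr=0.1, gens=200, seed=1):
    """PBIL sketch: maximize the number of 1-bits.  Each generation samples
    a population from the probability vector and shifts the vector toward
    the best sample by learning rate lr."""
    rng = random.Random(seed)
    prob = [0.5] * n_bits                    # per-bit probability of a 1
    best = []
    for _ in range(gens):
        samples = [[1 if rng.random() < p else 0 for p in prob]
                   for _ in range(pop)]
        best = max(samples, key=sum)         # fitness = number of 1-bits
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, best)]
    return best
```

A real layout problem would replace `sum` with a penalized objective combining economic benefit and the capacity constraints.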
Intervals in evolutionary algorithms for global optimization
Patil, R.B.
1995-05-01
Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach is proposed that combines this strategy from interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, search begins with randomly created interval vectors whose widths equal the whole domain. Before the evolutionary process begins, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of each initial interval vector. In the subsequent evolutionary process, a local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may be inaccurate at first, due to large interval widths and complicated function properties, reducing the interval widths over time, together with a selection approach similar to simulated annealing, helps in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
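The interval side of the approach can be illustrated by a plain interval branch-and-bound on a 1D function; the evolutionary layer is omitted, and the interval arithmetic below covers only the operations this toy objective needs:

```python
def isub(a, c):           # interval minus a scalar
    return (a[0] - c, a[1] - c)

def isqr(a):              # interval square, correct when the interval spans zero
    lo, hi = a
    if lo >= 0: return (lo * lo, hi * hi)
    if hi <= 0: return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def f(x): return (x - 1.0) ** 2            # toy objective, minimum at x = 1

def f_interval(box):                        # rigorous bounds of f over a box
    return isqr(isub(box, 1.0))

def interval_minimize(lo, hi, iters=60):
    """Branch and bound: repeatedly split the box with the smallest lower
    bound; discard boxes whose lower bound exceeds the best upper bound."""
    boxes = [((lo, hi), f_interval((lo, hi))[0])]
    best_ub = min(f(lo), f(hi))
    for _ in range(iters):
        boxes.sort(key=lambda t: t[1])
        (a, b), _ = boxes.pop(0)
        m = 0.5 * (a + b)
        best_ub = min(best_ub, f(m))        # midpoint gives an upper bound
        for child in ((a, m), (m, b)):
            lb = f_interval(child)[0]
            if lb <= best_ub:               # keep only boxes that may contain the min
                boxes.append((child, lb))
    return min(lb for _, lb in boxes), best_ub
```

The returned pair encloses the global minimum value; the hybrid of the abstract would evolve such boxes with crossover and mutation instead of a priority queue.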
NASA Astrophysics Data System (ADS)
Chang, J.; Nakshatrala, K.
2014-12-01
It is well known that standard finite element methods, in general, do not satisfy element-wise mass/species balance properties. It is, however, desirable to have element-wise mass balance in subsurface modeling. Several studies over the years have aimed to overcome this drawback of finite element formulations. Currently, a post-processing optimization-based methodology is commonly employed to recover the local mass balance for porous media models. However, such a post-processing technique does not respect the underlying variational structure that the finite element formulation may enjoy. Motivated by this, a consistent methodology to satisfy element-wise local mass balance for porous media models is constructed using convex optimization techniques. The assembled system of global equations is reconstructed into a quadratic programming problem subject to bounded equality constraints that ensure conservation at the element level. The proposed methodology can be applied to any computational mesh and to any non-locally conservative nodal-based finite element method. Herein, we integrate our proposed methodology into the framework of the classical mixed Galerkin formulation using Taylor-Hood elements and the least-squares finite element formulation. Our numerical studies will include computational cost, numerical convergence, and comparison with popular methods. In particular, it will be shown that the accuracy of the solutions is comparable with that of several popular locally conservative finite element formulations, such as the lowest-order Raviart-Thomas formulation. We believe the proposed optimization-based approach is a viable approach to preserve local mass balance on general computational grids and is amenable to large-scale parallel implementation.
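In the simplest conceivable case, a single conservation constraint with an identity Hessian, such a constrained quadratic program reduces to an orthogonal projection with a closed-form uniform shift. This is a drastic simplification of the actual element-wise constraints, shown only to make the KKT structure concrete:

```python
def conservative_correction(flux, target_total):
    """Find the fluxes closest (least-squares) to `flux` that satisfy the
    single conservation constraint sum(flux) == target_total.  The KKT
    conditions of this equality-constrained QP give a uniform shift."""
    shift = (target_total - sum(flux)) / len(flux)
    return [f + shift for f in flux]
```

The full method replaces this scalar constraint with one balance equation per element, so the KKT system is solved numerically rather than in closed form.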
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as radiation and meteoroid impacts. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
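A canonical GA of the kind described can be sketched on a toy bitstring figure of merit; matching a target pattern stands in for the instrument performance measure, and all parameter values are illustrative:

```python
import random

def canonical_ga(fitness, n_bits, pop_size=40, gens=100, pc=0.9, pm=0.02, seed=3):
    """Canonical GA sketch: tournament selection, one-point crossover,
    bitwise mutation; returns the fittest individual of the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < pc:                    # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # bitwise mutation with probability pm per gene
            nxt += [[1 - g if rng.random() < pm else g for g in p]
                    for p in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)
```

For an instrument such as a spectrometer, `fitness` would instead encode a simulated figure of merit evaluated over the genes describing coil geometries and other design parameters.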
On convexity of H-infinity Riccati solutions
NASA Technical Reports Server (NTRS)
Li, X. P.; Chang, B. C.
1991-01-01
The authors revealed several important eigen properties of the stabilizing solutions of the two H-infinity Riccati equations and their product. Among them, the most prominent one is that the spectral radius of the product of these two Riccati solutions is a continuous, nonincreasing, convex function of gamma in the domain of interest. Based on these properties, quadratically convergent algorithms are developed to compute the optimal H-infinity norm. Two examples are used to illustrate the algorithms.
Optimizing the pre-decoding algorithm
NASA Astrophysics Data System (ADS)
He, Jun; Han, Song; Han, Ziqiang
2007-03-01
Considering the coded-excitation technology applied in ultrasonic systems, pre-decoding at multiple centers before beam synthesis is recognized as the best decoding method. Compared with decoding after synthesis, it avoids the degraded side-lobe performance introduced by beam synthesis (the attenuation is more than 15 dB). However, its heavy demand on hardware resources has prevented the pre-decoding method from being fully exploited in practice. To resolve this practical issue, this paper advances a scheme to reduce hardware cost by optimizing the decoding algorithm in theory. Results based on Golay codes, obtained with Quartus II, validate the feasibility of this scheme.
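The reason Golay codes suit coded excitation is that a complementary pair has aperiodic autocorrelations whose sidelobes cancel exactly when summed, leaving a clean decoding peak. A toy check with a standard length-4 pair:

```python
def acorr(x, k):
    """Aperiodic autocorrelation of sequence x at nonnegative lag k."""
    return sum(x[i] * x[i + k] for i in range(len(x) - k))

# A length-4 Golay complementary pair (standard example, not from the paper).
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]

def summed_sidelobes(a, b):
    # sidelobes of the two autocorrelations cancel; the main lobe adds to 2N
    return [acorr(a, k) + acorr(b, k) for k in range(len(a))]
```

Decoding transmits both sequences and sums the two matched-filter outputs, which is why where the decoding happens relative to beam synthesis matters for sidelobe performance.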
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimal fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is a strong candidate for the optimal solution. The constraints on the generality of this possible solution remain to be clarified.
Optimality of the neighbor joining algorithm and faces of the balanced minimum evolution polytope.
Haws, David C; Hodge, Terrell L; Yoshida, Ruriko
2011-11-01
Balanced minimum evolution (BME) is a statistically consistent distance-based method to reconstruct a phylogenetic tree from an alignment of molecular data. In 2000, Pauplin showed that the BME method is equivalent to optimizing a linear functional over the BME polytope, the convex hull of the BME vectors obtained from Pauplin's formula applied to all binary trees. The BME method is related to the Neighbor Joining (NJ) Algorithm, now known to be a greedy optimization of the BME principle. Further, the NJ and BME algorithms have been studied previously to understand when the NJ Algorithm returns a BME tree for small numbers of taxa. In this paper we aim to elucidate the structure of the BME polytope and strengthen knowledge of the connection between the BME method and NJ Algorithm. We first prove that any subtree-prune-regraft move from a binary tree to another binary tree corresponds to an edge of the BME polytope. Moreover, we describe an entire family of faces parameterized by disjoint clades. We show that these clade-faces are smaller dimensional BME polytopes themselves. Finally, we show that for any order of joining nodes to form a tree, there exists an associated distance matrix (i.e., dissimilarity map) for which the NJ Algorithm returns the BME tree. More strongly, we show that the BME cone and every NJ cone associated to a tree T have an intersection of positive measure. PMID:21373975
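The greedy BME optimization performed by Neighbor Joining is visible in its selection step, which picks the pair of taxa minimizing the Q-criterion. This is the standard formulation, not code from the paper:

```python
def nj_pick_pair(d, taxa):
    """One selection step of Neighbor Joining: choose the pair (i, j)
    minimizing Q(i, j) = (n - 2) * d(i, j) - r(i) - r(j), where r(t) is the
    row sum of distances from taxon t.  d is a symmetric dict-of-dicts."""
    n = len(taxa)
    r = {t: sum(d[t][u] for u in taxa if u != t) for t in taxa}
    def q(i, j):
        return (n - 2) * d[i][j] - r[i] - r[j]
    return min(((i, j) for i in taxa for j in taxa if i < j),
               key=lambda p: q(*p))
```

On additive (tree-like) distances, this step joins a true cherry of the underlying tree, which is the greedy-BME behavior the paper analyzes via the BME polytope and NJ cones.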
Modified artificial bee colony algorithm for reactive power optimization
NASA Astrophysics Data System (ADS)
Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani
2015-05-01
Bio-inspired algorithms (BIAs) implemented to solve various optimization problems have shown promising results, which are very important in this severely complex real world. The Artificial Bee Colony (ABC) algorithm, a kind of BIA, has demonstrated tremendous results as compared to other optimization algorithms. This paper presents a new modified ABC algorithm referred to as JA-ABC3 with the aim to enhance convergence speed and avoid premature convergence. The proposed algorithm has been simulated on ten commonly used benchmark functions. Its performance has also been compared with other existing ABC variants. To justify its robust applicability, the proposed algorithm has been tested to solve the Reactive Power Optimization problem. The results have shown that the proposed algorithm has superior performance to other existing ABC variants, e.g. GABC, BABC1, BABC2, BsfABC and IABC, in terms of convergence speed. Furthermore, the proposed algorithm has also demonstrated excellent performance in solving the Reactive Power Optimization problem.
Genetic algorithm and particle swarm optimization combined with Powell method
NASA Astrophysics Data System (ADS)
Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui
2013-10-01
In recent years, population-based algorithms have become increasingly robust and easy to use. Inspired by Darwin's theory of evolution, they search for the best solution within a population that progresses over several generations. This paper presents variants of a hybrid genetic algorithm and of a bio-inspired hybrid algorithm, particle swarm optimization, both combined with a local method, the Powell method. The developed methods were tested on twelve test functions from the unconstrained optimization context.
Genetic Algorithm Based Neural Networks for Nonlinear Optimization
1994-09-28
This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.
Optimal Pid Controller Design Using Adaptive Vurpso Algorithm
NASA Astrophysics Data System (ADS)
Zirkohi, Majid Moradi
2015-04-01
The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate a trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm to the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons of the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
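The role of a momentum (inertia) factor in trading global for local exploration can be sketched with a basic PSO; the linearly decaying inertia below is a common stand-in for the paper's adaptive rule, and all coefficients are illustrative:

```python
import random

def pso(f, dim=2, n=20, iters=150, seed=7):
    """Basic PSO minimizing f on [-5, 5]^dim.  The inertia w decays
    linearly, shifting the swarm from global to local exploration."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # momentum decays from 0.9 to 0.4
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest
```

For PID tuning, `f` would evaluate a closed-loop performance index (e.g. integrated absolute error) over the three gains rather than a benchmark function.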
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is added to particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a kind of random linear method; and finally, the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters. This is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm, giving full play to the advantage of each. The method is verified on the standard Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict the structure of proteins. PMID:25069136
CONVEX_HULL—A pascal program for determining the convex hull for planar sets
NASA Astrophysics Data System (ADS)
Yamamoto, Jorge Kazuo
1997-08-01
Computer aided graphical display of geological data is usually based on a regular grid, interpolated from a scattered data set. However, the interpolation function is valid only inside the domain of sampling points, or a closed boundary which limits all the sampling points. This closed boundary, named convex hull, can be determined with the aid of an algorithm. The convex hull of a planar set of points is defined as the minimum area convex polygon containing all the points. This paper presents a review of current methods for determining the convex hull, and the computer program CONVEX_HULL, written in Pascal language and based on a new algorithm.
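A compact modern alternative to the Pascal routine is Andrew's monotone chain algorithm, one of the standard methods such a review would cover; a Python sketch:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order,
    starting from the lexicographically smallest point.  O(n log n)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()                    # drop points making a non-left turn
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]         # endpoints shared, drop duplicates
```

The resulting polygon is exactly the minimum-area convex polygon containing all points, the boundary inside which gridding interpolation remains valid.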
Honey Bees Inspired Optimization Method: The Bees Algorithm.
Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo
2013-01-01
Optimization algorithms are search methods where the goal is to find an optimal solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms their collective behavior is usually very complex. The collective behavior of a swarm of social organisms emerges from the behaviors of the individuals of that swarm. Researchers have developed computational optimization methods based on biology such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm performs an exploitative neighborhood search combined with random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and are implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem. PMID:26462528
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
A Modified BFGS Formula Using a Trust Region Model for Nonsmooth Convex Minimizations
Cui, Zengru; Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie; Wang, Xiaoliang; Duan, Xiabin
2015-01-01
This paper proposes a modified BFGS formula using a trust region model for solving nonsmooth convex minimizations by using the Moreau-Yosida regularization (smoothing) approach and a new secant equation with a BFGS update formula. Our algorithm uses the function value information and gradient value information to compute the Hessian. The Hessian matrix is updated by the BFGS formula rather than using second-order information of the function, thus decreasing the workload and time involved in the computation. Under suitable conditions, the algorithm converges globally to an optimal solution. Numerical results show that this algorithm can successfully solve nonsmooth unconstrained convex problems. PMID:26501775
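The Moreau-Yosida regularization that makes the nonsmooth problem amenable to quasi-Newton machinery can be written down explicitly for f(x) = |x|, where the envelope coincides with the Huber function and the proximal point is the soft-threshold. A one-dimensional sketch (the paper works with general nonsmooth convex f):

```python
def moreau_envelope_abs(x, lam):
    """Moreau-Yosida envelope of f(x) = |x| with parameter lam > 0:
    env(x) = min_y |y| + (x - y)^2 / (2*lam), a smooth convex surrogate."""
    return x * x / (2 * lam) if abs(x) <= lam else abs(x) - lam / 2

def prox_abs(x, lam):
    """The minimizing y (proximal point): soft-thresholding."""
    return max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
```

The envelope is continuously differentiable with gradient (x - prox(x))/lam, which is the quantity a BFGS-type method can consume in place of a subgradient of |x|.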
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Engineering local optimality in quantum Monte Carlo algorithms
Pollet, Lode . E-mail: pollet@itp.phys.ethz.ch; Houcke, Kris Van; Rombouts, Stefan M.A.
2007-08-10
Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. We are guided by rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations to devise a locally optimal formulation of the worm algorithm, while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.
Convex-relaxed kernel mapping for image segmentation.
Ben Salah, Mohamed; Ben Ayed, Ismail; Jing Yuan; Hong Zhang
2014-03-01
This paper investigates a convex-relaxed kernel mapping formulation of image segmentation. We optimize, under some partition constraints, a functional containing two characteristic terms: 1) a data term, which maps the observation space to a higher (possibly infinite) dimensional feature space via a kernel function, thereby evaluating nonlinear distances between the observations and segment parameters and 2) a total-variation term, which favors smooth segment surfaces (or boundaries). The algorithm iterates two steps: 1) a convex-relaxation optimization with respect to the segments by solving an equivalent constrained problem via the augmented Lagrange multiplier method and 2) a convergent fixed-point optimization with respect to the segment parameters. The proposed algorithm can handle a variety of image types without the need for complex and application-specific statistical modeling, while having the computational benefits of convex relaxation. Our solution is amenable to parallelized implementations on graphics processing units (GPUs) and extends easily to high dimensions. We evaluated the proposed algorithm with several sets of comprehensive experiments and comparisons, including: 1) computational evaluations over 3D medical-imaging examples and high-resolution large-size color photographs, which demonstrate that a parallelized implementation of the proposed method run on a GPU can bring a significant speed-up and 2) accuracy evaluations against five state-of-the-art methods over the Berkeley color-image database and a multimodel synthetic data set, which demonstrate competitive performance of the algorithm. PMID:24723519
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
A New Optimized GA-RBF Neural Network Algorithm
Zhao, Dean; Su, Chunyang; Hu, Chanli; Zhao, Yuyan
2014-01-01
When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on the genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network and adopts new ways of hybrid encoding and simultaneous optimization. Binary encoding is used for the number of hidden-layer neurons, and real encoding is used for the connection weights; both are optimized simultaneously in the new algorithm. However, the connection weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid. PMID:25371666
Genetic-Algorithm Tool For Search And Optimization
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven
1995-01-01
SPLICER computer program used to solve search and optimization problems. Genetic algorithms adaptive search procedures (i.e., problem-solving methods) based loosely on processes of natural selection and Darwinian "survival of fittest." Algorithms apply genetically inspired operators to populations of potential solutions in iterative fashion, creating new populations while searching for optimal or nearly optimal solution to problem at hand. Written in Think C.
Iterative phase retrieval algorithms. I: optimization.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
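The GS baseline that both variants modify can be sketched in a few lines of NumPy (a minimal illustration with a support and non-negativity constraint in the object domain; the SPP and hybrid variants add perturbation and feedback terms not shown here):

```python
import numpy as np

def gerchberg_saxton(fourier_mag, support, iters=200, seed=0):
    """Classic GS loop: enforce the measured magnitude in the Fourier
    domain and the support (object extent) constraint in the object
    domain, alternating until the iterate stops changing."""
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_mag.shape) * support         # random start
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))      # keep measured magnitude
        g = np.real(np.fft.ifft2(G))
        g = np.clip(g, 0, None) * support               # non-negative, supported
    return g

# toy object: a small bright square inside a 32x32 frame
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 1.0
support = np.zeros((32, 32)); support[8:24, 8:24] = 1.0
rec = gerchberg_saxton(np.abs(np.fft.fft2(obj)), support)
```

Stagnation of exactly this loop (e.g. oscillating between two images) is what motivates the paper's perturbation and hybrid input-output modifications.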
Selective voting in convex-hull ensembles improves classification accuracy
Kodell, Ralph L.; Zhang, Chuanlei; Siegel, Eric R.; Nagarajan, Radhakrishnan
2011-01-01
Objective Classification algorithms can be used to predict risks and responses of patients based on genomic and other high-dimensional data. While there is optimism for using these algorithms to improve the treatment of diseases, they have yet to demonstrate sufficient predictive ability for routine clinical practice. They generally classify all patients according to the same criteria, under an implicit assumption of population homogeneity. The objective here is to allow for population heterogeneity, possibly unrecognized, in order to increase classification accuracy and further the goal of tailoring therapies on an individualized basis. Methods and materials Anew selective-voting algorithm is developed in the context of a classifier ensemble of two-dimensional convex hulls of positive and negative training samples. Individual classifiers in the ensemble are allowed to vote on test samples only if those samples are located within or behind pruned convex hulls of training samples that define the classifiers. Results Validation of the new algorithm’s increased accuracy is carried out using two publicly available datasets having cancer as the outcome variable and expression levels of thousands of genes as predictors. Selective voting leads to statistically significant increases in accuracy from 86.0% to 89.8% (p < 0.001) and 63.2% to 67.8% (p < 0.003) compared to the original algorithm. Conclusion Selective voting by members of convex-hull classifier ensembles significantly increases classification accuracy compared to one-size-fits-all approaches. PMID:22064044
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Levy flight to the queen's mating flight and a neighborhood search to improve the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
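The Levy-flight ingredient can be sketched via Mantegna's algorithm, a common way to draw heavy-tailed step lengths (this illustrates the kind of move such variants add, not this paper's exact operator):

```python
import math, random

def levy_step(beta=1.5, rng=random):
    """One Levy-flight step length via Mantegna's algorithm: the ratio
    u / |v|^(1/beta) of two Gaussians, with u's scale chosen so the
    result follows a heavy-tailed (Levy-stable-like) distribution."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

rng = random.Random(1)
steps = [levy_step(rng=rng) for _ in range(2000)]
```

Most steps are small local moves, but occasional very large jumps help the queen's flight escape local optima.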
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Genetic algorithms - What fitness scaling is optimal?
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac
1993-01-01
The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under these criteria is presented; it includes both functions that have empirically proved best and new functions that may be worth trying.
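As one concrete member of the family of scalings under comparison, classical linear fitness scaling preserves the population average while stretching the best fitness to a fixed multiple of it (a textbook construction, not necessarily one of the paper's optimal choices):

```python
def linear_scale(fitnesses, c=2.0):
    """Linear fitness scaling f' = a*f + b with a, b chosen so the
    population average is preserved and the best individual receives
    c times the average (note: for some inputs the scaled minimum can
    go negative, the classical caveat of this scheme)."""
    n = len(fitnesses)
    avg = sum(fitnesses) / n
    fmax = max(fitnesses)
    if fmax == avg:                      # flat population: nothing to scale
        return list(fitnesses)
    a = (c - 1.0) * avg / (fmax - avg)
    b = avg * (1.0 - a)
    return [a * f + b for f in fitnesses]

scaled = linear_scale([1.0, 2.0, 3.0, 4.0])
```

For this input the average 2.5 is preserved and the best scaled fitness becomes 2 x 2.5 = 5.0, so selection pressure toward the best individual is controlled by `c`.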
Zheng, Wei; Friedman, Alan M; Bailey-Kellogg, Chris
2009-08-01
In engineering protein variants by constructing and screening combinatorial libraries of chimeric proteins, two complementary and competing goals are desired: the new proteins must be similar enough to the evolutionarily-selected wild-type proteins to be stably folded, and they must be different enough to display functional variation. We present here the first method, Staversity, to simultaneously optimize stability and diversity in selecting sets of breakpoint locations for site-directed recombination. Our goal is to uncover all "undominated" breakpoint sets, for which no other breakpoint set is better in both factors. Our first algorithm finds the undominated sets serving as the vertices of the lower envelope of the two-dimensional (stability and diversity) convex hull containing all possible breakpoint sets. Our second algorithm identifies additional breakpoint sets in the concavities that are either undominated or dominated only by undiscovered breakpoint sets within a distance bound computed by the algorithm. Both algorithms are efficient, requiring only time polynomial in the numbers of residues and breakpoints, while characterizing a space defined by an exponential number of possible breakpoint sets. We applied Staversity to identify 2-10 breakpoint plans for different sets of parent proteins taken from the purE family, as well as for parent proteins TEM-1 and PSE-4 from the beta-lactamase family. The average normalized distance between our plans and the lower bound for optimal plans is around 2%. Our plans dominate most (60-90% on average for each parent set) of the plans found by other possible approaches, random sampling or explicit optimization for stability with implicit optimization for diversity. The identified breakpoint sets provide a compact representation of good plans, enabling a protein engineer to understand and account for the trade-offs between two key considerations in combinatorial chimeragenesis. PMID:19645597
Modulus of convexity for operator convex functions
NASA Astrophysics Data System (ADS)
Kim, Isaac H.
2014-08-01
Given an operator convex function f(x), we obtain an operator-valued lower bound for cf(x) + (1 - c)f(y) - f(cx + (1 - c)y), c ∈ [0, 1]. The lower bound is expressed in terms of the matrix Bregman divergence. A similar inequality is shown to be false for functions that are convex but not operator convex.
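For reference, the matrix Bregman divergence in which the bound is expressed is typically defined via the directional (Fréchet) derivative; this is a standard form, and the paper's precise normalization may differ:

```latex
D_f(X, Y) \;=\; f(X) \;-\; f(Y) \;-\; \left.\frac{d}{dt}\, f\bigl(Y + t(X - Y)\bigr)\right|_{t=0}
```

For scalar convex f this reduces to the familiar D_f(x, y) = f(x) - f(y) - f'(y)(x - y), which is nonnegative by convexity.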
An Adaptive Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-11-03
In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
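A sketch of the idea: one mutation expression whose weight parameters recover different classical strategies as special cases. The exact unified expression and its self-adaptation scheme are the paper's; the form below (best-difference, current-difference, and random-difference terms weighted by F1..F3) is a plausible stand-in with fixed weights:

```python
import random

def de_minimize(f, bounds, np_=30, gens=100, F1=0.5, F2=0.3, F3=0.5,
                CR=0.9, seed=0):
    """Differential evolution with a single mutation expression combining
    best-, current-, and random-difference terms, binomial crossover,
    and greedy one-to-one selection."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        best = pop[min(range(np_), key=lambda i: fit[i])]
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            # unified mutant: setting F1=0 or F2=0 recovers classical variants
            v = [pop[i][k]
                 + F1 * (best[k] - pop[i][k])
                 + F2 * (pop[r1][k] - pop[i][k])
                 + F3 * (pop[r2][k] - pop[r3][k]) for k in range(d)]
            jrand = rng.randrange(d)            # binomial crossover
            u = [v[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                 for k in range(d)]
            u = [min(max(u[k], bounds[k][0]), bounds[k][1]) for k in range(d)]
            fu = f(u)
            if fu <= fit[i]:                    # greedy selection
                pop[i], fit[i] = u, fu
    return min(fit)

best_val = de_minimize(lambda x: sum(v * v for v in x), [(-10, 10)] * 5)
```

The adaptive version would evolve F1, F2, F3, and CR alongside the population rather than fixing them.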
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two pieces of information: the function value and the gradient value. Both methods possess the following good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
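The nonnegativity restriction βk ≥ 0 is the hallmark of the classical PRP+ method, which can be sketched as a baseline (plain Python with Armijo backtracking; the paper's modified methods add further safeguards and terms not shown here):

```python
def prp_plus_cg(f, grad, x0, iters=100, tol=1e-8):
    """Conjugate gradient with the nonnegative PRP beta
    (beta_k = max(0, g1.(g1 - g0) / |g0|^2)) and a simple Armijo
    backtracking line search, with a steepest-descent restart safeguard."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gg = sum(gi * gi for gi in g)
        if gg < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                   # safeguard: restart downhill
            d = [-gi for gi in g]
            slope = -gg
        t, fx = 1.0, f(x)                # Armijo backtracking along d
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / gg)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# minimize (x0 - 1)^2 + 4*x1^2 from (5, 5)
xmin = prp_plus_cg(lambda x: (x[0] - 1) ** 2 + 4 * x[1] ** 2,
                   lambda x: [2 * (x[0] - 1), 8 * x[1]],
                   [5.0, 5.0])
```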
Evaluation of a particle swarm algorithm for biomechanical optimization.
Schutte, Jaco F; Koh, Byung-Il; Reinbolt, Jeffrey A; Haftka, Raphael T; George, Alan D; Fregly, Benjamin J
2005-06-01
Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm's global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms--a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353
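A minimal global-best PSO with inertia weight illustrates the algorithm class evaluated here; the parameters are common textbook values, not those of the paper's variant:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: each particle's velocity blends inertia, a pull
    toward its personal best, and a pull toward the swarm's best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    gi = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[gi][:], pbest_val[gi]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

gbest, gval = pso_minimize(lambda x: sum(v * v for v in x), [(-10, 10)] * 4)
```

Because the update is a linear combination of coordinate differences, rescaling a design variable rescales its velocity identically, which is the intuition behind the scale-independence the paper proves.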
A hybrid artificial bee colony algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms in the area of optimization when compared with other search algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO). However, the ABC local search process and its bee-movement (solution improvement) equation still have some weaknesses: the ABC is good at avoiding entrapment in local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
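The criticized ABC solution-improvement equation is easy to state: one random dimension of a food source is perturbed along its difference with another randomly chosen source, with greedy acceptance (a sketch of the original move that HPABC replaces with a PSO-style one):

```python
import random

def abc_neighbor_move(food, i, fitness, rng):
    """Original ABC improvement step: v_ij = x_ij + phi * (x_ij - x_kj)
    for one random dimension j and random other source k, keeping the
    candidate only if it improves the objective (greedy selection)."""
    k = rng.choice([j for j in range(len(food)) if j != i])
    dim = rng.randrange(len(food[i]))
    phi = rng.uniform(-1.0, 1.0)
    cand = food[i][:]
    cand[dim] = food[i][dim] + phi * (food[i][dim] - food[k][dim])
    return cand if fitness(cand) < fitness(food[i]) else food[i]

rng = random.Random(3)
sphere = lambda x: sum(v * v for v in x)
food = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
start_best = min(map(sphere, food))
for _ in range(500):
    i = rng.randrange(len(food))
    food[i] = abc_neighbor_move(food, i, sphere, rng)
best = min(food, key=sphere)
```

Because the move direction depends only on a randomly selected partner source, progress stalls when the partners are unpromising, which is exactly the weakness the hybrid particle movement targets.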
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Genetic Algorithm Based Neural Networks for Nonlinear Optimization
Energy Science and Technology Software Center (ESTSC)
1994-09-28
This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To our best knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.
A Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-06-24
In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
Parallel projected variable metric algorithms for unconstrained optimization
NASA Technical Reports Server (NTRS)
Freeman, T. L.
1989-01-01
The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and the possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, researchers can generalize Straeter's algorithm to a family of parallel projected variable metric algorithms which do not suffer the above drawbacks and which retain quadratic termination. Finally researchers consider the numerical performance of one member of the family on several standard example problems and illustrate how the choice of the displacement vectors affects the performance of the algorithm.
An algorithm for the systematic disturbance of optimal rotational solutions
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kaiser, Mary K.
1989-01-01
An algorithm for introducing a systematic rotational disturbance into an optimal (i.e., single axis) rotational trajectory is described. This disturbance introduces a motion vector orthogonal to the quaternion-defined optimal rotation axis. By altering the magnitude of this vector, the degree of non-optimality can be controlled. The metric properties of the distortion parameter are described, with analogies to two-dimensional translational motion. This algorithm was implemented in a motion-control program on a three-dimensional graphic workstation. It supports a series of human performance studies on the detectability of rotational trajectory optimality by naive observers.
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
PCB drill path optimization by combinatorial cuckoo search algorithm.
Lim, Wei Chen Esmonde; Kanagaraj, G; Ponnambalam, S G
2014-01-01
Optimization of the drill path can lead to a significant reduction in machining time, which directly improves the productivity of manufacturing systems. In a batch production of a large number of items to be drilled, such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving the drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for the PCB hole-drilling process. PMID:24707198
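The underlying objective is a travelling-salesman-style tour length over the hole coordinates. A plain 2-opt descent on that objective, shown purely to make the objective concrete (it is not the paper's cuckoo search), looks like:

```python
import math, random

def tour_length(points, tour):
    """Total closed-tour travel distance of the drill head."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """2-opt descent: repeatedly reverse the tour segment [i, j) whenever
    doing so shortens the tour, until no improving reversal exists."""
    improved = True
    tour = tour[:]
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(points, cand) < tour_length(points, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

rng = random.Random(0)
holes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(25)]
start = list(range(25))
best = two_opt(holes, start)
```

A metaheuristic such as cuckoo search layers a global exploration strategy on top of local moves like these to escape the local optima where pure 2-opt stops.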
Wu, Xiaodong
2006-01-01
In this paper, we study several interesting optimal-ratio region detection (ORD) problems in d-D (d ≥ 3) discrete geometric spaces, which arise in high dimensional medical image segmentation. Given a d-D voxel grid of n cells, two classes of geometric regions that are enclosed by a single or two coupled smooth heightfield surfaces defined on the entire grid domain are considered. The objective functions are normalized by a function of the desired regions, which avoids a bias to produce an overly large or small region resulting from data noise. The normalization functions that we employ are used in real medical image segmentation. To the best of our knowledge, no previous results on these problems in high dimensions are known. We develop a unified algorithmic framework based on a careful characterization of the intrinsic geometric structures and a nontrivial graph transformation scheme, yielding efficient polynomial time algorithms for solving these ORD problems. Our main ideas include the following. We observe that the optimal solution to the ORD problems can be obtained via the construction of a convex hull for a set of O(n) unknown 2-D points using the hand probing technique. The probing oracles are implemented by computing a minimum s-t cut in a weighted directed graph. The ORD problems are then solved by O(n) calls to the minimum s-t cut algorithm. For the class of regions bounded by a single heightfield surface, our further investigation shows that the O(n) calls to the minimum s-t cut algorithm are on a monotone parametric flow network, which enables detection of the optimal-ratio region in the complexity of computing a single maximum flow. PMID:25414538
ERIC Educational Resources Information Center
Scott, Paul
2006-01-01
A "convex" polygon is one with no re-entrant angles. Alternatively one can use the standard convexity definition, asserting that for any two points of the convex polygon, the line segment joining them is contained completely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.
Superscattering of light optimized by a genetic algorithm
NASA Astrophysics Data System (ADS)
Mirzaei, Ali; Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.
2014-07-01
We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.
Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.
2014-01-01
This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
Progress in design optimization using evolutionary algorithms for aerodynamic problems
NASA Astrophysics Data System (ADS)
Lian, Yongsheng; Oyama, Akira; Liou, Meng-Sing
2010-07-01
Evolutionary algorithms (EAs) are useful tools in design optimization. Due to their simplicity, ease of use, and suitability for multi-objective design optimization problems, EAs have been applied to design optimization problems from various areas. In this paper we review the recent progress in design optimization using evolutionary algorithms to solve real-world aerodynamic problems. Examples are given in the design of turbo pumps, compressors, and micro-air vehicles. The paper covers the following topics that are deemed important to solve a large optimization problem from a practical viewpoint: (1) hybridized approaches to speed up the convergence rate of EAs; (2) the use of surrogate models to reduce the computational cost stemming from EAs; (3) reliability-based design optimization using EAs; and (4) data mining of Pareto-optimal solutions.
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Energy Science and Technology Software Center (ESTSC)
1997-08-05
An algorithm for performing optimization which is a derivative-free, grid-refinement approach to nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
Applying new optimization algorithms to model predictive control
Wright, S.J.
1996-03-01
The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.
Optimization of composite structures by estimation of distribution algorithms
NASA Astrophysics Data System (ADS)
Grosset, Laurent
The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distributions algorithms (EDA) that build a statistical model of promising regions of the design space based on sets of good points, and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.
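DDOA's EDA machinery can be illustrated with the simplest member of the family, a univariate marginal EDA (UMDA) on a toy bit-string problem; DDOA itself adds the second, auxiliary-variable distribution, which is not shown in this sketch:

```python
import random

def umda_maximize(fitness, n_bits, pop_size=60, top=30, gens=40, seed=0):
    """UMDA: sample a population from per-bit marginal probabilities,
    re-estimate the marginals from the best `top` individuals, repeat.
    The statistical model guides the search instead of crossover."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(gens):
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best_fit:
            best, best_fit = pop[0][:], fitness(pop[0])
        elite = pop[:top]
        probs = [min(max(sum(ind[k] for ind in elite) / top, 0.05), 0.95)
                 for k in range(n_bits)]        # clamp to keep diversity
    return best, best_fit

# OneMax toy problem: maximize the number of 1-bits
best, best_fit = umda_maximize(sum, 20)
```

In the laminate setting the marginals would be over fiber angles, and DDOA couples them with a distribution over lamination parameters to capture variable dependencies cheaply.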
Dantzig, G.B.
1992-10-01
Analogous to gunners firing trial shots to bracket a target in order to adjust direction and distance, we demonstrate that it is sometimes faster not to apply an algorithm directly, but to approximately solve several perturbations of the problem and then combine these rough approximations to get an exact solution. To find a feasible solution to an m-equation linear program with a convexity constraint, the von Neumann Algorithm generates a sequence of approximate solutions which converge very slowly to the right-hand side $b^0$. However, it can be redirected so that in the first few iterations it is guaranteed to move rapidly towards the neighborhood of one of m + 1 perturbed right-hand sides $\hat b^i$, then redirected in turn to the next $\hat b^i$. Once within the neighborhood of each $\hat b^i$, a weighted sum of the approximate solutions $\bar x^i$ yields the exact solution of the unperturbed problem, where the weights are found by solving a system of m + 1 equations in m + 1 unknowns. It is assumed that an r > 0 is given for which the problem is feasible for all right-hand sides b whose distance satisfies $\|b - b^0\|_2 \le r$. The feasible solution is found in fewer than $4(m+1)^3/r^2$ iterations. The work per iteration is $\delta mn + 2m + n + 9$ multiplications plus $\delta mn + m + n + 9$ additions or comparisons, where $\delta$ is the density of nonzero coefficients in the matrix.
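The recombination step described above can be sketched in a few lines (toy numbers and a generic dense solver, not Dantzig's implementation): given the m + 1 perturbed right-hand sides, the weights come from one small linear system, and linearity then transfers the same weights to the solutions.

```python
# Hedged sketch of the recombination step: given m+1 perturbed right-hand
# sides b_hat[i] near b0, solve an (m+1) x (m+1) system for weights w with
# sum(w) = 1 and sum(w[i] * b_hat[i]) = b0.  By linearity, the same weights
# combine the perturbed solutions x_bar[i] into a solution for b0 itself.
# All numerical values are illustrative assumptions.
m = 2
b0 = [1.0, 2.0]
b_hat = [[1.3, 2.0], [0.9, 2.2], [0.8, 1.8]]  # m+1 perturbed targets

# m rows enforce sum_i w_i * b_hat[i] = b0; the last row enforces sum(w) = 1.
A = [[b_hat[i][k] for i in range(m + 1)] for k in range(m)]
A.append([1.0] * (m + 1))
rhs = b0 + [1.0]

def solve(A, rhs):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

w = solve(A, rhs)
recombined = [sum(w[i] * b_hat[i][k] for i in range(m + 1)) for k in range(m)]
```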
Optimal fractional order PID design via Tabu Search based algorithm.
Ateş, Abdullah; Yeroglu, Celaleddin
2016-01-01
This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. PMID:26652128
Model Specification Searches Using Ant Colony Optimization Algorithms
ERIC Educational Resources Information Center
Marcoulides, George A.; Drezner, Zvi
2003-01-01
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
Integrated genetic algorithm for optimization of space structures
NASA Astrophysics Data System (ADS)
Adeli, Hojjat; Cheng, Nai-Tsang
1993-10-01
Gradient-based mathematical-optimization algorithms usually seek a solution in the neighborhood of the starting point. If more than one local optimum exists, the solution will depend on the choice of the starting point, and the global optimum cannot be found. This paper presents the optimization of space structures by integrating a genetic algorithm with the penalty-function method. Genetic algorithms are inspired by the basic mechanism of natural evolution, and are efficient for global searches. The technique employs the Darwinian survival-of-the-fittest principle to preserve the best or better characters from the old population, and performs random information exchange to create superior offspring. Different types of crossover operations are used in this paper, and their relative merit is investigated. The integrated genetic algorithm has been implemented in the C language and is applied to the optimization of three space truss structures. In each case, an optimum solution was obtained after a limited number of iterations.
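A minimal sketch of the genetic-algorithm-plus-penalty-function idea (a generic toy problem of our own, not the paper's space-truss formulation):

```python
import random

random.seed(0)

# Illustrative sketch: a real-coded genetic algorithm with an exterior
# penalty for the constraint x + y <= 4, minimizing (x-3)^2 + (y-2)^2.
# The constrained optimum is (2.5, 1.5) with objective value 0.5.
# Population size, rates, and the penalty weight are assumed values.
def penalized_cost(ind):
    x, y = ind
    obj = (x - 3.0) ** 2 + (y - 2.0) ** 2
    violation = max(0.0, x + y - 4.0)
    return obj + 1000.0 * violation ** 2   # penalty-function method

def tournament(pop):
    """Binary tournament selection: better of two random individuals."""
    a, b = random.sample(pop, 2)
    return min(a, b, key=penalized_cost)

pop = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(40)]
for _ in range(150):
    nxt = []
    for _ in range(len(pop)):
        p1, p2 = tournament(pop), tournament(pop)
        w = random.random()                        # arithmetic crossover
        child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
        if random.random() < 0.2:                  # Gaussian mutation
            i = random.randrange(2)
            child[i] += random.gauss(0.0, 0.1)
        nxt.append(child)
    nxt.append(min(pop, key=penalized_cost))       # elitism
    pop = sorted(nxt, key=penalized_cost)[:len(pop)]

best = min(pop, key=penalized_cost)
```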
Artificial bee colony algorithm for solving optimal power flow problem.
Le Dinh, Luong; Vo Ngoc, Dieu; Vasant, Pandian
2013-01-01
This paper proposes an artificial bee colony (ABC) algorithm for solving optimal power flow (OPF) problem. The objective of the OPF problem is to minimize total cost of thermal units while satisfying the unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltages limits, and transformer tap settings limits. The ABC algorithm is an optimization method inspired from the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results have indicated that the proposed algorithm can find high quality solution for the problem in a fast manner via the result comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem. PMID:24470790
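The ABC search loop described above can be sketched generically (a toy objective standing in for the OPF cost; all parameter values are illustrative, not from the paper):

```python
import random

random.seed(1)

def f(x):
    """Objective to minimize; a sphere function stands in for the OPF cost."""
    return sum(xi * xi for xi in x)

D, SN, LIMIT = 3, 10, 20   # dimension, food sources, abandonment limit
lo, hi = -5.0, 5.0
foods = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(SN)]
trials = [0] * SN

def neighbor(i):
    """Perturb one coordinate of source i toward a random other source."""
    k = random.randrange(SN)
    j = random.randrange(D)
    phi = random.uniform(-1, 1)
    cand = foods[i][:]
    cand[j] = min(hi, max(lo, cand[j] + phi * (cand[j] - foods[k][j])))
    return cand

for _ in range(200):
    # employed-bee phase: local search around every food source
    for i in range(SN):
        cand = neighbor(i)
        if f(cand) < f(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # onlooker-bee phase: bias further search toward better sources
    fits = [1.0 / (1.0 + f(s)) for s in foods]
    total = sum(fits)
    for _ in range(SN):
        r, acc = random.uniform(0, total), 0.0
        for i in range(SN):
            acc += fits[i]
            if acc >= r:
                break
        cand = neighbor(i)
        if f(cand) < f(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # scout-bee phase: abandon sources that stopped improving
    for i in range(SN):
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(lo, hi) for _ in range(D)]
            trials[i] = 0

best = min(foods, key=f)
```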
A spectral image clustering algorithm based on ant colony optimization
NASA Astrophysics Data System (ADS)
Ashok, Luca; Messinger, David W.
2012-06-01
Ant Colony Optimization (ACO) is a computational method used for optimization problems. The ACO algorithm uses virtual ants to create candidate solutions that are represented by paths on a mathematical graph. We develop an algorithm using ACO that takes a multispectral image as input and outputs a cluster map denoting a cluster label for each pixel. The algorithm does this through identification of a series of one-dimensional manifolds on the spectral data cloud via the ACO approach, and then associates pixels to these paths based on their spectral similarity to the paths. We apply the algorithm to multispectral imagery to divide the pixels into clusters based on their representation by a low-dimensional manifold estimated by the "best fit ant path" through the data cloud. We present results from application of the algorithm to a multispectral WorldView-2 image and show that it produces useful cluster maps.
A Discrete Lagrangian Algorithm for Optimal Routing Problems
Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.
2008-11-06
The ideas of discrete Lagrangian methods for conservative systems are exploited for the construction of algorithms applicable to optimal ship routing problems. The algorithm presented here is based on the discretization of Hamilton's principle of stationary action, and specifically on the direct discretization of the Lagrange-Hamilton principle for a conservative system. Since, in contrast to the differential equations, the discrete Euler-Lagrange equations serve as constraints for the optimization of a given cost functional, in the present work we utilize this feature in order to minimize the cost function for optimal ship routing.
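For reference, in the standard discrete-mechanics notation (our addition, following the usual variational-integrator literature rather than the abstract itself): a discrete Lagrangian $L_d(q_k, q_{k+1})$ approximates the action over one time step, and stationarity of the discrete action sum yields the discrete Euler-Lagrange equations

$$
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0, \qquad k = 1, \dots, N-1,
$$

where $D_i$ denotes the derivative with respect to the $i$-th argument. These algebraic equations are the constraints referred to above.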
Structure optimization of neural networks with the A*-algorithm.
Doering, A; Galicki, M; Witte, H
1997-01-01
A method for the construction of optimal structures for feedforward neural networks is introduced. On the basis of a graph of network structures, with an evaluation value assigned to each structure, a heuristic search algorithm can be run on this graph. The application of the A*-algorithm ensures, in theory, both the optimality of the solution and the optimality of the search. For several examples, a comparison between the new strategy and the well-known cascade-correlation procedure is carried out with respect to the performance of the resulting structures. PMID:18255745
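A generic A* sketch on a toy graph (our illustration; in the paper the nodes would be network structures): with an admissible heuristic h, nodes are expanded in order of g + h, which is what underlies the optimality claims above.

```python
import heapq

# Toy graph for illustration only: node -> list of (neighbor, edge cost).
graph = {
    "start": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 1.0), ("goal", 5.0)],
    "b": [("goal", 1.0)],
    "goal": [],
}
# Admissible heuristic: a lower bound on the remaining cost to "goal".
h = {"start": 2.0, "a": 2.0, "b": 1.0, "goal": 0.0}

def a_star(start, goal):
    """Expand nodes in order of f = g + h; returns (cost, path)."""
    frontier = [(h[start], 0.0, start, [start])]
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in seen and seen[node] <= g:
            continue   # already reached this node more cheaply
        seen[node] = g
        for nxt, c in graph[node]:
            heapq.heappush(frontier, (g + c + h[nxt], g + c, nxt, path + [nxt]))
    return None

cost, path = a_star("start", "goal")
```

Here the cheapest route is start-a-b-goal with cost 3, found before the shorter-looking but more expensive direct edges.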
OPTIMIZATION OF LONG RURAL FEEDERS USING A GENETIC ALGORITHM
Wishart, Michael; Ledwich, Gerard; Ghosh, Arindam; Ivanovich, Grujica
2010-06-15
This paper describes the optimization of conductor size and the voltage regulator location and magnitude of long rural distribution lines. The optimization minimizes the lifetime cost of the lines, including capital costs and losses while observing voltage drop and operational constraints using a Genetic Algorithm (GA). The GA optimization is applied to a real Single Wire Earth Return (SWER) network in regional Queensland and results are presented.
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
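As a toy illustration of the underlying idea that the KKT equalities form linear systems (our own example; the paper's method handles inequality constraints and solves two such systems per iteration with a quasi-Newton Hessian): for an equality-constrained quadratic problem, one linear solve in the primal and dual variables recovers the optimum.

```python
# Toy problem (illustrative assumption, not the paper's): minimize
# 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1.  With the Lagrangian
# L = 0.5*x'x - lam*(x1 + x2 - 1), stationarity plus the constraint give
# one linear system in the primal variables (x1, x2) and the dual (lam).
rows = [
    [1.0, 0.0, -1.0, 0.0],   # dL/dx1 = x1 - lam = 0
    [0.0, 1.0, -1.0, 0.0],   # dL/dx2 = x2 - lam = 0
    [1.0, 1.0,  0.0, 1.0],   # constraint: x1 + x2 = 1
]

def gauss_jordan(rows):
    """Solve a small dense augmented system [A | b] by Gauss-Jordan."""
    n = len(rows)
    M = [r[:] for r in rows]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

x1, x2, lam = gauss_jordan(rows)
```

The solution is x1 = x2 = 0.5 with multiplier lam = 0.5, i.e. the projection of the unconstrained minimizer onto the constraint.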
A Hybrid Ant Colony Algorithm for Loading Pattern Optimization
NASA Astrophysics Data System (ADS)
Hoareau, F.
2014-06-01
Électricité de France (EDF) operates 58 nuclear power plants (NPPs) of the Pressurized Water Reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts. The latter can resort, for instance, to loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm based on ant colony and a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.
Pattern search algorithms for mixed variable general constrained optimization problems
NASA Astrophysics Data System (ADS)
Abramson, Mark Aaron
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. The Audet-Dennis Generalized Pattern Search (GPS) algorithm for bound constrained mixed variable optimization problems is extended to problems with general nonlinear constraints by incorporating a filter, in which new iterates are accepted whenever they decrease the incumbent objective function value or constraint violation function value. Additionally, the algorithm can exploit any available derivative information (or rough approximation thereof) to speed convergence without sacrificing the flexibility often employed by GPS methods to find better local optima. In generalizing existing GPS algorithms, the new theoretical convergence results presented here reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are made, a hierarchy of theoretical convergence results is given, in which the assumptions dictate what can be proved about certain limit points of the algorithm. A new Matlab(c) software package was developed to implement these algorithms. Numerical results are provided for several nonlinear optimization problems from the CUTE test set, as well as a difficult nonlinearly constrained mixed variable optimization problem in the design of a load-bearing thermal insulation system used in cryogenic applications.
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool that is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems. This greatly limits its application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relative knowledge of statistics, the evaluation result can be obtained. To validate the proposed method, several intelligent algorithms, such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS), were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845
A Genetic Algorithm Approach to Multiple-Response Optimization
Ortiz, Francisco; Simpson, James R.; Pignatiello, Joseph J.; Heredia-Langner, Alejandro
2004-10-01
Many designed experiments require the simultaneous optimization of multiple responses. A common approach is to use a desirability function combined with an optimization algorithm to find the most desirable settings of the controllable factors. However, as the problem grows even moderately in either the number of factors or the number of responses, conventional optimization algorithms can fail to find the global optimum. An alternative approach is to use a heuristic search procedure such as a genetic algorithm (GA). This paper proposes and develops a multiple-response solution technique using a GA in conjunction with an unconstrained desirability function. The GA requires that several parameters be determined in order for the algorithm to operate effectively. We perform a robust designed experiment in order to tune the genetic algorithm to perform well regardless of the complexity of the multiple-response optimization problem. The performance of the proposed GA method is evaluated and compared with the performance of the method that combines the desirability with the generalized reduced gradient (GRG) optimization. The evaluation shows that only the proposed GA approach consistently and effectively solves multiple-response problems of varying complexity.
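The desirability-function ingredient can be sketched as follows (Derringer-style ramps with illustrative targets of our own choosing, not the paper's experiment): each response is mapped to [0, 1], and the overall desirability, their geometric mean, is what a search procedure such as a GA would then maximize.

```python
# Illustrative Derringer-style desirability functions; the response names,
# targets, and shape parameter s are assumptions for this sketch.
def d_larger_is_better(y, lo, hi, s=1.0):
    """0 below lo, 1 above hi, power ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Mirror of the larger-is-better ramp."""
    return d_larger_is_better(-y, -hi, -lo, s)

def overall(ds):
    """Overall desirability: geometric mean, zero if any response fails."""
    p = 1.0
    for d in ds:
        p *= d
    return p ** (1.0 / len(ds))

# Two responses: maximize yield (target range 60..90), minimize cost (2..5).
ds = [d_larger_is_better(80.0, 60.0, 90.0),
      d_smaller_is_better(3.0, 2.0, 5.0)]
D = overall(ds)
```

Because the geometric mean is zero whenever any individual desirability is zero, a setting that fails one response completely cannot be rescued by the others, which is the property that makes the composite suitable as a single GA fitness.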
Optimal design of engine mount using an artificial life algorithm
NASA Astrophysics Data System (ADS)
Ahn, Young Kong; Song, Jin Dae; Yang, Bo-Suk
2003-03-01
When designing fluid mounts, design parameters can be varied in order to obtain a desired notch frequency and notch depth. The notch frequency is a function of the mount parameters and is typically selected by the designer to occur at the vibration disturbance frequency. Since the process of choosing these parameters can involve some trial and error, it is a natural candidate for optimization of mount performance. Many combinations of parameters can produce the desired notch frequency, but the question is which combination provides the lowest depth. Therefore, an automatic technique is needed to optimize the fluid mount. In this study, the enhanced artificial life algorithm (EALA) is applied to minimizing the transmissibility of a fluid mount at the desired notch frequency, and at the notch and resonant frequencies. The present hybrid algorithm is a synthesis of a conventional artificial life algorithm with the random tabu search (R-tabu) method; as a result, the time required to search for the optimal solution is reduced relative to the conventional artificial life algorithm, and solution accuracy is improved. The results show that the performance of the mount optimized using the hybrid algorithm is better than that of the conventional fluid mount.
NASA Astrophysics Data System (ADS)
Biondi, Filippo; Sarri, Antonio; Fiori, Luca; Dell'Omodarme, Kevin
2014-10-01
SAR Tomography is the extension of conventional interferometric radar signal processing into the height dimension. In order to improve the vertical resolution with respect to classical Fourier methods, high-resolution approaches based on Convex Optimization (CVX) have been implemented. These methods are recast in the Compressed Sensing (CS) framework, which optimizes smooth tomographic profiles via atomic decomposition in order to obtain sparsity. The optimum solution is estimated by Interior Point Methods (IPM). The problem with this kind of signal processing is that the tomographic phase information may be suppressed, leaving only the optimized energy information available. In this paper we propose a method to estimate optimized spectra and phase information by projecting the vector components of each tomographic resolution cell onto the real and imaginary components. The tomographic solutions have been computed by processing multi-baseline SAR datasets, in full polarimetric mode, acquired by a portable small Continuous Wave (CW) radar in the X band.
Benchmarking derivative-free optimization algorithms.
More', J. J.; Wild, S. M.; Mathematics and Computer Science; Cornell Univ.
2009-01-01
We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested.
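A data profile of the kind proposed here can be computed in a few lines (toy evaluation counts of our own, not the paper's benchmark data): d_s(alpha) is the fraction of problems that solver s solves within alpha * (n_p + 1) function evaluations, where n_p is the number of variables in problem p.

```python
# Illustrative data (assumed, not from the paper): problem dimensions and,
# for each problem and solver, the evaluation count at which the convergence
# test was first satisfied (None if it never was within the budget).
dims = {"p1": 2, "p2": 5, "p3": 10}
t = {
    "p1": {"A": 6,    "B": 12},
    "p2": {"A": 30,   "B": 18},
    "p3": {"A": None, "B": 44},
}

def data_profile(solver, alpha):
    """Fraction of problems solved within alpha * (n_p + 1) evaluations."""
    solved = sum(
        1 for p, n in dims.items()
        if t[p][solver] is not None and t[p][solver] <= alpha * (n + 1)
    )
    return solved / len(dims)
```

Plotting data_profile(s, alpha) against alpha for each solver gives the budget-aware comparison the abstract describes: the normalization by n_p + 1 (the cost of a simplex gradient estimate) makes problems of different dimension comparable.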
Toward an FPGA architecture optimized for public-key algorithms
NASA Astrophysics Data System (ADS)
Elbirt, Adam J.; Paar, Christof
1999-08-01
Cryptographic algorithms are constantly evolving to meet security needs, and modular arithmetic is an integral part of these algorithms, especially in the case of public-key cryptosystems. To achieve optimal system performance while maintaining physical security, it is desirable to implement cryptographic algorithms in hardware. However, many public-key cryptographic algorithms require the implementation of modular arithmetic, specifically modular multiplication, for operands of 1024 bits in length. Additionally, algorithm agility is required to support algorithm independent protocols, a feature of most modern security protocols. Reprogrammability, particularly in-system reprogrammability, is critical in enabling the switching between cryptographic algorithms required for algorithm independent protocols. Field Programmable Gate Arrays (FPGAs) are a viable option for achieving this goal. Ideally, the targeted FPGA will have been designed with the architectural requirements for wide-operand modular arithmetic in mind in an effort to maximize system performance. This contribution investigates existing FPGA architectures with respect to modular multiplication. It also proposes a new FPGA architecture optimized for the wide-operand additions required for modular multiplication.
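As a concrete reminder of why wide-operand modular multiplication dominates (a generic sketch, not tied to any particular FPGA architecture): public-key operations such as RSA reduce to modular exponentiation, and square-and-multiply turns each exponent bit into one or two modular multiplications of 1024-bit operands.

```python
def mod_exp(base, exponent, modulus):
    """Left-to-right square-and-multiply modular exponentiation."""
    result = 1
    base %= modulus
    for bit in bin(exponent)[2:]:            # scan exponent bits, MSB first
        result = (result * result) % modulus     # square every step
        if bit == "1":
            result = (result * base) % modulus   # multiply on set bits
    return result
```

Every loop iteration performs one or two modular multiplications; with 1024-bit exponents that is on the order of a thousand wide-operand multiplications per operation, which is why hardware support for modular multiplication is the performance bottleneck discussed above.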
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of optimizers.
Multimodal optimization using a bi-objective evolutionary algorithm.
Deb, Kalyanmoy; Saha, Amit
2012-01-01
In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can have better knowledge about different optimal solutions in the search space and, as and when needed, the current solution may be switched to another suitable optimum solution. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, new algorithms have been proposed steadily. Most of these methodologies employed a niching scheme in an existing single-objective evolutionary algorithm framework so that similar solutions in a population are deemphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem so that all optimal solutions become members of the resulting weak Pareto-optimal set. With the modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved up to 16-variable test problems having as many as 48 optimal solutions, and for the first time suggested multimodal constrained test problems which are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting, and more importantly opens up further avenues for research and application. PMID:21591888
Application of coevolutionary genetic algorithms for multiobjective optimization
NASA Astrophysics Data System (ADS)
Liu, Jian-guo; Li, Zu-shu; Wu, Wei-ping
2007-12-01
Multiobjective optimization is clearly one of the most important classes of problems in science and engineering. The solution of a real multiobjective optimization problem must satisfy all optimization objectives simultaneously, and in general the solution is a set of non-dominated points. The task of multiobjective optimization is to estimate the distribution of this solution set, and then to find a satisfying solution within it. Many methods for solving multiobjective optimization using genetic algorithms have been proposed in the past twenty years. However, these approaches can behave poorly, with the population converging to a small number of solutions due to random genetic drift. To avoid this phenomenon, a multiobjective coevolutionary genetic algorithm (MoCGA) is proposed. The primary design goal of the proposed approach is to produce a reasonably good approximation of the true Pareto front of a problem. In the algorithm, each objective corresponds to a population. At each generation, these populations compete among themselves. An ecological population-density competition equation is used to describe the relation between the multiple objectives and to direct the adjustment of that relation at the individual and population levels. The proposed approach stores the Pareto-optimal points obtained along the evolutionary process in an external set. The approach is validated using Schaffer's test function fII and compared with the Niched Pareto GA (nPGA). Simulation experiments show that the algorithm has better performance in finding Pareto solutions, and that MoCGA can have advantages over the other algorithms under consideration in convergence to the Pareto-optimal front.
Uniformly convex and strictly convex Orlicz spaces
NASA Astrophysics Data System (ADS)
Masta, Al Azhary
2016-02-01
In this paper we define a new norm on Orlicz spaces on ℝ^n through a multiplication operator on an old Orlicz space. We obtain some necessary and sufficient conditions for the new norm to be uniformly convex and strictly convex.
Improved Clonal Selection Algorithm Combined with Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng
Both the clonal selection algorithm (CSA) and ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can exploit and explore the solution space in parallel and effectively. However, it cannot make full use of environmental feedback information and thus performs a large amount of redundant search. On the other hand, ACO is based on the concept of indirect cooperative foraging via secreted pheromones. Its positive feedback ability is strong, but its convergence speed is slow because of the small amount of initial pheromone. In this paper, we propose a pheromone-linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) reasonably utilizes the advantages of both algorithms and also overcomes their inherent disadvantages. Simulation results based on traveling salesman problems have demonstrated the merit of the proposed algorithm over some traditional techniques.
Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.
2006-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.
Optimization of computer-generated binary holograms using genetic algorithms
NASA Astrophysics Data System (ADS)
Cojoc, Dan; Alexandrescu, Adrian
1999-11-01
The aim of this paper is to compare genetic algorithms against direct point-oriented coding in the design of computer-generated binary phase Fourier holograms. These are used as fan-out elements for free-space optical interconnection. Genetic algorithms are optimization methods that model the natural process of genetic evolution. The configuration of the hologram is encoded to form a chromosome. To start the optimization, a population of different, randomly generated chromosomes is considered. The chromosomes compete, mate and mutate until the best chromosome is obtained according to a cost function. After explaining the operators used by genetic algorithms, this paper presents two examples with 32 X 32 genes in a chromosome. The crossover type and the number of mutations are shown to be important factors that influence the convergence of the algorithm. The GA is demonstrated to be a useful tool for designing binary phase holograms of complicated structure.
Optimal Design of RF Energy Harvesting Device Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Mori, T.; Sato, Y.; Adriano, R.; Igarashi, H.
2015-11-01
This paper presents optimal design of an RF energy harvesting device using genetic algorithm (GA). In the present RF harvester, a planar spiral antenna (PSA) is loaded with matching and rectifying circuits. On the first stage of the optimal design, the shape parameters of PSA are optimized using . Then, the equivalent circuit of the optimized PSA is derived for optimization of the circuits. Finally, the parameters of RF energy harvesting circuit are optimized to maximize the output power using GA. It is shown that the present optimization increases the output power by a factor of five. The manufactured energy harvester starts working when the input electric field is greater than 0.5 V/m.
An algorithm for optimal structural design with frequency constraints
NASA Technical Reports Server (NTRS)
Kiusalaas, J.; Shaw, R. C. J.
1978-01-01
The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) a small number of design iterations are needed to reach optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm will work for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.
Study of genetic direct search algorithms for function optimization
NASA Technical Reports Server (NTRS)
Zeigler, B. P.
1974-01-01
The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from a lack of gradient-exploitation facilities when gradient information could be used to guide the search; (2) for large populations or low-dimensional function spaces, mutation is a sufficient operator, but for small populations or high-dimensional functions, crossover applied at about equal frequency with mutation is the optimum combination; (3) complexity, in terms of storage space and running time, increases significantly when the population size is increased or when the inversion operator or the second-level adaptation routine is added to the basic structure.
Optimization restoration algorithm based on ant colony algorithm in WDM networks
NASA Astrophysics Data System (ADS)
Yang, Chunyong; Liu, Deming; Huang, Dexiu; Li, We
2005-02-01
Ant colony algorithm (ACA) is a novel simulated evolutionary algorithm. It is a population-based approach that uses positive feedback as the primary search mechanism and provides a new method for complicated combinatorial optimization problems. In this paper, it is used to optimize restoration routing for WDM optical networks. The algorithm is improved in three respects (selection strategy, local search, and information modification), and it can search for optimal restoration routes for different purposes under various failure conditions. Numerical results for a practical network, CHINANET, demonstrate the practicability of the approach.
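The positive-feedback mechanism that ACA relies on can be illustrated with a toy pheromone-based path search. This is a generic sketch, not the paper's improved algorithm (no tailored selection strategy, local search, or failure modeling); the graph and every parameter value are invented for illustration.

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, iters=50,
                      evaporation=0.5, seed=2):
    """Toy ant colony search for a short path: pheromone is deposited
    in proportion to 1/path_length (positive feedback) and evaporates."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone
    best, best_len = None, float("inf")
    for _ in range(iters):
        walks = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:       # dead end: discard this ant
                    path = None
                    break
                # attraction = pheromone / edge cost
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                walks.append((path, length))
                if length < best_len:
                    best, best_len = path, length
        for k in tau:
            tau[k] *= evaporation            # pheromone evaporation
        for path, length in walks:
            for edge in zip(path, path[1:]):
                tau[edge] += 1.0 / length    # deposit on used edges
    return best, best_len

graph = {  # weighted directed graph: node -> {neighbor: cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
path, length = aco_shortest_path(graph, "A", "D")
```

In the restoration-routing setting, the graph would be the WDM network topology under a given failure condition rather than this toy example.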
Genetic algorithm for multi-objective experimental optimization.
Link, Hannes; Weuster-Botz, Dirk
2006-12-01
A new software tool making use of a genetic algorithm for multi-objective experimental optimization (GAME.opt) was developed based on a strength Pareto evolutionary algorithm. The software deals with high dimensional variable spaces and unknown interactions of design variables. This approach was evaluated by means of multi-objective test problems replacing the experimental results. A default parameter setting is proposed enabling users without expert knowledge to minimize the experimental effort (small population sizes and few generations). PMID:17048033
A limited-memory algorithm for bound-constrained optimization
Byrd, R.H.; Peihuang, L.; Nocedal, J.
1996-03-01
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
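The gradient projection idea underlying this method can be shown in a stripped-down form. The sketch below omits the limited-memory BFGS model of the Hessian entirely and uses a fixed step size, so it is only a conceptual illustration of handling simple bounds by projection, not the paper's algorithm.

```python
def clip(x, lo, hi):
    """Project a point onto the box lo <= x <= hi, component-wise."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    """Projected gradient descent for min f(x) s.t. lo <= x <= hi.
    (The full method would additionally use a limited-memory BFGS
    quadratic model; this sketch keeps only the projection idea.)"""
    x = clip(x0, lo, hi)
    for _ in range(iters):
        g = grad(x)
        x = clip([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# min (x-3)^2 + (y+1)^2 subject to 0 <= x <= 2, 0 <= y <= 2:
# the unconstrained minimum (3, -1) is infeasible, so the solution
# sits on the bounds at (2, 0).
g = lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)]
x = projected_gradient(g, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
```

The active bounds are exactly those the gradient pushes against, which is the information the full algorithm exploits when it restricts the quadratic model to the free variables.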
Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence
Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.
2000-01-19
This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It settles the BOA into the framework of problem decomposition used frequently in order to model and understand the behavior of simple genetic algorithms. The growth of the population size and the number of generations until convergence with respect to the size of a problem is theoretically analyzed. The theoretical results are supported by a number of experiments.
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These are propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; with good initial solutions, the high-fidelity tools quickly converge to a locally optimal solution near the initial one. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions dominated by fewer other solutions. With this ranking, the genetic algorithm encourages solutions with higher fitness values to participate in reproduction, improving the solutions over the course of evolution. The population of solutions converges to the Pareto front permitted within the Q-law control parameter space.
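The sorting criterion described above (fewer dominators means better fitness) can be sketched directly. This is a generic illustration assuming two minimized objectives, not the mission-design code; the point values are invented.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_counts(points):
    """For each point, count how many other points dominate it;
    a count of 0 marks a non-dominated (Pareto-optimal) point.
    Lower counts would be assigned better fitness values."""
    return [sum(dominates(q, p) for q in points if q is not p) for p in points]

# hypothetical (flight_time, propellant_mass) for four candidate trajectories
pts = [(10, 5), (12, 4), (11, 6), (13, 7)]
counts = domination_counts(pts)
```

Here the first two points trade flight time against propellant and are both non-dominated, while the last two are dominated by one and three other points respectively.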
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
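The basic PSO update that such parallel variants distribute across processors can be sketched as follows. This is a plain synchronous sketch with conventional (assumed) inertia and acceleration coefficients; the paper's actual contribution, asynchronous evaluation of design points, is not shown here.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal synchronous PSO minimizer: each particle is pulled toward
    its personal best and the swarm-wide best (the simplified social model)."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(pbest[i]):     # in a parallel PSO, these f()
                pbest[i] = x[i][:]        # evaluations are farmed out
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda p: sum(t * t for t in p)
best = pso(sphere, dim=3)
```

The synchronization point is the `gbest` refresh at the end of each sweep; an asynchronous variant updates `gbest` as each evaluation returns, so fast workers never wait for slow ones.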
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879
Shape optimization of rubber bushing using differential evolution algorithm.
Kaya, Necmettin
2014-01-01
The objective of this study is to design a rubber bushing with the desired stiffness characteristics in order to achieve the desired ride quality of the vehicle. A differential evolution algorithm based approach is developed to optimize the rubber bushing by integrating a finite element code, run in batch mode, to compute the objective function values for each generation. Two case studies are given to illustrate the application of the proposed approach. Optimum shape parameters of the 2D bushing model were determined by shape optimization using the differential evolution algorithm. PMID:25276848
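The differential evolution loop the study builds on can be sketched in its classic DE/rand/1/bin form. The finite-element objective is replaced here by a cheap test function, and all control parameters are conventional textbook assumptions rather than the study's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=150, seed=3):
    """Classic DE/rand/1/bin minimizer over box-bounded real parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(p) for p in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors other than the target individual
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees one mutated coordinate
            trial = [
                a[d] + F * (b[d] - c[d])            # differential mutation
                if (rng.random() < CR or d == jrand)
                else pop[i][d]                       # binomial crossover
                for d in range(dim)
            ]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:                        # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    i = min(range(pop_size), key=cost.__getitem__)
    return pop[i], cost[i]

x, fx = differential_evolution(lambda p: sum(t * t for t in p),
                               bounds=[(-5, 5)] * 3)
```

In the study, `f` would launch the batch-mode finite element run for a candidate bushing shape and return the stiffness-error measure; each generation therefore costs one FE solve per trial vector.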
An algorithm for the empirical optimization of antenna arrays
NASA Technical Reports Server (NTRS)
Blank, S.
1983-01-01
A numerical technique is presented to optimize the performance of arbitrary antenna arrays under realistic conditions. An experimental-computational algorithm is formulated in which n-dimensional minimization methods are applied to measured data obtained from the antenna array. A numerical update formula is used to deduce partial-derivative information without requiring special perturbations of the array parameters. The algorithm provides a new design for the antenna array, and the method proceeds in an iterative fashion. Test-case results are presented showing the effectiveness of the algorithm.
Optimal classification of standoff bioaerosol measurements using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Nyhavn, Ragnhild; Moen, Hans J. F.; Farsund, Øystein; Rustad, Gunnar
2011-05-01
Early warning systems based on standoff detection of biological aerosols require real-time signal processing of a large quantity of high-dimensional data, challenging the systems' efficiency in terms of both computational complexity and classification accuracy. Hence, optimal feature selection is essential in forming a stable and efficient classification system. This involves finding optimal signal processing parameters, characteristic spectral frequencies and other data transformations in a large variable space, underlining the need for an efficient and smart search algorithm. Evolutionary algorithms are population-based optimization methods inspired by Darwinian evolutionary theory. These methods apply selection, mutation and recombination to a population of competing solutions and optimize this set by evolving the population for each generation. We have employed genetic algorithms in the search for optimal feature selection and signal processing parameters for classification of biological agents. The experimental data were acquired with a spectrally resolved lidar based on ultraviolet laser-induced fluorescence, and included several releases of five common simulants. The genetic algorithm outperforms benchmark methods involving analytic, sequential and random approaches, such as support vector machines, Fisher's linear discriminant and principal component analysis, with significantly improved classification accuracy compared to the best classical method.
Optimization Algorithm for the Generation of ONCV Pseudopotentials
NASA Astrophysics Data System (ADS)
Schlipf, Martin; Gygi, Francois
2015-03-01
We present an optimization algorithm to construct pseudopotentials and use it to generate a set of Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials for elements up to Z=83 (Bi) (excluding Lanthanides). We introduce a quality function that assesses the agreement of a pseudopotential calculation with all-electron FLAPW results, and the necessary plane-wave energy cutoff. This quality function allows us to use a Nelder-Mead optimization algorithm on a training set of materials to optimize the input parameters of the pseudopotential construction for most of the periodic table. We control the accuracy of the resulting pseudopotentials on a test set of materials independent of the training set. We find that the automatically constructed pseudopotentials provide a good agreement with the all-electron results obtained using the FLEUR code with a plane-wave energy cutoff of approximately 60 Ry. Supported by DOE/BES Grant DE-SC0008938.
Wavelet phase estimation using ant colony optimization algorithm
NASA Astrophysics Data System (ADS)
Wang, Shangxu; Yuan, Sanyi; Ma, Ming; Zhang, Rui; Luo, Chunmei
2015-11-01
Eliminating the seismic wavelet is important in high-resolution seismic processing. However, artifacts may arise in seismic interpretation when the wavelet phase is inaccurately estimated. Therefore, we propose a frequency-dependent wavelet phase estimation method based on the ant colony optimization (ACO) algorithm, which has global optimization capacity. The wavelet phase can be optimized with the ACO algorithm by fitting nearby-well seismic traces with well-log data. Our proposed method can rapidly produce a frequency-dependent wavelet phase and optimize the seismic-to-well tie, particularly for weak signals. Synthetic examples demonstrate the effectiveness of the proposed ACO-based wavelet phase estimation method, even in the presence of colored noise. A real-data example illustrates that seismic deconvolution using an optimum mixed-phase wavelet can provide more information than that using an optimum constant-phase wavelet.
Chaos time series prediction based on membrane optimization algorithms.
Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong
2015-01-01
This paper puts forward a prediction model based on a membrane computing optimization algorithm for chaos time series; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ²) by using the membrane computing optimization algorithm. It is an important basis for spectrum management to predict accurately the change trend of parameters in the electromagnetic environment, which can help decision makers to adopt an optimal action. Then, the model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the forecast model presented in it with conventional similar models. The experimental results show that for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)
NASA Astrophysics Data System (ADS)
Cantó, J.; Curiel, S.; Martínez-Gómez, E.
2009-07-01
Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and unconstrained problems, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets by taking a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm to optimizing some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
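The two distinguishing features named in the abstract (no encoding of the population, reproduction without crossover) suggest a mutation-only search over real parameters. The sketch below is one plausible reading of that idea, with an invented shrinking-range schedule, invented parameter values, and a toy multimodal objective; it is not the authors' AGA.

```python
import math
import random

def aga_maximize(f, bounds, pop_size=40, n_parents=5, generations=60,
                 shrink=0.8, seed=4):
    """Sketch of an encoding-free, mutation-only ("asexual") GA:
    each generation the best points spawn perturbed copies of themselves
    inside a search range that shrinks by a fixed factor."""
    rng = random.Random(seed)
    spans = [hi - lo for lo, hi in bounds]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=f, reverse=True)[:n_parents]
        pop = list(parents)                      # parents survive unchanged
        while len(pop) < pop_size:
            p = rng.choice(parents)
            child = [min(max(x + rng.uniform(-s, s), lo), hi)
                     for x, s, (lo, hi) in zip(p, spans, bounds)]
            pop.append(child)                    # asexual offspring
        spans = [s * shrink for s in spans]      # tighten the search range
    return max(pop, key=f)

# Toy multimodal objective: global maximum of 1 at the origin,
# surrounded by lower local maxima.
f = lambda p: (math.cos(3 * p[0]) * math.cos(3 * p[1])
               * math.exp(-(p[0] ** 2 + p[1] ** 2)))
best = aga_maximize(f, bounds=[(-2, 2), (-2, 2)])
```

Wide early mutations explore between basins, while the shrinking range turns late generations into a local refinement around the surviving parents.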
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Approximating convex Pareto surfaces in multiobjective radiotherapy planning
Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.
2006-09-15
Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-by-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing.
Optimal Design of Geodetic Network Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vajedian, Sanaz; Bagheri, Hosein
2010-05-01
A geodetic network is a network measured precisely by terrestrial surveying techniques based on measurements of angles and distances; it can control the stability of dams, towers and their surrounding land, and can monitor surface deformation. The main goals of an optimal geodetic network design process are to find the proper locations of the control stations (first-order design) as well as the proper weights of the observations (second-order design) in a way that satisfies all the criteria considered for the quality of the network, which is evaluated by the network's accuracy, reliability (internal and external), sensitivity and cost. The first-order design problem can be treated as a numeric optimization problem. In this design, finding the unknown coordinates of the network stations is an important issue. To find these unknown values, the network's geodetic observations, which are angle and distance measurements, must be entered into an adjustment method. In this regard, inverse-problem algorithms are needed. Inverse-problem algorithms are methods for finding optimal solutions to given problems and include classical and evolutionary computations. The classical approaches are analytical methods and are useful in finding the optimum solution of a continuous and differentiable function. The least squares (LS) method is one of the classical techniques that derive estimates for stochastic variables and their distribution parameters from observed samples. The evolutionary algorithms are adaptive optimization and search procedures that find solutions to problems inspired by the mechanisms of natural evolution. These methods generate new points in the search space by applying operators to current points and statistically moving toward more optimal places in the search space. The genetic algorithm (GA) is the evolutionary algorithm considered in this paper.
This algorithm starts with the definition of an initial population, and then the operators of selection, replication and variation are applied to obtain the solution of the problem. In this research, the first step is to design a geodetic network and make the observations of the distances and angles between the network's stations. The second step is to use the optimization algorithms to estimate the unknown values of the stations' coordinates, with regard to the calculation equations of length and angle. The results indicate that genetic algorithms can be successfully employed for solving inverse problems in engineering disciplines, and it appears that many complex problems can be better solved using genetic algorithms than with conventional methods.
Hybrid Global Optimization Algorithms for Protein Structure Prediction: Alternating Hybrids
Klepeis, J. L.; Pieja, M. J.; Floudas, C. A.
2003-01-01
Hybrid global optimization methods attempt to combine the beneficial features of two or more algorithms, and can be powerful methods for solving challenging nonconvex optimization problems. In this paper, novel classes of hybrid global optimization methods, termed alternating hybrids, are introduced for application as a tool in treating the peptide and protein structure prediction problems. In particular, these new optimization methods take the form of hybrids between a deterministic global optimization algorithm, the αBB, and a stochastically based method, conformational space annealing (CSA). The αBB method, as a theoretically proven global optimization approach, exhibits consistency, as it guarantees convergence to the global minimum for twice-continuously differentiable constrained nonlinear programming problems, but can benefit from computationally related enhancements. On the other hand, the independent CSA algorithm is highly efficient, though the method lacks theoretical guarantees of convergence. Furthermore, both the αBB method and the CSA method are found to identify ensembles of low-energy conformers, an important feature for determining the true free energy minimum of the system. The proposed hybrid methods combine the desirable features of efficiency and consistency, thus enabling the accurate prediction of the structures of larger peptides. Computational studies for met-enkephalin and melittin, employing sequential and parallel computing frameworks, demonstrate the promise for these proposed hybrid methods. PMID:12547770
A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm
NASA Astrophysics Data System (ADS)
Mohanty, Prases K.; Parhi, Dayal R.
2014-12-01
Planning of the shortest/optimal route is essential for efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First we frame an objective function that satisfies the conditions of obstacle avoidance and target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.
Fast Optimal Load Balancing Algorithms for 1D Partitioning
Pinar, Ali; Aykanat, Cevdet
2002-12-09
One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community in the "hope" of good decompositions and under the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms, which are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with efficient implementations discussed in this paper can effectively replace heuristics.
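One classic exact scheme for chains-on-chains partitioning, bisection over the bottleneck value with a greedy feasibility probe, can be sketched as follows. It illustrates the point that exact algorithms are short to implement; it is not necessarily one of the paper's proposed algorithms, and the task weights are invented.

```python
def probe(w, k, cap):
    """Greedily check whether the chain w can be split into at most k
    consecutive parts, each with total weight <= cap."""
    parts, load = 1, 0
    for x in w:
        if x > cap:
            return False            # a single task already exceeds the cap
        if load + x > cap:
            parts, load = parts + 1, x   # start a new part
        else:
            load += x
    return parts <= k

def optimal_bottleneck(w, k):
    """Smallest achievable maximum part weight for splitting the chain w
    into k consecutive parts, found by bisecting over candidate caps.
    Feasibility is monotone in cap, so binary search is exact."""
    lo, hi = max(w), sum(w)
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(w, k, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# 8 task weights split across 3 processors; best max load is 14,
# e.g. [3,1,4,1,5] | [9,2] | [6].
bottleneck = optimal_bottleneck([3, 1, 4, 1, 5, 9, 2, 6], 3)  # → 14
```

Each probe is a single linear scan, so the whole search costs O(n log(sum w)) for integer weights, which is consistent with the paper's claim that exactness need not cost much preprocessing time.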
Attitude determination using vector observations: A fast optimal matrix algorithm
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
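The TRIAD method mentioned at the end of the abstract admits a compact implementation: build an orthonormal triad from the two observations in each frame and compose the rotation between them. A minimal sketch with exact (noise-free) measurements and no covariance estimate, using an invented 90-degree test rotation:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def unit(a):
    """Normalize a 3-vector."""
    n = sum(x * x for x in a) ** 0.5
    return tuple(x / n for x in a)

def triad(b1, b2, r1, r2):
    """TRIAD attitude determination: build orthonormal triads from two
    vector observations (body frame b, reference frame r) and return the
    rotation matrix A with b = A r. The first observation is trusted
    exactly; the second only fixes the rotation about it."""
    t1b, t1r = unit(b1), unit(r1)
    t2b, t2r = unit(cross(b1, b2)), unit(cross(r1, r2))
    t3b, t3r = cross(t1b, t2b), cross(t1r, t2r)
    # A = [t1b t2b t3b] [t1r t2r t3r]^T  (triads as columns)
    cols_b, cols_r = (t1b, t2b, t3b), (t1r, t2r, t3r)
    return [[sum(cb[i] * cr[j] for cb, cr in zip(cols_b, cols_r))
             for j in range(3)] for i in range(3)]

# Hypothetical test: a 90-degree rotation about z, so that the reference
# x axis maps to the body y axis.
A = triad(b1=(0, 1, 0), b2=(-1, 0, 0), r1=(1, 0, 0), r2=(0, 1, 0))
```

Because TRIAD discards the component of the second observation along the first, it does not generally minimize Wahba's loss function; the abstract's point is precisely to identify the two-observation cases where the two estimates coincide.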
Environmental Optimization Using the WAste Reduction Algorithm (WAR)
Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...
Hybrid methods using genetic algorithms for global optimization.
Renders, J M; Flasse, S P
1996-01-01
This paper discusses the trade-off between accuracy, reliability and computing time in global optimization. Particular compromises provided by traditional methods (Quasi-Newton and Nelder-Mead's simplex methods) and genetic algorithms are addressed and illustrated by a particular application in the field of nonlinear system identification. Subsequently, new hybrid methods are designed, combining principles from genetic algorithms and "hill-climbing" methods in order to find a better compromise to the trade-off. Inspired by biology and especially by the manner in which living beings adapt themselves to their environment, these hybrid methods involve two interwoven levels of optimization, namely evolution (genetic algorithms) and individual learning (Quasi-Newton), which cooperate in a global process of optimization. One of these hybrid methods appears to join the group of state-of-the-art global optimization methods: it combines the reliability properties of the genetic algorithms with the accuracy of Quasi-Newton method, while requiring a computation time only slightly higher than the latter. PMID:18263027
Numerical Optimization Algorithms and Software for Systems Biology
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Optimal brushless DC motor design using genetic algorithms
NASA Astrophysics Data System (ADS)
Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.
2010-11-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.
Optimization algorithms for large-scale multireservoir hydropower systems
Hiew, K.L.
1987-01-01
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria, including accuracy of results, rate of convergence, smoothness of the resulting storage and release trajectories, computer time and memory requirements, robustness, and other pertinent secondary considerations. Results show that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. The computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
Sensitivity Analysis and Optimization Algorithm-Based on Nonlinear Programming-
NASA Astrophysics Data System (ADS)
Oda, Masayoshi; Yamagami, Yoshihiro; Kawata, Junji; Nishio, Yoshifumi; Ushida, Akio
We propose a fully Spice-oriented design algorithm for op-amps that attains maximum gain under low power consumption and an assigned slew rate. Our optimization algorithm is based on the well-known steepest descent method combined with nonlinear programming. The algorithm is realized by equivalent RC circuits with ABMs (analog behavior models) in Spice. The gradient direction is decided by the analysis of sensitivity circuits. The optimum parameters can be found at the equilibrium point of the transient response of the RC circuit. Although the optimization is much faster than with other design tools, the results may be rough because of the simple transistor models. If better parameter values are required, they can be refined with a Spice simulator and/or other tools.
RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA relative to the benchmark improves further.
A genetic algorithm approach in interface and surface structure optimization
Zhang, Jian
2010-05-16
The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are studied: Si[001] symmetric tilted grain boundaries and Ag/Au-induced Si(111) surfaces. The genetic algorithm is found to be very efficient at finding lowest-energy structures in both cases: not only can structures observed in experiments be reproduced, but many new structures can be predicted. The genetic algorithm is thus shown to be an extremely powerful tool for predicting material structures. The second part of the thesis is devoted to explaining an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results at first seemed surprising, yet the theoretical models in the paper reveal the physical insight behind the phenomena and reproduce the experimental results well.
An improved particle swarm optimization algorithm for reliability problems.
Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven
2011-01-01
An improved particle swarm optimization (IPSO) algorithm is proposed in this paper to solve reliability problems. The IPSO uses two position-updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO applies a mutation operator after position updating, which not only prevents the IPSO from becoming trapped in a local optimum but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than four other particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the best-known solutions previously reported in the recent literature. PMID:20850737
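The two-phase position update and post-update mutation described above can be sketched as follows; the particle representation, phase split, probabilities, and bounds are illustrative assumptions, not values from the paper.

```python
import random

def ipso_step(swarm, gbest, t, t_max, p_follow=0.8, p_mut=0.05,
              w=0.7, c=1.5, lo=-10.0, hi=10.0):
    """One illustrative IPSO iteration. Each particle is a dict with
    position 'x', velocity 'v', and personal best 'pbest'."""
    early = t < t_max // 2  # assumed split between early and late phases
    for p in swarm:
        for i in range(len(p["x"])):
            # With large probability p_follow, follow the particle's own best
            # early in the run and the most successful particle (gbest) late.
            follow_own = early if random.random() < p_follow else (not early)
            guide = p["pbest"][i] if follow_own else gbest[i]
            p["v"][i] = w * p["v"][i] + c * random.random() * (guide - p["x"][i])
            p["x"][i] = min(hi, max(lo, p["x"][i] + p["v"][i]))
            # Mutation after position updating, to escape local optima.
            if random.random() < p_mut:
                p["x"][i] = random.uniform(lo, hi)
    return swarm
```

A full optimizer would wrap this step in a loop that re-evaluates fitness and refreshes `pbest` and `gbest` after each iteration.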
Optimal reservoir operation policies using novel nested algorithms
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse" that prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", denoting an exponential growth of computational complexity with the dimension of the state-decision space. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation with multiple objectives related to 1) reservoir releases satisfying several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably fine discretization of the reservoir storage volume, which, combined with the release discretization required to meet the demands of downstream users, leads to computationally expensive formulations and invokes the curse of dimensionality. We present a novel approach, named "nested", implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). Each nested algorithm is composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on how the objective function for allocation deficits is formulated in the nested optimization, two methods are implemented: 1) the simplex method for linear allocation problems, and 2) the quadratic knapsack method for nonlinear problems.
The novel idea is to include the nested optimization algorithm in the state transition, which lowers the dimension of the starting problem and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing complexity or computational expense, can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested on the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives including urban water supply, agriculture, ensuring ecological flow, and hydropower generation. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policies were then tested on 10 years (1995-2005) of historical data and compared with nDP optimal reservoir operation over the same period. The nested algorithms and optimal reservoir operation results are analysed and explained.
OPTIMIZE-M. Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Loehle, C.
1997-07-01
An algorithm for nonlinear optimization using a derivative-free, grid-refinement approach was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches; most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass, and it is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
Modeling IrisCode and its variants as convex polyhedral cones and its security implications.
Kong, Adams Wai-Kin
2013-03-01
IrisCode, developed by Daugman in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential because over 100 million persons have been enrolled by this algorithm, and many biometric personal identification and template protection methods have been developed based on it. This paper shows that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone and does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays of their convex polyhedral cones, and that templates protected by a method extended from IrisCode can be broken into. These results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can break into protected templates and reveal relationships among templates produced by different recognition methods. PMID:23193454
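To give a flavour of the "one command line" claim: one plausible reading (an assumption on our part, not the paper's exact formula) is that the central ray is the sum of the encoding filters signed by the template bits, which is a single matrix-vector product in Matlab.

```python
def central_ray(filters, bits):
    """Hypothetical central-ray sketch: sum the filters, each signed by
    its template bit (+1 for a 1-bit, -1 for a 0-bit). 'filters' is a
    list of real-valued filter vectors; 'bits' is the binary template.
    In Matlab this would be roughly A' * (2*b - 1)."""
    n = len(filters[0])
    ray = [0.0] * n
    for f, b in zip(filters, bits):
        s = 1.0 if b else -1.0
        for i in range(n):
            ray[i] += s * f[i]
    return ray
```

The resulting vector lies inside the cone of all signals producing the same template, which is why it serves as a rough reconstruction of the enrolled signal.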
Global structural optimizations of surface systems with a genetic algorithm
Chuang, Feng-Chuan
2005-05-01
Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems, including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms, and simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces; the lowest-energy structures of Si(105) and Si(114) were determined successfully and are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. Optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chain (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.
Penalty adapting ant algorithm: application to pipe network optimization
NASA Astrophysics Data System (ADS)
Afshar, M. H.
2008-10-01
A penalty-adapting ant algorithm is presented in an attempt to eliminate the dependency of ant algorithms on the penalty parameter used in the solution of constrained optimization problems. The method uses an adaptive mechanism to determine the penalty parameter, eliminating the costly process of penalty parameter tuning. It is built on the observation that for large penalty parameters, infeasible solutions have a higher total cost than feasible solutions, and vice versa. The method therefore uses the best feasible and infeasible solution costs of each iteration to adaptively adjust the penalty parameter used in the next iteration. The pheromone updating procedure of the max-min ant system is also modified to keep ants on and around the boundary of the feasible search space, where quality solutions are found. The sensitivity of the proposed method to the initial value of the penalty parameter is investigated; the results indicate that the method converges to optimal or near-optimal solutions irrespective of the starting value. This is significant because it eliminates the need for a sensitivity analysis of the method with respect to the penalty factor, adding to the computational efficiency of ant algorithms. Furthermore, the success rate of the search algorithm in locating an optimal solution is shown to increase when the self-adapting mechanism is used. The method is applied to a benchmark pipe network optimization problem from the literature, and the results are presented and compared with those of existing algorithms.
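The adaptation rule follows directly from the stated observation; a minimal sketch, in which the growth and shrink factors are illustrative assumptions rather than values from the paper:

```python
def adapt_penalty(penalty, best_feasible, best_infeasible,
                  grow=2.0, shrink=0.5):
    """Illustrative adaptive-penalty update. If the best infeasible
    (penalized) cost already exceeds the best feasible cost, the penalty
    is larger than necessary and can be relaxed; otherwise it must grow
    so that infeasible solutions stop looking attractive. This keeps the
    search near the boundary of the feasible region."""
    if best_feasible is None or best_infeasible is None:
        return penalty  # not enough information this iteration
    if best_infeasible > best_feasible:
        return penalty * shrink
    return penalty * grow
```

In the ant algorithm, this update would be applied once per iteration, after evaluating the colony's best feasible and best infeasible solutions.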
Research on Optimization of Encoding Algorithm of PDF417 Barcodes
NASA Astrophysics Data System (ADS)
Sun, Ming; Fu, Longsheng; Han, Shuqing
The purpose of this research is to develop software in VC++ 6.0 that optimizes the data compression of PDF417 barcodes. Taking into account the different compression modes and the particularities of Chinese characters, approaches that optimize the encoding algorithm, such as handling spillage and Chinese character encoding, are proposed, and a simple approach to computing the complex polynomial is introduced. After the whole data compression is finished, the number of codewords is reduced and the encoding algorithm is thereby optimized. The developed PDF417 encoding system will be applied to the logistics management of fruit and should also promote the rapid development of two-dimensional barcodes.
Application of chaos optimization algorithm in the micro spectrometer
NASA Astrophysics Data System (ADS)
Xiong, Yu Hong; Xu, Shao Ping; Lv, Xiao Lan; Jiang, Shun Liang; Ye, Fa Mao; Zhou, Shi Lin
2010-10-01
In optical spectrum analysis, selecting proper wavelength data points to construct the analysis and calibration model is an effective way to overcome the negative influences of factors such as instruments, personnel, and impurities in the measurement of substances, and to improve the analytical precision of a micro spectrometer system. This is particularly the case with multiple components, where the interactions between components make careful wavelength selection all the more necessary in constructing the model. After an overview of the basic theory of chaos optimization algorithms, the paper discusses their application to spectral wavelength selection and puts forward a wavelength selection method based on parallel binary chaos optimization. Finally, the method is illustrated with examples using computer simulation.
Optimization of circuits using a constructive learning algorithm
Beiu, V.
1997-05-01
The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, the constructive learning algorithm builds circuits belonging to the smallest F_{n,m} class of functions (n inputs and m groups of ones in the truth table). The constructive proofs showing how arbitrary Boolean functions can be implemented by this algorithm are briefly enumerated. An interesting aspect is that the algorithm can generate both classical Boolean circuits and threshold gate circuits (i.e., analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The sizes and areas of the different circuits are compared (special cost functions can be used to more closely estimate the area and delay of VLSI implementations). Conclusions and further directions of research close the paper.
Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2001-01-01
A design optimization method for turbopumps of cryogenic rocket engines has been developed. A multiobjective evolutionary algorithm (MOEA) is used for the multiobjective pump design optimizations. The performance of each design candidate is evaluated using a meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the approach, single-stage centrifugal pump and multistage pump design optimizations are presented. In both cases, the method obtains very reasonable Pareto-optimal solutions, including some designs that outperform the original design in total head while reducing input power by one percent. Detailed examination of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
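The meanline model rests on the Euler turbine equation, which for an ideal pump gives the head from the blade speeds U and tangential flow velocities Cu at the rotor inlet (station 1) and outlet (station 2); the correction for rotor efficiency mentioned in the abstract is not included in this sketch.

```python
def euler_head(U2, Cu2, U1, Cu1, g=9.81):
    """Ideal pump head from the Euler turbine equation:
    H = (U2*Cu2 - U1*Cu1) / g, with speeds in m/s and g in m/s^2."""
    return (U2 * Cu2 - U1 * Cu1) / g
```

For a purely radial inflow (Cu1 = 0) the equation reduces to H = U2*Cu2/g, the usual starting point for centrifugal pump sizing.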
Optimization of image processing algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Poudel, Pramod; Shirvaikar, Mukul
2011-03-01
This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of an asymmetric dual-core processor, comprising an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND memory and 256 MB of SDRAM. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained from the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
STP: A Stochastic Tunneling Algorithm for Global Optimization
Oblow, E.M.
1999-05-20
A stochastic approach to solving continuous-function global optimization problems is presented. It builds on the tunneling approach to deterministic optimization presented by Barhen et al. by combining a series of local descents with stochastic searches. The method uses a rejection-based stochastic procedure to locate the descent regions of new local minima and a fixed Lipschitz-like constant to reject unpromising regions in the search space, thereby increasing the efficiency of the tunneling process. The algorithm is easily implemented in low-dimensional problems and scales to large problems, although without further heuristics it is less effective in these latter cases. Several improvements to the basic algorithm, which use approximate estimates of the algorithm's parameters for implementation in high-dimensional problems, are also discussed. Benchmark results show that the algorithm is competitive with the best previously reported global optimization techniques. A successful application of the approach, using a low-dimensional approximation scheme, to a large-scale seismology problem of substantial computational complexity is also reported.
Optimization in Quaternion Dynamic Systems: Gradient, Hessian, and Learning Algorithms.
Xu, Dongpo; Xia, Yili; Mandic, Danilo P
2016-02-01
The optimization of real scalar functions of quaternion variables, such as the mean square error or array output power, underpins many practical applications. Solutions typically require the calculation of the gradient and Hessian. However, real functions of quaternion variables are essentially nonanalytic, which has been prohibitive to the development of quaternion-valued learning systems. To address this issue, we propose new definitions of the quaternion gradient and Hessian based on the novel generalized Hamilton-real (GHR) calculus, making possible the efficient derivation of general optimization algorithms directly in the quaternion field, rather than through the isomorphism with the real domain, as is current practice. In addition, unlike existing quaternion gradients, the GHR calculus admits the product and chain rules, and gives a one-to-one correspondence of the novel quaternion gradient and Hessian with their real counterparts. Properties of the quaternion gradient and Hessian relevant to numerical applications are also introduced, opening a new avenue of research in quaternion optimization and greatly simplifying the derivation of learning algorithms. The proposed GHR calculus is shown to yield the same generic algorithm forms as the corresponding real- and complex-valued algorithms. Advantages of the proposed framework are illustrated through simulations in quaternion signal processing and neural networks. PMID:26087504
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms
Garro, Beatriz A.; Vázquez, Roberto A.
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multiobjective optimal design for turbomachinery using evolutionary algorithms. The work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project, working on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system coupling a GA with a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic and structural optimization as a multiobjective optimization problem and performed multidisciplinary, multiobjective optimizations of a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce blade weight and increase the stage pressure ratio in an efficient manner; in addition, the new design was structurally safer than the original. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Optimization of warfarin dose by population-specific pharmacogenomic algorithm.
Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K
2012-08-01
To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model, with the vitamin K intake and the cytochrome P450 2C9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51; warfarin resistant: 0.96 vs 0.77; warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) than clinical data, and it significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). In conclusion, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose. PMID:21358752
Nonconvex compressed sensing by nature-inspired optimization algorithms.
Liu, Fang; Lin, Leping; Jiao, Licheng; Li, Lingling; Yang, Shuyuan; Hou, Biao; Ma, Hongmei; Yang, Li; Xu, Jinghuan
2015-05-01
The l0-regularized problem in compressed sensing reconstruction is nonconvex with NP-hard computational complexity. Available methods fall into two types, greedy pursuit and thresholding, both characterized by suboptimal fast search strategies. Nature-inspired algorithms for combinatorial optimization are known for their efficient global search strategies and superior performance on nonconvex and nonlinear problems. In this paper, we study and propose nonconvex compressed sensing for natural images using nature-inspired optimization algorithms. We obtain measurements by block-based compressed sampling and introduce an overcomplete ridgelet dictionary for image blocks. An atom of this dictionary is identified by direction, scale and shift parameters; among these, the direction parameter is important for adapting to directional regularity. We therefore propose a two-stage reconstruction scheme (TS_RS) based on nature-inspired optimization algorithms. In the first reconstruction stage, a genetic algorithm designed for classes of image blocks acquires an estimate of the atomic combinations in all directions; in the second stage, a clonal selection algorithm searches each image block's sub-dictionary resulting from the first stage for better atomic combinations, further refining the scale and shift parameters. In TS_RS, to reduce the uncertainty and instability of the reconstruction problems, we adopt novel and flexible heuristic search strategies, including carefully designed initialization, operators and evaluation methods. The experimental results show the efficiency and stability of the proposed TS_RS, which outperforms classic greedy and thresholding methods. PMID:25148677
Optimization of an antenna array using genetic algorithms
Kiehbadroudinezhad, Shahideh; Noordin, Nor Kamariah; Sali, A.; Abidin, Zamri Zainal
2014-06-01
An array of antennas is usually used in long-distance communication. Observing celestial objects necessitates a large array of antennas, such as the Giant Metrewave Radio Telescope (GMRT). Optimizing this kind of array is very important for obtaining a high-performance system. The genetic algorithm (GA) is an optimization approach for such problems that reconfigures the positions of the antennas to increase the coverage of the u-v plane or decrease the sidelobe levels (SLLs). This paper presents how to optimize a correlator antenna array using the GA. A brief explanation of the GA and the operators used in this paper (mutation and crossover) is provided, and the optimization results are then discussed. The results show that the GA finds efficient, optimal solutions among a pool of candidates, achieving the desired array performance for the purposes of radio astronomy. The proposed algorithm distributes the u-v plane more efficiently than GMRT, with a more than 95% distribution ratio at snapshot, and fills the u-v plane from a 20% filling ratio to more than 68% as the number of generations increases in hour-tracking observations. Finally, the algorithm reduces the SLL to -21.75 dB.
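A minimal GA of the kind described, with real-valued antenna positions, one-point crossover and uniform mutation, might look like the following sketch; the fitness function (e.g. a u-v coverage or negative-SLL score), the truncation selection scheme and all parameter values are illustrative assumptions.

```python
import random

def genetic_optimize(fitness, n_antennas, pop_size=20, gens=50,
                     span=100.0, p_mut=0.1):
    """Toy GA over antenna position vectors. 'fitness' maps a list of
    n_antennas positions in [0, span] to a score to be maximized."""
    pop = [[random.uniform(0.0, span) for _ in range(n_antennas)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_antennas)
            child = a[:cut] + b[cut:]          # one-point crossover
            for i in range(n_antennas):        # uniform mutation
                if random.random() < p_mut:
                    child[i] = random.uniform(0.0, span)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

A real array optimizer would replace the toy fitness with a u-v-plane filling metric and add geometric constraints (minimum antenna spacing, terrain), but the evolutionary loop is the same.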
Using Heuristic Algorithms to Optimize Observing Target Sequences
NASA Astrophysics Data System (ADS)
Sosnowska, D.; Ouadahi, A.; Buchschacher, N.; Weber, L.; Pepe, F.
2014-05-01
The preparation of observations is normally carried out at the telescope by the visiting observer. To help the observer, we propose several algorithms that automatically optimize the sequence of targets. The optimization consists of ensuring that all the chosen targets are observable within the given time interval, and of finding the execution order that is best in terms of observation quality and shortest telescope displacement time. Since an exhaustive search is too expensive in time, we investigated heuristic algorithms, specifically Min-Conflict, Non-Sorting Genetic Algorithms and Simulated Annealing. Multiple metaheuristics are run in parallel to quickly produce an approximation of the best solution, with all constraints satisfied and the total execution time minimized. The optimization process takes on the order of tens of seconds, allowing quick re-adaptation in case of changing atmospheric conditions. A graphical user interface lets the user control the parameters of the optimization process, so the search can be adjusted in real time. The module was coded to allow new constraints to be added easily, ensuring its compatibility with different instruments. At present, the application runs as a plug-in to the observation preparation tool New Short Term Scheduler, which is used on three spectrographs dedicated to the search for exoplanets: HARPS at the La Silla observatory, HARPS-North at the La Palma observatory and SOPHIE at the Observatoire de Haute-Provence.
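Of the metaheuristics listed, simulated annealing is the simplest to sketch for sequence optimization; here the cost function (standing in for total slew time plus quality penalties), the swap neighborhood and the cooling schedule are all placeholders, not details from the paper.

```python
import math
import random

def anneal_order(targets, cost, t0=1.0, cooling=0.995, steps=5000):
    """Simulated-annealing sketch for ordering observing targets.
    'cost' scores a full order (e.g. total telescope displacement time);
    neighbors are generated by swapping two targets."""
    order = targets[:]
    best = order[:]
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(order)
        # Accept improvements always; accept worsenings with a
        # temperature-dependent probability (Metropolis criterion).
        if delta < 0 or random.random() < math.exp(-delta / t):
            order = cand
            if cost(order) < cost(best):
                best = order[:]
        t *= cooling
    return best
```

Observability windows would enter either as hard filters on the candidate orders or as large cost penalties; running several such searches in parallel with different seeds mirrors the multi-metaheuristic setup described above.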
Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization
Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong
2014-01-01
This paper presents a novel optimization algorithm, hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, higher-level species are aggregated from the subpopulations at the lower level. At the bottom level, each subpopulation employing the canonical ABC method searches in parallel for the optimum over part of the dimensions, and these partial solutions are assembled into complete solutions for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most of the chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances of different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness. PMID:24592200
Preliminary flight evaluation of an engine performance optimization algorithm
NASA Technical Reports Server (NTRS)
Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.
1991-01-01
A performance-seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128-engined F-15; this algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption, (2) minimum fan-turbine inlet temperature (FTIT), and (3) maximum thrust. The flight test results have verified a thrust-specific fuel consumption reduction of 1 percent, up to 100 R decreases in FTIT, and increases of as much as 12 percent in maximum thrust. PSC technology promises to be of value in next-generation tactical and transport aircraft.
Optimizing the controllability of arbitrary networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Li, Xin-Feng; Lu, Zhe-Ming
2016-04-01
Recently, as the controllability of complex networks has attracted much attention, how to optimize a network's controllability has become a common and urgent problem. In this paper, we develop an efficient genetic-algorithm-based optimization tool to optimize the controllability of arbitrary networks consisting of both state nodes and control nodes under the Popov-Belevitch-Hautus rank condition. The experimental results on a number of benchmark networks show the effectiveness of this method, and the evolution of the network topology is captured. Furthermore, we explore how network structure affects controllability and find that the sparser a network is, the more control nodes are needed to control it, and the larger the differences between node degrees, the more control nodes are needed to achieve full control. Our framework provides an alternative to controllability optimization and can be applied to arbitrary networks without any limitations.
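The Popov-Belevitch-Hautus rank condition mentioned above is directly checkable: the pair (A, B) is controllable iff rank([λI − A, B]) = n for every eigenvalue λ of A. A minimal numerical check (function name is ours):

```python
import numpy as np

def is_controllable_pbh(A, B, tol=1e-9):
    """PBH rank test: (A, B) is controllable iff
    rank([lambda*I - A, B]) == n for every eigenvalue lambda of A."""
    A = np.asarray(A, dtype=complex)
    B = np.asarray(B, dtype=complex)
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol) < n:
            return False  # an uncontrollable mode exists at this eigenvalue
    return True
```

A GA-based optimizer like the one described would use such a test (or a rank deficiency count) inside its fitness function.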
Fuel management optimization using genetic algorithms and code independence
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
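The three operators described above can be sketched in a minimal GA over bit strings; the tournament selection, one-point crossover, and parameter values are generic textbook choices, not the authors'.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=30, generations=60,
                      p_mut=0.05, seed=0):
    """Minimal GA sketch: tournament selection, one-point crossover,
    and per-bit flip mutation over a population of bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]

    def select():  # "survival of the fittest": binary tournament
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_genes)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            # mutation: flip each bit with small probability
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

For loading-pattern optimization the chromosome would instead encode a fuel assembly permutation, with crossover/mutation operators adapted to keep the encoding valid.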
Parallel evolutionary algorithms for optimization problems in aerospace engineering
NASA Astrophysics Data System (ADS)
Wang, J. F.; Periaux, J.; Sefrioui, M.
2002-12-01
This paper presents the recent developments in hierarchical genetic algorithms (HGAs) to speed up the optimization of aerodynamic shapes. It first introduces HGAs, a particular instance of parallel GAs based on the notion of interconnected sub-populations evolving independently. Previous studies have shown the advantages of introducing a multi-layered hierarchical topology in parallel GAs. Such a topology allows the use of multiple models for optimization problems, and shows that it is possible to mix fast low-fidelity models for exploration and expensive high-fidelity models for exploitation. Finally, a new class of multi-objective optimizers mixing HGAs and Nash Game Theory is defined. These methods are tested for solving design optimization problems in aerodynamics. A parallel version of this approach running on a cluster of PCs demonstrates the convergence speed-up on an inverse nozzle problem and a high-lift problem for a multi-element airfoil.
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.
Reaction Path Optimization without NEB Springs or Interpolation Algorithms.
Plessow, P
2013-03-12
This letter describes a chain-of-states method that optimizes reaction paths under the sole constraint of equally spaced structures. In contrast to NEB and string methods, it requires no spring forces, interpolation algorithms, or other heuristics to control structure distribution. Rigorous use of a quadratic PES allows calculation of an optimization step with a predefined distribution in Cartesian space. The method is a formal extension of single-structure quasi-Newton methods. An initial guess can be evolved, as in the growing string method. PMID:26587592
Designing a competent simple genetic algorithm for search and optimization
NASA Astrophysics Data System (ADS)
Reed, Patrick; Minsker, Barbara; Goldberg, David E.
2000-12-01
Simple genetic algorithms have been used to solve many water resources problems, but specifying the parameters that control how adaptive search is performed can be a difficult and time-consuming trial-and-error process. However, theoretical relationships for population sizing and timescale analysis have been developed that can provide pragmatic tools for vastly limiting the number of parameter combinations that must be considered. The purpose of this technical note is to summarize these relationships for the water resources community and to illustrate their practical utility in a long-term groundwater monitoring design application. These relationships, which model the effects of the primary operators of a simple genetic algorithm (selection, recombination, and mutation), provide a highly efficient method for ensuring convergence to near-optimal or optimal solutions. Application of the method to a monitoring design test case identified robust parameter values using only three trial runs.
Optimization of 2D median filtering algorithm for VLIW architecture
NASA Astrophysics Data System (ADS)
Choo, Chang Y.; Tang, Ming
1999-12-01
Recently, several commercial DSP processors with VLIW (Very Long Instruction Word) architectures were introduced. VLIW architectures offer high performance over a wide range of multimedia applications that require parallel processing. In this paper, we implement an efficient 2D median filter for a VLIW architecture, particularly the Texas Instruments C62x. The median filter is widely used for filtering impulse noise while preserving edges in still images and video. Efficient median filtering requires fast sorting. The sorting algorithms were optimized using software pipelining and loop unrolling to maximize the use of the available functional units while meeting the data dependency constraints. The paper describes and lists the optimized source code for the 3 x 3 median filter using an enhanced selection sort algorithm.
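The enhanced-selection-sort idea can be illustrated in plain Python: for a 9-element window only the first five selection passes are needed to place the median. This reference sketch shows the algorithm's logic, not the pipelined C62x assembly the paper lists.

```python
def median9(window):
    """Median of 9 values via partial selection sort: only the first five
    selection passes are needed to place the median (index 4)."""
    w = list(window)
    for i in range(5):                 # stop once the median slot is fixed
        m = i
        for j in range(i + 1, 9):
            if w[j] < w[m]:
                m = j
        w[i], w[m] = w[m], w[i]
    return w[4]

def median_filter_3x3(img):
    """Apply a 3x3 median filter to the interior of a 2D list-of-lists
    image; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median9(img[yy][xx]
                                for yy in range(y - 1, y + 2)
                                for xx in range(x - 1, x + 2))
    return out
```

On a VLIW target, the inner comparison loops are what software pipelining and unrolling would parallelize across functional units.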
Parallel Algorithms for Graph Optimization using Tree Decompositions
Weerapurage, Dinesh P; Sullivan, Blair D; Groer, Christopher S
2013-01-01
Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of required dynamic programming tables and excessive running times of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree-decomposition based approach to solve maximum weighted independent set. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
Quantum algorithm for molecular properties and geometry optimization.
Kassal, Ivan; Aspuru-Guzik, Alán
2009-12-14
Quantum computers, if available, could substantially accelerate quantum simulations. We extend this result to show that the computation of molecular properties (energy derivatives) could also be sped up using quantum computers. We provide a quantum algorithm for the numerical evaluation of molecular properties, whose time cost is a constant multiple of the time needed to compute the molecular energy, regardless of the size of the system. Molecular properties computed with the proposed approach could also be used for the optimization of molecular geometries or other properties. For that purpose, we discuss the benefits of quantum techniques for Newton's method and Householder methods. Finally, global minima for the proposed optimizations can be found using the quantum basin hopper algorithm, which offers an additional quadratic reduction in cost over classical multi-start techniques. PMID:20001019
Genetic Algorithm Application in Optimization of Wireless Sensor Networks
Norouzi, Ali; Zaim, A. Halim
2014-01-01
There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Some notable parameters are network lifetime and the energy consumption of routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively good option thanks to its efficiency in large-scale applications and the fact that the final formula can be modified by operators. The present survey tries to exert a comprehensive improvement in all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, and to achieve an ideal set of parameters for routing and application-based WSN. Using a genetic algorithm and based on the results of simulations in NS, a specific fitness function was achieved, optimized, and customized for all the operational stages of WSNs. PMID:24693235
Parallel Algorithms for Graph Optimization using Tree Decompositions
Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S
2012-06-01
Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
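The dynamic program at the heart of this approach is easiest to see in the tree-width-1 special case, maximum weighted independent set on a tree, where each node's "table" has just two entries (node included or excluded). A sketch with hypothetical names:

```python
import sys
sys.setrecursionlimit(10000)

def mwis_tree(adj, weight, root=0):
    """Maximum weighted independent set on a tree: the tree-width-1
    special case of the tree-decomposition DP.  adj maps each node to
    its neighbor list; weight[v] is the node weight."""
    def dp(v, parent):
        incl, excl = weight[v], 0
        for u in adj[v]:
            if u == parent:
                continue
            ci, ce = dp(u, v)
            incl += ce              # v included => children must be excluded
            excl += max(ci, ce)     # v excluded => children choose freely
        return incl, excl

    return max(dp(root, -1))
```

On general bounded-tree-width graphs the tables grow to one entry per subset of a bag, which is exactly the memory pressure that motivates the parallel implementation described above.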
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to solve problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems, finding the minimal-value potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. PMID:25779670
Optimizing phase-estimation algorithms for diamond spin magnetometry
NASA Astrophysics Data System (ADS)
Nusran, N. M.; Dutt, M. V. Gurudev
2014-07-01
We present a detailed theoretical and numerical study discussing the application and optimization of phase-estimation algorithms (PEAs) to diamond spin magnetometry. We compare standard Ramsey magnetometry, the nonadaptive PEA (NAPEA), and quantum PEA (QPEA) incorporating error checking. Our results show that the NAPEA requires lower measurement fidelity, has better dynamic range, and greater consistency in sensitivity. We elucidate the importance of dynamic range to Ramsey magnetic imaging with diamond spins, and introduce the application of PEAs to time-dependent magnetometry.
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs the high performance-price ratio DSP TMS320C6712 and a large FLASH, the system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The result reveals that this hardware easily interfaces with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework
Alicia Hofler, Pavel Evtushenko, Frank Marhauser
2009-09-01
Automation of DC photoinjector designs using genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimum and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation of these two methods for generating field profiles for SRF/RF guns in a GA based injector optimization scheme and provide preliminary results.
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are often order-sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be incrementally developed and automatically ordered for efficiency. Although data-flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequence clause in a knowledge base to optimally order the rules to minimize inference cycles. The algorithm optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
Threshold matrix for digital halftoning by genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero
1998-10-01
Digital halftoning is used in both low and high resolution high quality printing technologies. Our method is designed mainly for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this, the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of a genetic algorithm with a search method based on local backtracking was developed, together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work focused on developing low resolution marking technology, the resulting family of dot generators can also be applied in other halftoning application areas, including high resolution printing technology.
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach to optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve with direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different optimization objectives and different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in the single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in the framework of an augmented Lagrangian.
Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref. 5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
Fast branch & bound algorithms for optimal feature selection.
Somol, Petr; Pudil, Pavel; Kittler, Josef
2004-07-01
A novel search principle for optimal feature subset selection using the Branch & Bound method is introduced. Thanks to a simple mechanism for predicting criterion values, a considerable amount of time can be saved by avoiding many slow criterion evaluations. We propose two implementations of the prediction mechanism that are suitable for use with nonrecursive and recursive criterion forms, respectively. Both algorithms find the optimum usually several times faster than any other known Branch & Bound algorithm. As computational efficiency is crucial, due to the exponential nature of the search problem, we also investigate other factors that affect the search performance of all Branch & Bound algorithms. Using a set of synthetic criteria, we show that the speed of Branch & Bound algorithms strongly depends on the diversity among features, feature stability with respect to different subsets, and the criterion function's dependence on feature set size. We identify the scenarios where the search is accelerated most dramatically (finishing in linear time), as well as the worst conditions. We verify our conclusions experimentally on three real data sets using traditional probabilistic distance criteria. PMID:18579948
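The basic Branch & Bound scheme these algorithms build on can be sketched as follows. It assumes a monotonic criterion J (removing a feature never increases J), which is what makes pruning sound; the names and the removal-order enumeration are illustrative, and the paper's prediction mechanism is not modeled here.

```python
def bb_feature_selection(J, n_features, target_size):
    """Basic Branch & Bound for optimal feature subset selection.
    Assumes J is monotonic: J(subset) <= J(superset).  Features are
    removed one at a time (in increasing index order, to avoid revisiting
    subsets); a branch is pruned when J of the current, still-too-large
    set already falls below the best complete subset found, since J can
    only shrink as more features are removed."""
    best = {'value': -float('inf'), 'subset': None}

    def search(subset, removable):
        if len(subset) == target_size:
            v = J(subset)
            if v > best['value']:
                best['value'], best['subset'] = v, set(subset)
            return
        if J(subset) <= best['value']:   # bound: monotonicity => prune
            return
        for f in removable:
            rest = [g for g in removable if g > f]
            search(subset - {f}, rest)

    full = set(range(n_features))
    search(full, sorted(full))
    return best['subset'], best['value']
```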
A novel Retinex algorithm based on alternating direction optimization
NASA Astrophysics Data System (ADS)
Fu, Xueyang; Lin, Qin; Guo, Wei; Huang, Yue; Zeng, Delu; Ding, Xinghao
2013-10-01
The goal of the Retinex theory is to remove the effects of illumination from observed images. To address this typical ill-posed inverse problem, many existing Retinex algorithms obtain an enhanced image by making different assumptions either on the illumination or on the reflectance. One significant limitation of these Retinex algorithms is that if the assumption is false, the result is unsatisfactory. In this paper, we first build a Retinex model that includes two variables: the illumination and the reflectance. We propose an efficient and effective algorithm based on alternating direction optimization to solve this problem, where the FFT (Fast Fourier Transform) is used to speed up the computation. Compared with most existing Retinex algorithms, the proposed method solves for the illumination and reflectance images without converting images to the logarithmic domain. One advantage is that, unlike other traditional Retinex algorithms, our method can simultaneously estimate the illumination image and the reflectance image, the latter of which is the ideal image without the illumination effect. Since our method directly separates the illumination and the reflectance, and the two variables constrain each other mutually in the computing process, the result is robust to some degree. Another advantage is that our method has less computational cost and can be applied to real-time processing.
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on that modality. A novel radiator device and detection technique are proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which will be reported in the ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature.
However, due to the relatively high level of complexity and randomness inherent in the selection of electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
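For reference, the canonical global-best PSO that such modifications start from can be sketched as below; the coefficients are the common textbook constriction-style defaults, not the dissertation's modified variant.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0):
    """Canonical global-best PSO sketch: velocity = inertia + cognitive
    (personal best) + social (global best) terms."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49        # common default coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```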
Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm
Svečko, Rajko
2014-01-01
This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng
2015-01-01
This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the phase space reconstruction parameters (τ, m) and the least squares support vector machine (LS-SVM) parameters (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
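The phase-space reconstruction step whose parameters (τ, m) the membrane algorithm tunes is the standard delay embedding; a minimal sketch (illustrative, not the paper's code):

```python
def delay_embed(series, tau, m):
    """Build phase-space vectors x_t = (s_t, s_{t+tau}, ..., s_{t+(m-1)tau})
    from a scalar time series, with delay tau and embedding dimension m."""
    n = len(series) - (m - 1) * tau   # number of complete embedding vectors
    return [[series[t + j * tau] for j in range(m)] for t in range(n)]
```

The embedded vectors then serve as inputs to the regression model (here an LS-SVM) that predicts the next sample.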
A Convex Geometry-Based Blind Source Separation Method for Separating Nonnegative Sources.
Yang, Zuyuan; Xiang, Yong; Rong, Yue; Xie, Kan
2015-08-01
This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Considering these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. Based on these, an algorithm is presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelatedness assumption. Compared with the BSS methods that are specifically designed to distinguish between nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results are provided to illustrate the performance of our method. PMID:25203999
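The column-sum-to-one normalization used as the first step can be sketched as follows (an illustrative fragment, assuming a plain nonnegative matrix stored as a list of rows):

```python
def normalize_columns(X):
    """Scale each column of a nonnegative matrix (list of rows) so that
    every column sums to one, mapping the data onto a simplex."""
    n_rows, n_cols = len(X), len(X[0])
    sums = [sum(X[i][j] for i in range(n_rows)) for j in range(n_cols)]
    return [[X[i][j] / sums[j] for j in range(n_cols)] for i in range(n_rows)]
```

After this mapping, the observations live in a simplex whose facets can be searched for the zero-samples the method relies on.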
A heterogeneous algorithm for PDT dose optimization for prostate
NASA Astrophysics Data System (ADS)
Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S. B.; Hahn, Stephen M.
2009-02-01
The object of this study is to develop optimization procedures that account for both the optical heterogeneity and the photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of a uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). The PS distribution is then multiplied by the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDFs). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities is feasible for the heterogeneous prostate during PDT.
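The Cimmino algorithm referred to above treats a system of linear inequality constraints (here, the dose bounds) by projecting the current iterate onto every violated half-space simultaneously and averaging the projections. A minimal single-iteration sketch for constraints A x <= b, with illustrative names and uniform weights:

```python
def cimmino_step(A, b, x, weights=None):
    """One Cimmino iteration for the feasibility problem A x <= b:
    project x onto each violated half-space and take the weighted average."""
    m = len(A)
    w = weights or [1.0 / m] * m          # convex weights, one per constraint
    new_x = x[:]
    for i in range(m):
        ai, bi = A[i], b[i]
        resid = sum(a * xi for a, xi in zip(ai, x)) - bi
        if resid > 0:                     # constraint i is violated
            norm2 = sum(a * a for a in ai)
            for j in range(len(x)):
                # move toward the orthogonal projection onto {x : ai.x <= bi}
                new_x[j] -= w[i] * resid * ai[j] / norm2
    return new_x
```

Because every step is a weighted average of projections, the iteration is inherently parallel and converges even when the constraint system is inconsistent, which is why it suits dose-bound searches.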
Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems
O'Leary, Dianne P.; Tits, Andre
2014-04-03
Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.
ERIC Educational Resources Information Center
Hodge, Jonathan K.; Marshall, Emily; Patterson, Geoff
2010-01-01
Convexity-based measures of shape compactness provide an effective way to identify irregularities in congressional district boundaries. A low convexity coefficient may suggest that a district has been gerrymandered, or it may simply reflect irregularities in the corresponding state boundary. Furthermore, the distribution of population within a…
An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel
NASA Astrophysics Data System (ADS)
Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.
2015-01-01
This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. The four machining parameters optimized using the APSO algorithm are pulse on-time, pulse off-time, gap voltage, and wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationships between control factors and output responses are established with linear regression models using Minitab software. Finally, APSO, a nature-inspired metaheuristic technique, is used to optimize the WEDM machining parameters for higher material removal rate and lower kerf width with surface roughness as a constraint. The confirmation experiments carried out with the optimum conditions show that the proposed algorithm is capable of finding numerous optimal input machining parameters which can fulfill the wide-ranging requirements of a process engineer working in the WEDM industry.
Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm
NASA Technical Reports Server (NTRS)
Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.
2001-01-01
Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system-level and multiple subsystems-level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternatively, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials exactly matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to match only a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO.
In previous work, parallel processing was applied to the subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that skipping this step gives a large increase in the efficiency of the DMSO algorithm.
Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas
2010-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
Algorithms for optimally setting Wisdom & Sense threshold parameters
Richards, W.; Helman, P. (Dept. of Computer Science)
1993-01-01
Wisdom & Sense is an anomaly detection system developed and implemented at Los Alamos National Laboratory. In this report we present several algorithms for addressing threshold setting problems in W&S. We consider three different versions of the problems and propose several solutions for each. Our main result is an O(number of anomalies) algorithm for finding an optimal two-dimensional threshold setting, that is, an optimal pair (T[sub 1], T[sub 2]) such that a transaction is flagged if its score vector's maximum component is at least T[sub 1] or if its inner product with a weight vector exceeds T[sub 2]. We also present simpler solutions for both this and one-dimensional versions of the problem, as well as an approximation algorithm that can be used on extremely large problem instances. Future work will present heuristics for a k-dimensional version of the threshold setting problem, a problem which we have demonstrated is NP-hard.
Algorithms for optimally setting Wisdom & Sense threshold parameters
Richards, W.; Helman, P.
1993-03-01
Wisdom & Sense is an anomaly detection system developed and implemented at Los Alamos National Laboratory. In this report we present several algorithms for addressing threshold setting problems in W&S. We consider three different versions of the problems and propose several solutions for each. Our main result is an O(number of anomalies) algorithm for finding an optimal two-dimensional threshold setting, that is, an optimal pair (T{sub 1}, T{sub 2}) such that a transaction is flagged if its score vector's maximum component is at least T{sub 1} or if its inner product with a weight vector exceeds T{sub 2}. We also present simpler solutions for both this and one-dimensional versions of the problem, as well as an approximation algorithm that can be used on extremely large problem instances. Future work will present heuristics for a k-dimensional version of the threshold setting problem, a problem which we have demonstrated is NP-hard.
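The two-dimensional flagging rule whose thresholds the report optimizes can be written down directly (an illustrative sketch; names are not taken from the report):

```python
def flag(score_vector, weight_vector, t1, t2):
    """Flag a transaction if its score vector's maximum component is at
    least T1, or if its inner product with the weight vector exceeds T2."""
    weighted = sum(s * w for s, w in zip(score_vector, weight_vector))
    return max(score_vector) >= t1 or weighted > t2
```

The optimization problem is then to pick (T1, T2) so that this rule flags the anomalies while minimizing false alarms, which the report shows can be done in time linear in the number of anomalies.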
Optimal design of link systems using successive zooming genetic algorithm
NASA Astrophysics Data System (ADS)
Kwon, Young-Doo; Sohn, Chang-hyun; Kwon, Soon-Bum; Lim, Jae-gyoo
2009-07-01
Link systems have been around for a long time and are still used to control motion in diverse applications such as automobiles, robots and industrial machinery. This study presents a procedure involving the use of a genetic algorithm for the optimal design of single four-bar link systems and a double four-bar link system used in a diesel engine. We adopted the Successive Zooming Genetic Algorithm (SZGA), which has one of the most rapid convergence rates among global search algorithms. The results are verified by experiment and by the RecurDyn dynamic motion analysis package. During the optimal design of the single four-bar link systems, we found, in the case of identical input/output (I/O) angles, that the initial and final configurations show a certain symmetry. For the double link system, we introduced weighting factors for the multi-objective functions, which minimize the difference between output angles, providing balanced engine performance, as well as the difference between the final output angle and the desired magnitude of the final output angle. We adopted a graphical method to select a proper ratio between the weighting factors.
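The zooming step that gives the SZGA its name successively contracts the search domain around the current best candidate before re-running the GA. A one-dimensional illustrative sketch (the names and contraction factor are assumptions, not taken from the paper):

```python
def successive_zoom(best, bounds, alpha=0.5):
    """Shrink the search interval around the current best point by a
    factor alpha, clipped to the original bounds (SZGA domain-reduction step)."""
    lo, hi = bounds
    half = alpha * (hi - lo) / 2.0
    return (max(lo, best - half), min(hi, best + half))
```

Repeating GA runs on these progressively smaller intervals is what yields the rapid convergence the abstract refers to.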
Library design using genetic algorithms for catalyst discovery and optimization
NASA Astrophysics Data System (ADS)
Clerc, Frederic; Lengliz, Mourad; Farrusseng, David; Mirodatos, Claude; Pereira, Sílvia R. M.; Rakotomalala, Ricco
2005-06-01
This study reports a detailed investigation of catalyst library design by genetic algorithm (GA). A methodology for assessing GA configurations is described. Operators which promote the optimization speed while being robust to noise and outliers are identified through statistical studies. The genetic algorithms were implemented in GA platform software called OptiCat, which enables the construction of custom-made workflows using a toolbox of operators. Two separate studies were carried out (i) on a virtual benchmark and (ii) on a real surface response derived from HT screening. Additionally, we report a methodology for modeling a complex surface response by binning the search space into small zones that are then independently modeled by linear regression. In contrast to artificial neural networks, this approach allows one to obtain an explicit model in an analytical form that can be further used in Excel or entered in OptiCat to perform simulations. Finally, the implementation of a hybrid algorithm combining a GA with a knowledge-based extraction engine is described; while speeding up the optimization process by means of virtual prescreening, the hybrid GA enables one to open the "black box" by providing knowledge as a set of association rules.
Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Sen, S. K.
2007-01-01
Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms, as well as randomized methods such as genetic algorithms, may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable, at least from the quality (accuracy) point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, so there are no serious differences in this regard. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is best suited for the problem at hand. Also, for a function which is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of several available methods, should be employed to obtain the best (least error) output.
ABCluster: the artificial bee colony algorithm for cluster global optimization.
Zhang, Jun; Dolg, Michael
2015-10-01
Global optimization of cluster geometries is of fundamental importance in chemistry and an interesting problem in applied mathematics. In this work, we introduce a relatively new swarm intelligence algorithm, i.e. the artificial bee colony (ABC) algorithm proposed in 2005, to this field. It is inspired by the foraging behavior of a bee colony, and only three parameters are needed to control it. We applied it to several potential functions of quite different nature, i.e., the Coulomb-Born-Mayer, Lennard-Jones, Morse, Z and Gupta potentials. The benchmarks reveal that for long-ranged potentials the ABC algorithm is very efficient in locating the global minimum, while for short-ranged ones it is sometimes trapped into a local minimum funnel on a potential energy surface of large clusters. We have released an efficient, user-friendly, and free program "ABCluster" to realize the ABC algorithm. It is a black-box program for non-experts as well as experts and might become a useful tool for chemists to study clusters. PMID:26327507
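A simplified version of the ABC loop described above (employed bees, fitness-proportional onlookers, and scouts that replace sources exhausted after `limit` failed improvements) might look as follows. This is an illustrative sketch on a generic objective, not the released ABCluster code; all names and defaults are assumptions.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, iters=200, limit=20):
    """Simplified artificial bee colony minimization over a box domain."""
    lo, hi = bounds
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food                  # failed-improvement counters

    def try_neighbor(i):
        k = random.choice([j for j in range(n_food) if j != i])
        d = random.randrange(dim)
        cand = foods[i][:]
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        v = f(cand)
        if v < vals[i]:
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):            # employed bees: one trial per source
            try_neighbor(i)
        fitness = [1.0 / (1.0 + v) for v in vals]   # assumes v >= 0 here
        total = sum(fitness)
        for _ in range(n_food):            # onlookers: fitness-proportional picks
            r, acc, i = random.uniform(0, total), 0.0, 0
            for j, fit in enumerate(fitness):
                acc += fit
                if acc >= r:
                    i = j
                    break
            try_neighbor(i)
        for i in range(n_food):            # scouts replace exhausted sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(foods[i]), 0
    best = min(range(n_food), key=lambda i: vals[i])
    return foods[best], vals[best]
```

The three control parameters the abstract mentions correspond here to the colony size (`n_food`), the abandonment `limit`, and the cycle count (`iters`).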
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
Constrained genetic algorithms for optimizing multi-use reservoir operation
NASA Astrophysics Data System (ADS)
Chang, Li-Chiu; Chang, Fi-John; Wang, Kuo-Wei; Dai, Shin-Yi
2010-08-01
To derive an optimal strategy for reservoir operations to assist the decision-making process, we propose a methodology that incorporates the constrained genetic algorithm (CGA) where the ecological base flow requirements are considered as constraints to water release of reservoir operation when optimizing the 10-day reservoir storage. Furthermore, a number of penalty functions designed for different types of constraints are integrated into reservoir operational objectives to form the fitness function. To validate the applicability of this proposed methodology for reservoir operations, the Shih-Men Reservoir and its downstream water demands are used as a case study. By implementing the proposed CGA in optimizing the operational performance of the Shih-Men Reservoir for the last 20 years, we find this method provides much better performance in terms of a small generalized shortage index (GSI) for human water demands and greater ecological base flows for most of the years than historical operations do. We demonstrate the CGA approach can significantly improve the efficiency and effectiveness of water supply capability to both human and ecological base flow requirements and thus optimize reservoir operations for multiple water users. The CGA can be a powerful tool in searching for the optimal strategy for multi-use reservoir operations in water resources management.
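The penalty-function construction described above, folding constraint violations into the fitness, can be sketched generically (illustrative names; the actual CGA uses several penalty functions tailored to each constraint type):

```python
def penalized_fitness(objective, constraints, penalty=1e3):
    """Build a GA fitness function (to be minimized) by adding penalty terms
    for constraint violations. Each constraint g returns its violation
    amount, i.e. 0.0 when the constraint is satisfied."""
    def fitness(x):
        violation = sum(g(x) for g in constraints)
        return objective(x) + penalty * violation
    return fitness
```

With a constraint such as an ecological base flow requirement, `g` would return the shortfall below the required release, so infeasible schedules are steered out of the population without being hard-rejected.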
Genetic algorithm optimized triply compensated pulses in NMR spectroscopy
NASA Astrophysics Data System (ADS)
Manu, V. S.; Veglia, Gianluigi
2015-11-01
Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (of both the external and RF fields), errors in pulse calibration, and offset effects due to the finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), a global optimization method inspired by Nature's evolutionary processes. The newly designed π and π/2 pulses belong to the 'type A' (or general rotor) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as for refocusing pulses in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U-13C, 15N NAVL peptide as well as U-13C, 15N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences.
Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization
HART,WILLIAM E.
2000-06-01
The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
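The success-driven step-size adaptation that EPSAs apply to the mutation operator can be illustrated with a deterministic, coordinate-wise toy version (this is not the analyzed algorithm; the names and grow/shrink factors are assumptions):

```python
def adaptive_step_search(f, x0, step=1.0, iters=200, grow=2.0, shrink=0.5):
    """Toy pattern-style search: probe +/- step along each coordinate,
    accept the first improvement; expand the step after a successful sweep,
    contract it after a failed one (the adaptation EPSAs rely on)."""
    x, fx = x0[:], f(x0)
    for _ in range(iters):
        improved = False
        for d in range(len(x)):
            for sign in (1.0, -1.0):
                cand = x[:]
                cand[d] += sign * step
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
                    break
        step = step * grow if improved else step * shrink
    return x, fx
```

The convergence theory hinges on exactly this coupling: steps shrink only when no pattern direction improves, which forces the iterates toward a stationary point.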
A partially inexact bundle method for convex semi-infinite minmax problems
NASA Astrophysics Data System (ADS)
Fuduli, Antonio; Gaudioso, Manlio; Giallombardo, Giovanni; Miglionico, Giovanna
2015-04-01
We present a bundle method for solving convex semi-infinite minmax problems which allows inexact solution of the inner maximization. The method is of the partially inexact oracle type, and it is aimed at reducing the occurrence of null steps and at improving bundle handling with respect to existing methods. Termination of the algorithm is proved at a point satisfying an approximate optimality criterion, and the results of some numerical experiments are also reported.
Lens design and optimization using multi-objective evolutionary algorithms
NASA Astrophysics Data System (ADS)
Joseph, Shaine
The Non-Dominated Sorting Genetic Algorithm 2 (NSGA 2) was used to optimize optical systems with multiple objectives. The systems selected for study are Cooke triplets, a Petzval lens and a double Gauss lens. The objectives are minimization of the aberration coefficients for spherical aberration and distortion, and of the sum of the coefficients of all third-order monochromatic aberrations. CODE V™ was used as a ray tracer. A set of trade-off solutions representing the optima, known as Pareto-optima in multi-objective analysis, was obtained. A comparison of the obtained optima to the known optima was made. Pareto-optima in objective space for the selected Petzval lens design problem are shown to exhibit saddle points having unique trade-off features, which cannot be detected in traditional gradient-based scalar optimization. Various optimization strategies are illustrated which ensure a diverse set of Pareto-optima offering alternate manufacturing choices. Based on the results, a fourth objective was identified (the sum of the lateral and axial color coefficients) that is necessary to make valid trade-off decisions. The expansion of objectives followed by re-optimization provided unique trade-off solutions. Based on the power and symmetry distribution of the component elements of the Cooke triplet system, addition and deletion of elements were carried out. The fourth objective added for that study is the minimization of the required number of elements. For the double Gauss lens system, the Pareto-optimal surface indicated alternate manufacturing choices. There is a clear diversity of the Pareto-optimal front in both objective and decision vector space. These studies have clearly illustrated the advantages of evolutionary multi-objective optimization techniques in optical system design.
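The first step of non-dominated sorting, extracting the Pareto front from a set of objective vectors, can be sketched as follows (illustrative; NSGA 2 additionally ranks the remaining fronts and computes crowding distances):

```python
def pareto_front(points):
    """Return the non-dominated points, assuming minimization in every
    objective; this set is the first front in non-dominated sorting."""
    def dominates(a, b):
        # a dominates b: no worse in all objectives, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Each point on this front is one trade-off design, e.g. one lens prescription trading spherical aberration against distortion, which is what gives the designer the alternate manufacturing choices the abstract describes.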
Robust Optimization Design Algorithm for High-Frequency TWTs
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Chevalier, Christine T.
2010-01-01
Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance falls substantially short of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
Athans, M. (editor); Willsky, A. S. (editor)
1982-01-01
The analysis and design of complex multivariable reliable control systems are considered. High performance and fault tolerant aircraft systems are the objectives. A preliminary feasibility study of the design of a lateral control system for a VTOL aircraft that is to land on a DD963 class destroyer under high sea state conditions is provided. Progress in the following areas is summarized: (1) VTOL control system design studies; (2) robust multivariable control system synthesis; (3) adaptive control systems; (4) failure detection algorithms; and (5) fault tolerant optimal control theory.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2014-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and, later, on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized for meeting the performance requirements at the lowest cost.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2015-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized for meeting the performance requirements at the lowest cost.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning the parameters of nonlinear dynamical systems, such that the attained results are considered good ones, is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.
Population Induced Instabilities in Genetic Algorithms for Constrained Optimization
NASA Astrophysics Data System (ADS)
Vlachos, D. S.; Parousis-Orthodoxou, K. J.
2013-02-01
Evolutionary computation techniques such as genetic algorithms have received much attention as optimization methods but, although they exhibit very promising potential, they have not produced a significant breakthrough in the systematic treatment of constraints. There are two main ways of handling constraints: the first is to compute an infeasibility measure and add it to the general cost function (the well-known penalty methods); the other is to modify the mutation and crossover operators so that they produce only feasible members. Both methods have drawbacks and are strongly tied to the problem to which they are applied. In this work, we propose a different treatment of constraints: we induce instabilities in the evolving population so that infeasible solutions cannot survive as they are. Preliminary results are presented on a set of constrained optimization problems that are well known from the literature.
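The first constraint-handling route mentioned above, the penalty method, can be sketched in a few lines (hedged: a toy problem with illustrative constants, not the paper's benchmarks). The constrained cost is replaced by the raw objective plus a weighted infeasibility measure, so infeasible candidates lose in selection even when their raw objective is lower.

```python
# Toy problem: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1,
# i.e. g(x) = 1 - x0 - x1 <= 0.

def objective(x):
    return x[0] ** 2 + x[1] ** 2

def infeasibility(x):
    # amount by which g(x) <= 0 is violated; zero when feasible
    return max(0.0, 1.0 - x[0] - x[1])

def penalized_cost(x, penalty=1000.0):
    # exterior quadratic penalty: feasible points keep their true cost,
    # infeasible ones are pushed away by the weighted violation
    return objective(x) + penalty * infeasibility(x) ** 2

feasible = (0.5, 0.5)    # satisfies x0 + x1 >= 1; cost stays 0.5
infeasible = (0.1, 0.1)  # raw objective is lower (0.02) but heavily penalized
```

The choice of the penalty weight is exactly the problem-dependence the abstract criticizes: too small and infeasible solutions win, too large and the search stalls at the constraint boundary.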
NASA Astrophysics Data System (ADS)
Yavari, S.; Zoej, M. J. V.; Mokhtarzade, M.; Mohammadzadeh, A.
2012-07-01
Rational Function Models (RFMs) are among the most widely used approaches for extracting spatial information from satellite images, especially when the sensor parameters are not accessible. Because the terms of an RFM have no physical meaning, the conventional solution involves all terms in the computation, which causes over-parameterization errors. In this paper, optimization algorithms, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are investigated to determine the optimal terms of the RFM. Since the optimization reduces the number of required RFM terms, the possibility of using fewer Ground Control Points (GCPs) in the solution than the conventional method is also examined. The results show that both GA and PSO are able to determine the optimal terms of the RFM and achieve roughly the same accuracy; however, PSO proves more effective in terms of computational time. The other important finding is that the algorithms can solve the RFM using fewer GCPs with higher accuracy than the conventional RFM.
Genetic Algorithm Optimization of Artificial Neural Networks for Hydrological Modelling
NASA Astrophysics Data System (ADS)
Abrahart, R. J.
2004-05-01
This paper will consider the case for genetic algorithm optimization in the development of an artificial neural network model. It will provide a methodological evaluation of reported investigations with respect to hydrological forecasting and prediction. The intention in such operations is to develop a superior modelling solution that will be: more accurate in terms of output precision and model estimation skill; more tractable in terms of personal requirements and end-user control; and/or more robust in terms of conceptual and mechanical power with respect to adverse conditions. The genetic algorithm optimization toolbox could be used to perform a number of specific roles or purposes, and it is the harmonious and supportive relationship between neural networks and genetic algorithms that will be highlighted and assessed. There are several neural network mechanisms and procedures that could be enhanced, and potential benefits are possible at different stages in the design and construction of an operational hydrological model, e.g. division of inputs; identification of structure; initialization of connection weights; calibration of connection weights; breeding operations between successful models; and output fusion associated with the development of ensemble solutions. Each set of opportunities will be discussed and evaluated. Two strategic questions will also be considered: [i] should optimization be conducted as a set of small individual procedures or as one large holistic operation; [ii] what specific function or set of weighted vectors should be optimized in a complex software product, e.g. timings, volumes, or quintessential hydrological attributes related to the 'problem situation' - which might require the development of flood forecasting, drought estimation, or record infilling applications.
The paper will conclude with a consideration of hydrological forecasting solutions developed on the combined methodologies of co-operative co-evolution and operational specialization. The standard approach to neuro-evolution is at the network level such that a population of working solutions is manipulated until the fittest member is found. SANE (Symbiotic Adaptive Neuro-Evolution) source code [1] offers an alternative method based on co-operative co-evolution in which a population of hidden neurons is evolved. The task of each hidden neuron is to establish appropriate connections that will provide: [i] a functional solution and [ii] performance improvements. Each member of the population attempts to optimize one particular aspect of the overall modelling process, and evolution can lead to several different forms of specialization. This method of adaptive evolution also facilitates the creation of symbiotic relationships in which individual members must co-operate with others - who must be present - to permit survival. [1] http://www.cs.utexas.edu/users/nn/pages/software/abstracts.html#sane-c
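One of the roles listed above, calibration of connection weights by a GA, can be sketched as a minimal neuroevolution loop (hedged: a generic elitist GA on a toy 2-2-1 tanh network fitting XOR, with illustrative population and mutation settings, not any model from the paper).

```python
import math
import random

random.seed(0)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_W = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def mse(w):  # calibration fitness: mean squared error over the training set
    return sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

def evolve(pop_size=60, gens=200):
    pop = [[random.uniform(-2, 2) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child[random.randrange(N_W)] += random.gauss(0, 0.3)  # point mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=mse)

best = evolve()
```

Because the parents survive each generation, the best error is monotone non-increasing; a constant-output network scores 0.25 on XOR, so anything below that shows the GA has found genuinely nonlinear weights.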
Genetic Algorithm (GA)-Based Inclinometer Layout Optimization
Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo
2015-01-01
This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
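The L-curve trade-off mentioned above can be made concrete on a toy Tikhonov-regularized least-squares problem (hedged: a diagonal operator with made-up coefficients standing in for the lens-plus-blurring operator). As the regularization weight grows, the residual norm rises while the solution norm falls; the two norms traced against each other form the L-curve.

```python
import math

# Toy diagonal operator A = diag(a) and data b; the small entries of a
# mimic poorly constrained (blurred) modes.
a = [1.0, 0.5, 0.1, 0.01]
b = [1.0, 1.0, 1.0, 1.0]

def tikhonov(lam):
    # closed-form minimizer of ||A x - b||^2 + lam * ||x||^2 for diagonal A
    x = [ai * bi / (ai * ai + lam) for ai, bi in zip(a, b)]
    residual = math.sqrt(sum((ai * xi - bi) ** 2 for ai, xi, bi in zip(a, x, b)))
    norm = math.sqrt(sum(xi * xi for xi in x))
    return residual, norm

# one point on the L-curve per regularization weight
curve = [tikhonov(lam) for lam in (1e-4, 1e-2, 1.0)]
```

Scanning lam and picking the "corner" of this curve is one automatic way to choose the optimally regularized solution for a given lens parameter set.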
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than PSO. TLBO is thus preferable where accuracy is more essential than convergence speed.
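TLBO's two phases can be sketched compactly (hedged: a generic TLBO on a sphere test function standing in for the IIR parameter-error surface; population and generation counts are illustrative). Note that besides population size and generation count there is nothing to tune, which is the "algorithm-specific parameter-less" property the abstract highlights.

```python
import random

random.seed(1)
DIM, POP, GENS = 4, 20, 60

def cost(x):  # sphere function stands in for the filter parameter error
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]

def accept(old, new):  # greedy acceptance used in both phases
    return new if cost(new) < cost(old) else old

for _ in range(GENS):
    # teacher phase: everyone moves toward the best learner, away from the mean
    teacher = min(pop, key=cost)
    mean = [sum(x[d] for x in pop) / POP for d in range(DIM)]
    for i, x in enumerate(pop):
        tf = random.choice((1, 2))  # teaching factor, randomly 1 or 2
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d]) for d in range(DIM)]
        pop[i] = accept(x, cand)
    # learner phase: each learner interacts with a random peer
    for i, x in enumerate(pop):
        j = random.randrange(POP)
        if j == i:
            continue
        sign = 1 if cost(pop[j]) < cost(x) else -1
        cand = [x[d] + random.random() * sign * (pop[j][d] - x[d]) for d in range(DIM)]
        pop[i] = accept(x, cand)

best = min(pop, key=cost)
```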
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient, lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass. A mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer; thus the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis) and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator, and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two design parameter schemes.
This tool easily allows the user to manipulate the driving temperature regions thus permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.
D'Helon, CD
2004-08-18
The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be set as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique, called Craziness based Particle Swarm Optimization (CRPSO), is proposed. CRPSO is very simple in concept, easy to implement and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may suffer from premature convergence and stagnation. To overcome these problems, the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, this sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO-based design results are also compared with PSPICE-based results. The simulation results show that the CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
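The craziness modification can be sketched on top of a standard PSO velocity update (hedged: a generic implementation on a sphere test function, with assumed values for the craziness probability and craziness velocity magnitude; the paper's CMOS objective and exact update are not reproduced).

```python
import random

random.seed(2)
DIM, POP, ITERS = 2, 15, 80
W, C1, C2 = 0.7, 1.5, 1.5          # standard PSO inertia and pull coefficients
P_CR, V_CR = 0.3, 0.5              # craziness probability / magnitude (assumed)

def cost(x):  # sphere function stands in for the inverter design objective
    return sum(v * v for v in x)

def sign():
    return 1 if random.random() < 0.5 else -1  # direction reversal factor

X = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
V = [[0.0] * DIM for _ in range(POP)]
pbest = [x[:] for x in X]
gbest = min(X, key=cost)[:]
init_best = cost(gbest)

for _ in range(ITERS):
    for i in range(POP):
        for d in range(DIM):
            V[i][d] = (W * V[i][d]
                       + C1 * random.random() * (pbest[i][d] - X[i][d])
                       + C2 * random.random() * (gbest[d] - X[i][d]))
            if random.random() < P_CR:
                # craziness: reverse direction and add a random velocity kick
                V[i][d] = sign() * V[i][d] + sign() * V_CR * random.random()
            X[i][d] += V[i][d]
        if cost(X[i]) < cost(pbest[i]):
            pbest[i] = X[i][:]
            if cost(X[i]) < cost(gbest):
                gbest = X[i][:]
```

The craziness kick keeps some particles exploring even after the swarm has largely converged, which is the diversity-maintenance mechanism the abstract describes.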
Optimal sliding guidance algorithm for Mars powered descent phase
NASA Astrophysics Data System (ADS)
Wibben, Daniel R.; Furfaro, Roberto
2016-02-01
Landing on large planetary bodies (e.g. Mars) with pinpoint accuracy presents a set of new challenges that must be addressed. One such challenge is the development of new guidance algorithms that exhibit a higher degree of robustness and flexibility. In this paper, the Zero-Effort-Miss/Zero-Effort-Velocity (ZEM/ZEV) optimal sliding guidance (OSG) scheme is applied to the Mars powered descent phase. This guidance algorithm has been specifically designed to combine techniques from both optimal and sliding control theories to generate an acceleration command based purely on the current estimated spacecraft state and desired final target state. Consequently, OSG yields closed-loop trajectories that do not need a reference trajectory. The guidance algorithm has its roots in the generalized ZEM/ZEV feedback guidance and its mathematical equations are naturally derived by defining a non-linear sliding surface as a function of the terms Zero-Effort-Miss and Zero-Effort-Velocity. With the addition of the sliding mode and using Lyapunov theory for non-autonomous systems, one can formally prove that the developed OSG law is globally finite-time stable to unknown but bounded perturbations. Here, the focus is on comparing the generalized ZEM/ZEV feedback guidance with the OSG law to explicitly demonstrate the benefits of the sliding mode augmentation. Results show that the sliding guidance provides a more robust solution in off-nominal scenarios while providing similar fuel consumption when compared to the non-sliding guidance command. Further, a Monte Carlo analysis is performed to examine the performance of the OSG law under perturbed conditions.
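The underlying ZEM/ZEV feedback law (before the sliding-mode augmentation) is simple enough to sketch in one dimension (hedged: illustrative numbers, not the paper's Mars scenario, and no sliding term). ZEM and ZEV are the predicted miss in position and velocity if thrust were cut now; the commanded acceleration is a fixed combination of the two scaled by time-to-go.

```python
G = -3.71              # Mars surface gravity, m/s^2
DT = 0.01              # integration step, s
r, v = 1500.0, -75.0   # current altitude (m) and vertical speed (m/s)
r_f, v_f = 0.0, 0.0    # soft-landing targets
t_go = 50.0            # fixed total flight time for this sketch, s

while t_go > DT:
    # zero-effort miss / zero-effort velocity under gravity alone
    zem = r_f - (r + v * t_go + 0.5 * G * t_go ** 2)
    zev = v_f - (v + G * t_go)
    # ZEM/ZEV guidance acceleration command
    a = 6.0 * zem / t_go ** 2 - 2.0 * zev / t_go
    v += (a + G) * DT   # explicit Euler integration of the closed loop
    r += v * DT
    t_go -= DT
```

Because the command is recomputed from the current state at every step, no reference trajectory is needed, which is the closed-loop property the abstract emphasizes.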
Verified convex hull and distance computation for octree-encoded objects
NASA Astrophysics Data System (ADS)
Dyllong, Eva; Luther, Wolfram
2007-02-01
This paper discusses algorithms for computing verified convex hull and distance enclosure for objects represented by axis-aligned or unaligned octrees. To find a convex enclosure of an octree, the concept of extreme vertices of boxes on its boundary has been used. The convex hull of all extreme vertices yields an enclosure of the object. Thus, distance algorithms for convex polyhedra to obtain lower bounds for the distance between two octrees can be applied. Since using convex hulls makes it possible to avoid the unwanted wrapping effect that results from repeated decompositions, it also opens a way to dynamic distance algorithms for moving objects.
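The idea of bounding the distance between two octree-encoded objects from below via convex enclosures can be illustrated with the simplest such enclosure, an axis-aligned box (hedged: boxes stand in for the extreme-vertex hulls; the paper's verified interval arithmetic is not reproduced).

```python
def box_distance(lo1, hi1, lo2, hi2):
    # Euclidean distance between two axis-aligned boxes given by their
    # min/max corners; zero when they overlap. Since each box encloses
    # its object, this is a lower bound on the object-to-object distance.
    d2 = 0.0
    for a, b, c, d in zip(lo1, hi1, lo2, hi2):
        gap = max(c - b, a - d, 0.0)  # per-axis gap between the intervals
        d2 += gap * gap
    return d2 ** 0.5

# two unit boxes separated by 3 along x and 4 along y -> distance 5
print(box_distance((0, 0, 0), (1, 1, 1), (4, 5, 0), (5, 6, 1)))  # 5.0
```

The convex hull of the extreme vertices gives a much tighter enclosure than a box, but the lower-bound argument is the same.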
Parallel global optimization with the particle swarm algorithm.
Schutte, J F; Reinbolt, J A; Fregly, B J; Haftka, R T; George, A D
2004-12-01
Present day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima: large-scale analytical test problems with computationally cheap function evaluations, and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. PMID:17891226
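The synchronization effect described above can be sketched with back-of-envelope arithmetic (hedged: made-up timings, not the paper's measurements). In a synchronous parallel PSO, every iteration waits at a barrier for the slowest fitness evaluation, so a single slow node caps the achievable speedup.

```python
def iteration_time(eval_times_per_node):
    # synchronous barrier: the iteration lasts as long as the slowest node
    return max(eval_times_per_node)

def speedup_and_efficiency(serial_time, parallel_time, nodes):
    s = serial_time / parallel_time
    return s, s / nodes

# load-balanced: 4 nodes, each evaluation takes 1.0 s (serial does all 4)
balanced = iteration_time([1.0, 1.0, 1.0, 1.0])
print(speedup_and_efficiency(4.0, balanced, 4))    # (4.0, 1.0) -> ideal

# load-imbalanced: one 3.0 s evaluation drags the whole iteration
imbalanced = iteration_time([1.0, 1.0, 1.0, 3.0])
print(speedup_and_efficiency(6.0, imbalanced, 4))  # (2.0, 0.5)
```

An asynchronous implementation, as the authors suggest, removes the barrier so fast nodes keep evaluating while slow ones finish.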
Inner Random Restart Genetic Algorithm for Practical Delivery Schedule Optimization
NASA Astrophysics Data System (ADS)
Sakurai, Yoshitaka; Takada, Kouhei; Onoyama, Takashi; Tsukamoto, Natsuki; Tsuruta, Setsuo
A delivery route optimization that improves the efficiency of real-time delivery or a distribution network requires solving Traveling Salesman Problems (TSPs) of several tens to hundreds (but fewer than two thousand) cities within an interactive response time (about 3 seconds) and with expert-level accuracy (an error rate below about 3%). To make things more difficult, the optimization is subject to the special requirements or preferences of various delivery sites, persons, or societies. To meet these requirements, an Inner Random Restart Genetic Algorithm (Irr-GA) is proposed and developed. This method combines meta-heuristics such as random restart and a GA with different types of simple heuristics, namely the 2-opt and NI (Nearest Insertion) methods, each applied as a gene operation. The proposed method is hierarchically structured, integrating meta-heuristics and heuristics that are multiple but simple. The method is designed so that field experts as well as field engineers can easily understand it, making the solution easy to customize and extend according to customers' needs or tastes. Comparison based on the experimental results showed that the method meets the above requirements better than other methods, judging not only by optimality but also by the simplicity, flexibility, and expandability needed for practical use.
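One of the simple heuristics named above, 2-opt, repeatedly reverses a segment of the tour whenever the reversal shortens it. A standalone sketch on made-up coordinates (hedged: not the Irr-GA's gene-operation form, just the core move):

```python
import math

CITIES = [(0, 0), (3, 0), (3, 3), (0, 3), (1, 1), (2, 2)]  # toy delivery points

def length(tour):
    # total closed-tour length
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # reverse the segment tour[i..j]; keep only if shorter
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if length(cand) < length(tour):
                    tour, improved = cand, True
    return tour

start = [0, 2, 4, 1, 5, 3]  # a deliberately crossing tour
best = two_opt(start)
```

In the Irr-GA such a local-improvement step is applied to individual chromosomes, with random restarts guarding against the local optima 2-opt alone gets stuck in.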
Integrated network design and scheduling problems : optimization algorithms and applications.
Nurre, Sarah G.; Carlson, Jeffrey J.
2014-01-01
We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no extensive review of INDS problems has covered their complexity, network and scheduling characteristics, information, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We show that all considered INDS problems are NP-hard and propose a novel heuristic dispatching rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina; lower Manhattan, New York; and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule for arriving at near-optimal solutions during real-time decision making. We extend INDS problems to incorporate release dates, which represent the earliest an operation can be performed, and flexible release dates through the introduction of specialized machines that can perform work to move a release date earlier in time. An online optimization setting is also explored, in which the release date of a component is not known.
Experimental optimization of protein refolding with a genetic algorithm
Anselment, Bernd; Baerend, Danae; Mey, Elisabeth; Buchner, Johannes; Weuster-Botz, Dirk; Haslbeck, Martin
2010-01-01
Refolding of proteins from solubilized inclusion bodies still represents a major challenge for many recombinantly expressed proteins and often constitutes a severe bottleneck. As in vitro refolding is a complex reaction with a variety of critical parameters, suitable refolding conditions are typically derived empirically in extensive screening experiments. Here, we introduce a new strategy that combines screening and optimization of refolding yields with a genetic algorithm (GA). The experimental setup was designed to achieve a robust and universal method that should allow the folding of a variety of proteins to be optimized with the same routine procedure guided by the GA. In the screen, we incorporated a large number of common refolding additives and conditions. Using this design, the refolding of four structurally and functionally different model proteins was optimized experimentally, achieving 74-100% refolding yield for all of them. Interestingly, our results show that this new strategy provides optimum conditions not only for refolding but also for the activity of the native enzyme. It is designed to be generally applicable and appears suitable for any enzyme. PMID:20799347
Algebraic and algorithmic frameworks for optimized quantum measurements
NASA Astrophysics Data System (ADS)
Laghaout, Amine; Andersen, Ulrik L.
2015-10-01
von Neumann projections are the main operations by which information can be extracted from the quantum to the classical realm. They are, however, static processes that do not adapt to the states they measure. Advances in the field of adaptive measurement have shown that this limitation can be overcome by "wrapping" the von Neumann projectors in a higher-dimensional circuit which exploits the interplay between measurement outcomes and measurement settings. Unfortunately, the design of adaptive measurement has often been ad hoc and setup specific. We shall here develop a unified framework for designing optimized measurements. Our approach is twofold: The first is algebraic and formulates the problem of measurement as a simple matrix diagonalization problem. The second is algorithmic and models the optimal interaction between measurement outcomes and measurement settings as a cascaded network of conditional probabilities. Finally, we demonstrate that several figures of merit, such as Bell factors, can be improved by optimized measurements. This leads us to the promising observation that measurement detectors which, taken individually, have a low quantum efficiency can be arranged into circuits where, collectively, the limitations of inefficiency are compensated for.
A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis.
Raja, Chandrasekaran; Gangatharan, Narayanan
2015-08-01
Glaucoma is among the most common causes of permanent blindness in humans. Because the initial symptoms are not evident, mass screening would assist early diagnosis in the general population. Such mass screening requires an automated diagnosis technique. Our proposed automation consists of pre-processing, optimal wavelet transformation, feature extraction, and classification modules. Hyper analytic wavelet transformation (HWT) based statistical features are extracted from fundus images. Because HWT preserves phase information, it is appropriate for feature extraction. The features are then classified by a Support Vector Machine (SVM) with a radial basis function (RBF) kernel. The filter coefficients of the wavelet transformation process and the SVM-RBF kernel width parameter are simultaneously tailored to best fit the diagnosis by a hybrid Particle Swarm algorithm. To overcome premature convergence, the random searching (ranging) and area-scanning behavior (around the optima) of a Group Search Optimizer (GSO) are embedded within the Particle Swarm Optimization (PSO) framework. We also embed a novel potential-area scanning as a preventive mechanism against premature convergence, rather than a diagnose-and-cure mechanism. This embedding does not compromise the generality and utility of PSO. In two 10-fold cross-validated test runs, the diagnostic accuracy of the proposed hybrid PSO exceeded that of conventional PSO. Furthermore, the hybrid PSO maintained the ability to explore even at later iterations, ensuring maturity in fitness. PMID:26093787
The study of water supply network optimization based on the immune mechanism of ant colony algorithm
NASA Astrophysics Data System (ADS)
Wang, Zongjiang
2013-03-01
Because the ant colony algorithm suffers from long search times, stagnation, and a tendency to sink into local optima, an ant colony algorithm with an immune mechanism is proposed and applied to the combinatorial optimization problem of a typical water supply network. Using a water supply network as an example, the immune-mechanism ant colony algorithm is compared with a genetic algorithm and the basic ant colony algorithm. The results show that the improved algorithm reaches the global optimal solution more easily and with high efficiency, and that it outperforms the other traditional optimization methods in solving the water distribution network problem, demonstrating the effectiveness of the algorithm.
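The basic ant-colony mechanics underlying the abstract can be sketched on a minimal two-path graph (hedged: an illustrative toy, not the paper's water network or its immune operator). Ants choose a path with probability proportional to its pheromone; shorter paths receive larger deposits; evaporation forgets old choices, so the trail concentrates on the short path.

```python
import random

random.seed(3)

LENGTHS = {"A": 2.0, "B": 5.0}   # path A is shorter
tau = {"A": 1.0, "B": 1.0}       # initial pheromone levels
RHO, ANTS = 0.3, 20              # evaporation rate and colony size

for _ in range(30):
    deposits = {"A": 0.0, "B": 0.0}
    for _ in range(ANTS):
        p_a = tau["A"] / (tau["A"] + tau["B"])   # pheromone-proportional choice
        path = "A" if random.random() < p_a else "B"
        deposits[path] += 1.0 / LENGTHS[path]    # shorter path => bigger deposit
    for p in tau:
        tau[p] = (1 - RHO) * tau[p] + deposits[p]  # evaporate, then reinforce
```

The stagnation the paper targets is exactly this concentration running away too early on a suboptimal path; its immune mechanism is one way to reinject diversity.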
Computational and statistical tradeoffs via convex relaxation
Chandrasekaran, Venkat; Jordan, Michael I.
2013-01-01
Modern massive datasets create a fundamental problem at the intersection of the computational and statistical sciences: how to provide guarantees on the quality of statistical inference given bounds on computational resources, such as time or space. Our approach to this problem is to define a notion of “algorithmic weakening,” in which a hierarchy of algorithms is ordered by both computational efficiency and statistical efficiency, allowing the growing strength of the data at scale to be traded off against the need for sophisticated processing. We illustrate this approach in the setting of denoising problems, using convex relaxation as the core inferential tool. Hierarchies of convex relaxations have been widely used in theoretical computer science to yield tractable approximation algorithms to many computationally intractable tasks. In the current paper, we show how to endow such hierarchies with a statistical characterization and thereby obtain concrete tradeoffs relating algorithmic runtime to amount of data. PMID:23479655
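The canonical example of the convex-relaxation denoising discussed above is replacing an intractable sparsity (l0) penalty with the l1 norm, whose proximal operator has a closed form: soft thresholding, the coordinate-wise solution of min_x 0.5*(x - y)^2 + lam*|x| (a standard fact, not specific to this paper).

```python
def soft_threshold(y, lam):
    # closed-form minimizer of 0.5*(x - y)**2 + lam*abs(x)
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0  # small observations are shrunk exactly to zero

noisy = [3.0, -0.4, 0.9, -2.5]
denoised = [soft_threshold(v, 1.0) for v in noisy]
print(denoised)  # [2.0, 0.0, 0.0, -1.5]
```

Cheaper (weaker) relaxations in the hierarchy trade statistical efficiency for runtime, which is the tradeoff the abstract formalizes.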
GenMin: An enhanced genetic algorithm for global optimization
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, I. E.
2008-06-01
A new method that employs grammatical evolution and a stopping rule for finding the global minimum of a continuous multidimensional, multimodal function is considered. The genetic algorithm used is a hybrid genetic algorithm in conjunction with a local search procedure. We list results from numerical experiments with a series of test functions and we compare with other established global optimization methods. The accompanying software accepts objective functions coded either in Fortran 77 or in C++. Program summary: Program title: GenMin Catalogue identifier: AEAR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 35 810 No. of bytes in distributed program, including test data, etc.: 436 613 Distribution format: tar.gz Programming language: GNU-C++, GNU-C, GNU Fortran 77 Computer: The tool is designed to be portable in all systems running the GNU C++ compiler Operating system: The tool is designed to be portable in all systems running the GNU C++ compiler RAM: 200 KB Word size: 32 bits Classification: 4.9 Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a nonlinear system of equations via optimization, employing a least squares type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero). Solution method: Grammatical evolution and a stopping rule.
Running time: Depending on the objective function. The test example given takes only a few seconds to run.
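The hybrid scheme described above (a genetic algorithm combined with a local search procedure and a stopping rule) can be sketched as follows. This is an illustrative sketch, not GenMin's actual implementation: the Rastrigin-style test function, the blend crossover, the coordinate hill-climb, and the "no improvement for N generations" stopping rule are all assumptions standing in for the paper's components.

```python
import math
import random

def f(x):
    # Multimodal (Rastrigin-type) test function standing in for the user's objective.
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def local_search(x, step=0.1, iters=50):
    # Simple coordinate hill-climb standing in for GenMin's local procedure.
    best, fb = list(x), f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for d in (+step, -step):
                cand = list(best)
                cand[i] += d
                fc = f(cand)
                if fc < fb:
                    best, fb, improved = cand, fc, True
        if not improved:
            step /= 2.0  # refine the step once no neighbor improves
    return best, fb

def hybrid_ga(dim=2, pop_size=30, gens=200, stall_limit=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best, fb, stall = None, float("inf"), 0
    for _ in range(gens):
        scored = sorted(pop, key=f)
        if f(scored[0]) < fb - 1e-12:
            best, fb, stall = scored[0], f(scored[0]), 0
        else:
            stall += 1
        if stall >= stall_limit:  # stopping rule: no improvement for stall_limit generations
            break
        parents = scored[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]  # blend crossover
            if rng.random() < 0.2:  # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return local_search(best)  # polish the GA result with the local procedure

x_star, f_star = hybrid_ga()
```

The key design point mirrored from the abstract is the hybridization: the GA supplies global exploration, while the local procedure refines the final candidate.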
Chiral metamaterial design using optimized pixelated inclusions with genetic algorithm
NASA Astrophysics Data System (ADS)
Akturk, Cemal; Karaaslan, Muharrem; Ozdemir, Ersin; Ozkaner, Vedat; Dincer, Furkan; Bakir, Mehmet; Ozer, Zafer
2015-03-01
Chiral metamaterials have attracted many researchers due to their polarization rotation properties for electromagnetic waves. However, most proposed chiral metamaterials are designed based on experience or on time-consuming, inefficient simulations. Here, a method is investigated for designing a chiral metamaterial with a strong and natural chirality admittance by optimizing a grid of metallic pixels on both sides of a dielectric sheet, placed perpendicular to the incident wave, using a genetic algorithm (GA) technique based on a finite element method solver. The effective medium parameters are obtained by using constitutive equations and S parameters. The proposed methodology is very efficient for designing a chiral metamaterial with the desired effective medium parameters. By using the GA-based topology, it is shown that a chiral metamaterial can be designed and manufactured more easily and at lower cost.
An algorithmically optimized combinatorial library screened by digital imaging spectroscopy.
Goldman, E R; Youvan, D C
1992-12-01
Combinatorial cassettes based on a phylogenetic "target set" were used to simultaneously mutagenize seven amino acid residues on one face of a transmembrane alpha helix comprising a bacteriochlorophyll binding site in the light harvesting II antenna of Rhodobacter capsulatus. This pigmented protein provides a model system for developing complex mutagenesis schemes, because simple absorption spectroscopy can be used to assay protein expression, structure, and function. Colony screening by Digital Imaging Spectroscopy showed that 6% of the optimized library bound bacteriochlorophyll in two distinct spectroscopic classes. This is approximately 200 times the throughput (ca. 0.03%) of conventional combinatorial cassette mutagenesis using [NN(G/C)]. "Doping" algorithms evaluated in this model system are generally applicable and should enable simultaneous mutagenesis at more positions in a protein than currently possible, or alternatively, decrease the screening size of combinatorial libraries. PMID:1369205
An optimization-based iterative algorithm for recovering fluorophore location
NASA Astrophysics Data System (ADS)
Yi, Huangjian; Peng, Jinye; Jin, Chen; He, Xiaowei
2015-10-01
Fluorescence molecular tomography (FMT) is a non-invasive technique that allows three-dimensional visualization of fluorophores in vivo in small animals. In practical applications of FMT, however, image reconstruction is challenging since it is a highly ill-posed problem due to the diffusive behaviour of light transport in tissue and the limited measurement data. In this paper, we present an iterative algorithm based on an optimization problem for three-dimensional reconstruction of fluorescent targets. The method alternates the weighted algebraic reconstruction technique (WART) with the steepest descent method (SDM) for image reconstruction. Numerical simulation experiments and a physical phantom experiment were performed to validate the method. Compared to the conjugate gradient method, the proposed method provides better three-dimensional (3D) localization of the fluorescent target.
Maximizing microbial perchlorate degradation using a genetic algorithm: consortia optimization.
Kucharzyk, Katarzyna H; Soule, Terence; Hess, Thomas F
2013-09-01
Microorganisms in consortia perform many tasks more effectively than individual organisms and, in addition, grow more rapidly and in greater abundance. In this work, experimental datasets were assembled consisting of all possible selected combinations of perchlorate-reducing strains of microorganisms, and their perchlorate degradation rates were evaluated. A genetic algorithm (GA) methodology was successfully applied to define sets of microbial strains that achieve maximum rates of perchlorate degradation. Over the course of twenty generations of optimization using a GA, we saw statistically significant 2.06- and 4.08-fold increases in average perchlorate degradation rates by consortia constructed using solely perchlorate-reducing bacteria (PRB) and by consortia consisting of PRB and accompanying organisms that did not degrade perchlorate, respectively. A comparison of kinetic rate constants in the two types of microbial consortia additionally showed marked increases. PMID:23229741
Optimization and implementation of piezoelectric radiators using the genetic algorithm
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Huang, Chinghong
2003-06-01
Very thin and small (45 mm × 35 mm × 0.35 mm) piezoelectric radiators have been developed in this research. The system is modeled by using the energy method in conjunction with the assumed-modes method. Electrical system, mechanical system, and acoustic loading have all been accounted for during the modeling stage. On the basis of the simulation model, the genetic algorithm (GA) is employed to optimize the overall configurations for a low resonance frequency and a large gain. The resulting designs are then implemented and evaluated experimentally. Performance indices for the experimental evaluation include the frequency response, the directional response, the sensitivity, and the efficiency. It is found in the experimental results that the piezoelectric radiators are able to produce comparable acoustical output with significantly less electrical input than the voice-coil panel speakers.
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provides inertial digital data, from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not require any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The technique can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
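The bias/scale-factor estimation idea can be sketched with a simple sphere-fitting objective: every calibrated measurement should have the magnitude of the local field. The PSO parameters, the normalized field magnitude, and the synthetic data below are illustrative assumptions, not the paper's actual setup.

```python
import math
import random

FIELD = 1.0  # normalized magnitude of the local magnetic field (assumed known)

def cost(params, samples):
    # params = (bx, by, bz, sx, sy, sz): per-axis bias and scale factor.
    b, s = params[:3], params[3:]
    err = 0.0
    for m in samples:
        cal = [s[i] * (m[i] - b[i]) for i in range(3)]
        err += (math.sqrt(sum(c * c for c in cal)) - FIELD) ** 2
    return err

def pso(samples, n=30, iters=200, seed=3):
    rng = random.Random(seed)
    X = [[rng.uniform(-1.0, 2.0) for _ in range(6)] for _ in range(n)]
    V = [[0.0] * 6 for _ in range(n)]
    P = [x[:] for x in X]                    # personal bests
    Pc = [cost(x, samples) for x in X]
    g = min(range(n), key=lambda i: Pc[i])
    G, Gc = P[g][:], Pc[g]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(6):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            c = cost(X[i], samples)
            if c < Pc[i]:
                P[i], Pc[i] = X[i][:], c
                if c < Gc:
                    G, Gc = X[i][:], c
    return G, Gc

# Synthetic readings with an assumed true bias (0.2, -0.1, 0.3) and scale (1.1, 0.9, 1.05).
data_rng = random.Random(0)
true_b, true_s = (0.2, -0.1, 0.3), (1.1, 0.9, 1.05)
samples = []
for _ in range(100):
    v = [data_rng.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    h = [x / norm for x in v]  # random unit field direction
    samples.append([h[i] / true_s[i] + true_b[i] for i in range(3)])

est, err = pso(samples)
```

A calibrated model should fit the unit sphere far better than the uncalibrated identity guess (zero bias, unit scale).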
Liu, Liqiang; Dai, Yuntao; Gao, Jinyu
2014-01-01
Ant colony optimization for continuous domains is a major research direction for ant colony optimization algorithms. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous-domain optimization algorithm based on the model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules of the ant colony positions, and the processing method for constraint conditions. The algorithm's performance was tested on a set of unconstrained optimization test functions and a set of constrained optimization test functions, and its results were compared and analyzed against those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
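The continuous-domain idea (sampling new solutions from a pheromone-like distribution concentrated around an archive of good solutions) can be sketched in the style of the well-known ACO_R variant of Socha and Dorigo. This is not the authors' specific foraging model; the archive size, weights, and test function are illustrative.

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def aco_continuous(f, dim=3, archive=10, ants=10, iters=100, q=0.2, xi=0.85, seed=7):
    rng = random.Random(seed)
    # The solution archive plays the role of the pheromone model.
    A = sorted(([rng.uniform(-5, 5) for _ in range(dim)] for _ in range(archive)), key=f)
    # Rank-based weights: better archive members are chosen as sampling centers more often.
    w = [math.exp(-(r ** 2) / (2 * (q * archive) ** 2)) for r in range(archive)]
    for _ in range(iters):
        new = []
        for _ in range(ants):
            j = rng.choices(range(archive), weights=w)[0]
            x = []
            for d in range(dim):
                # Std dev per dimension: mean distance of the archive to the chosen center.
                sigma = xi * sum(abs(A[k][d] - A[j][d]) for k in range(archive)) / (archive - 1)
                x.append(rng.gauss(A[j][d], sigma))
            new.append(x)
        # Pheromone update: keep only the best `archive` solutions.
        A = sorted(A + new, key=f)[:archive]
    return A[0], f(A[0])

best, val = aco_continuous(sphere)
```

As the archive concentrates, the sampling standard deviations shrink, which is what makes the scheme converge in continuous space.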
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search. Moreover, we impose an additional constraint on the ABC algorithm to avoid the creation of discontinuous regions. We found that our algorithm successfully removed all bony skull tissue from a sample of de-identified MR brain images acquired from different model scanners. Comparison of the proposed algorithm with previously introduced, well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) demonstrates its superior results and computational performance, suggesting its potential for clinical applications. PMID:25059256
Optimization of Electrical Energy Production by using Modified Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Glotic, Arnel
The dissertation addresses the optimization of electrical energy production from hydro and thermal power plants. It refers to short-term optimization and presents a complex optimization problem; the complexity arises from an extensive number of co-dependent variables and power plant constraints. Given this complexity, the differential evolution algorithm, known as a successful and robust optimization algorithm, was selected as appropriate for the task. The performance of the differential evolution algorithm is closely connected with its set of control parameters, and its capabilities can be improved, among other means, by parallelization. In the proposed modified differential evolution algorithm, the ability to reach a globally optimal solution for electrical energy production is improved by a new parallelization mode. Performance is further improved by a proposed dynamic population size throughout the optimization process which, in addition to achieving better optimization results than the classic differential evolution algorithm, reduces convergence time. The improvements presented in the dissertation were tested both on power plant models commonly used in scientific publications and on power plant models represented by real parameters. The optimization of electrical energy from hydro and thermal power plants follows several criteria: satisfying system demand, minimizing water usage per unit of produced electrical energy, minimizing or eliminating water spillage, satisfying the final reservoir states of the hydro power plants, and minimizing the fuel costs and emissions of the thermal power plants.
An Improved Genetic Algorithm for Pipe Network Optimization
NASA Astrophysics Data System (ADS)
Dandy, Graeme C.; Simpson, Angus R.; Murphy, Laurence J.
1996-02-01
An improved genetic algorithm (GA) formulation for pipe network optimization has been developed. The new GA uses variable power scaling of the fitness function. The exponent introduced into the fitness function is increased in magnitude as the GA computer run proceeds. In addition to the more commonly used bitwise mutation operator, an adjacency or creeping mutation operator is introduced. Finally, Gray codes rather than binary codes are used to represent the set of decision variables which make up the pipe network design. Results are presented comparing the performance of the traditional or simple GA formulation and the improved GA formulation for the New York City tunnels problem. The case study results indicate the improved GA performs significantly better than the simple GA. In addition, the improved GA performs better than previously used traditional optimization methods such as linear, dynamic, and nonlinear programming methods and an enumerative search method. The improved GA found a solution for the New York tunnels problem which is the lowest-cost feasible discrete size solution yet presented in the literature.
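Gray codes, mentioned above, ensure that adjacent integer values (e.g. neighboring pipe sizes) differ in exactly one bit, so a single-bit or creeping mutation moves the design to a neighboring size rather than an arbitrary one. A minimal binary-reflected Gray code encode/decode sketch:

```python
def to_gray(n: int) -> int:
    # Binary-reflected Gray code: consecutive integers differ in exactly one bit.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # Decode by XOR-folding the shifted codeword back onto itself.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [to_gray(i) for i in range(8)]
# Consecutive codewords differ in one bit, unlike plain binary (e.g. 3 -> 4 flips 3 bits).
one_bit_apart = all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```

This is why the abstract pairs Gray coding with the creeping mutation operator: both make small genotype changes correspond to small design changes.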
A homogeneous superconducting magnet design using a hybrid optimization algorithm
NASA Astrophysics Data System (ADS)
Ni, Zhipeng; Wang, Qiuliang; Liu, Feng; Yan, Luguang
2013-12-01
This paper employs a hybrid optimization algorithm with a combination of linear programming (LP) and nonlinear programming (NLP) to design the highly homogeneous superconducting magnets for magnetic resonance imaging (MRI). The whole work is divided into two stages. The first LP stage provides a global optimal current map with several non-zero current clusters, and the mathematical model for the LP was updated by taking into account the maximum axial and radial magnetic field strength limitations. In the second NLP stage, the non-zero current clusters were discretized into practical solenoids. The superconducting conductor consumption was set as the objective function both in the LP and NLP stages to minimize the construction cost. In addition, the peak-peak homogeneity over the volume of imaging (VOI), the scope of 5 Gauss fringe field, and maximum magnetic field strength within superconducting coils were set as constraints. The detailed design process for a dedicated 3.0 T animal MRI scanner was presented. The homogeneous magnet produces a magnetic field quality of 6.0 ppm peak-peak homogeneity over a 16 cm by 18 cm elliptical VOI, and the 5 Gauss fringe field was limited within a 1.5 m by 2.0 m elliptical region.
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Due to the insufficiency of utilizing knowledge to guide the complex optimal searching, existing genetic algorithms fail to effectively solve excavator boom structural optimization problem. To improve the optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal searching of genetic algorithm circularly. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. Then, the new knowledge-based selection operator, crossover operator and mutation operator are proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight kinds of testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. By comparing the results of optimization, it is shown that the algorithm including all the new knowledge-based genetic operators can more remarkably improve the evolutionary rate and searching ability than other testing algorithms, which demonstrates the effectiveness of knowledge for guiding optimal searching. The proposed knowledge-based genetic algorithm by combining multi-level knowledge evolution with numerical optimization provides a new effective method for solving the complex engineering optimization problem.
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, aiming to meet the respective ideal frequency response characteristics. CSO is inspired by the behaviour of cats and is composed of two sub-models. In CSO, one decides how many cats are used in each iteration. Every cat has its own position composed of M dimensions, a velocity for each dimension, a fitness value representing how well the cat fits the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats; CSO keeps the best solution found until the end of the iterations. The results of the proposed CSO-based approach have been compared with those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The results confirm the superiority of the proposed CSO for solving FIR filter design problems: the CSO-designed FIR filters proved superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the performance of the designed filters. PMID:23958491
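Whatever metaheuristic is used (CSO, RGA, PSO or DE), the fitness it optimizes is the deviation of the candidate filter's magnitude response from the ideal one. A minimal sketch of such an objective for a low-pass specification follows; the grid size, cutoff, and windowed-sinc baseline are illustrative assumptions, not the paper's exact error measure.

```python
import cmath
import math

def magnitude_response(h, w):
    # |H(e^{jw})| for FIR coefficients h at angular frequency w.
    return abs(sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h)))

def lowpass_error(h, wc=0.4 * math.pi, grid=64):
    # Sum of squared deviations from an ideal brick-wall low-pass response
    # sampled on a uniform frequency grid over [0, pi].
    err = 0.0
    for i in range(grid + 1):
        w = math.pi * i / grid
        ideal = 1.0 if w <= wc else 0.0
        err += (magnitude_response(h, w) - ideal) ** 2
    return err

# A classical Hamming-windowed sinc design, used here only as a sanity baseline:
# it should score far better than a naive moving-average filter.
N = 21
center = (N - 1) / 2
sinc = [math.sin(0.4 * math.pi * (n - center)) / (math.pi * (n - center))
        if n != center else 0.4 for n in range(N)]
hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
h = [s * wn for s, wn in zip(sinc, hamming)]
```

A metaheuristic such as CSO would treat the N coefficients as a cat's position and `lowpass_error` as the fitness to minimize.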
NASA Astrophysics Data System (ADS)
Soares, Claudia; Xavier, Joao; Gomes, Joao
2015-09-01
We address the sensor network localization problem given noisy range measurements between pairs of nodes. We approach the non-convex maximum-likelihood formulation via a known simple convex relaxation. We exploit its favorable optimization properties to the full to obtain an approach that: is completely distributed, has a simple implementation at each node, and capitalizes on an optimal gradient method to attain fast convergence. We offer a parallel but also an asynchronous flavor, both with theoretical convergence guarantees and iteration complexity analysis. Experimental results establish leading performance. Our algorithms top the accuracy of a comparable state of the art method by one order of magnitude, using one order of magnitude fewer communications.
NASA Astrophysics Data System (ADS)
Tolson, Bryan A.; Asadzadeh, Masoud; Maier, Holger R.; Zecchin, Aaron
2009-12-01
The dynamically dimensioned search (DDS) continuous global optimization algorithm by Tolson and Shoemaker (2007) is modified to solve discrete, single-objective, constrained water distribution system (WDS) design problems. The new global optimization algorithm for WDS optimization is called hybrid discrete dynamically dimensioned search (HD-DDS) and combines two local search heuristics with a discrete DDS search strategy adapted from the continuous DDS algorithm. The main advantage of the HD-DDS algorithm compared with other heuristic global optimization algorithms, such as genetic and ant colony algorithms, is that its searching capability (i.e., the ability to find near globally optimal solutions) is as good, if not better, while being significantly more computationally efficient. The algorithm's computational efficiency is due to a number of factors, including the fact that it is not a population-based algorithm and only requires computationally expensive hydraulic simulations to be conducted for a fraction of the solutions evaluated. This paper introduces and evaluates the algorithm by comparing its performance with that of three other algorithms (specific versions of the genetic algorithm, ant colony optimization, and particle swarm optimization) on four WDS case studies (21- to 454-dimensional optimization problems) on which these algorithms have been found to perform well. The results obtained indicate that the HD-DDS algorithm outperforms the state-of-the-art existing algorithms in terms of searching ability and computational efficiency. In addition, the algorithm is easier to use, as it does not require any parameter tuning and automatically adjusts its search to find good solutions given the available computational budget.
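The core continuous DDS step (greedily perturbing a randomly chosen, progressively shrinking subset of decision variables around the current best solution) is simple to sketch. This follows the published continuous DDS outline; the test function and neighborhood parameter are illustrative, and the discrete HD-DDS adaptation with its local search heuristics is not shown.

```python
import math
import random

def dds(f, lo, hi, evals=500, r=0.2, seed=11):
    rng = random.Random(seed)
    dim = len(lo)
    best = [rng.uniform(lo[d], hi[d]) for d in range(dim)]
    fb = f(best)
    for i in range(1, evals):
        # Probability of perturbing each dimension shrinks as the budget is used up,
        # shifting the search from global to local.
        p = 1.0 - math.log(i) / math.log(evals)
        dims = [d for d in range(dim) if rng.random() < p] or [rng.randrange(dim)]
        cand = best[:]
        for d in dims:
            cand[d] += rng.gauss(0, r * (hi[d] - lo[d]))
            # Reflect at the bounds, as in the original DDS description.
            if cand[d] < lo[d]:
                cand[d] = lo[d] + (lo[d] - cand[d])
            if cand[d] > hi[d]:
                cand[d] = hi[d] - (cand[d] - hi[d])
            cand[d] = min(max(cand[d], lo[d]), hi[d])
        fc = f(cand)
        if fc <= fb:  # greedy acceptance: only non-worsening moves are kept
            best, fb = cand, fc
    return best, fb

best, val = dds(lambda x: sum(xi * xi for xi in x), [-10.0] * 5, [10.0] * 5, evals=2000)
```

Note the algorithm's two advertised properties: it is not population based (one candidate per function evaluation), and its only parameter besides the budget is the neighborhood size r.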
Optimization of the double dosimetry algorithm for interventional cardiologists
NASA Astrophysics Data System (ADS)
Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena
2014-11-01
A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure, yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed and a specific algorithm applicable to the typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative than other known algorithms.
The optimal extraction of feature algorithm based on KAZE
NASA Astrophysics Data System (ADS)
Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng
2015-10-01
As a novel 2D feature extraction algorithm operating in a nonlinear scale space, KAZE provides a distinctive approach. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities at each scale level, reducing computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the computation of feature vectors is optimized by a Wavelet Transform method, which avoids the second Gaussian smoothing of the original KAZE features and distinctly cuts down the complexity of the algorithm in the vector building and description steps. The dominant orientation is obtained, as in SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, extraction over a multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing the description dimensions, as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, the contrast experiments reveal a step forward in detection, description and application performance compared to SURF and the previous approaches.
Ultra-fast fluence optimization for beam angle selection algorithms
NASA Astrophysics Data System (ADS)
Bangert, M.; Ziegenhein, P.; Oelfke, U.
2014-03-01
Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
Optimized Uncertainty Quantification Algorithm Within a Dynamic Event Tree Framework
J. W. Nielsen; Akira Tokuhiro; Robert Hiromoto
2014-06-01
Methods for developing Phenomenological Identification and Ranking Tables (PIRT) for nuclear power plants have been a useful tool in providing insight into modelling aspects that are important to safety. These methods have involved expert knowledge of reactor plant transients and thermal-hydraulic codes to identify the areas of highest importance. Quantified PIRT (QPIRT) provides a rigorous method for quantifying the phenomena that can have the greatest impact. The transients that are evaluated, and the timing of those events, are typically developed in collaboration with Probabilistic Risk Analysis (PRA). Though quite effective in evaluating risk, traditional PRA methods lack the capability to evaluate complex dynamic systems where end states may vary as a function of the transition time from physical state to physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of such systems. A limitation of DPRA is its potential for state (combinatorial) explosion, which grows as a function of the number of components as well as the sampling of state-to-state transition times for the entire system. This paper presents a method for performing QPIRT within a dynamic event tree framework such that the timing events which result in the highest probabilities of failure are captured and a QPIRT is performed simultaneously with a discrete dynamic event tree evaluation, yielding a formal QPIRT for each end state. Because the use of dynamic event trees leads to state explosion as the number of possible component states increases, a branch-and-bound algorithm is utilized to optimize the solution of the dynamic event trees; the paper summarizes the methods used to implement the branch-and-bound algorithm in solving the discrete dynamic event trees.
Gradient projection algorithm for relaxation methods
Mohammed, J.L.; Hummel, R.A.; Zucker, S.W.
1983-05-01
This paper examines a particular problem which arises when applying the method of gradient projection for solving constrained optimization and finite dimensional variational inequalities on the convex set formed by the convex hull of the standard basis unit vectors. The method is especially important for relaxation labeling techniques applied to problems in artificial intelligence. Zoutendijk's method for finding feasible directions, which is relatively complicated in general situations, yields a very simple finite algorithm for this problem. The authors present an extremely simple algorithm for performing the gradient projection and an independent verification of its correctness.
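The convex hull of the standard basis unit vectors is the probability simplex, and Euclidean projection onto it has a well-known simple finite algorithm based on sorting. The sketch below is that standard algorithm, offered for orientation; it is not claimed to be the authors' exact procedure.

```python
def project_to_simplex(v):
    # Euclidean projection of v onto {x : x_i >= 0, sum_i x_i = 1}.
    # Sort descending, find the largest k whose running-mean threshold keeps
    # the k-th component positive, then shift and clip.
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

x = project_to_simplex([0.6, 1.2, -0.4])
```

Each gradient projection step in relaxation labeling then amounts to taking a gradient step and projecting the result back onto the simplex with this routine.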
SOPRA: Scaffolding algorithm for paired reads via statistical optimization
2010-01-01
Background: High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. Results: We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generation of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on equal footing for solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing of constraints is iterated till one reaches a core set of consistent constraints. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space.
For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various rearrangement errors. Conclusions: Applying SOPRA to real data from bacterial genomes, we were able to assemble contigs into scaffolds of significant length (N50 up to 200 Kb) with very few errors introduced in the process. In general, the methodology presented here will allow better scaffold assemblies of any type of mate pair sequencing data. PMID:20576136
NASA Astrophysics Data System (ADS)
Žilinskas, Antanas; Žilinskas, Julius
2015-04-01
A bi-objective optimization problem with Lipschitz objective functions is considered. An algorithm is developed adapting a univariate one-step optimal algorithm to multidimensional problems. The univariate algorithm considered is a worst-case optimal algorithm for Lipschitz functions. The multidimensional algorithm is based on the branch-and-bound approach and trisection of hyper-rectangles which cover the feasible region. The univariate algorithm is used to compute the Lipschitz bounds for the Pareto front. Some numerical examples are included.
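For a univariate Lipschitz function with constant L, the classical worst-case lower bound on an interval [a, b] from endpoint evaluations is (f(a) + f(b))/2 − L(b − a)/2, attained where the two Lipschitz cones intersect. Iterating on the interval with the smallest bound gives a Piyavskii/Shubert-style scheme close in spirit to the univariate component described above; this sketch is illustrative and single-objective, not the paper's bi-objective algorithm.

```python
def lipschitz_minimize(f, a, b, L, evals=50):
    # Maintain evaluated points; repeatedly split the subinterval with the
    # smallest Lipschitz lower bound at the bound-minimizing point.
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(evals - 2):
        best_i, best_lb, best_x = None, float("inf"), None
        for i in range(len(pts) - 1):
            (x0, f0), (x1, f1) = pts[i], pts[i + 1]
            lb = (f0 + f1) / 2 - L * (x1 - x0) / 2  # worst-case lower bound on [x0, x1]
            if lb < best_lb:
                # The bound is attained where the two cones from the endpoints meet.
                xm = (x0 + x1) / 2 + (f0 - f1) / (2 * L)
                best_i, best_lb, best_x = i, lb, xm
        pts.insert(best_i + 1, (best_x, f(best_x)))
        pts.sort()
    return min(fx for _, fx in pts)

val = lipschitz_minimize(lambda x: abs(x - 0.3), 0.0, 1.0, L=1.0)
```

In the bi-objective setting of the abstract, the same per-interval bounds are used to bound the Pareto front within each hyper-rectangle of the branch-and-bound partition.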
Convex Formulations of Learning from Crowds
NASA Astrophysics Data System (ADS)
Kajino, Hiroshi; Kashima, Hisashi
Using crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since such services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, existing approaches have two serious drawbacks, namely (i) non-convexity and (ii) task homogeneity. Most existing methods treat true labels as latent variables, which results in non-convex optimization problems. Also, existing models assume only single homogeneous tasks, while in realistic situations clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in the multiple classifiers.
Yu, Xiaobing; Cao, Jie; Shan, Haiyan; Zhu, Li; Guo, Jun
2014-01-01
Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, and both have been widely applied in many scientific and engineering fields. Unfortunately, both can easily become trapped in local optima and lack the ability to escape from them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balance parameter between PSO and DE. Adaptive mutation is carried out on the current population when it clusters around a local optimum. HPSO-DE enjoys the advantages of both PSO and DE and maintains the diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balance parameter is discussed in detail. PMID:24688370
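One way to read the balance parameter is as the probability of applying a PSO velocity update versus a DE mutation to each particle. The sketch below is an illustrative hybrid of that kind, not the authors' exact HPSO-DE; all parameter names and defaults are assumptions.

```python
import random

def hybrid_step(pop, vel, pbest, gbest, balance=0.5, w=0.7, c1=1.5, c2=1.5, F=0.5):
    """One generation of an illustrative PSO/DE hybrid: each particle is
    updated by a PSO velocity rule with probability `balance`, otherwise
    by a DE/rand/1 mutation (HPSO-DE adapts this balance over time)."""
    n, d = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        if random.random() < balance:            # PSO velocity update
            v = [w * vel[i][j]
                 + c1 * random.random() * (pbest[i][j] - pop[i][j])
                 + c2 * random.random() * (gbest[j] - pop[i][j])
                 for j in range(d)]
            vel[i] = v
            new_pop.append([pop[i][j] + v[j] for j in range(d)])
        else:                                    # DE/rand/1 mutation
            r1, r2, r3 = random.sample([k for k in range(n) if k != i], 3)
            new_pop.append([pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                            for j in range(d)])
    return new_pop
```

A full implementation would also evaluate fitness, update the personal and global bests, and trigger the adaptive mutation when the population clusters.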
Optimization on robot arm machining by using genetic algorithms
NASA Astrophysics Data System (ADS)
Liu, Tung-Kuan; Chen, Chiu-Hung; Tsai, Shang-En
2007-12-01
In this study, an optimization problem for robot arm machining is formulated and solved using genetic algorithms (GAs). The proposed approach adopts a direct kinematics model and utilizes the GA's global search ability to find the optimum solution. The direct kinematics equations of the robot arm are formulated and used to compute the end-effector coordinates. Based on these, the objective of optimal machining along a set of points can be evaluated evolutionarily via the distance between the machining points and the end-effector positions. In addition, a 3D CAD application, CATIA, is used to build 3D models of the robot arm, the workpieces and their components. A simulated experiment in CATIA is first used to verify the computed results, and practical control of the robot arm through the RS232 port is also performed. The results show that this approach is robust and suitable for most machining tasks in which robot arms are adopted as the machining tools.
Optimal placement of active material actuators using genetic algorithm
NASA Astrophysics Data System (ADS)
Johnson, Terrence; Frecker, Mary I.
2004-07-01
Actuators based on smart materials generally exhibit a tradeoff between force and stroke. Researchers have surrounded piezoelectric materials (PZTs) with compliant structures to magnify their geometric or mechanical advantage. Most of these designs are literally built around a particular piezoelectric device, so the design space consists only of the compliant mechanism. Materials science researchers have demonstrated the ability to pole a PZT in an arbitrary direction, and some engineers have taken advantage of this to build "shear mode" actuators. The goal of this work is to determine whether the performance of compliant mechanisms improves when the piezoelectric polarization is included as a design variable. The polarization vector is varied via transformation matrices, and the compliant actuator is modeled using the SIMP (Solid Isotropic Material with Penalization), or "power-law," method. The concept of mutual potential energy is used to form an objective function measuring the piezoelectric actuator's performance. The optimal topology of the compliant mechanism and the orientation of the polarization vector are determined using a sequential linear programming algorithm. This paper presents a demonstration problem showing that small changes in the polarization vector have a marginal effect on the optimum topology of the mechanism but improve actuation.
Optimization of process parameters in stereolithography using genetic algorithm
NASA Astrophysics Data System (ADS)
Chockalingam, K.; Jawahar, N.; Vijaybabu, E. R.
2003-10-01
Stereolithography is the most popular RP process, in which intricate models are constructed directly from a CAD package by polymerizing a plastic monomer. Its application range is still limited because dimensional accuracy remains inferior to that of conventional machining processes. The ultimate dimensional accuracy of a part built on a layer-by-layer basis depends on shrinkage, which in turn depends on many factors such as layer thickness, hatch spacing, hatch style, hatch overcure and fill cure depth. The influence of these factors on shrinkage in the X and Y directions follows a nonlinear pattern. A combination of process variables that yields the same shrinkage rate in both directions would make it possible to predict the shrinkage allowance for a part, so that the CAD model could be constructed with this allowance included. Accordingly, the objective of the present work is to determine process parameters that give the same shrinkage rate in both the X and Y directions. A genetic algorithm (GA) is proposed to find optimal process parameters for this objective. The approach is analytical, uses experimental sample data, and has great potential for predicting process parameters that improve dimensional accuracy in the stereolithography process.
New Umkehr ozone profile retrieval algorithm optimized for climatological studies
NASA Astrophysics Data System (ADS)
Petropavlovskikh, I.; Bhartia, P. K.; DeLuisi, J.
2005-08-01
We present a new Umkehr ozone profile retrieval algorithm (UMK04) that has been optimized for the study of monthly mean anomalies (MMA) to assess climate variability in multi-year time series. Although the Umkehr technique is too noisy to monitor short-term variability in atmospheric ozone, it is capable of monitoring long-term changes in MMA with less than 5% uncertainty in the stratosphere, and with no influence from a priori information. By examining the information content of UMK04 we conclude that Umkehr data contain useful information about long-term ozone trends down to the surface, provided the data are analyzed as column ozone amounts in 8 layers: two ~9.6 km layers in the lower atmosphere (253-1013, 63-253 hPa), five ~4.8 km layers (32-63, 16-32, 8-16, 4-8, 2-4 hPa) in the stratosphere, plus a broad top layer spanning 0-4 hPa.
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, a novel elitist breeding strategy acts on the elitists of the swarm to escape likely local optima and guide the swarm toward a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through transposon operators. The newly generated individuals with better fitness are selected as the new personal bests and the new global best to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of global search capability and convergence rate. PMID:26064085
NASA Astrophysics Data System (ADS)
Chen, Jing; Liu, Tundong; Jiang, Hao
2016-01-01
A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao
2014-09-01
Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are preserved as its adaptive networks evolve. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP. PMID:24613939
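The pheromone-matrix strategy can be sketched as a standard evaporation/deposit update followed by extra reinforcement of the critical-path edges. This is an assumed simplification; the actual PMM coupling in the paper is more involved, and the `boost` factor here is illustrative.

```python
def update_pheromone(tau, tours, lengths, critical_edges, rho=0.5, Q=1.0, boost=2.0):
    """Illustrative PMACO-style pheromone update: standard ACO evaporation
    and deposit, plus a multiplicative boost on Physarum critical-path edges."""
    n = len(tau)
    # evaporation on every edge
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    # deposit along each ant's (cyclic) tour, proportional to tour quality
    for tour, length in zip(tours, lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a][b] += Q / length
            tau[b][a] += Q / length
    # reinforce the critical-path edges identified by the PMM
    for a, b in critical_edges:
        tau[a][b] *= boost
        tau[b][a] *= boost
    return tau
```

On a 3-city instance, an edge on both a tour and a critical path ends up with twice the pheromone of an ordinary tour edge, which is exactly the bias toward exploitation the strategy aims for.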
NASA Astrophysics Data System (ADS)
Zhang, B.; Qi, H.; Ren, Y. T.; Sun, S. C.; Ruan, L. M.
2014-01-01
As a heuristic intelligent optimization algorithm, the Ant Colony Optimization (ACO) algorithm was applied in the present study to the inverse problem of one-dimensional (1-D) transient radiative transfer. To illustrate the performance of this algorithm, the optical thickness and scattering albedo of the 1-D participating slab medium were retrieved simultaneously. The radiative reflectances simulated by the Monte Carlo Method (MCM) and the Finite Volume Method (FVM) were used as the measured and estimated values, respectively, for the inverse analysis. To improve the accuracy and efficiency of the Basic Ant Colony Optimization (BACO) algorithm, three improved ACO algorithms were developed: the Region Ant Colony Optimization (RACO), Stochastic Ant Colony Optimization (SACO) and Homogeneous Ant Colony Optimization (HACO) algorithms. With the presented HACO algorithm, the radiative parameters could be estimated accurately, even with noisy data. In conclusion, the HACO algorithm is demonstrated to be effective and robust, with the potential to be applied in various fields of inverse radiation problems.
An algorithm to select the optimal program based on rough sets and fuzzy soft sets.
Wenjun, Liu; Qingguo, Li
2014-01-01
Combining rough sets and fuzzy soft sets, we propose an algorithm to obtain the optimal decision program. In this algorithm, we first build information systems according to fuzzy soft sets; second, we compute the significance of each parameter according to rough set theory; third, incorporating subjective bias, we give an algorithm to obtain the comprehensive weight of each parameter; finally, we put forward a method to choose the optimal program. An example shows that the algorithm is effective and rational. PMID:25243212
Optimized Phase Generated Carrier (PGC) demodulation algorithm insensitive to C value
NASA Astrophysics Data System (ADS)
Wu, B.; Yuan, Y.; Yang, J.; Liang, S.; Yuan, L.
2015-07-01
An optimized phase generated carrier (PGC) demodulation algorithm is proposed for interferometer signal demodulation. Like the traditional PGC algorithm, the optimized algorithm employs differential cross multiplication (DCM); dividing the two DCM-processed signals yields the square of the tangent of the output phase, from which the output phase is obtained by the corresponding calculation. The output of the optimized algorithm contains no terms involving the modulation depth (C value) or the AC amplitude of the interference signal (B value); therefore, demodulation errors caused by fluctuations of the C and B values are suppressed.
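For context, the classical arctangent PGC scheme, which the paper improves upon, can be sketched as follows: mix the interference signal with the carrier and its second harmonic, average over whole carrier periods, and take the arctangent of the ratio. This sketch is the textbook variant, not the paper's optimized DCM algorithm; it relies on choosing the modulation depth C near 2.63 rad so that the Bessel factors J1(C) and J2(C) cancel, which is precisely the C-sensitivity the paper removes.

```python
import math

def pgc_arctan_demod(samples, fs, f0, n_periods):
    """Classical arctangent PGC demodulation: the averaged mixer outputs are
    -B*J1(C)*sin(phi) and -B*J2(C)*cos(phi); at C ~ 2.63 rad J1(C) = J2(C),
    so the ratio reduces to tan(phi)."""
    n = int(round(n_periods * fs / f0))
    I = Q = 0.0
    for k in range(n):
        t = k / fs
        I += samples[k] * math.cos(2 * math.pi * f0 * t)   # mix with carrier
        Q += samples[k] * math.cos(4 * math.pi * f0 * t)   # mix with 2nd harmonic
    return math.atan2(-I, -Q)

# simulate an interferometer output with modulation depth C = 2.63 rad
fs, f0, C, B, phi = 200_000, 1_000, 2.63, 1.7, 0.5
sig = [B * math.cos(C * math.cos(2 * math.pi * f0 * k / fs) + phi)
       for k in range(400)]
phase = pgc_arctan_demod(sig, fs, f0, n_periods=2)
```

Note that the recovered phase is independent of B (it cancels in the ratio) but only independent of C at the special operating point; the paper's algorithm removes both dependencies.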
Hu, Y.; Liu, Z.; Shi, X.; Wang, B.
2006-07-01
A brief introduction to the characteristic statistic algorithm (CSA), a new global optimization algorithm for the PWR in-core fuel management optimization problem, is given in this paper. CSA is modified by the adoption of a back-propagation neural network and fast local adjustment. The modified CSA is then applied to PWR equilibrium cycle reloading optimization, and the corresponding optimization code, CSA-DYW, is developed. CSA-DYW is used to optimize the 18-month equilibrium reloading cycle of the Daya Bay nuclear plant Unit 1 reactor. The results show that CSA-DYW has high efficiency and good global performance on PWR equilibrium cycle reloading optimization. (authors)
Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling
Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang
2014-01-01
A discrete bat algorithm (DBA) is proposed for the permutation flow shop scheduling problem (PFSP). First, the discrete bat algorithm is constructed from the idea of the basic bat algorithm: the whole scheduling problem is divided into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Second, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve performance. Finally, experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the permutation flow shop scheduling problem. PMID:25243220
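The NEH heuristic mentioned above is standard and short enough to sketch: order jobs by decreasing total processing time, then insert each job at the position that minimizes the partial makespan. The flow-shop makespan recursion is the usual one; the example data are made up.

```python
def makespan(seq, proc):
    """Completion time of the last job on the last machine of a permutation
    flow shop; proc[j][m] is the processing time of job j on machine m."""
    m = len(proc[0])
    comp = [0.0] * m
    for j in seq:
        comp[0] += proc[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[j][k]
    return comp[-1]

def neh(proc):
    """NEH constructive heuristic: sort jobs by decreasing total processing
    time, then insert each at the position minimizing the partial makespan."""
    order = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = [order[0]]
    for j in order[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, proc))
    return seq

proc = [[3, 4], [2, 5], [4, 1]]   # 3 jobs, 2 machines (toy data)
best_seq = neh(proc)
```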
A Multistrategy Optimization Improved Artificial Bee Colony Algorithm
Liu, Wen
2014-01-01
To address the shortcomings of the artificial bee colony algorithm, namely prematurity and a slow convergence rate, an improved algorithm is proposed. Chaotic reverse learning strategies are used to initialize the swarm in order to improve the global search ability of the algorithm and maintain its diversity; the similarity between individuals of the population is used to characterize population diversity; a population diversity measure is set as an indicator to dynamically and adaptively adjust the nectar positions, so that premature and local convergence are effectively avoided; and a dual-population search mechanism is introduced into the search stage of the algorithm, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions, compared against other algorithms, show that the improved algorithm has a faster convergence rate and a greater capacity for escaping local optima. PMID:24982924
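A common reading of "chaotic reverse learning" initialization is: generate candidates with a chaotic map, add their opposites (reverse solutions), and keep the best. The sketch below follows that reading under the assumption of a logistic map and a toy fitness (distance to the origin); the paper's exact construction may differ.

```python
def chaotic_opposition_init(n, dim, low, high, x0=0.7):
    """Illustrative chaotic reverse-learning initialization: generate n
    candidates from a logistic map, add their opposites low+high-x, and
    keep the n best under a toy fitness (squared distance to the origin)."""
    pts, x = [], x0
    for _ in range(n):
        row = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)          # logistic map in its chaotic regime
            row.append(low + (high - low) * x)
        pts.append(row)
    opposites = [[low + high - v for v in row] for row in pts]
    fitness = lambda p: sum(v * v for v in p)
    return sorted(pts + opposites, key=fitness)[:n]
```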
A mesh optimization algorithm to decrease the maximum error in finite element computations.
Knupp, Patrick Michael; Hetmaniuk, Ulrich L.
2008-06-01
We present a mesh optimization algorithm for adaptively improving the finite element interpolation of a function of interest. The algorithm minimizes an objective function by swapping edges and moving nodes. Numerical experiments are performed on model problems. The results illustrate that the mesh optimization algorithm can reduce the W{sup 1,{infinity}} semi-norm of the interpolation error. For these examples, the L{sup 2}, L{sup {infinity}}, and H{sup 1} norms decreased also.
Optimization of band gaps of 2D photonic crystals by the rapid genetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yun-tao
2011-01-01
Based on the rapid genetic algorithm (RGA), the band gap structures of square lattices with square scatterers are optimized. In the optimization process, gene codes are used to represent the square scatterers, and the fitness function adopts the relative values of the largest absolute photonic band gaps (PBGs). By changing the value of the filling factor, three cell forms with large photonic band gaps are obtained. In addition, a comparison between the rapid genetic algorithm and the general genetic algorithm (GGA) is analyzed.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
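Real-number encoding replaces bit-string operators with operators acting directly on vectors of reals. The sketch below shows two standard real-coded operators, BLX-alpha crossover and Gaussian mutation; these are generic illustrations, not necessarily the specific operators used in the paper.

```python
import random

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha crossover for real-number chromosomes: each child gene is
    drawn uniformly from the parents' interval extended by alpha on each side."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def gaussian_mutation(x, sigma=0.1, rate=0.2):
    """Perturb each gene with probability `rate` by zero-mean Gaussian noise."""
    return [g + random.gauss(0.0, sigma) if random.random() < rate else g
            for g in x]
```

In an aerodynamic setting the genes would be shape parameters (for example, spline control points), and the fitness would come from the flow solver.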
NASA Astrophysics Data System (ADS)
Hou, Rui; Yu, Junle
2011-12-01
Optical burst switching (OBS) has been regarded as the next-generation optical switching technology. In this paper, the routing problem in OBS based on the particle swarm optimization (PSO) algorithm is studied and analyzed. Simulation results indicate that the PSO-based routing algorithm outperforms the conventional shortest-path-first algorithm in space cost and calculation cost. These conclusions have certain theoretical significance for the improvement of OBS routing protocols.
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
NASA Astrophysics Data System (ADS)
Yoshimaru, Eriko S.; Randtke, Edward A.; Pagel, Mark D.; Cárdenas-Rodríguez, Julio
2016-02-01
Pulsed Chemical Exchange Saturation Transfer (CEST) MRI experimental parameters and RF saturation pulse shapes were optimized using a multiobjective genetic algorithm. The optimization was carried out for RF saturation duty cycles of 50% and 90%, and results were compared to continuous wave saturation and Gaussian waveform. In both simulation and phantom experiments, continuous wave saturation performed the best, followed by parameters and shapes optimized by the genetic algorithm and then followed by Gaussian waveform. We have successfully demonstrated that the genetic algorithm is able to optimize pulse CEST parameters and that the results are translatable to clinical scanners.
Deb, Suash; Yang, Xin-She
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730
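The common thread of these hybrids is that the optimizer searches over candidate centroid sets and K-means quality (within-cluster sum of squared errors) serves as the fitness. The sketch below shows that fitness plus a deliberately simple multi-agent stand-in for the optimizer; a real Bat or Firefly variant would also move agents toward good solutions rather than sampling independently.

```python
import random

def sse(centroids, data):
    """Within-cluster sum of squared errors: the fitness a nature-inspired
    optimizer minimizes when it searches over centroid positions."""
    total = 0.0
    for p in data:
        total += min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
    return total

def swarm_kmeans(data, k, agents=20, iters=50, seed=0):
    """Toy stand-in for the paper's hybrids: many search agents propose
    centroid sets (sampled from the data) and the best-scoring set is kept."""
    rng = random.Random(seed)
    best, best_f = None, float("inf")
    for _ in range(agents * iters):
        cand = rng.sample(data, k)           # centroids proposed from data points
        f = sse(cand, data)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f
```

Because the fitness is global over centroid positions, such a search is less sensitive to initial centroids than plain Lloyd iterations.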
Wang, Jun; Zhou, Bihua; Zhou, Shudao
2016-01-01
This paper proposes an improved cuckoo search (ICS) algorithm for estimating the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with those of the CS algorithm, a genetic algorithm, and a particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874
Rate distortion optimization for H.264 interframe coding: a general framework and algorithms.
Yang, En-Hui; Yu, Xiang
2007-07-01
Rate distortion (RD) optimization for H.264 interframe coding with complete baseline decoding compatibility is investigated on a frame basis. Using soft decision quantization (SDQ) rather than the standard hard decision quantization, we first establish a general framework in which motion estimation, quantization, and entropy coding (in H.264) for the current frame can be jointly designed to minimize a true RD cost given previously coded reference frames. We then propose three RD optimization algorithms--a graph-based algorithm for near optimal SDQ in H.264 baseline encoding given motion estimation and quantization step sizes, an algorithm for near optimal residual coding in H.264 baseline encoding given motion estimation, and an iterative overall algorithm to optimize H.264 baseline encoding for each individual frame given previously coded reference frames-with them embedded in the indicated order. The graph-based algorithm for near optimal SDQ is the core; given motion estimation and quantization step sizes, it is guaranteed to perform optimal SDQ if the weak adjacent block dependency utilized in the context adaptive variable length coding of H.264 is ignored for optimization. The proposed algorithms have been implemented based on the reference encoder JM82 of H.264 with complete compatibility to the baseline profile. Experiments show that for a set of typical video testing sequences, the graph-based algorithm for near optimal SDQ, the algorithm for near optimal residual coding, and the overall algorithm achieve on average, 6%, 8%, and 12%, respectively, rate reduction at the same PSNR (ranging from 30 to 38 dB) when compared with the RD optimization method implemented in the H.264 reference software. PMID:17605376
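The criterion underlying all three algorithms is the Lagrangian rate-distortion cost J = D + lambda * R, minimized over coding choices. The sketch below applies it to mode decision with made-up distortion/rate numbers; in H.264 the same idea is pushed down to the coefficient level via soft decision quantization.

```python
def best_mode(modes, lam):
    """Lagrangian rate-distortion decision: choose the coding option that
    minimizes J = D + lambda * R for the current Lagrange multiplier."""
    return min(modes, key=lambda m: m["D"] + lam * m["R"])

# hypothetical distortion (D) and rate (R) figures for three coding modes
modes = [
    {"name": "skip",  "D": 9.0, "R": 1.0},
    {"name": "inter", "D": 4.0, "R": 6.0},
    {"name": "intra", "D": 2.0, "R": 14.0},
]
low_rate  = best_mode(modes, lam=2.0)   # bits are expensive: cheap modes win
high_rate = best_mode(modes, lam=0.1)   # bits are cheap: low distortion wins
```

Sweeping lambda traces out the rate-distortion curve, which is how an encoder targets different bitrates with one decision rule.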
Low emittance lattice optimization using a multi-objective evolutionary algorithm
NASA Astrophysics Data System (ADS)
Gao, Wei-Wei; Wang, Lin; Li, Wei-Min; He, Duo-Hui
2011-09-01
A low emittance lattice design and optimization procedure are systematically studied with a non-dominated sorting-based multi-objective evolutionary algorithm which not only globally searches the low emittance lattice, but also optimizes some beam quantities such as betatron tunes, momentum compaction factor and dispersion function simultaneously. In this paper the detailed algorithm and lattice design procedure are presented. The Hefei light source upgrade project storage ring lattice, with fixed magnet layout, is designed to illustrate this optimization procedure.
NASA Astrophysics Data System (ADS)
Chen, K.
1998-09-01
We propose a general learning algorithm for solving optimization problems, based on a simple strategy of trial and adaptation. The algorithm maintains a probability distribution of possible solutions (configurations), which is updated continuously in the learning process. As the probability distribution evolves, better and better solutions are shown to emerge. The performance of the algorithm is illustrated by the application to the problem of finding the ground state of the Ising spin glass. A simple theoretical understanding of the algorithm is also presented.
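The trial-and-adaptation strategy can be sketched in the style of population-based incremental learning: keep a probability for each spin being +1, sample configurations, and nudge the probabilities toward the best sample each round. This is an assumed simplification of the paper's algorithm, applied to a tiny ferromagnetic Ising chain rather than a spin glass.

```python
import random

def learn_ground_state(J, iters=300, pop=20, lr=0.1, seed=0):
    """PBIL-style sketch of learning by trial and adaptation: maintain a
    probability p[i] that spin i is +1, sample configurations from it, and
    move the distribution toward the lowest-energy sample of each round."""
    rng = random.Random(seed)
    n = len(J) + 1                      # chain of n spins; J[i] couples i and i+1
    energy = lambda s: -sum(J[i] * s[i] * s[i + 1] for i in range(n - 1))
    p = [0.5] * n
    best, best_e = None, float("inf")
    for _ in range(iters):
        samples = [[1 if rng.random() < p[i] else -1 for i in range(n)]
                   for _ in range(pop)]
        leader = min(samples, key=energy)
        if energy(leader) < best_e:
            best, best_e = leader, energy(leader)
        p = [(1 - lr) * p[i] + lr * (1.0 if leader[i] == 1 else 0.0)
             for i in range(n)]
    return best, best_e

# ferromagnetic chain: ground states are all-up / all-down with energy -4
state, e = learn_ground_state([1.0, 1.0, 1.0, 1.0])
```

As the distribution sharpens, better and better configurations dominate the samples, mirroring the emergence of good solutions described in the abstract.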
Optimal support arrangement of piping systems using genetic algorithm
Chiba, T.; Okado, S.; Fujii, I.; Itami, K.; Hara, F.
1996-11-01
The support arrangement is one of the important factors in the design of piping systems. Much time is required to decide the arrangement of the supports. The authors applied a genetic algorithm to find the optimum support arrangement for piping systems. Examples are provided to illustrate the effectiveness of the genetic algorithm. Good results are obtained when applying the genetic algorithm to the actual designing of the piping system.
Research on Laser Marking Speed Optimization by Using Genetic Algorithm
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
Laser Marking Machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%. PMID:25955831
A new multiobjective performance criterion used in PID tuning optimization algorithms
Sahib, Mouayad A.; Ahmed, Bestoun S.
2015-01-01
In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion which is defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator system (AVR) application using particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can highly improve the PID tuning optimization in comparison with traditional objective functions. PMID:26843978
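A time-domain PID cost of the kind being optimized can be sketched as follows. The plant here is a hypothetical first-order system (not the paper's AVR model), and the cost mixes ITSE with an overshoot penalty purely for illustration.

```python
def pid_step_cost(kp, ki, kd, n=2000, dt=0.005):
    """Simulate a unit-step response of the toy plant dy/dt = u - y under
    PID control and return an illustrative cost: ITSE plus an overshoot
    penalty (an optimizer such as PSO would minimize this over kp, ki, kd)."""
    y, integ, prev_e = 0.0, 0.0, 1.0
    itse, peak = 0.0, 0.0
    for k in range(n):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        prev_e = e
        y += (u - y) * dt                  # Euler step of the first-order plant
        peak = max(peak, y)
        itse += (k * dt) * e * e * dt      # integral of t * e(t)^2
    overshoot = max(0.0, peak - 1.0)
    return itse + 10.0 * overshoot

cost_tuned = pid_step_cost(2.0, 1.0, 0.1)   # reasonable gains
cost_weak  = pid_step_cost(0.2, 0.0, 0.0)   # weak proportional-only control
```

The paper's contribution is, in effect, a better-designed version of the scalar returned here, built from Pareto front solutions rather than fixed weights.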
Solution of transient optimization problems by using an algorithm based on nonlinear programming
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
An algorithm is presented for solution of dynamic optimization problems which are nonlinear in the state variables and linear in the control variables. It is shown that the optimal control is bang-bang. A nominal bang-bang solution is found which satisfies the system equations and constraints, and influence functions are generated which check the optimality of the solution. Nonlinear optimization (gradient search) techniques are used to find the optimal solution. The algorithm is used to find a minimum time acceleration for a turbofan engine.
Dynamic topology multi force particle swarm optimization algorithm and its application
NASA Astrophysics Data System (ADS)
Chen, Dongning; Zhang, Ruixing; Yao, Chengyu; Zhao, Zheyu
2016-01-01
Particle swarm optimization (PSO) is an effective bio-inspired algorithm, but it suffers from premature convergence. Researchers have made improvements, especially to force rules and population topologies; however, current algorithms consider only a single kind of force rule and do not combine improvements in both force rules and population topologies. In this paper, a dynamic topology multi force particle swarm optimization (DTMFPSO) algorithm is proposed to obtain better search performance. First, the principle of the presented multi force particle swarm optimization (MFPSO) algorithm is that different force rules are used in different search stages, which balances the ability of global and local search. Secondly, a fitness-driven edge-changing (FE) topology based on the probability selection mechanism of the roulette method is designed to cut and add edges between particles, and the DTMFPSO algorithm is obtained by combining the FE topology with the MFPSO algorithm through concurrent evolution of both algorithm and structure, further improving the search accuracy. Thirdly, benchmark functions are employed to evaluate the performance of the DTMFPSO algorithm, and test results show that the proposed algorithm outperforms well-known PSO variants such as the µPSO, MPSO, and EPSO algorithms. Finally, the proposed algorithm is applied to optimize the process parameters for ultrasonic vibration cutting on a SiC wafer, and the surface quality of the SiC wafer is improved by 12.8% compared with the PSO algorithm in Ref. [25]. This research proposes a DTMFPSO algorithm in which multi force rules and dynamic population topologies evolve simultaneously, giving better search performance.
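For reference, the global-best PSO that variants such as MFPSO build on can be sketched as follows. This is the generic textbook form, not the paper's multi-force or dynamic-topology rules, and all parameter values are illustrative:

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO minimizing f.  Multi-force and dynamic-topology
    variants modify the velocity rule and the neighbourhood structure."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```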
Optimization of meander line antennas for RFID applications by using genetic algorithm
NASA Astrophysics Data System (ADS)
Bucuci, Stefania C.; Anchidin, Liliana; Dumitrascu, Ana; Danisor, Alin; Berescu, Serban; Tamas, Razvan D.
2015-02-01
In this paper, we propose an approach to optimizing meander line antennas using a genetic algorithm. Such antennas are used in RFID applications. As opposed to other approaches for meander antennas, we use only two optimization objectives, i.e., gain and size. As an example, we have optimized a single meander dipole antenna resonating at 869 MHz.
An average-reward reinforcement learning algorithm for computing bias-optimal policies
Mahadevan, S.
1996-12-31
Average-reward reinforcement learning (ARL) is an undiscounted optimality framework that is generally applicable to a broad range of control tasks. ARL computes gain-optimal control policies that maximize the expected payoff per step. However, gain-optimality has some intrinsic limitations as an optimality criterion, since for example, it cannot distinguish between different policies that all reach an absorbing goal state, but incur varying costs. A more selective criterion is bias optimality, which can filter gain-optimal policies to select those that reach absorbing goals with the minimum cost. While several ARL algorithms for computing gain-optimal policies have been proposed, none of these algorithms can guarantee bias optimality, since this requires solving at least two nested optimality equations. In this paper, we describe a novel model-based ARL algorithm for computing bias-optimal policies. We test the proposed algorithm using an admission control queuing system, and show that it is able to utilize the queue much more efficiently than a gain-optimal method by learning bias-optimal policies.
What Does Digital Straightness Tell about Digital Convexity?
NASA Astrophysics Data System (ADS)
Roussillon, Tristan; Tougne, Laure; Sivignon, Isabelle
The paper studies local convexity properties of parts of digital boundaries. An online and linear-time algorithm is introduced for the decomposition of a digital boundary into convex and concave parts. In addition, other data are computed at the same time without any extra cost: the hull of each convex or concave part as well as the Bezout points of each edge of those hulls. The proposed algorithm involves well-understood algorithms: adding a point to the front or removing a point from the back of a digital straight segment and computing the set of maximal segments. The output of the algorithm is useful either for a polygonal representation of digital boundaries or for a segmentation into circular arcs.
NASA Astrophysics Data System (ADS)
Chao, Shih-Min; Whang, Allen Jong-Woei; Chou, Chun-Han; Su, Wei-Shao; Hsieh, Tsung-Heng
2014-03-01
In this paper, we propose a new method for optimization of a total internal reflection (TIR) lens by using a hybrid Taguchi-simulated annealing algorithm. The conventional simulated annealing (SA) algorithm is a method for solving global optimization problems and has also been used in non-imaging systems in recent years. However, the success of SA depends heavily on the annealing schedule and initial parameter setting. In this study, we successfully incorporated the Taguchi method into the SA algorithm. The new hybrid Taguchi-simulated annealing algorithm provides more precise search results and has lower initial parameter dependence.
Yang, Haijun; Wu, Hao; Li, Dawei; Han, Li; Huo, Shuanghong
2007-01-01
In this paper we present a method to calculate temperature-dependent optimized conformational transition pathways. This method is based on the maximization of the flux derived from the Smoluchowski equation and is implemented with a probabilistic roadmap algorithm. We have tested the algorithm on four systems: the Müller potential, the three-hole potential, alanine dipeptide, and the folding of the β-hairpin. Comparison is made with existing algorithms designed for the calculation of protein conformational transition and folding pathways. The applications demonstrate the ability of the algorithm to isolate a temperature-dependent optimal reaction path with improved sampling and efficiency. PMID:26627147
A VLSI optimal constructive algorithm for classification problems
Beiu, V.; Draghici, S.; Sethi, I.K.
1997-10-01
If neural networks are to be used on a large scale, they have to be implemented in hardware. However, the cost of the hardware implementation is critically sensitive to factors like the precision used for the weights, the total number of bits of information and the maximum fan-in used in the network. This paper presents a version of the Constraint Based Decomposition training algorithm which is able to produce networks using limited precision integer weights and units with limited fan-in. The algorithm is tested on the 2-spiral problem and the results are compared with other existing algorithms.
Yang, Chao; Meza, Juan C.; Wang, Lin-Wang
2005-07-26
A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimension and seeking the minimizer of the total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal "step-length" to move along this search direction. A numerical example is provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration.
Yang, Chao (E-mail: cyang@lbl.gov); Meza, Juan C.; Wang, Lin-Wang
2006-09-20
A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration.
A two-level trajectory decomposition algorithm featuring optimal intermediate target selection
NASA Technical Reports Server (NTRS)
Petersen, F. M.; Cornick, D. E.; Brauer, G. L.; Rehder, J. R.
1977-01-01
A decomposition algorithm is presented which optimizes complex missions by partitioning the trajectory into natural segments such as ascent or entry. Each segment defines a full-rank targeting subproblem. These are solved sequentially using the Newton-Raphson algorithm. The master problem, representing the complete mission, is to determine subproblem targets and master-problem controls that optimize the mission objective subject to intersegment constraints. The gradient projection algorithm solves this problem using derivatives obtained analytically from finite-difference subproblem sensitivities. Thus, the mission is optimized by coordinating the solution of tractable subproblems. Computational results for a synchronous equatorial mission are included.
A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...
An overview on optimized NLMS algorithms for acoustic echo cancellation
NASA Astrophysics Data System (ADS)
Paleologu, Constantin; Ciochină, Silviu; Benesty, Jacob; Grant, Steven L.
2015-12-01
Acoustic echo cancellation represents one of the most challenging system identification problems. The most used adaptive filter in this application is the popular normalized least mean square (NLMS) algorithm, which has to address the classical compromise between fast convergence/tracking and low misadjustment. In order to meet these conflicting requirements, the step-size of this algorithm needs to be controlled. Inspired by the pioneering work of Prof. E. Hänsler and his collaborators on this fundamental topic, we present in this paper several solutions to control the adaptation of the NLMS adaptive filter. The developed algorithms are "non-parametric" in nature, i.e., they do not require any additional features to control their behavior. Simulation results indicate the good performance of the proposed solutions and support the practical applicability of these algorithms.
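The underlying NLMS update that such step-size control schemes modify can be sketched as a toy system-identification loop. This uses a fixed step-size `mu` on a noise-free synthetic echo path; the controlled, time-varying step-size of the cited algorithms is not reproduced here, and all values are illustrative:

```python
import random

def nlms_identify(h_true, n_samples=5000, mu=0.5, delta=1e-6):
    """Identify an unknown FIR echo path with NLMS.
    w <- w + mu * e * x / (delta + ||x||^2); step-size control schemes
    replace the fixed mu with an adaptive one."""
    L = len(h_true)
    w = [0.0] * L                       # adaptive filter taps
    x_buf = [0.0] * L                   # input tap-delay line
    for _ in range(n_samples):
        x_buf = [random.gauss(0, 1)] + x_buf[:-1]          # new input sample
        d = sum(h * x for h, x in zip(h_true, x_buf))      # desired (echo)
        y = sum(wi * x for wi, x in zip(w, x_buf))         # filter output
        e = d - y                                          # error signal
        norm = delta + sum(x * x for x in x_buf)           # regularized power
        w = [wi + mu * e * x / norm for wi, x in zip(w, x_buf)]
    return w
```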
Efficient Algorithm for Optimizing Adaptive Quantum Metrology Processes
NASA Astrophysics Data System (ADS)
Hentschel, Alexander; Sanders, Barry C.
2011-12-01
Quantum-enhanced metrology infers an unknown quantity with accuracy beyond the standard quantum limit (SQL). Feedback-based metrological techniques are promising for beating the SQL but devising the feedback procedures is difficult and inefficient. Here we introduce an efficient self-learning swarm-intelligence algorithm for devising feedback-based quantum metrological procedures. Our algorithm can be trained with simulated or real-world trials and accommodates experimental imperfections, losses, and decoherence.
NASA Astrophysics Data System (ADS)
Cash, M. D.; Wrobel, J. S.; Cosentino, K. C.; Reinard, A. A.
2014-06-01
Human evaluation of solar wind data for interplanetary (IP) shock identification relies on both heuristics and pattern recognition, with the former lending itself to algorithmic representation and automation. Such detection algorithms can potentially alert forecasters of approaching shocks, providing increased warning of subsequent geomagnetic storms. However, capturing shocks with an algorithmic treatment alone is challenging, as past and present work demonstrates. We present a statistical analysis of 209 IP shocks observed at L1, and we use this information to optimize a set of shock identification criteria for use with an automated solar wind shock detection algorithm. In order to specify ranges for the threshold values used in our algorithm, we quantify discontinuities in the solar wind density, velocity, temperature, and magnetic field magnitude by analyzing 8 years of IP shocks detected by the SWEPAM and MAG instruments aboard the ACE spacecraft. Although automatic shock detection algorithms have previously been developed, in this paper we conduct a methodical optimization to refine shock identification criteria and present the optimal performance of this and similar approaches. We compute forecast skill scores for over 10,000 permutations of our shock detection criteria in order to identify the set of threshold values that yield optimal forecast skill scores. We then compare our results to previous automatic shock detection algorithms using a standard data set, and our optimized algorithm shows improvements in the reliability of automated shock detection.
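The skill-score-driven threshold sweep described above can be sketched generically. The scorer below uses the true skill statistic and a single jump-magnitude feature on synthetic labeled samples; the actual criteria combine thresholds on density, velocity, temperature, and magnetic field jumps:

```python
def true_skill_score(hits, misses, fas, cns):
    """TSS = probability of detection minus probability of false detection."""
    pod = hits / (hits + misses) if hits + misses else 0.0
    pofd = fas / (fas + cns) if fas + cns else 0.0
    return pod - pofd

def best_threshold(samples, thresholds):
    """samples: (measured jump, is_shock) pairs.  Returns the threshold
    maximizing the skill score, mimicking a grid search over criteria."""
    best = None
    for th in thresholds:
        h = sum(1 for v, s in samples if v >= th and s)        # hits
        m = sum(1 for v, s in samples if v < th and s)         # misses
        f = sum(1 for v, s in samples if v >= th and not s)    # false alarms
        c = sum(1 for v, s in samples if v < th and not s)     # correct negatives
        score = true_skill_score(h, m, f, c)
        if best is None or score > best[1]:
            best = (th, score)
    return best
```

The full optimization in the paper evaluates many permutations of several such thresholds jointly rather than one feature at a time.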
A generalized reusable guidance algorithm for optimal aerobraking
NASA Technical Reports Server (NTRS)
Dukeman, G. A.
1992-01-01
A practical real-time guidance algorithm was developed for guiding aerobraking vehicles in such a way that the maximum heating rate, the maximum structural loads, and the post-aeropass delta-V requirements (for post-aeropass orbit insertion) are all minimized. The algorithm is general and reusable in the sense that a minimum of assumptions is made, thus minimizing the number of gains and mission-dependent parameters that must be laboriously determined prior to a particular mission. A particularly interesting feature is that inplane guidance performance is tuned by simply adjusting one mission-dependent parameter, the bank margin; similarly, the out-of-plane guidance performance is tuned by simply adjusting a plane controller time constant. Other objectives in the algorithm development are simplicity, efficiency, and ease of use. The algorithm is developed for, but not necessarily restricted to, a single pass mission and a trimmed vehicle with bank angle modulation as the method of trajectory control. Guidance performance is demonstrated via results obtained using this algorithm integrated into an aerobraking test-bed program. Comparisons are made with numerical results from a version of the aerobraking guidance algorithm that was to be flown onboard NASA's aeroassist flight experiment (AFE) vehicle. Promising results are obtained with a minimum of development effort.
Hardy Uncertainty Principle, Convexity and Parabolic Evolutions
NASA Astrophysics Data System (ADS)
Escauriaza, L.; Kenig, C. E.; Ponce, G.; Vega, L.
2015-11-01
We give a new proof of the L^2 version of Hardy's uncertainty principle based on calculus and on its dynamical version for the heat equation. The reasoning relies on new log-convexity properties and the derivation of optimal Gaussian decay bounds for solutions to the heat equation with Gaussian decay at a future time. We extend the result to heat equations with lower-order variable coefficients.
NASA Astrophysics Data System (ADS)
Gaiser, Kyle; Liou, Meng-Sing
2007-10-01
A genetic algorithm coupled with computational fluid dynamic software is used to optimize the configuration of an engine inlet at a supersonic speed. The optimization program is written to calculate the pressure recovery of many varying bleed schedules throughout the inlet walls. The goal is to find the best combination of the bleed holes' locations, diameters, and flow rates such that a high pressure recovery is maintained. Parallel computing, by means of a NASA supercomputer, is used to run the algorithm efficiently. This is the first time a genetic algorithm has been applied to inlet bleed design. A test function is used to evaluate and debug the optimization algorithm. The genetic algorithm and its associated programs show potential for use in developing more efficient bleed schedules in a hypersonic engine.
NASA Astrophysics Data System (ADS)
Kim, Nam-Geun; Park, Youngsu; Kim, Jong-Wook; Kim, Eunsu; Kim, Sang Woo
In this paper, we present a recently developed pattern search method called the Genetic Pattern Search algorithm (GPSA) for the global optimization of a cost function subject to simple bounds. GPSA is a combined global optimization method using a genetic algorithm (GA) and the Digital Pattern Search (DPS) method, which has a digital structure represented by binary strings and guarantees convergence to stationary points from arbitrary starting points. The performance of GPSA is validated through extensive numerical experiments on a number of well-known functions and on a robot walking application. The optimization results confirm that GPSA is a robust and efficient global optimization method.
Maximally dense packings of two-dimensional convex and concave noncircular particles.
Atkinson, Steven; Jiao, Yang; Torquato, Salvatore
2012-09-01
Dense packings of hard particles have important applications in many fields, including condensed matter physics, discrete geometry, and cell biology. In this paper, we employ a stochastic search implementation of the Torquato-Jiao adaptive-shrinking-cell (ASC) optimization scheme [Nature (London) 460, 876 (2009)] to find maximally dense particle packings in d-dimensional Euclidean space R^d. While the original implementation was designed to study spheres and convex polyhedra in d ≥ 3, our implementation focuses on d = 2 and extends the algorithm to include both concave polygons and certain complex convex or concave nonpolygonal particle shapes. We verify the robustness of this packing protocol by successfully reproducing the known putative optimal packings of congruent copies of regular pentagons and octagons, then employ it to suggest dense packing arrangements of congruent copies of certain families of concave crosses, convex and concave curved triangles (incorporating shapes resembling the Mercedes-Benz logo), and "moonlike" shapes. Analytical constructions are determined subsequently to obtain the densest known packings of these particle shapes. For the examples considered, we find that the densest packings of both convex and concave particles with central symmetry are achieved by their corresponding optimal Bravais lattice packings; for particles lacking central symmetry, the densest packings obtained are nonlattice periodic packings, which are consistent with recently proposed general organizing principles for hard particles. Moreover, we find that the densest known packings of certain curved triangles are periodic with a four-particle basis, and the densest known periodic packings of certain moonlike shapes possess no inherent symmetries. Our work adds to the growing evidence that particle shape can be used as a tuning parameter to achieve a diversity of packing structures. PMID:23030907
On algorithmic optimization of histogramming functions for GEM systems
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Poźniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article concerns optimization methods for data analysis in the X-ray GEM detector system. The offline analysis of collected samples was optimized for MATLAB computations. Functions compiled in the C language were used via the MEX library. Significant speedup was achieved both for ordering/preprocessing and for histogramming of samples. The techniques used and the results obtained are presented.
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective over a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decision making in the practical planning process. Therefore, it is believed that if this approach is applied correctly, and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
A near optimal guidance algorithm for aero-assisted orbit transfer
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Bae, Gyoung H.
1988-01-01
The paper presents a near optimal guidance algorithm for aero-assisted orbit plane change, based on minimizing the energy loss during the atmospheric portion of the maneuver. The guidance algorithm makes use of recent results obtained from energy state approximations and singular perturbation analysis of optimal heading change for a hypersonic gliding vehicle. This earlier work ignored the terminal constraint on altitude needed to ensure that the vehicle exits the atmosphere; thus, the resulting guidance algorithm was only appropriate for maneuvering reentry vehicle guidance. In the context of singular perturbation theory, a constraint on final altitude gives rise to a difficult terminal boundary layer problem, which cannot be solved in closed form. This paper demonstrates the near optimality of a predictive/corrective guidance algorithm for the terminal maneuver. Comparisons are made to numerically optimized trajectories for a range of orbit plane angles.
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...
Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method
NASA Astrophysics Data System (ADS)
Quan, Ya-Min; Wang, Qing-wei; Liu, Da-Yong; Yu, Xiang-Long; Zou, Liang-Jian
2015-06-01
We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave-boson approach, applicable to arbitrary boundary constraints of a high-dimensional objective function, by combining several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave-boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of the present algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital-selective Mott phase, and magnetism. These results show the fast convergence and robust, stable character of our algorithm in searching for the optimized solution of strongly correlated electron systems.
An adaptive Cauchy differential evolution algorithm for global numerical optimization.
Choi, Tae Jong; Ahn, Chang Wook; An, Jinung
2013-01-01
Appropriately adapting control parameters such as the scaling factor (F), crossover rate (CR), and population size (NP) is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, it is still a challenging task to adapt them properly for a given problem. In this paper, we present an adaptive parameter control DE algorithm. In the proposed algorithm, each individual has its own control parameters. The control parameters of each individual are adapted, using the Cauchy distribution, based on the average parameter value of successfully evolved individuals. Through this, each individual's control parameters are assigned values either near the average parameter value or far from it, the latter possibly being better parameter values for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems. PMID:23935445
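The adaptation idea, resampling each individual's F and CR from a long-tailed Cauchy distribution centred on the mean of the successful individuals' values, can be sketched roughly as below. This is a simplified DE/rand/1/bin; the clipping bounds, Cauchy scale, and fixed population size are assumptions:

```python
import random, math

def cauchy(loc, scale=0.1):
    """Sample a Cauchy-distributed value centred at loc (long-tailed)."""
    return loc + scale * math.tan(math.pi * (random.random() - 0.5))

def adaptive_de(f, dim, pop_size=30, gens=100, lo=-5.0, hi=5.0):
    """DE/rand/1/bin with per-individual F and CR resampled each generation
    from a Cauchy centred on the successful individuals' mean values."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    F = [0.5] * pop_size
    CR = [0.9] * pop_size
    for _ in range(gens):
        ok_F, ok_CR = [], []
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jr = random.randrange(dim)          # forced crossover index
            trial = [pop[a][d] + F[i] * (pop[b][d] - pop[c][d])
                     if (random.random() < CR[i] or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]
            ft = f(trial)
            if ft <= fit[i]:                    # selection; record successes
                pop[i], fit[i] = trial, ft
                ok_F.append(F[i]); ok_CR.append(CR[i])
        if ok_F:                                # Cauchy resampling around the
            mF = sum(ok_F) / len(ok_F)          # successful parameter means
            mCR = sum(ok_CR) / len(ok_CR)
            F = [min(1.0, max(0.1, cauchy(mF))) for _ in range(pop_size)]
            CR = [min(1.0, max(0.0, cauchy(mCR))) for _ in range(pop_size)]
    i = min(range(pop_size), key=lambda j: fit[j])
    return pop[i], fit[i]
```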
Do greedy assortativity optimization algorithms produce good results?
NASA Astrophysics Data System (ADS)
Winterbach, W.; de Ridder, D.; Wang, H. J.; Reinders, M.; Van Mieghem, P.
2012-05-01
We consider algorithms for generating networks that are extremal with respect to degree assortativity. Networks with maximized and minimized assortativities have been studied by other authors. In these cases, networks are rewired whilst maintaining their degree vectors. Although rewiring can be used to create networks with high or low assortativities, it is not known how close the results are to the true maximum or minimum assortativities achievable by networks with the same degree vectors. We introduce the first algorithm for computing a network with maximal or minimal assortativity on a given vector of valid node degrees. We compare the assortativity metrics of networks obtained by this algorithm to assortativity metrics of networks obtained by a greedy assortativity-maximization algorithm. The algorithms are applied to Erdős-Rényi networks, Barabási-Albert networks, and a sample of real-world networks. We find that the number of rewirings considered by the greedy approach must scale with the number of links in order to ensure a good approximation.
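Degree assortativity and a greedy, degree-preserving rewiring step can be sketched as follows. This simplified version omits the multi-edge checks a careful implementation would need, and the acceptance rule is the generic "keep the swap if assortativity improves":

```python
import random

def assortativity(edges):
    """Degree assortativity: Pearson correlation of the degrees at the two
    ends of each edge, with each edge counted in both directions."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
    ys = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
    n = len(xs)
    mx = sum(xs) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - mx) for x, y in zip(xs, ys))
    return cov / var if var else 0.0

def greedy_rewire(edges, attempts=100, maximize=True):
    """Greedy degree-preserving double-edge swaps: (a,b),(c,d) -> (a,d),(c,b),
    accepted only if assortativity moves in the desired direction."""
    edges = list(edges)
    r = assortativity(edges)
    for _ in range(attempts):
        i, j = random.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:       # avoid self-loops
            continue
        cand = list(edges)
        cand[i], cand[j] = (a, d), (c, b)
        r2 = assortativity(cand)
        if r2 != r and (r2 > r) == maximize:
            edges, r = cand, r2
    return edges, r
```

On a 4-node path the assortativity is -0.5 and on a 3-leaf star it is -1, matching the standard values for those graphs.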
Heterogeneous Sensor Networks with convex constraints
NASA Astrophysics Data System (ADS)
Rogers, U.; Chen, Hao
A wireless sensor network design is always subject to constraints: on bandwidth, cost, detection robustness, and even false alarm rates, to name but a few. This paper studies the design of Heterogeneous Sensor Networks (HSN) performing distributed binary hypothesis testing subject to convex constraints on the total number and types of sensors available. The relationship between real-world engineering constraints and a resultant convex set or solution space is highlighted. Under these convex sets, a theorem is presented that defines when a homogeneous sensor network will have better performance than an HSN. This theorem depends on stationary statistics, which is not the case for most problems of interest. These challenges are explored in detail for a parallel distributed HSN using the Kullback-Leibler (K-L) information number (K-L divergence) in conjunction with the Chernoff-Stein lemma. This method allows sensor counts to be optimized across a finite set of hypotheses or events, enabling a robust hypothesis testing solution similar to traditional multiple hypothesis testing or classification problems. This paper also compares and contrasts the detection performance of the standard parallel distributed HSN topology and a modified binary relay tree topology, similar to clustering methods, where the final fusion is done using a logical OR rule. Finally, the asymptotic performance of these two topologies is studied, including performance relative to optimal bounds, ultimately providing a methodology to broadly analyze and optimize HSN design.
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Li, Xinyang
2011-04-01
Optimizing the system performance metric directly is an important method for correcting wavefront aberrations in an adaptive optics (AO) system where wavefront sensing methods are unavailable or ineffective. An appropriate deformable mirror control algorithm is the key to successful wavefront correction. Based on several stochastic parallel optimization control algorithms, an adaptive optics system with a 61-element deformable mirror (DM) is simulated. Genetic Algorithm (GA), Stochastic Parallel Gradient Descent (SPGD), Simulated Annealing (SA) and Algorithm Of Pattern Extraction (Alopex) are compared in convergence speed and correction capability. The results show that all these algorithms have the ability to correct for atmospheric turbulence. Compared with least squares fitting, they almost obtain the best correction achievable for the 61-element DM. SA is the fastest and GA the slowest of these algorithms. The number of perturbations required by GA is almost 20 times that of SA, 15 times that of SPGD, and 9 times that of Alopex.
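The SPGD rule compared above perturbs all actuators simultaneously and uses only the resulting change in the performance metric, so no wavefront sensor is needed. A minimal sketch on a toy quadratic metric follows; the gain, perturbation amplitude, and metric are illustrative, not the 61-element DM simulation:

```python
import random

def spgd(metric, dim, gain=0.3, sigma=0.1, iters=500):
    """Stochastic parallel gradient descent (here, ascent on the metric):
    perturb all control channels at once by +/-sigma, measure the metric
    on both sides, and step each channel by gain * dJ * delta_i."""
    u = [0.0] * dim                              # actuator commands
    for _ in range(iters):
        delta = [random.choice((-sigma, sigma)) for _ in range(dim)]
        j_plus = metric([ui + di for ui, di in zip(u, delta)])
        j_minus = metric([ui - di for ui, di in zip(u, delta)])
        dj = j_plus - j_minus                    # two-sided metric change
        u = [ui + gain * dj * di for ui, di in zip(u, delta)]
    return u
```

Since `dj` approximates `2 * delta . grad(J)`, each update moves `u` uphill on the metric in expectation, which is why only two metric evaluations per iteration suffice regardless of the number of actuators.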
Algorithm to optimize transient hot-wire thermal property measurement.
Bran-Anleu, Gabriela; Lavine, Adrienne S; Wirz, Richard E; Kavehpour, H Pirouz
2014-04-01
The transient hot-wire method has been widely used to measure the thermal conductivity of fluids. The ideal working equation is based on the solution of the transient heat conduction equation for an infinite linear heat source assuming no natural convection or thermal end effects. In practice, the assumptions inherent in the model are only valid for a portion of the measurement time. In this study, an algorithm was developed to automatically select the proper data range from a transient hot-wire experiment. Numerical simulations of the experiment were used in order to validate the algorithm. The experimental results show that the developed algorithm can be used to improve the accuracy of thermal conductivity measurements. PMID:24784657
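The ideal working equation predicts a temperature rise linear in ln(t) with slope q'/(4*pi*k), so selecting the proper data range amounts to finding the window where that linear fit actually holds. A rough sketch of such a selection on synthetic data follows; the residual tolerance, minimum window length, and longest-window policy are assumptions, not the paper's algorithm:

```python
import math

def linear_fit(xs, ys):
    """Least-squares line; returns (slope, intercept, rms residual)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    rms = math.sqrt(sum((y - (slope * x + intercept)) ** 2
                        for x, y in zip(xs, ys)) / n)
    return slope, intercept, rms

def conductivity_from_window(times, d_temps, q_per_len, tol=1e-3):
    """Pick the longest window where dT vs ln(t) is linear (RMS residual
    below tol) and return k = q' / (4*pi*slope) from that window."""
    lnts = [math.log(t) for t in times]
    n = len(times)
    best = None
    for i in range(n):
        for j in range(i + 5, n + 1):            # at least 5 points
            slope, _, rms = linear_fit(lnts[i:j], d_temps[i:j])
            if rms < tol and (best is None or j - i > best[0]):
                best = (j - i, slope)
    if best is None:
        return None
    return q_per_len / (4 * math.pi * best[1])
```

Early-time samples distorted by wire heat capacity (or late-time samples distorted by convection) fail the linearity test and are excluded automatically.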
Piloted simulation of an algorithm for onboard control of time-optimal intercept
NASA Technical Reports Server (NTRS)
Price, D. B.; Calise, A. J.; Moerder, D. D.
1985-01-01
A piloted simulation of algorithms for onboard computation of trajectories for time-optimal intercept of a moving target by an F-8 aircraft is described. The algorithms, which use singular perturbation techniques, generate commands that are displayed in the cockpit. By centering the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. Example simulations are shown, and statistical data on the pilot's performance when presented with different display and computation modes are described.
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael
1995-01-01
This paper discusses certain connections between nonlinear programming algorithms and the formulation of optimization problems for systems governed by state constraints. The major points of this paper are the detailed calculation of the sensitivities associated with different formulations of optimization problems and the identification of some useful relationships between different formulations. These relationships have practical consequences; if one uses a reduced basis nonlinear programming algorithm, then the implementations for the different formulations need only differ in a single step.
Stochastic optimization of a cold atom experiment using a genetic algorithm
Rohringer, W.; Buecker, R.; Manz, S.; Betz, T.; Koller, Ch.; Goebel, M.; Perrin, A.; Schmiedmayer, J.; Schumm, T.
2008-12-29
We employ an evolutionary algorithm to automatically optimize different stages of a cold atom experiment without human intervention. This approach closes the loop between computer-based experimental control systems and automatic real-time analysis, and can be applied to a wide range of experimental situations. The genetic algorithm quickly and reliably converges to the best-performing parameter set independently of the starting population. Especially in many-dimensional or connected parameter spaces, the automatic optimization outperforms a manual search.
Convex-profile Inversion of Asteroid Lightcurves
NASA Technical Reports Server (NTRS)
Ostro, S. J.; Connelly, R.
1985-01-01
A lightcurve inversion method that yields a two-dimensional convex profile is introduced. The number of parameters that characterize the profile is limited only by the number of Fourier harmonics used to represent the parent lightcurve. The implementation of the method via a recursive quadratic programming algorithm is outlined, and its application to photoelectric lightcurves and radar measurements is discussed. Special properties of the lightcurves of geometrically scattering ellipsoids are pointed out; those properties are used to test the inversion method and to obtain a criterion for judging whether any lightcurve could actually be due to such an object. Convex profiles for several asteroids are shown, and the method's validity is discussed from a physical as well as a purely statistical point of view.
Hybrid ant colony-genetic algorithm (GAAPI) for global continuous optimization.
Ciornei, Irina; Kyriakides, Elias
2012-02-01
Many real-life optimization problems exhibit a high degree of nonsmoothness (many local minima), which can prevent a search algorithm from moving toward the global solution. Evolution-based algorithms try to deal with this issue. The algorithm proposed in this paper, called GAAPI, is a hybridization of two optimization techniques: a special class of ant colony optimization for continuous domains entitled API, and a genetic algorithm (GA). The algorithm adopts the downhill behavior of API (a key characteristic of optimization algorithms) and the good spreading in the solution space of the GA. A probabilistic approach and an empirical comparison study are presented to prove the convergence of the proposed method in solving different classes of complex global continuous optimization problems. Numerical results are reported and compared to existing results in the literature to validate the feasibility and effectiveness of the proposed method. The proposed algorithm is shown to be effective and efficient for most of the test functions. PMID:21896393
Synthesis of optimal digital shapers with arbitrary noise using a genetic algorithm
NASA Astrophysics Data System (ADS)
Regadío, Alberto; Sánchez-Prieto, Sebastián; Tabero, Jesús; González-Castaño, Diego M.
2015-09-01
This paper presents the structure, design and implementation of a novel technique for determining the optimal time-domain shaping for spectrometers by means of a Genetic Algorithm (GA) specifically designed for this purpose. The proposed algorithm automatically adjusts the coefficients for shaping an input signal. The results of this experiment were compared with those of a previous simulated annealing algorithm. Finally, its performance and capabilities were tested using simulated data and a real particle detector (a scintillator).
ERIC Educational Resources Information Center
Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.
2008-01-01
This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…
Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel
2006-12-15
Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by a standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.
Lahanas, M; Baltas, D; Zamboglou, N
1999-09-01
In conventional dose optimization algorithms in brachytherapy, multiple objectives are expressed in terms of an aggregating function that combines individual objective values into a single utility value, making the problem single-objective prior to optimization. A multiobjective genetic algorithm (MOGA) was developed for dose optimization based on an a posteriori approach, leaving the decision-making process to the planner and offering a representative trade-off surface of the various objectives. The MOGA offers a flexible search engine that provides the maximum amount of information for a decision maker. Tests performed with various treatment plans in brachytherapy have shown that MOGA gives solutions which are superior to those of traditional dose optimization algorithms. Objectives were proposed in terms of the COIN distribution and differential volume histograms, taking patient anatomy into account in the optimization process. PMID:10505880
Optimization of genomic selection training populations with a genetic algorithm
Technology Transfer Automated Retrieval System (TEKTRAN)
In this article, we derive a computationally efficient statistic to measure the reliability of estimates of genetic breeding values for a fixed set of genotypes based on a given training set of genotypes and phenotypes. We adopt a genetic algorithm scheme to find a training set of certain size from ...
Optimal file-bundle caching algorithms for data-grids
Otoo, Ekow; Rotem, Doron; Romosan, Alexandru
2004-04-24
The file-bundle caching problem arises frequently in scientific applications where jobs need to process several files simultaneously. Consider a host system in a data-grid that maintains a staging disk or disk cache for servicing jobs of file requests. In this environment, a job can only be serviced if all of its requested files are present in the disk cache. Files must be admitted into the cache or replaced in sets of file-bundles, i.e., sets of files that must all be processed simultaneously. In this paper we show that traditional caching algorithms based on file popularity measures do not perform well in such caching environments, since they are not sensitive to inter-file dependencies and may hold irrelevant combinations of files in the cache. We present and analyze a new caching algorithm for maximizing the throughput of jobs and minimizing data replacement costs at such data-grid hosts. We tested the new algorithm using a disk cache simulation model under a wide range of conditions, such as file request distributions, relative cache size, and file size distribution. In all these cases, the results show significant improvement compared with traditional caching algorithms.
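The key point — that admission and replacement must operate on whole bundles rather than individual files — can be illustrated with a toy LRU-style cache. The class below is a hypothetical sketch (it charges each bundle its full size, ignoring files shared between bundles), not the algorithm analyzed in the paper.

```python
from collections import OrderedDict

class BundleCache:
    """Toy cache that admits and evicts whole file-bundles, so it never holds
    a partial (hence unusable) combination of files.  Simplification: each
    bundle is charged its full size, ignoring files shared between bundles."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.bundles = OrderedDict()          # frozenset of files -> total size

    def used(self):
        return sum(self.bundles.values())

    def request(self, files, sizes):
        """Return True on a hit (whole bundle cached), False on a miss."""
        key = frozenset(files)
        if key in self.bundles:
            self.bundles.move_to_end(key)     # refresh LRU position
            return True
        need = sum(sizes[f] for f in files)
        if need > self.capacity:
            return False                      # bundle can never fit
        while self.used() + need > self.capacity:
            self.bundles.popitem(last=False)  # evict least-recently-used bundle
        self.bundles[key] = need
        return False
```

A request is serviced (a hit) only when every file of the bundle is already cached, mirroring the paper's service condition.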
Applying Genetic Algorithms To Query Optimization in Document Retrieval.
ERIC Educational Resources Information Center
Horng, Jorng-Tzong; Yeh, Ching-Chang
2000-01-01
Proposes a novel approach to automatically retrieve keywords and then uses genetic algorithms to adapt the keyword weights. Discusses Chinese text retrieval, term frequency rating formulas, vector space models, bigrams, the PAT-tree structure for information retrieval, query vectors, and relevance feedback. (Author/LRW)
Optimizing core-shell nanoparticle catalysts with a genetic algorithm
NASA Astrophysics Data System (ADS)
Froemming, Nathan S.; Henkelman, Graeme
2009-12-01
A genetic algorithm is used with density functional theory to investigate the catalytic properties of 38- and 79-atom bimetallic core-shell nanoparticles for the oxygen reduction reaction. Each particle is represented by a two-gene chromosome that identifies its core and shell metals. The fitness of each particle is specified by how close the d-band level of the shell is to that of the Pt(111) surface, a catalyst known to be effective for oxygen reduction. The genetic algorithm starts by creating an initial population of random core-shell particles. The fittest particles are then bred and mutated to replace the least-fit particles in the population and form successive generations. The genetic algorithm iteratively refines the population of candidate catalysts more efficiently than Monte Carlo or random sampling, and we demonstrate how the average energy of the surface d-band can be tuned to that of Pt(111) by varying the core and shell metals. The binding of oxygen is a more direct measure of catalytic activity and is used to further investigate the fittest particles found by the genetic algorithm. The oxygen binding energy is found to vary linearly with the d-band level for particles with the same shell metal, but there is considerable variation in the trend across different shells. Several particles with oxygen binding energies similar to Pt(111) have already been investigated experimentally and found to be active for oxygen reduction. In this work, many other candidates are identified.
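The selection scheme described — a two-gene (core, shell) chromosome with fitness measured by how close the shell d-band level is to that of Pt(111) — can be sketched as below. The metal list, d-band values, target value, and strain correction are made-up illustrative numbers standing in for DFT results.

```python
import random
random.seed(1)

METALS = ["Pt", "Pd", "Au", "Ag", "Cu", "Ni"]
TARGET = -2.25   # assumed Pt(111) d-band centre (eV), for illustration only

# Hypothetical shell d-band centres (eV) plus a toy core-strain correction;
# a real study would obtain these from DFT on each core-shell particle.
BASE = {"Pt": -2.4, "Pd": -1.9, "Au": -3.3, "Ag": -3.8, "Cu": -2.5, "Ni": -1.5}

def d_band(core, shell):
    return BASE[shell] + 0.1 * (METALS.index(core) - METALS.index(shell))

def fitness(p):
    return -abs(d_band(*p) - TARGET)         # closer to Pt(111) is fitter

pop = [(random.choice(METALS), random.choice(METALS)) for _ in range(12)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:6]                        # elitism: keep the fittest half
    children = []
    for _ in range(6):
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])                 # crossover: core from a, shell from b
        if random.random() < 0.2:            # mutate core gene
            child = (random.choice(METALS), child[1])
        if random.random() < 0.2:            # mutate shell gene
            child = (child[0], random.choice(METALS))
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
```

With only two genes the search space is tiny, but the breed-and-mutate loop is the same machinery the paper applies with DFT-computed fitness.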
Zhang, Hongjun; Zhang, Rui; Li, Yong; Zhang, Xuliang
2014-01-01
Service-oriented modeling and simulation are hot issues in the field of modeling and simulation, and service resources need to be invoked while a simulation task workflow is running. How to optimize the allocation of service resources so that tasks complete effectively is an important issue in this area. In military modeling and simulation, it is important to improve the probability of success and the timeliness of simulation task workflows. Therefore, this paper proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath parallel allocation model is built and a quantum optimization algorithm with a multiple-chain coding scheme is used for its solution. The multiple-chain coding scheme extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates the effect of different optimization algorithms, service allocation strategies, and path numbers on the probability of success of the simulation task workflow. The results show that the proposed algorithm is an effective method to improve the probability of success and the timeliness of simulation task workflows. PMID:24963506
Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.
2014-06-15
Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restricting for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have similar DVH trade-offs as the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with similar DVH trade-offs as the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan with high efficiency. The new algorithm is found to significantly improve plan quality and planning efficiency in ART replanning and automatic treatment planning.
Particle Swarm Optimization Algorithm for Optimizing Assignment of Blood in Blood Banking System
Olusanya, Micheal O.; Arasomwan, Martins A.; Adewumi, Aderemi O.
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types, in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP), introduced recently in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. The results obtained show the efficiency of the proposed algorithm for BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The results can therefore serve as a benchmark and basis for decision support tools for real-life deployment. PMID:25815046
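The underlying PSO machinery is standard; a minimal global-best PSO minimizing a toy sphere function is sketched below. In the paper, the objective would instead score a candidate assignment under the queue and multiple-knapsack models; the parameter values here are common defaults, not the authors'.

```python
import random
random.seed(0)

def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO minimizing f over the box [lo, hi]^dim."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]                   # personal best positions
    gb = min(pb, key=f)[:]                    # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pb[i][d] - xs[i][d])
                            + c2 * r2 * (gb[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pb[i]):
                pb[i] = xs[i][:]
                if f(pb[i]) < f(gb):
                    gb = pb[i][:]
    return gb

# toy objective: sphere function with optimum at the origin
best = pso(lambda x: sum(v * v for v in x), dim=3)
```

Each particle is pulled toward its own best position and the swarm's best, which is what lets PSO explore a combinatorial assignment space once a suitable encoding and objective are supplied.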
Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem
Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing
2015-01-01
Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial, feasible solution very quickly, and the GA is then used to improve it. Thereafter, experimental software was developed and a large number of experimental computations on Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171
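The idea of using a space-filling curve to obtain a fast initial tour can be illustrated with a Z-order (Morton) curve, one simple fractal SFC: sorting customers by their curve index yields a feasible starting route that a GA can then refine. The grid resolution and test data below are assumptions, not values from the paper.

```python
import math
import random

def morton_key(x, y, bits=16):
    """Interleave the bits of non-negative integer coords (x, y) into a
    Z-order index; nearby indices tend to be nearby in the plane."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

def sfc_initial_route(customers, grid=1024):
    """Order customers along the space-filling curve for a quick starting tour."""
    xs = [c[0] for c in customers]
    ys = [c[1] for c in customers]
    def scale(v, lo, hi):
        return int((v - lo) / (hi - lo + 1e-12) * (grid - 1))
    return sorted(customers,
                  key=lambda c: morton_key(scale(c[0], min(xs), max(xs)),
                                           scale(c[1], min(ys), max(ys))))

random.seed(2)
pts = [(random.random() * 100, random.random() * 100) for _ in range(200)]
route = sfc_initial_route(pts)
```

The SFC ordering is far shorter than a random visiting order, which is exactly the quality of starting point a GA improvement phase wants.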
An Efficient Conical Area Evolutionary Algorithm for Bi-objective Optimization
NASA Astrophysics Data System (ADS)
Ying, Weiqin; Xu, Xing; Feng, Yuxiang; Wu, Yu
A conical area evolutionary algorithm (CAEA) is presented to further improve computational efficiencies of evolutionary algorithms for bi-objective optimization. CAEA partitions the objective space into a number of conical subregions and then solves a scalar subproblem in each subregion that uses a conical area indicator as its scalar objective. The local Pareto optimality of the solution with the minimal conical area in each subregion is proved. Experimental results on bi-objective problems have shown that CAEA offers a significantly higher computational efficiency than the multi-objective evolutionary algorithm based on decomposition (MOEA/D) while CAEA competes well with MOEA/D in terms of solution quality.
NASA Astrophysics Data System (ADS)
Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng
2016-02-01
It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying this influence is very complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient compared with other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which had not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.
Active Batch Selection via Convex Relaxations with Guaranteed Solution Bounds.
Chakraborty, Shayok; Balasubramanian, Vineeth; Sun, Qian; Panchanathan, Sethuraman; Ye, Jieping
2015-10-01
Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar instances for manual annotation. More recently, there have been attempts towards a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. In this paper, we propose two novel batch mode active learning (BMAL) algorithms: BatchRank and BatchRand. We first formulate the batch selection task as an NP-hard optimization problem; we then propose two convex relaxations, one based on linear programming and the other based on semi-definite programming, to solve the batch selection problem. Finally, a deterministic bound is derived on the solution quality for the first relaxation and a probabilistic bound for the second. To the best of our knowledge, this is the first research effort to derive mathematical guarantees on the solution quality of the BMAL problem. Our extensive empirical studies on 15 binary, multi-class and multi-label challenging datasets corroborate that the proposed algorithms perform on par with the state-of-the-art techniques, deliver high quality solutions and are robust to real-world issues like label noise and class imbalance. PMID:26353181
Inverse transport calculations in optical imaging with subspace optimization algorithms
Ding, Tian; Ren, Kui
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
FEM Optimization of Spin Forming Using a Fuzzy Control Algorithm
NASA Astrophysics Data System (ADS)
Yoshihara, S.; Ray, P.; MacDonald, B. J.; Koyama, H.; Kawahara, M.
2004-06-01
Finite element (FE) simulation of the manufacturing of a conical nosing, such as a pressure vessel formed from circular tubes by the spin forming method, was performed with the commercially available software package ANSYS/LS-DYNA3D. The finite element method (FEM) provides a powerful tool for evaluating the potential to form the pressure vessel with proposed modifications to the process. The use of fuzzy logic inference as a control system to achieve the designed shape of the pressure vessel was investigated using the FEM. The roller path, a key process parameter, was determined by the fuzzy inference control algorithm from the computed deformation of each element. The fuzzy control algorithm was validated against the production process time and the deformed shape obtained from the FE simulation.
COStar: a D-star Lite-based dynamic search algorithm for codon optimization.
Liu, Xiaowu; Deng, Riqiang; Wang, Jinwen; Wang, Xunzhang
2014-03-01
Codon-optimized genes have two major advantages: they simplify de novo gene synthesis and increase the expression level in target hosts. Often they achieve this by altering codon usage in a given gene. Codon optimization is complex because it usually needs to achieve multiple opposing goals. In practice, finding an optimal sequence among the massive number of possible combinations of synonymous codons that can code for the same amino acid sequence is a challenging task. In this article, we introduce COStar, a D-star Lite-based dynamic search algorithm for codon optimization. The algorithm first maps the codon optimization problem onto a weighted directed acyclic graph using a sliding window approach. Then, the D-star Lite algorithm is used to compute the shortest path from the start site to the target site in the resulting graph. Optimizing a gene is thus converted into a real-time search for a shortest path in the generated graph. Using in silico experiments, the performance of the algorithm was demonstrated by optimizing various genes, including genes from the human genome. The results suggest that COStar is a promising codon optimization tool for de novo gene synthesis and heterologous gene expression. PMID:24316385
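Because the graph is a layered DAG (one layer of synonymous codons per amino acid position), the shortest-path computation can be sketched with plain dynamic programming; COStar's D-star Lite machinery matters for dynamic re-search, which this static toy omits. The codon table, usage frequencies, and junction penalty below are illustrative stand-ins for a real scoring scheme.

```python
# Toy synonymous-codon table and usage frequencies (illustrative values only).
CODONS = {"M": ["ATG"], "F": ["TTT", "TTC"], "L": ["TTA", "TTG", "CTT", "CTG"]}
USAGE  = {"ATG": 1.00, "TTT": 0.45, "TTC": 0.55,
          "TTA": 0.07, "TTG": 0.13, "CTT": 0.13, "CTG": 0.40}

def edge_cost(prev, cur):
    cost = 1.0 - USAGE[cur]                  # prefer frequently used codons
    if prev and prev[-1] + cur[0] == "TA":   # toy junction (dinucleotide) penalty
        cost += 0.5
    return cost

def optimize(protein):
    """Shortest path through the layered codon DAG by dynamic programming."""
    layer = {None: (0.0, [])}                # codon -> (cost so far, path)
    for aa in protein:
        nxt = {}
        for cur in CODONS[aa]:
            cost, path = min((c + edge_cost(p, cur), pth)
                             for p, (c, pth) in layer.items())
            nxt[cur] = (cost, path + [cur])
        layer = nxt
    return min(layer.values())

cost, codons = optimize("MFL")
```

For the toy peptide MFL this picks the most frequent codon at each layer subject to the junction penalty, mirroring how a shortest path trades off the multiple opposing goals of codon optimization.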
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. A homotopy approach was developed based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties; the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm uses a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
Optimization and application of Retinex algorithm in aerial image processing
NASA Astrophysics Data System (ADS)
Sun, Bo; He, Jun; Li, Hongyu
2008-04-01
In this paper, we present a segmentation-based Retinex method for improving the visual quality of aerial images obtained under complex weather conditions. With this method, an aerial image is segmented into different regions, and an adaptive Gaussian based on the segmentation is then used to process it. The method addresses problems found in previously developed Retinex algorithms, such as halo artifacts and graying-out artifacts. The experimental results also show evidence of its improved effect.
NASA Astrophysics Data System (ADS)
Ivanova, Natalia; Pedersen, Leif T.; Lavergne, Thomas; Tonboe, Rasmus T.; Saldo, Roberto; Mäkynen, Marko; Heygster, Georg; Rösel, Anja; Kern, Stefan; Dybkjær, Gorm; Sørensen, Atle; Brucker, Ludovic; Shokr, Mohammed; Korosov, Anton; Hansen, Morten W.
2015-04-01
Sea ice concentration (SIC) has been derived globally from satellite passive microwave observations since the 1970s by a multitude of algorithms. However, existing datasets and algorithms, although agreeing in the large-scale picture, differ substantially in the details and have disadvantages in summer and fall due to the presence of melt ponds and thin ice. There is thus a need to understand the causes of the differences and to identify the most suitable method to retrieve SIC. Therefore, during the ESA Climate Change Initiative effort, 30 algorithms were implemented, inter-compared and validated against a standardized reference dataset. The algorithms were evaluated over low and high sea ice concentrations and thin ice. Based on the findings, an optimal approach to retrieve sea ice concentration globally for climate purposes was suggested and validated. The algorithm was implemented with atmospheric correction and dynamical tie points in order to produce the final sea ice concentration dataset with per-pixel uncertainties. The issue of melt ponds was addressed in particular because they are interpreted as open water by the algorithms, and thus SIC can be underestimated by up to 40%. To improve our understanding of this issue, melt-pond signatures in AMSR2 images were investigated based on their physical properties with the help of observations of melt pond fraction from optical (MODIS and MERIS) and active microwave (SAR) satellite measurements.
Transport path optimization algorithm based on fuzzy integrated weights
NASA Astrophysics Data System (ADS)
Hou, Yuan-Da; Xu, Xiao-Hao
2014-11-01
Natural disasters cause significant damage to roads, making route selection a complicated logistical problem. To overcome this complexity, we present a method that uses trapezoidal fuzzy numbers to select the optimal transport path. Using the given trapezoidal fuzzy edge coefficients, we calculate a fuzzy integrated matrix and incorporate the fuzzy multi-weights into fuzzy integrated weights. The optimal path is determined by maintaining two sets of vertices and transforming undiscovered vertices into discovered ones. Our experimental results show that the model is highly accurate and requires only a small amount of measurement data to confirm the optimal path. The model provides an effective, feasible, and convenient way to obtain weights for different road sections, and can be applied to road planning in intelligent transportation systems.
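The vertex-transformation search described above follows a Dijkstra-like pattern. As an illustrative sketch (not the paper's exact formulation: the component-wise fuzzy addition and the graded-mean ranking below are simplifying assumptions), trapezoidal fuzzy edge weights can be accumulated along a path and compared by a defuzzified value:

```python
import heapq

def defuzzify(t):
    # Graded mean integration of a trapezoidal fuzzy number (a, b, c, d);
    # one common ranking choice -- the paper's exact ranking may differ.
    a, b, c, d = t
    return (a + 2 * b + 2 * c + d) / 6.0

def fuzzy_add(t1, t2):
    # Trapezoidal fuzzy numbers add component-wise.
    return tuple(x + y for x, y in zip(t1, t2))

def fuzzy_shortest_path(graph, src, dst):
    # Dijkstra-style search: vertices move from the "undiscovered" set to
    # the "discovered" set, ordered by the defuzzified cumulative weight.
    ZERO = (0.0, 0.0, 0.0, 0.0)
    best = {src: ZERO}
    heap = [(0.0, src, [src])]
    while heap:
        _, u, path = heapq.heappop(heap)
        if u == dst:
            return path, best[u]
        for v, w in graph.get(u, []):
            cand = fuzzy_add(best[u], w)
            if v not in best or defuzzify(cand) < defuzzify(best[v]):
                best[v] = cand
                heapq.heappush(heap, (defuzzify(cand), v, path + [v]))
    return None, None
```

The returned cost is itself a trapezoidal fuzzy number, so the uncertainty of the chosen route is preserved rather than collapsed too early.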
Conceptual optimization using genetic algorithms for tube in tube structures
Pârv, Bianca Roxana; Hulea, Radu; Mojolic, Cristian
2015-03-10
The purpose of this article is to optimize tube-in-tube structural systems for tall buildings under horizontal wind loads. It is well known that the horizontal wind load is the main criterion when choosing the structural system and the types and dimensions of structural elements in the majority of tall buildings. Thus, the structural response of tall buildings under horizontal wind loads will be analyzed for 40-story buildings with a total height of 120 meters; the horizontal dimensions will be 30 m × 30 m for the first two optimization problems and 15 m × 15 m for the third. The optimization problems will have the cross-section area as the objective function, the requirement that the displacement of the building be less than the admissible displacement (H/500) as the restriction, and the cross-section dimensions of the structural elements as variables.
Optimal algorithm for fluorescence suppression of modulated Raman spectroscopy.
Mazilu, Michael; De Luca, Anna Chiara; Riches, Andrew; Herrington, C Simon; Dholakia, Kishan
2010-05-24
Raman spectroscopy permits probing of the molecular and chemical properties of the analyzed sample. However, its applicability in many settings has been seriously limited by the presence of a strong fluorescence background. In our recent paper [Anal. Chem. 82, 738 (2010)], we reported a new modulation method for separating Raman scattering from fluorescence. By continuously changing the excitation wavelength, we demonstrated that it is possible to continuously shift the Raman peaks while the fluorescence background remains essentially constant. In this way, our method allows separation of the modulated Raman peaks from the static fluorescence background, with important advantages over previous work using only two [Appl. Spectrosc. 46, 707 (1992)] or a few shifted excitation wavelengths [Opt. Express 16, 10975 (2008)]. The purpose of the present work is to demonstrate a significant improvement in the efficacy of the modulated method by using different processing algorithms. The merits of each algorithm (Standard Deviation analysis, Fourier Filtering, Least-Squares fitting and Principal Component Analysis) are discussed, and the dependence of the modulated Raman signal on several parameters, such as the amplitude and the modulation rate of the Raman excitation wavelength, is analyzed. The results from both simulated and experimental data demonstrate that Principal Component Analysis is the best processing algorithm: it improves the signal-to-noise ratio in the treated Raman spectra, reducing the required acquisition times. Additionally, this approach does not require any synchronization procedure, reduces user intervention, and is suitable for real-time applications. PMID:20588999
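The core idea behind the PCA approach can be illustrated with a toy simulation (the peak position, width, modulation amplitude and linear background below are all invented for illustration): because the fluorescence background is static across the modulated acquisitions, mean-centering removes it, and the leading principal component isolates the shifting Raman peak:

```python
import numpy as np

# Simulate modulated acquisitions: a Raman peak near channel 100 whose
# position oscillates with the excitation wavelength, sitting on a large
# static fluorescence background.
x = np.arange(300, dtype=float)
shifts = 3.0 * np.sin(np.linspace(0, 4 * np.pi, 40))   # peak modulation
background = 5.0 + 0.01 * x                            # static fluorescence
frames = np.array([np.exp(-((x - 100 - s) / 5.0) ** 2) + background
                   for s in shifts])

# PCA via SVD on the mean-centered data: the constant background vanishes
# under centering, so the first principal component captures the modulated
# Raman contribution (a derivative-like profile around the peak).
centered = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]
```

The strongest PC1 loadings sit on the flanks of the simulated Raman peak, while the fluorescence slope contributes nothing, which is the separation effect the abstract describes.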
Pixel-based ant colony algorithm for source mask optimization
NASA Astrophysics Data System (ADS)
Kuo, Hung-Fei; Wu, Wei-Chen; Li, Frederick
2015-03-01
Source mask optimization (SMO) was considered to be one of the key resolution enhancement techniques for node technology below 20 nm prior to the availability of extreme-ultraviolet tools. SMO has been shown to enlarge the process margins for the critical layer in SRAM and memory cells. In this study, a new illumination shape optimization approach was developed on the basis of the ant colony optimization (ACO) principle. The use of this heuristic pixel-based ACO method in the SMO process provides an advantage over extant gradient-based SMO methods because of the rapid and stable searching capability of the proposed method. This study was conducted to provide lithographic engineers with references for the quick determination of the optimal illumination shape for complex mask patterns. The test pattern used in this study was a contact layer for an SRAM design, with a critical dimension of 55 nm and a minimum pitch of 110 nm. The optimized freeform source shape obtained using the ACO method was numerically verified by an aerial image investigation, which showed that the optimized freeform source shape generated an aerial image profile differing from the nominal image profile with an overall error rate of 9.64%. Furthermore, the overall average critical shape difference was determined to be 1.41, lower than that for the other off-axis illumination exposures. The process window results showed an improvement in exposure latitude (EL) and depth of focus (DOF) for the ACO-based freeform source shape compared with the Quasar source shape: the maximum EL of the ACO-based freeform source shape reached 7.4%, and the DOF was 56 nm at an EL of 5%.
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Yanheng; Li, Bin; Sun, Geng
2015-10-01
Identifying community structures in a static network misses the opportunity to capture evolutionary patterns, so community detection in dynamic networks has attracted many researchers. In this paper, a multiobjective biogeography based optimization algorithm with decomposition (MBBOD) is proposed to solve the community detection problem in dynamic networks. In the proposed algorithm, the decomposition mechanism is adopted to simultaneously optimize two evaluation objectives, modularity and normalized mutual information, which measure the quality of the community partitions and the temporal cost, respectively. A novel sorting strategy for multiobjective biogeography based optimization is presented for comparing the quality of habitats to obtain species counts. In addition, problem-specific migration and mutation models are introduced to improve the effectiveness of the new algorithm. Experimental results on both synthetic and real networks demonstrate that our algorithm is effective and promising, and that it can detect communities more accurately in dynamic networks than DYNMOGA and FaceNet.
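The decomposition mechanism can be sketched with the standard Tchebycheff scalarization used in MOEA/D-style algorithms (the toy modularity/NMI scores below are invented, and MBBOD's exact decomposition may differ): each weight vector defines one scalar subproblem, and together the subproblems cover the trade-off front.

```python
def tchebycheff(objs, weights, ideal):
    # Tchebycheff scalarization: weighted distance to the ideal point.
    # Both objectives (e.g. modularity and NMI) are maximized here, so a
    # smaller scalarized value means a better compromise for this weight.
    return max(w * (z - f) for w, f, z in zip(weights, objs, ideal))

# Toy candidate partitions scored as (modularity, NMI); values invented.
candidates = {"P1": (0.80, 0.30), "P2": (0.55, 0.60), "P3": (0.20, 0.85)}
ideal = (1.0, 1.0)

def best_for(weights):
    # Each subproblem keeps the candidate minimizing its scalarized cost.
    return min(candidates,
               key=lambda k: tchebycheff(candidates[k], weights, ideal))
```

Sweeping the weight vector from (0.9, 0.1) to (0.1, 0.9) moves the selected partition from the modularity-dominated end of the front to the NMI-dominated end, which is how decomposition yields a spread of trade-off solutions.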
Using genetic algorithms to search for an optimal portfolio strategy and test market efficiency
NASA Astrophysics Data System (ADS)
Xi, Haowen; Mandere, Edward
2008-03-01
In this numerical experiment we used genetic algorithms to search for an optimal portfolio investment strategy. The algorithm involves having a ``manager'' who divides his capital among various ``experts,'' each of whom has a simple fixed investment strategy. The expert strategies act like a population of genes that experience selection, mutation and crossover during the evolution process. The genetic algorithm was run on an actual portfolio with stock data (the Dow Jones 30 stocks). We found that the genetic algorithm overwhelmingly selected an optimal strategy that closely resembles a simple buy-and-hold portfolio, that is, one that evenly distributes the capital among all stocks. This study shows that the market is very efficient, and one possible practical way to gauge market efficiency is to measure the difference between an optimal portfolio return and a simple buy-and-hold portfolio return.
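A minimal sketch of the manager/experts scheme (the encoding, operators and parameters below are illustrative assumptions, not the authors' exact setup): the genome is the vector of capital fractions assigned to the experts, and fitness is the growth of capital over a return history.

```python
import random

def fitness(weights, returns):
    # Growth of capital for a fixed-weight (daily rebalanced) portfolio.
    total = 1.0
    for day in returns:
        total *= sum(w * (1 + r) for w, r in zip(weights, day))
    return total

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def evolve(returns, n_assets, pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [normalize([rng.random() for _ in range(n_assets)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, returns), reverse=True)
        survivors = pop[:pop_size // 2]                    # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # crossover
            i = rng.randrange(n_assets)
            child[i] = max(1e-9, child[i] + rng.gauss(0, 0.05))  # mutation
            children.append(normalize(child))
        pop = survivors + children
    return max(pop, key=lambda w: fitness(w, returns))
```

On real, roughly efficient price data the evolved weights stay close to uniform, matching the abstract's buy-and-hold finding; on a synthetic history where one asset dominates, the GA concentrates capital there instead.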
Use of genetic algorithms for learning and design of optimal fuzzy trackers
NASA Astrophysics Data System (ADS)
Hwang, Wen-Ruey; Thompson, Wiley E.
1995-07-01
A methodology for combining genetic algorithms (GA) and fuzzy algorithms for the learning and design of optimal fuzzy trackers is presented. With the aid of genetic algorithms, optimal rules for fuzzy logic controllers and their membership functions can be designed without a human operator's experience or a control engineer's knowledge. The approach presented here involves searching the decoded parameters of the membership functions and finding the optimal control rules based upon a fitness value defined in terms of a performance criterion. Two applications are presented: the first deals with a GA that adjusts the fuzzy tracker at run-time on the basis of performance indices, and the second deals with a Model Reference Adaptive Algorithm based on a crisp model of the closed-loop system. The GA changes the parameters of the fuzzy tracker and the fuzzy membership functions in such a way that the closed-loop system behaves like the reference model.
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
NASA Astrophysics Data System (ADS)
Zhu, Li-Ping; Yao, Yan; Zhou, Shi-Dong; Dong, Shi-Wei
2007-12-01
A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
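For reference, the conventional greedy (Hughes-Hartogs-style) bit loading that the proposed algorithm improves upon can be sketched as follows; the SNR-gap form of the incremental power is a common modeling assumption, not taken from the paper:

```python
def greedy_bit_loading(snrs, target_bits, gamma=4.0):
    # Greedy baseline: at each step, add one bit to the subchannel that
    # needs the least incremental power, until the target rate is met.
    # Power to carry b bits on a subchannel with (linear) channel SNR `snr`
    # and SNR gap `gamma`: P(b) = gamma * (2**b - 1) / snr,
    # so the delta for one more bit is gamma * 2**b / snr.
    bits = [0] * len(snrs)
    total_power = 0.0
    for _ in range(target_bits):
        i = min(range(len(snrs)),
                key=lambda i: gamma * 2 ** bits[i] / snrs[i])
        total_power += gamma * 2 ** bits[i] / snrs[i]
        bits[i] += 1
    return bits, total_power
```

Because it allocates one bit per iteration, the greedy baseline costs one pass over all subchannels per bit, which is exactly the serial bottleneck the paper's multistage and parallel loading procedures are designed to avoid.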
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
NASA Astrophysics Data System (ADS)
Guerrero, Rubén D.; Arango, Carlos A.; Reyes, Andrés
2015-09-01
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645
Guo, Y C; Wang, H; Wu, H P; Zhang, M Q
2015-01-01
To address the defects of large mean square error (MSE) and slow convergence speed of the constant modulus algorithm (CMA) in equalizing multi-modulus signals, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA. PMID:26782395
Optimization of a Cell Counting Algorithm for Mobile Point-of-Care Testing Platforms
Ahn, DaeHan; Kim, Nam Sung; Moon, SangJun; Park, Taejoon; Son, Sang Hyuk
2014-01-01
In a point-of-care (POC) setting, it is critically important to reliably count the number of specific cells in a blood sample. Software-based cell counting, which is far faster than manual counting and much cheaper than hardware-based counting, has emerged as an attractive solution potentially applicable to mobile POC testing. However, the existing software-based algorithm based on the normalized cross-correlation (NCC) method is too time- and, thus, energy-consuming to be deployed on battery-powered mobile POC testing platforms. In this paper, we identify inefficiencies in the NCC-based algorithm and propose two synergistic optimization techniques that can considerably reduce the runtime and, thus, energy consumption of the original algorithm with negligible impact on counting accuracy. We demonstrate that an Android™ smart phone running the optimized algorithm requires 11.5× less runtime than the original algorithm. PMID:25195851
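The NCC computation that dominates the original algorithm's runtime can be sketched as below (a naive exhaustive scan; the paper's contribution is precisely to prune this work, and the tiny template here is illustrative):

```python
import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation between an image patch and a cell
    # template: 1.0 means a perfect match up to brightness and contrast.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def count_matches(image, template, threshold=0.8):
    # Slide the template over every position and count scores above the
    # threshold -- the O(image * template) scan that the paper optimizes.
    th, tw = template.shape
    H, W = image.shape
    scores = np.array([[ncc(image[y:y + th, x:x + tw], template)
                        for x in range(W - tw + 1)]
                       for y in range(H - th + 1)])
    return int((scores >= threshold).sum()), scores
```

Every pixel position triggers a full template-sized multiply-accumulate, which is why shrinking the candidate set (as the paper's optimizations do) translates almost directly into runtime and energy savings.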
NASA Astrophysics Data System (ADS)
Goswami, D.; Chakraborty, S.
2014-11-01
Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to its several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, fireworks algorithm and cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, CS algorithm outperforms fireworks algorithm with respect to average computation time, convergence rate and performance consistency.
Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani
2015-01-01
The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes that need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing the exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. In addition, modified mutation equations have been introduced in the employed- and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested on the reactive power optimization problem. The results clearly show that the newly proposed algorithm outperforms the compared algorithms in terms of convergence speed and global optimum achievement. PMID:25879054
Optimal placement of tuning masses on truss structures by genetic algorithms
NASA Technical Reports Server (NTRS)
Ponslet, Eric; Haftka, Raphael T.; Cudney, Harley H.
1993-01-01
Optimal placement of tuning masses, actuators and other peripherals on large space structures is a combinatorial optimization problem. This paper surveys several techniques for solving this problem. The genetic algorithm approach to the solution of the placement problem is described in detail. An example of minimizing the difference between the two lowest frequencies of a laboratory truss by adding tuning masses is used for demonstrating some of the advantages of genetic algorithms. The relative efficiencies of different codings are compared using the results of a large number of optimization runs.
NASA Astrophysics Data System (ADS)
Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen
This paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes the crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests were performed after optimization. The experimental results show that the mass of each optimized spring is reduced by 16.2%, while the reliability increases from 94.5% to 99.9%. The experimental results verify the correctness and feasibility of this reliability optimization design method.
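One simple way to vary the crossover and mutation rates over time, in the spirit of the AGA described (the linear schedule and rate ranges below are illustrative assumptions; adaptive GAs often also condition the rates on fitness):

```python
def adaptive_rates(gen, max_gen, pc_range=(0.9, 0.6), pm_range=(0.01, 0.1)):
    # Crossover probability decays while mutation probability grows over
    # the run: early generations explore via recombination, and later
    # generations rely on mutation to escape premature convergence.
    t = gen / max_gen
    pc = pc_range[0] + (pc_range[1] - pc_range[0]) * t
    pm = pm_range[0] + (pm_range[1] - pm_range[0]) * t
    return pc, pm
```

Inside the GA loop, `pc, pm = adaptive_rates(gen, max_gen)` would replace the fixed probabilities used when deciding whether to recombine a pair or mutate an offspring gene.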
Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.
Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian
2011-01-13
Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods. PMID:24900251
A preference-based evolutionary algorithm for multi-objective optimization.
Thiele, Lothar; Miettinen, Kaisa; Korhonen, Pekka J; Molina, Julian
2009-01-01
In this paper, we discuss the idea of incorporating preference information into evolutionary multi-objective optimization and propose a preference-based evolutionary approach that can be used as an integral part of an interactive algorithm. One algorithm is proposed in the paper. At each iteration, the decision maker is asked to give preference information in terms of his or her reference point consisting of desirable aspiration levels for objective functions. The information is used in an evolutionary algorithm to generate a new population by combining the fitness function and an achievement scalarizing function. In multi-objective optimization, achievement scalarizing functions are widely used to project a given reference point into the Pareto optimal set. In our approach, the next population is thus more concentrated in the area where more preferred alternatives are assumed to lie and the whole Pareto optimal set does not have to be generated with equal accuracy. The approach is demonstrated by numerical examples. PMID:19708774
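The projection of a reference point onto the Pareto optimal set via an achievement scalarizing function can be sketched as follows (minimization form; the augmentation coefficient rho is a conventional choice, not taken from the paper):

```python
def asf(f, ref, weights, rho=1e-4):
    # Augmented achievement scalarizing function: minimizing this over the
    # feasible set projects the decision maker's reference point `ref`
    # (aspiration levels) onto the Pareto optimal set.
    terms = [w * (fi - ri) for w, fi, ri in zip(weights, f, ref)]
    return max(terms) + rho * sum(terms)

# Three non-dominated candidates for two minimized objectives:
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]

def project(ref, weights=(1.0, 1.0)):
    # The front point with the smallest ASF value is the projection.
    return min(front, key=lambda f: asf(f, ref, weights))
```

Moving the reference point steers the projection along the front, which is exactly how the interactive algorithm concentrates the next population near the decision maker's preferred region instead of spreading it over the whole Pareto optimal set.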
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
A Modified Differential Evolution Algorithm with Cauchy Mutation for Global Optimization
NASA Astrophysics Data System (ADS)
Ali, Musrrat; Pant, Millie; Singh, Ved Pal
Differential Evolution (DE) is a powerful yet simple evolutionary algorithm for the optimization of real-valued, multimodal functions. DE is generally considered a reliable, accurate and robust optimization technique. However, the algorithm suffers from premature convergence, a slow convergence rate and large computational time when optimizing computationally expensive objective functions. Therefore, an attempt to speed up DE is considered necessary. This research introduces a modified differential evolution (MDE) that enhances the convergence rate without compromising solution quality. In the MDE algorithm, if an individual repeatedly fails to improve its performance for a specified number of times, a new point is generated using Cauchy mutation. MDE is compared with the original DE on a test bed of functions. It is found that MDE requires less computational effort to locate the global optimal solution.
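A sketch of this scheme on DE/rand/1/bin (the population size, F, CR and the stall threshold below are illustrative; the paper's exact settings may differ):

```python
import math
import random

def mde(f, bounds, pop_size=20, F=0.5, CR=0.9, max_gen=200,
        stall_limit=10, seed=1):
    # DE/rand/1/bin with the modification described above: an individual
    # that fails to improve `stall_limit` times in a row is regenerated by
    # Cauchy mutation around its position (heavy tails aid escape from
    # stagnation and local optima).
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, k: min(max(v, bounds[k][0]), bounds[k][1])
    pop = [[rng.uniform(*bounds[k]) for k in range(dim)]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    stall = [0] * pop_size
    gbest, gcost = min(zip(pop, cost), key=lambda t: t[1])
    for _ in range(max_gen):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [clip(pop[a][k] + F * (pop[b][k] - pop[c][k]), k)
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            fc = f(trial)
            if fc < cost[i]:
                pop[i], cost[i], stall[i] = trial, fc, 0
                if fc < gcost:
                    gbest, gcost = trial, fc
            else:
                stall[i] += 1
                if stall[i] >= stall_limit:
                    # Standard Cauchy deviate via inverse-CDF sampling.
                    pop[i] = [clip(pop[i][k]
                                   + math.tan(math.pi * (rng.random() - 0.5)), k)
                              for k in range(dim)]
                    cost[i], stall[i] = f(pop[i]), 0
    return gbest, gcost
```

The Cauchy jump is accepted even when it worsens the individual's cost: the point of the mechanism is to relocate stagnant individuals, while the best-so-far solution is tracked separately.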
An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm
NASA Astrophysics Data System (ADS)
Wu, Chenhan; Meng, Lingkui; Deng, Shijun
2005-07-01
The key technique of 3-D GIS is realizing fast, high-quality 3-D visualization, in which 3-D roaming systems based on landform play an important role. However, increasing the efficiency of the 3-D roaming engine while processing large amounts of landform data is a key problem in 3-D landform roaming systems, and improper handling of this problem results in tremendous consumption of system resources. The key to 3-D roaming system design is therefore how to realize high-speed processing of distributed landform DEM (Digital Elevation Model) data and high-speed distributed modulation of various 3-D landform data resources. In this paper we improved the basic ant algorithm and designed a modulation strategy for 3-D GIS landform resources based on the improved ant algorithm. By initially assigning hypothetical road weights ω_i, the information factors of the original algorithm change from Δτ_j to Δτ_j + ω_i, with the weights decided by the 3-D computational capacity of the various nodes in the network environment. Thus, during the initial phase of task assignment, increasing the resource information factors of nodes with high task-completion rates and decreasing those of nodes with low completion rates makes the load-completion rates approach the same value as quickly as possible; in the later stages of task assignment, the load-balancing ability of the system is further improved. Experimental results show that, by improving the ant algorithm, our system not only avoids many disadvantages of the traditional ant algorithm but also, like ants searching for food, effectively distributes the complicated landform computation to many computers for cooperative processing, and obtains satisfactory search results.
NASA Astrophysics Data System (ADS)
Oraei Zare, S.; Saghafian, B.; Shamsai, A.; Nazif, S.
2012-01-01
Urban development affects the quantity and quality of urban floods. Generally, flood management includes planning and management activities intended to reduce the harmful effects of floods on the people, environment and economy of a region. In recent years, a concept called Best Management Practices (BMPs) has been widely used for urban flood control from both the quality and quantity aspects. In this paper, three objective functions relating to the quality of runoff (including the BOD5 and TSS parameters), the quantity of runoff (the runoff volume produced at each sub-basin) and expenses (the construction and maintenance costs of BMPs) were employed in an optimization algorithm aimed at finding the optimal solution. The MOPSO and NSGAII optimization methods were coupled with the SWMM urban runoff simulation model. In the proposed structure for the NSGAII algorithm, a continuous structure and intermediate crossover were used because they perform better in improving the efficiency of the optimization model. To compare the performance of the two optimization algorithms, a number of statistical indicators were computed for the last generation of solutions. Comparing the Pareto solutions resulting from each of the optimization algorithms indicated that the NSGAII solutions were closer to optimal. Moreover, the standard deviation of the solutions in the last generation showed no significant difference in comparison with MOPSO.
Novel back propagation optimization by Cuckoo Search algorithm.
Yi, Jiao-hong; Xu, Wei-hong; Chen, Yuan-tao
2014-01-01
The traditional Back Propagation (BP) algorithm has some significant disadvantages, such as slow training, a tendency to fall into local minima, and sensitivity to the initial weights and bias. In order to overcome these shortcomings, an improved BP network optimized by Cuckoo Search (CS), called CSBP, is proposed in this paper. In CSBP, CS is used to simultaneously optimize the initial weights and bias of the BP network. Wine data are adopted to study the prediction performance of CSBP, and the proposed method is compared with the basic BP and the General Regression Neural Network (GRNN). Moreover, a parameter study of CSBP is conducted so that CSBP can be implemented in the best way. PMID:25028682
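A generic Lévy-flight cuckoo search can be sketched as follows; here it minimizes a stand-in objective, whereas CSBP would evaluate each nest as a candidate set of initial BP weights and biases (the step size and abandonment fraction are conventional choices, not taken from the paper):

```python
import math
import random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-stable step.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, alpha=0.05,
                  lo=-5.0, hi=5.0, max_iter=500, seed=3):
    rng = random.Random(seed)
    nests = [[rng.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_nests)]
    cost = [f(x) for x in nests]
    for _ in range(max_iter):
        # A cuckoo lays an egg a Levy flight away from a random nest; it
        # replaces a randomly chosen nest only if it is better.
        i = rng.randrange(n_nests)
        new = [min(max(v + alpha * levy_step(rng), lo), hi)
               for v in nests[i]]
        fn = f(new)
        j = rng.randrange(n_nests)
        if fn < cost[j]:
            nests[j], cost[j] = new, fn
        # A fraction pa of the worst nests is abandoned and rebuilt at
        # random positions (the best nests always survive).
        order = sorted(range(n_nests), key=cost.__getitem__, reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
            cost[k] = f(nests[k])
    b = min(range(n_nests), key=cost.__getitem__)
    return nests[b], cost[b]
```

For CSBP, `f` would train-and-score a BP network whose initial parameters are decoded from the nest vector; the best nest then seeds the final gradient training.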
Optimized encoder design algorithm for joint compression and recognition
NASA Astrophysics Data System (ADS)
Nahm, Jin-Woo; Smith, Mark J. T.
1995-07-01
Sensor data, such as SAR and FLIR images, are commonly transmitted from aircraft or satellites to airborne or ground stations for target detection and recognition processing. ATR algorithms are typically run at remote locations because they are very complex computationally, and require powerful computer resources. Rarely is unlimited channel bandwidth available for transmission. Thus one must also contend with delay-cost-quality tradeoff issues, which are often addressed by compressing data prior to transmission. Overall performance is largely restricted by the computational power of the on-board processor, since this limits the complexity and quality of the compression, which in turn affects the speed of transmission. Given some fixed level of computational power available for compression and transmission on board the aircraft, a useful technological improvement would be to have some level of on-board detection/recognition capability so that immediate action could be taken as appropriate. Toward this end, we introduce a method of joint compression and recognition for potential implementation on sensor-equipped aircraft. The algorithm is formulated to provide a level of immediate classification as a by-product of the compression, which in turn would provide the pilot with potential target information instantly.
Primary chromatic aberration elimination via optimization work with genetic algorithm
NASA Astrophysics Data System (ADS)
Wu, Bo-Wen; Liu, Tung-Kuan; Fang, Yi-Chin; Chou, Jyh-Horng; Tsai, Hsien-Lin; Chang, En-Hao
2008-09-01
Chromatic aberration plays a significant role in modern optical systems, especially digitalized and smart optical systems. Much effort has been devoted to eliminating specific chromatic aberrations in order to meet the demand for advanced digitalized optical products. Fundamentally, the elimination of axial chromatic and lateral color aberration in an optical lens and system depends on the selection of optical glass. According to reports from glass companies worldwide, more than three hundred newly developed optical glasses are available on the market. However, due to the complexity of a practical optical system, optical designers have so far had difficulty finding solutions that eliminate small axial and lateral chromatic aberration except by the Damped Least Squares (DLS) method, which is limited in that it often fails to find a better optical system configuration. In the present research, genetic algorithms replace traditional DLS to eliminate axial and lateral chromatic aberration, combining the theory of geometric optics for Tessar-type lenses with a technique involving binary/real encoding, multiple dynamic crossover, and random gene mutation to find a much better configuration of optical glasses. By implementing the algorithms outlined in this paper, satisfactory results are achieved in eliminating axial and lateral color aberration.
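The real-encoded GA loop described above (crossover plus random gene mutation driving a merit function toward zero) can be sketched generically. Everything here is illustrative: the quadratic merit function merely stands in for the ray-traced chromatic aberration of a Tessar-type lens, and the arithmetic crossover is just one of the operators the abstract groups under "Multiple Dynamic Crossover".

```python
import random

random.seed(1)

# Hypothetical merit function standing in for axial/lateral chromatic
# aberration; the paper evaluates a real Tessar-type lens model instead.
def aberration(genes):
    return sum((g - 0.5) ** 2 for g in genes)

def crossover(a, b):
    """Arithmetic (real-encoded) crossover: a random convex combination."""
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(genes, rate=0.1):
    """Random gene mutation: perturb each gene with probability `rate`."""
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genes]

def ga(n_genes=6, pop_size=30, gens=100):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=aberration)
        elite = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children                # elites survive unchanged
    return min(pop, key=aberration)

best = ga()
```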
Extremal polynomials and methods of optimization of numerical algorithms
NASA Astrophysics Data System (ADS)
Lebedev, V. I.
2004-10-01
Chebyshëv-Markov-Bernstein-Szegö polynomials $C_n(x)$, extremal on $[-1,1]$ with weight functions $w(x)=(1+x)^\alpha(1-x)^\beta/\sqrt{S_l(x)}$, where $\alpha,\beta \in \{0,\tfrac12\}$ and $S_l(x)=\prod_{k=1}^m \bigl(1-c_k T_{l_k}(x)\bigr)>0$, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Radau types are obtained for integrals with weight $p(x)=w^2(x)(1-x^2)^{-1/2}$. The parameters of optimal Chebyshëv iterative methods that reduce the error optimally relative to an initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method, iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods for the solution of equations containing compact operators are also studied.
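The "optimal Chebyshëv iterative methods" mentioned above refer to iterations whose step parameters come from Chebyshev polynomials on the operator's spectral interval. As a concrete (and standard, textbook-form) instance rather than the paper's weighted variant, here is the classical Chebyshev semi-iteration for a symmetric positive definite system with known eigenvalue bounds; the small 2x2 system at the end is a made-up example.

```python
import numpy as np

def chebyshev_solve(A, b, lmin, lmax, iters=60):
    """Chebyshev semi-iteration for SPD A with spectrum inside [lmin, lmax].
    The recurrence coefficients are the classical optimal Chebyshev choice."""
    theta = (lmax + lmin) / 2.0       # center of the spectral interval
    delta = (lmax - lmin) / 2.0       # half-width of the spectral interval
    sigma = theta / delta
    rho = 1.0 / sigma
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        # Three-term Chebyshev recurrence for the search direction
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Example: a small SPD system whose eigenvalues lie within [2, 5]
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = chebyshev_solve(A, b, 2.0, 5.0)
```

Unlike conjugate gradients, this iteration needs no inner products, which is why optimal parameter choices of this kind matter for parallel and filtered methods.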
A Novel Hybrid Crossover based Artificial Bee Colony Algorithm for Optimization Problem
NASA Astrophysics Data System (ADS)
Kumar, Sandeep; Kumar Sharma, Vivek; Kumari, Rajani
2013-11-01
The artificial bee colony (ABC) algorithm has proved its importance in solving a number of problems, including engineering optimization problems. ABC is one of the most popular and youngest members of the family of population-based, nature-inspired meta-heuristic swarm intelligence methods, and has demonstrated superiority over several other nature-inspired algorithms (NIA) when applied to both benchmark functions and real-world problems. The performance of the ABC search process depends on a random value that tries to balance the exploration and exploitation phases; to increase performance, the exploration of the search space must be balanced against the exploitation of optimal solutions. This paper outlines a new hybrid of the ABC algorithm with the Genetic Algorithm (GA). The proposed method integrates the crossover operation from GA into the original ABC algorithm and is named Crossover-based ABC (CbABC). CbABC strengthens the exploitation phase of ABC, while crossover enhances the exploration of the search space. CbABC is tested on four standard benchmark functions and a popular continuous optimization problem.
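A minimal sketch of the CbABC idea, hedged throughout: the sphere function stands in for the paper's benchmark suite, the colony sizes and limits are arbitrary, and the exact placement of the crossover step within the ABC cycle is an assumption on my part (the paper defines its own integration).

```python
import random

random.seed(2)

def sphere(x):
    """Benchmark objective (stand-in for the paper's test functions)."""
    return sum(v * v for v in x)

def neighbor(x, partner):
    """Standard ABC move: x'_j = x_j + phi * (x_j - partner_j) on one dimension."""
    j = random.randrange(len(x))
    phi = random.uniform(-1, 1)
    y = list(x)
    y[j] = x[j] + phi * (x[j] - partner[j])
    return y

def crossover(a, b):
    """GA-style uniform crossover injected into ABC (the CbABC ingredient)."""
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def cbabc(dim=5, n_food=20, iters=300, limit=30):
    food = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_food)]
    trial = [0] * n_food
    best = min(food, key=sphere)
    for _ in range(iters):
        for i in range(n_food):
            partner = food[random.randrange(n_food)]
            # Crossover first, then the usual ABC neighbor search; keep if better
            cand = neighbor(crossover(food[i], partner), partner)
            if sphere(cand) < sphere(food[i]):
                food[i], trial[i] = cand, 0
            else:
                trial[i] += 1
            # Scout phase: abandon exhausted food sources
            if trial[i] > limit:
                food[i] = [random.uniform(-5, 5) for _ in range(dim)]
                trial[i] = 0
        cur = min(food, key=sphere)
        if sphere(cur) < sphere(best):
            best = list(cur)
    return best

best = cbabc()
```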
CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.
2011-01-01
As neuroimaging algorithms and technology continue to grow in complexity and image resolution faster than CPU performance, data-parallel computing methods will become increasingly important. The high-performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms on the CUDA architecture. For compute-bound algorithms, register usage is reduced through variable reuse via shared memory, and data throughput is increased through heavier thread workloads and by maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved by reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
NASA Astrophysics Data System (ADS)
Xu, Shiyu; Zhang, Zhenxi; Chen, Ying
2014-03-01
Statistical iterative reconstruction is particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, because multiple iterations are needed for convergence and each iteration involves forward/back-projections through a complex geometric system model. Optimization transfer (OT) is a general framework that converts a high-dimensional optimization into parallel 1-D updates. OT-based algorithms provide monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimate of the spectrum of the OT convergence-rate matrix, we propose a successively-increasing-factor-scaled optimization transfer algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method, the separable parabolic surrogate with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations, while each iteration retains a computational cost similar to PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves 40% of the total computing time. In general, the successively-increasing-factor-scaled OT shows strong potential as an iterative method offering parallel computation and monotonic, global convergence at a fast rate.
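To make the optimization-transfer idea concrete, here is a minimal separable-surrogate sketch on a plain least-squares problem, not the paper's statistical DBT likelihood: a De Pierro-style pre-computed curvature d majorizes the coupled Hessian, so the update is a fully parallel 1-D step per component and is provably monotone. The random system below is a made-up stand-in for the tomographic forward projector.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear system standing in for the tomographic forward model
A = rng.random((50, 10))
x_true = rng.random(10)
b = A @ x_true

def f(x):
    """Quadratic data-fit term (the paper minimizes a statistical likelihood)."""
    return 0.5 * np.sum((A @ x - b) ** 2)

# Pre-computed separable curvature: d_j = sum_i |a_ij| * (sum_k |a_ik|).
# diag(d) majorizes A^T A, which is what makes the surrogate valid.
d = np.abs(A).T @ np.abs(A).sum(axis=1)

def sps_step(x):
    """One optimization-transfer step: parallel 1-D updates, never increases f."""
    grad = A.T @ (A @ x - b)
    return x - grad / d

x = np.zeros(10)
vals = [f(x)]
for _ in range(200):
    x = sps_step(x)
    vals.append(f(x))
```

The monotone but slow tail of `vals` is exactly the behavior the paper's step-size scaling is designed to accelerate.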
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems, multi-modal search spaces with a large number of genes and convoluted Pareto fronts, require a large number of function evaluations for GA convergence, but always converge.
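The core test inside any multi-objective GA of this kind is Pareto dominance: a candidate survives selection only if no other candidate is at least as good in every objective and strictly better in one. A minimal sketch (minimization convention; the sample points are arbitrary):

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(pts)   # (3, 4) and (5, 5) are dominated
```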
Preconditioning 2D Integer Data for Fast Convex Hull Computations
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
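The flavor of such preconditioning can be illustrated with a simpler (not the paper's) reduction: for bounded integer coordinates, keeping only the lowest and highest point in each x-column discards points that provably lie inside the hull of the kept ones, after which any standard hull routine (here Andrew's monotone chain) runs on far fewer points.

```python
def precondition(points):
    """Keep only the min-y and max-y point per x column; any point strictly
    between them lies on their segment and so cannot be a hull vertex."""
    cols = {}
    for x, y in points:
        lo, hi = cols.get(x, (y, y))
        cols[x] = (min(lo, y), max(hi, y))
    out = []
    for x, (lo, hi) in cols.items():
        out.append((x, lo))
        if hi != lo:
            out.append((x, hi))
    return out

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; O(n log n) due to the sort."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

pts = [(x, y) for x in range(5) for y in range(5)]   # 25-point integer grid
reduced = precondition(pts)                          # 10 points, same hull
```

Note the paper's method is stronger: it avoids explicit sorting entirely and emits a simple polygonal chain, enabling a fully O(n) pipeline.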
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme is developed that exploits shared memory on the GPU instead of global memory, further increasing efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
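For reference, the underlying per-pixel operation is simple and independent across pixels, which is what makes the CUDA mapping of one thread per pixel natural. A NumPy sketch of the 4-neighbour variant (the paper does not specify the kernel; the strength parameter and edge padding are my choices):

```python
import numpy as np

def laplacian_sharpen(img, strength=1.0):
    """Sharpen by subtracting the 4-neighbour Laplacian: out = img - s * lap.
    Each output pixel depends only on a fixed 3x3 neighbourhood, so every
    pixel can be computed independently (one CUDA thread per pixel)."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] -
           4.0 * padded[1:-1, 1:-1])
    return np.clip(img - strength * lap, 0, 255)

flat = np.full((4, 4), 100)         # uniform region: sharpening is a no-op
step = np.zeros((4, 4))
step[:, 2:] = 200.0                 # vertical edge: contrast gets boosted
```

In the CUDA versions discussed above, the 3x3 neighbourhood reads are exactly what the shared-memory tiling scheme caches to avoid redundant global-memory traffic.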
Chen, Zaigao; Wang, Jianguo (Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024); Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie
2013-11-15
An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, serves as the fitness function, and float-encoded genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow-wave structure in a relativistic backward wave oscillator (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that the optimal parameters of the non-uniform slow-wave structure in the RBWO can be obtained, and the output microwave power increases by 52.6% after the device is optimized.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results show that the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
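Step one of the approach, building the shortest-distance tree, is standard Dijkstra, and the multisource extension can be realized by seeding the priority queue with every source at distance zero (equivalent to adding a virtual super-source with zero-length links). A sketch on a small hypothetical looped network; the node names and pipe lengths are invented for illustration:

```python
import heapq

def shortest_distance_tree(graph, sources):
    """Dijkstra from a set of source nodes. Returns (dist, parent); the
    parent pointers define the shortest-distance tree, and every network
    edge absent from that tree is a chord (given the minimum pipe size
    in step two of the approach)."""
    dist = {s: 0.0 for s in sources}
    parent = {s: None for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Hypothetical looped network: adjacency lists of (neighbour, pipe length)
net = {
    'R': [('A', 100), ('B', 150)],
    'A': [('R', 100), ('B', 60), ('C', 120)],
    'B': [('R', 150), ('A', 60), ('C', 80)],
    'C': [('A', 120), ('B', 80)],
}
dist, parent = shortest_distance_tree(net, ['R'])
```

Here the tree edges are R-A, R-B, and A-C, leaving A-B and B-C as chords; in the full method the tree pipes are then sized by NLP and the result seeds the DE search over the looped network.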