Science.gov

Sample records for annealing optimization algorithm

  1. Simulated annealing algorithm for optimal capital growth

    NASA Astrophysics Data System (ADS)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
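
    The record above applies simulated annealing to maximize the expected logarithm of portfolio growth. As a rough illustration only (not the authors' code), the sketch below anneals portfolio weights over a small set of invented gross-return scenarios; the scenario matrix, step size and cooling rate are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gross-return scenarios for 3 assets (rows = scenarios).
returns = 1.0 + rng.normal(0.01, 0.05, size=(500, 3))

def expected_log_growth(w):
    """Expected log utility of wealth growth for portfolio weights w (sum to 1)."""
    return np.mean(np.log(returns @ w))

def neighbor(w, step=0.05):
    """Perturb the weights and re-project onto the probability simplex."""
    v = np.clip(w + rng.normal(0.0, step, size=w.size), 1e-6, None)
    return v / v.sum()

# Plain simulated annealing with geometric cooling (illustrative settings).
w = np.full(3, 1.0 / 3.0)
best_w, best_f = w.copy(), expected_log_growth(w)
T = 1.0
for _ in range(5000):
    cand = neighbor(w)
    delta = expected_log_growth(cand) - expected_log_growth(w)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if delta > 0 or rng.random() < np.exp(delta / T):
        w = cand
        fw = expected_log_growth(w)
        if fw > best_f:
            best_w, best_f = w.copy(), fw
    T *= 0.999

print("weights:", np.round(best_w, 3), "expected log growth:", round(best_f, 5))
```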

  2. A deterministic annealing algorithm for a combinatorial optimization problem using replicator equations

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Kazuo; Nishiyama, Takehiro; Tsujita, Katsuyoshi

    2001-02-01

    We have proposed an optimization method for a combinatorial optimization problem using replicator equations. To improve the solution further, a deterministic annealing algorithm may be applied. During the annealing process, bifurcations of equilibrium solutions will occur and affect the performance of the deterministic annealing algorithm. In this paper, the bifurcation structure of the proposed model is analyzed in detail. It is shown that only pitchfork bifurcations occur in the annealing process, and the solution obtained by the annealing is the branch uniquely connected with the uniform solution. It is also shown experimentally that in many cases, this solution corresponds to a good approximate solution of the optimization problem. Based on the results, a deterministic annealing algorithm is proposed and applied to the quadratic assignment problem to verify its performance.

  3. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    SciTech Connect

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-12-15

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms better suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including the simulated-annealing-based system developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.

  4. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).

  5. Humanoid robot gait optimization: Stretched simulated annealing and genetic algorithm a comparative study

    NASA Astrophysics Data System (ADS)

    Pereira, Ana I.; Lima, José; Costa, Paulo

    2013-10-01

    There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration, and step length, among others. This paper presents a comparison between two optimization methods, Stretched Simulated Annealing and the Genetic Algorithm, run in an accurate and stable simulation model. Final results show the comparative study and demonstrate that optimization is a valid gait planning technique.

  6. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
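
    The two recursion parameters described above are tuned by minimizing a regression-based objective with simulated annealing. A minimal sketch of that workflow, assuming a made-up quadratic stand-in for the regression objective and using SciPy's general-purpose dual_annealing routine (a generalized simulated annealing, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import dual_annealing

def objective(params):
    """Stand-in for the regression-model score of noise-reduction quality;
    the real objective would map the two recursion parameters to a speech
    quality measure.  The coefficients here are invented."""
    alpha, beta = params
    return (alpha - 0.98) ** 2 + 5.0 * (beta - 0.8) ** 2

# Both recursion parameters are smoothing factors assumed to lie in (0, 1).
bounds = [(0.0, 1.0), (0.0, 1.0)]
result = dual_annealing(objective, bounds, seed=0, maxiter=200)
print("optimal parameters:", np.round(result.x, 3), "score:", round(result.fun, 6))
```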

  7. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm.

    PubMed

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-04-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements, based on the concept of coding metamaterials. To obtain uniform backward scattering under normal incidence, the Simulated Annealing algorithm is used to calculate the optimal layout. The proposed method provides an efficient way to design diffusion metasurfaces with a simple structure, which has been verified by both simulations and measurements.

  9. Solution of the optimal plant location and sizing problem using simulated annealing and genetic algorithms

    SciTech Connect

    Rao, R.; Buescher, K.L.; Hanagandi, V.

    1995-12-31

    In the optimal plant location and sizing problem it is desired to optimize a cost function involving plant sizes, locations, and production schedules in the face of supply-demand and plant capacity constraints. We will use simulated annealing (SA) and a genetic algorithm (GA) to solve this problem. We will compare these techniques with respect to computational expense, constraint handling capabilities, and the quality of the solution obtained in general. Simulated annealing is a combinatorial stochastic optimization technique which has been shown to be effective in obtaining fast suboptimal solutions for computationally hard problems. The technique is especially attractive since solutions are obtained in polynomial time for problems where an exhaustive search for the global optimum would require exponential time. We propose a synergy between the cluster analysis technique, popular in classical stochastic global optimization, and the GA to accomplish global optimization. This synergy minimizes redundant searches around local optima and enhances the capability of the GA to explore new areas in the search space.

  10. [The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].

    PubMed

    Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa

    2003-11-01

    With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization technology because it is inexpensive and easy to deploy. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of artificial neural network (ANN) modeling, the simulated annealing (SA) algorithm was employed to optimize the boiler combustion to achieve a low NOx emission concentration, and the corresponding combustion scheme was obtained. Two sets of SA parameters were adopted to find a better SA scheme; the results show that the parameters T0 = 50 K and alpha = 0.6 lead to a better optimization process. This work lays the foundation for on-line control of low-NOx boiler combustion.

  11. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
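
    The conventional SA loop described in this abstract maps almost line-for-line onto code. A generic sketch (illustrative parameter choices; this is not NASA's RBSA implementation), including the shrinking candidate region and falling temperature mentioned above:

```python
import math
import random

def simulated_annealing(objective, lower, upper, iters=10000,
                        t0=1.0, cooling=0.999, seed=0):
    """Minimize `objective` over the box [lower, upper] with plain SA."""
    rng = random.Random(seed)
    dim = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
    fx = objective(x)
    best_x, best_f = list(x), fx
    T, radius = t0, 1.0   # radius = fraction of the box used as the neighborhood
    for _ in range(iters):
        # Propose a candidate inside the (shrinking) neighborhood of x.
        cand = [min(upper[i], max(lower[i],
                x[i] + rng.uniform(-1.0, 1.0) * radius * (upper[i] - lower[i])))
                for i in range(dim)]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with prob exp(-dE/T).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        T *= cooling        # lower the annealing temperature
        radius *= 0.9995    # shrink the region from which candidates are drawn
    return best_x, best_f

# Example: Rastrigin-style multimodal function with its global minimum at the origin.
f = lambda v: sum(vi * vi - 10.0 * math.cos(2.0 * math.pi * vi) + 10.0 for vi in v)
print(simulated_annealing(f, [-5.12, -5.12], [5.12, 5.12]))
```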

  12. Handling time-expensive global optimization problems through the surrogate-enhanced evolutionary annealing-simplex algorithm

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos

    2015-04-01

    In water resources optimization problems, calculating the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than the problem's complexity requires. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm that couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms, in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems involving time-consuming simulations.

  13. Structural optimization and segregation behavior of quaternary alloy nanoparticles based on simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Xin-Ze, Lu; Gui-Fang, Shao; Liang-You, Xu; Tun-Dong, Liu; Yu-Hua, Wen

    2016-05-01

    Alloy nanoparticles exhibit higher catalytic activity than monometallic nanoparticles, and their stable structures are of importance to their applications. We employ the simulated annealing algorithm to systematically explore the stable structure and segregation behavior of tetrahexahedral Pt-Pd-Cu-Au quaternary alloy nanoparticles. Three alloy nanoparticles consisting of 443 atoms, 1417 atoms, and 3285 atoms are considered and compared. The preferred positions of atoms in the nanoparticles are analyzed. The simulation results reveal that Cu and Au atoms tend to occupy the surface, Pt atoms preferentially occupy the middle layers, and Pd atoms tend to segregate to the inner layers. Furthermore, Au atoms present stronger surface segregation than Cu ones. This study provides a fundamental understanding of the structural features and segregation phenomena of multi-metallic nanoparticles. Project supported by the National Natural Science Foundation of China (Grant Nos. 51271156, 11474234, and 61403318) and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2013J01255 and 2013J06002).

  14. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases an automatic fitting method that combines geostatistical principles with optimization techniques is used in order to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty into the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect the model quality, a program has been developed in MATLAB that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method has been used, and the best model is presented to the user as the output. To check the capability of the proposed objective function and procedure, 3 case studies are presented.
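
    As a toy illustration of automatic variogram fitting (a sketch with invented lag data and pair counts, not the MATLAB program described above), one can minimize a weighted least-squares misfit between an experimental variogram and a spherical model with an annealing-type optimizer:

```python
import numpy as np
from scipy.optimize import dual_annealing

def spherical(h, nugget, sill, rang):
    """Spherical variogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h < rang, g, nugget + sill)

# Hypothetical experimental variogram: lag distances, gamma values, pair counts.
lags   = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
gammas = np.array([0.31, 0.57, 0.75, 0.88, 0.95, 1.01, 0.99, 1.02])
npairs = np.array([120, 240, 300, 280, 260, 220, 180, 150], dtype=float)

def wls_misfit(params):
    """Weighted least squares, weighting each lag by its number of pairs."""
    nugget, sill, rang = params
    resid = gammas - spherical(lags, nugget, sill, rang)
    return np.sum(npairs * resid ** 2)

bounds = [(0.0, 0.5), (0.1, 2.0), (10.0, 200.0)]
fit = dual_annealing(wls_misfit, bounds, seed=1)
print("nugget, sill, range:", np.round(fit.x, 3))
```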

  15. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    NASA Astrophysics Data System (ADS)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.

  16. Design of optimal pump-and-treat strategies for contaminated groundwater remediation using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Kuo, Chin-Hwa; Michel, Anthony N.; Gray, William G.

    The placement of pumps and the selection of pumping rates are the most important issues in designing contaminated groundwater remediation systems using a pump-and-treat strategy. Three nonlinear optimization formulations are proposed to address these problems. The first formulation considers hydraulic constraints and reduces the plume concentration to a specified regulatory standard value within a given planning time while minimizing capital cost. The second formulation minimizes residual contaminant in a fixed period under hydraulic constraints only. The third formulation is similar to the second; however, in this formulation the number of pumps is prespecified using the results from the first formulation. The inclusion of well installation costs in the first formulation results in a nonsmooth objective function. For such problems, only local optimum solutions can be expected from conventional nonlinear optimization techniques. In the present paper, the simulated annealing algorithm is used to overcome these difficulties. Specific simulation studies indicate that the method advanced herein is promising and involves acceptable computation times.

  17. A theoretical comparison of evolutionary algorithms and simulated annealing

    SciTech Connect

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  18. a Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor Tls Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and compare their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experimental environment was simulated from a room located on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA does not guarantee an optimal solution within limited iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and its fewer parameters to tune.

  19. Kriging-approximation simulated annealing algorithm for groundwater modeling

    NASA Astrophysics Data System (ADS)

    Shen, C. H.

    2015-12-01

    Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running complex groundwater models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding given data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated with simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
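
    A rough sketch of the idea (not the author's code): fit a Kriging (Gaussian-process) surrogate to a limited number of true model runs, let the SA loop query the cheap surrogate, and re-evaluate the expensive model only for the final candidate. The expensive_model function below is an invented stand-in for a groundwater simulation, and all tuning values are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for a slow groundwater simulation returning a misfit value."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

# 1) Sample the expensive model a limited number of times.
X_train = rng.uniform(0, 1, size=(30, 2))
y_train = np.array([expensive_model(x) for x in X_train])

# 2) Fit a Kriging (Gaussian-process) surrogate to those samples.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(X_train, y_train)
surrogate = lambda x: gp.predict(x.reshape(1, -1))[0]

# 3) Run simulated annealing against the cheap surrogate.
x = rng.uniform(0, 1, size=2)
fx, T = surrogate(x), 1.0
for _ in range(3000):
    cand = np.clip(x + rng.normal(0, 0.05, size=2), 0, 1)
    fc = surrogate(cand)
    if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
        x, fx = cand, fc
    T *= 0.998

# 4) Verify the final candidate on the true (expensive) model.
print("candidate:", np.round(x, 3), "true misfit:", round(expensive_model(x), 4))
```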

  20. An improved simulated annealing algorithm for standard cell placement

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1988-01-01

    Simulated annealing is a general purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements that appear to come from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.

  1. Optimal Groundwater Management: 1. Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Dougherty, David E.; Marryott, Robert A.

    1991-10-01

    Simulated annealing is introduced and applied to the optimization of groundwater management problems cast in combinatorial form. This heuristic, probabilistic optimization method seeks minima in analogy with the annealing of solids and is effective on large-scale problems. No continuity requirements are imposed on objective (cost) functions. Constraints may be added to the cost function via penalties, imposed by designation of the solution domain, or imbedded in submodels (e.g., mass balance in aquifer flow simulators) used to evaluate costs. The location of global optima may be theoretically guaranteed, but computational limitations lead to searches for nearly optimal solutions in practice. Like other optimization methods, most of the computational effort is expended in flow and transport simulators. Practical algorithmic guidance that leads to enormous computational savings and sometimes makes simulated annealing competitive with gradient-type optimization methods is provided. The method is illustrated by example applications to idealized problems of groundwater flow and selection of remediation strategy, including optimization with multiple groundwater control technologies. They demonstrate the flexibility of the method and indicate its potential for solving groundwater management problems. The application of simulated annealing to water resources problems is new and its development is immature, so further performance improvements can be expected.

  2. Quantum Annealing for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Spedalieri, Federico M.

    2016-03-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that promise to solve certain combinatorial optimization problems of practical relevance faster than their classical analogues. The applicability of such devices for many theoretical and real-world optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which, in turn, requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here, we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby reduces the number of required couplers and removes the need for minor embedding, greatly reducing the number of required physical qubits. We argue the advantages of the proposed technique and illustrate its effectiveness. We conclude by discussing the experimental feasibility of the suggested method as well as its potential to appreciably reduce the resource requirements for implementing optimization problems on quantum annealers and its significance in the field of quantum computing.

  3. Selection of views to materialize using simulated annealing algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries can be answered efficiently. The goal is the minimization of the combined query evaluation and view maintenance costs. In this paper, we have designed algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and the cost of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve this problem. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach. The results show that our algorithm performs better. We implemented our algorithms, and a performance study of the algorithms shows that the proposed algorithm gives an optimal solution.
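
    The cost model sketched in this abstract (query-processing cost plus maintenance cost over a chosen set of materialized views) lends itself to a simple combinatorial SA over a 0/1 selection vector. A hedged illustration with invented cost tables rather than a real workload:

```python
import math
import random

rng = random.Random(0)
n_views = 12

# Invented costs: maintaining view i, answering each query with / without view i.
maintain = [rng.uniform(1, 5) for _ in range(n_views)]
base_query_cost = [rng.uniform(20, 60) for _ in range(8)]
saving = [[rng.uniform(0, 10) for _ in range(n_views)] for _ in range(8)]

def total_cost(selected):
    """Query evaluation cost plus maintenance cost for the selected views."""
    q = sum(max(c - sum(s for j, s in enumerate(row) if selected[j]), 1.0)
            for c, row in zip(base_query_cost, saving))
    m = sum(maintain[j] for j in range(n_views) if selected[j])
    return q + m

sel = [0] * n_views
f, T = total_cost(sel), 10.0
for _ in range(20000):
    j = rng.randrange(n_views)
    cand = list(sel)
    cand[j] ^= 1                      # flip one view in / out of the materialized set
    fc = total_cost(cand)
    if fc < f or rng.random() < math.exp(-(fc - f) / T):
        sel, f = cand, fc
    T *= 0.9997

print("materialize views:", [j for j in range(n_views) if sel[j]], "cost:", round(f, 2))
```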

  4. Quantum Annealing for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Spedalieri, Federico

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that could potentially solve certain quadratic unconstrained binary optimization problems faster than their classical analogues. The applicability of such devices for many theoretical and practical optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which in turn requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby removes many of the obstacles associated with their implementation. We argue the advantages of the proposed technique and illustrate its effectiveness. We then conclude by discussing the experimental feasibility of the suggested method as well as its potential to boost the encodability of other optimization problems.

  5. Remediation tradeoffs addressed with simulated annealing optimization

    SciTech Connect

    Rogers, L. L., LLNL

    1998-02-01

    Escalation of groundwater remediation costs has encouraged both advances in optimization techniques to balance remediation objectives and economics and development of innovative technologies to expedite source region clean-ups. We present an optimization application building on a pump-and-treat model, yet assuming a prior removal of different portions of the source area to address the evolving management issue of more aggressive source remediation. Separate economic estimates of in-situ thermal remediation are combined with the economic estimates of the subsequent optimal pump-and-treat remediation to observe tradeoff relationships of cost vs. highest remaining contamination levels (hot spot). The simulated annealing algorithm calls the flow and transport model to evaluate the success of a proposed remediation scenario at a U.S.A. Superfund site contaminated with volatile organic compounds (VOCs).

  6. Combined simulated annealing algorithm for the discrete facility location problem.

    PubMed

    Qin, Jin; Ni, Ling-Lin; Shi, Feng

    2012-01-01

    The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm for the DFLP and offers a reasonable new alternative solution method for it.

  7. Temperature Scaling Law for Quantum Annealing Optimizers

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-01

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.

  8. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  9. Hybridisations Of Simulated Annealing And Modified Simplex Algorithms On A Path Of Steepest Ascent With Multi-Response For Optimal Parameter Settings Of ACO

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.

    2009-10-01

    Many enterprises face extreme pressures on, for instance, costs, quality, sales and services. Moreover, technology has always been intertwined with our demands, so most manufacturers or assembly lines adopt it and inevitably end up with more complicated processes. At this stage, product and service improvement must be sustained to keep ahead of competitors. Simulation-based process optimisation is therefore an alternative way of solving huge and complex problems. Metaheuristics are sequential processes that perform exploration and exploitation in the solution space, aiming to efficiently find near-optimal solutions with natural intelligence as a source of inspiration. One of the most well-known metaheuristics is Ant Colony Optimisation, ACO. This paper aims to ease the complexity of using ACO in terms of its parameters: the number of iterations, ants and moves. Proper levels of these parameters are analysed on eight noisy non-linear continuous response surfaces. Considering the solution space in a specified region, some surfaces contain a global optimum and multiple local optima and some have a curved ridge. ACO parameters are determined through hybridisations of the Modified Simplex and Simulated Annealing methods on the path of Steepest Ascent, SAM. SAM was introduced to recommend preferable levels of ACO parameters via statistically significant regression analysis and Taguchi's signal-to-noise ratio. Other performance measures include minimax and mean squared error. A series of computational experiments using each algorithm were conducted. Experimental results were analysed in terms of mean, design points and best-so-far solutions. It was found that results obtained from a hybridisation with the stochastic procedures of the Simulated Annealing method were better than those using the Modified Simplex algorithm. However, the average execution time of experimental runs and number of design points using hybridisations were

  10. Simulated annealing algorithm applied in adaptive near field beam shaping

    NASA Astrophysics Data System (ADS)

    Yu, Zhan; Ma, Hao-tong; Du, Shao-jun

    2010-11-01

    Laser beam shaping is required in many applications to improve the efficiency of laser systems. In this paper, near-field beam shaping based on the combination of a simulated annealing algorithm and Zernike polynomials is demonstrated. Since the phase distribution can be represented by an expansion in Zernike polynomials, the problem of searching for an appropriate phase distribution can be transformed into the problem of optimizing a vector of Zernike coefficients. The feasibility of this method is validated theoretically by converting a Gaussian beam into a square quasi-flattop beam in the near field. Finally, a closed-loop control system consisting of a phase-only liquid crystal spatial light modulator and the simulated annealing algorithm is used to demonstrate the validity of the technique. The experimental results show that the system can generate laser beams with the desired intensity distributions.

  11. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
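
    A compact sketch of the list-based cooling idea as described here (illustrative only, not the authors' reference implementation): keep a list of temperatures, always use the current maximum for the Metropolis test, and, whenever a worse tour is accepted, replace that maximum with a temperature implied by the accepted deterioration. The city coordinates and list size are made up.

```python
import math
import random

rng = random.Random(0)

# Small random TSP instance (coordinates invented for illustration).
cities = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(25)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def neighbor(tour):
    """2-opt style move: reverse a randomly chosen segment of the tour."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

tour = list(range(len(cities)))
f = tour_length(tour)

# List-based cooling: the Metropolis test always uses the current maximum
# temperature; an accepted uphill move replaces that maximum with -delta/ln(r).
temps = sorted((rng.uniform(50.0, 150.0) for _ in range(40)), reverse=True)
for _ in range(30000):
    T = temps[0]                      # current maximum temperature in the list
    cand = neighbor(tour)
    delta = tour_length(cand) - f
    r = max(rng.random(), 1e-12)      # guard against log(0) below
    if delta <= 0:
        tour, f = cand, f + delta
    elif r < math.exp(-delta / T):
        tour, f = cand, f + delta
        temps[0] = -delta / math.log(r)   # adapt the list with the implied temperature
        temps.sort(reverse=True)

print("tour length:", round(f, 1))
```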

  13. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of trapping in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator have jumping ability and global convergence, and local search has the ability to speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the ant population diversity from time to time, and the local search is used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.

  14. First application of quantum annealing to IMRT beamlet intensity optimization

    NASA Astrophysics Data System (ADS)

    Nazareth, Daryl P.; Spaans, Jason D.

    2015-05-01

    Optimization methods are critical to radiation therapy. A new technology, quantum annealing (QA), employs novel hardware and software techniques to address various discrete optimization problems in many fields. We report on the first application of quantum annealing to the process of beamlet intensity optimization for IMRT. We apply recently-developed hardware which natively exploits quantum mechanical effects for improved optimization. The new algorithm, called QA, is most similar to simulated annealing, but relies on natural processes to directly minimize a system’s free energy. A simple quantum system is slowly evolved into a classical system representing the objective function. If the evolution is sufficiently slow, there are probabilistic guarantees that a global minimum will be located. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitations. The beamlet dose matrices were computed using CERR and an objective function was defined based on typical clinical constraints, including dose-volume objectives, which result in a complex non-convex search space. The objective function was discretized and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the simulated annealing (SA) method. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. In this first application of hardware-enabled QA to IMRT optimization, its performance is comparable to Tabu search, but less effective than the SA in terms of final objective function values. However, its speed was 3-4 times faster than the other two methods

  16. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  19. Optimal placement of excitations and sensors by simulated annealing

    NASA Technical Reports Server (NTRS)

    Salama, Moktar; Bruno, R.; Chen, G.-S.; Garba, J.

    1989-01-01

    The optimal placement of discrete actuators and sensors is posed as a combinatorial optimization problem. Two examples for truss structures were used for illustration; the first dealt with the optimal placement of passive dampers along existing truss members, and the second dealt with the optimal placement of a combination of a set of actuators and a set of sensors. Except for the simplest problems, an exact solution by enumeration involves a very large number of function evaluations, and is therefore computationally intractable. By contrast, the simulated annealing heuristic involves far fewer evaluations and is best suited for the class of problems considered. As an optimization tool, the effectiveness of the algorithm is enhanced by introducing a number of rules that incorporate knowledge about the physical behavior of the problem. Some of the suggested rules are necessarily problem dependent.

  20. Application of Chaotic Simulated Annealing in the Optimization of Task Allocation in a Multiprocessing System

    NASA Astrophysics Data System (ADS)

    Cook, Darcy; Ferens, Ken; Kinsner, Witold

    Simulated Annealing (SA) has been shown to be a successful technique in optimization problems. It has been applied to both continuous function optimization problems and combinatorial optimization problems. There has been some work on modifying the SA algorithm to apply properties of chaotic processes with the goal of reducing the time to converge to an optimal or a good solution. There are several variations of these chaotic simulated annealing (CSA) algorithms. In this paper a new variation of chaotic simulated annealing is proposed and applied to solving a combinatorial optimization problem in multiprocessor task allocation. The experiments show that the CSA algorithms reach a good solution faster than traditional SA algorithms in many cases because of a wider initial solution search.
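
    A toy sketch of the chaotic-SA idea (the task costs, processor count and parameters are assumptions, not the authors' problem instance): candidate moves in a task-to-processor allocation are driven by the logistic map x_{n+1} = 4x_n(1 - x_n) instead of a uniform random generator, while acceptance still follows the usual annealing rule.

```python
import math
import random

rng = random.Random(1)
n_tasks, n_procs = 20, 4
# Invented task costs; the objective is to balance total load across processors.
cost = [rng.uniform(1, 10) for _ in range(n_tasks)]

def makespan(assign):
    loads = [0.0] * n_procs
    for t, p in enumerate(assign):
        loads[p] += cost[t]
    return max(loads)

def logistic(x):
    """Fully chaotic logistic map used as a pseudo-random driver."""
    return 4.0 * x * (1.0 - x)

assign = [rng.randrange(n_procs) for _ in range(n_tasks)]
f, T, x = makespan(assign), 5.0, 0.37
for _ in range(20000):
    x = logistic(x)                     # chaotic sequence picks the task to move
    task = int(x * n_tasks) % n_tasks
    x = logistic(x)
    proc = int(x * n_procs) % n_procs   # ...and the processor it moves to
    cand = list(assign)
    cand[task] = proc
    fc = makespan(cand)
    if fc < f or rng.random() < math.exp(-(fc - f) / T):
        assign, f = cand, fc
    T *= 0.9995

print("balanced makespan:", round(f, 2))
```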

  1. Research on coal-mine gas monitoring system controlled by annealing simulating algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Mengran; Li, Zhenbi

    2007-12-01

    This paper introduces the principle and schematic diagram of a gas monitoring system based on an infrared method. A simulated annealing algorithm is adopted to find the global optimum solution, and the Metropolis criterion is used to carry out iterative combinatorial optimization with a decreasing control parameter, aiming at solving a large-scale combinatorial optimization problem. Experimental results obtained with the algorithm training scheme and workflow indicate that the simulated annealing algorithm applied to gas identification is better than the traditional linear local search method. It makes the algorithm iterate rapidly toward the optimum, so the quality of the solution is improved efficiently, the CPU time is shortened, and the gas identification rate is increased. For mines with a high risk of gas outbursts, advance forecasting of regional danger and disasters can be realized, and the reliability of coal-mine safety is improved.

  2. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. A random point is started upon, and the objective function is evaluated at that point. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: P(ΔE) = exp(−ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is
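
    For concreteness, a quick numeric check of the Metropolis acceptance rule quoted above, P(ΔE) = exp(−ΔE/T), with arbitrarily chosen temperatures:

```python
import math

def accept_probability(delta_e, T):
    """Metropolis criterion: always accept improvements, otherwise exp(-dE/T)."""
    return 1.0 if delta_e <= 0 else math.exp(-delta_e / T)

# The same uphill step is accepted far more often early (high T) than late (low T).
for T in (10.0, 1.0, 0.1):
    print(f"T={T:5.1f}  P(accept dE=1) = {accept_probability(1.0, T):.4f}")
```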

  4. Mean field annealing: a formalism for constructing GNC-like algorithms.

    PubMed

    Bilbro, G L; Snyder, W E; Garnier, S J; Gault, J W

    1992-01-01

    Optimization problems are approached using mean field annealing (MFA), which is a deterministic approximation to simulated annealing based on mean field theory and Peierls's inequality. The MFA mathematics are applied to three different objective function examples. In each case, MFA produces a minimization algorithm that is a type of graduated nonconvexity. When applied to the 'weak-membrane' objective, MFA results in an algorithm qualitatively identical to the published GNC algorithm. One of the examples, MFA applied to a piecewise-constant objective function, is then compared experimentally with the corresponding GNC weak-membrane algorithm. The mathematics of MFA are shown to provide a powerful and general tool for deriving optimization algorithms.

  5. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    SciTech Connect

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that the required CPU time becomes prohibitive. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
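
    To make the gap between the two schedules concrete, the short Python comparison below contrasts a classical logarithmic schedule with a square-root schedule. The exact functional forms (T_k = T0/log(k+2) and T_k = T0/sqrt(k+1)) are illustrative assumptions, and the full stochastic approximation annealing algorithm itself is not reproduced here.

```python
import math

def logarithmic_schedule(t0, k):
    # Classical guarantee-preserving schedule for standard SA: T_k = t0 / log(k + 2).
    # Using k + 2 keeps the denominator positive from iteration k = 0.
    return t0 / math.log(k + 2)

def square_root_schedule(t0, k):
    # Much faster decay of the kind the stochastic approximation annealing
    # framework is stated to tolerate: T_k = t0 / sqrt(k + 1).
    return t0 / math.sqrt(k + 1)

for k in (1, 10, 100, 1000, 10000):
    print(f"iter {k:>6}: log-schedule T = {logarithmic_schedule(10.0, k):7.3f}, "
          f"sqrt-schedule T = {square_root_schedule(10.0, k):7.3f}")
```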

  6. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first shuffle the IP core assignment of each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time. The core assignment is then accepted according to the principle of the simulated annealing algorithm, and the optimum solution is finally attained. We run the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, and the optimization rates are 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is state-of-the-art at present.

  7. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
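
    The perturb-and-accept loop used in spatial simulated annealing can be sketched in a few lines. The Python toy below optimizes a point pattern on the unit square against the MSSD criterion, with a linearly shrinking perturbation distance and an exponentially decaying acceptance temperature; the parameter names, schedules and prediction grid are illustrative assumptions and do not reflect the spsann defaults (which are implemented in R).

```python
import math
import random

def mssd(sample, grid):
    """Mean squared shortest distance from prediction grid nodes to the sample
    (one of the spatial-coverage criteria mentioned above)."""
    total = 0.0
    for gx, gy in grid:
        total += min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in sample)
    return total / len(grid)

def spatial_sa(n_points=20, n_iter=2000, t0=0.05, d0=0.5):
    """Toy spatial simulated annealing on the unit square (illustrative only)."""
    grid = [(i / 19, j / 19) for i in range(20) for j in range(20)]
    sample = [(random.random(), random.random()) for _ in range(n_points)]
    energy = mssd(sample, grid)
    for k in range(n_iter):
        frac = k / n_iter
        d_max = d0 * (1.0 - frac)           # perturbation distance shrinks linearly
        temp = t0 * math.exp(-5.0 * frac)   # acceptance temperature decays exponentially
        i = random.randrange(n_points)
        x, y = sample[i]
        cand = (min(1.0, max(0.0, x + random.uniform(-d_max, d_max))),
                min(1.0, max(0.0, y + random.uniform(-d_max, d_max))))
        trial = sample[:i] + [cand] + sample[i + 1:]
        delta = mssd(trial, grid) - energy
        # Metropolis-style acceptance of the perturbed sample pattern.
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            sample, energy = trial, energy + delta
    return sample, energy
```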

  8. Hybrid Simulated Annealing and Genetic Algorithms for Industrial Production Management Problems

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2009-08-01

    This paper describes the origin and significant contribution of the development of the Hybrid Simulated Annealing and Genetic Algorithms (HSAGA) approach to finding global optima. HSAGA provides an insightful approach to solving complex optimization problems. The method combines the meta-heuristic approaches of Simulated Annealing and novel Genetic Algorithms to solve a non-linear objective function with uncertain technical coefficients in industrial production management problems. The proposed novel hybrid method is designed to search for the global optimum of the non-linear objective function and for the best feasible solutions of the decision variables. Simulated experiments were carried out rigorously to reflect the advantages of the proposed method. A description of the well-developed method and the computational experiments with the MATLAB technical tool is presented. An industrial production management optimization problem is solved using the HSAGA technique. The results are very promising.
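
    As a concrete, if simplified, picture of how annealing and genetic operators can be interleaved, the Python sketch below evolves a real-coded population with crossover and mutation and subjects each offspring to a Metropolis acceptance test whose temperature is annealed across generations. The specific operators and parameters are assumptions chosen for illustration and are not the authors' exact HSAGA formulation.

```python
import math
import random

def hybrid_sa_ga(objective, bounds, pop_size=30, generations=100,
                 t0=1.0, cooling=0.97, seed=0):
    """Illustrative SA/GA hybrid: GA variation operators plus an annealed
    Metropolis test deciding whether an offspring replaces its better parent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    temp = t0
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            i1, i2 = (min(rng.sample(range(pop_size), 3), key=lambda i: fit[i])
                      for _ in range(2))
            # Arithmetic crossover plus Gaussian mutation, clipped to the bounds.
            alpha = rng.random()
            child = [alpha * pop[i1][d] + (1 - alpha) * pop[i2][d]
                     + rng.gauss(0, 0.05 * (bounds[d][1] - bounds[d][0]))
                     for d in range(dim)]
            child = [min(hi, max(lo, v)) for v, (lo, hi) in zip(child, bounds)]
            # Simulated-annealing style acceptance against the better parent.
            parent_fit = min(fit[i1], fit[i2])
            child_fit = objective(child)
            if (child_fit < parent_fit or
                    rng.random() < math.exp(-(child_fit - parent_fit) / temp)):
                new_pop.append(child)
            else:
                new_pop.append(pop[i1] if fit[i1] <= fit[i2] else pop[i2])
        pop = new_pop
        fit = [objective(x) for x in pop]
        temp *= cooling                      # anneal the acceptance temperature
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```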

  9. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.

  10. Optimal simulated annealing schedules for self similar systems

    NASA Astrophysics Data System (ADS)

    Ergenzinger, K.; Hoffmann, K. H.; Salamon, P.

    1995-06-01

    The successful application of the stochastic optimization method known as simulated annealing can depend very much on the appropriate annealing schedule. While determining optimal schedules for arbitrarily complex optimization problems is beyond the current scope, we here determine optimal schedules for a special class of systems with known properties. The state spaces of these special systems have the structure of self-similar trees. Using methods of optimal control theory, we are able to predict the optimal schedule analytically for two distinct optimization criteria. These predictions are shown to be in good agreement with numerical results.

  11. Distributed Particle Swarm Optimization and Simulated Annealing for Energy-efficient Coverage in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    The limited energy supply of wireless sensor networks poses a great challenge for the deployment of wireless sensor nodes. In this paper, we focus on energy-efficient coverage with distributed particle swarm optimization and simulated annealing. First, the energy-efficient coverage problem is formulated with sensing coverage and energy consumption models. We consider the network composed of stationary and mobile nodes. Second, coverage and energy metrics are presented to evaluate the coverage rate and energy consumption of a wireless sensor network, where a grid exclusion algorithm extracts the coverage state and Dijkstra's algorithm calculates the lowest cost path for communication. Then, a hybrid algorithm optimizes the energy consumption, in which particle swarm optimization and simulated annealing are combined to find the optimal deployment solution in a distributed manner. Simulated annealing is performed on multiple wireless sensor nodes, results of which are employed to correct the local and global best solution of particle swarm optimization. Simulations of wireless sensor node deployment verify that coverage performance can be guaranteed, energy consumption of communication is conserved after deployment optimization and the optimization performance is boosted by the distributed algorithm. Moreover, it is demonstrated that energy efficiency of wireless sensor networks is enhanced by the proposed optimization algorithm in target tracking applications.

  12. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered by inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented proposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By freeing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.

  13. Rayleigh wave inversion using heat-bath simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng

    2016-11-01

    The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, and it is unsuitable to use local search methods (LSMs) as the inversion algorithm. In this study, a new strategy is proposed based on a variant of simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: the perturbation of model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of the heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. Levenberg-Marquardt (LM) algorithm and a variant of SA, known as the fast simulated annealing (FSA) algorithm, are also adopted for comparison. The inverted results of the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the scheme we propose is applicable.
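
    The difference between the Metropolis-type variants and the one-step heat-bath variant can be made concrete with a single update routine. In the hedged Python sketch below, one model parameter (for example, the S-wave velocity of one layer) is redrawn directly from the Boltzmann distribution over a discretized set of candidate values, so no separate accept/reject step is needed; the function names, the discretization and the idea of updating one parameter at a time are illustrative assumptions rather than the authors' exact scheme.

```python
import math
import random

def heat_bath_update(model, idx, candidates, misfit, temperature):
    """One heat-bath move: parameter `idx` of `model` is redrawn from the
    Boltzmann distribution over `candidates`, where `misfit` would be, e.g.,
    the mismatch between observed and forward-modelled dispersion curves
    (a hypothetical callable supplied by the user)."""
    energies = []
    for value in candidates:
        trial = list(model)
        trial[idx] = value
        energies.append(misfit(trial))
    e_min = min(energies)                      # subtract the minimum for numerical stability
    weights = [math.exp(-(e - e_min) / temperature) for e in energies]
    # Sample the new parameter value directly in proportion to the Boltzmann weights.
    new_value = random.choices(candidates, weights=weights, k=1)[0]
    updated = list(model)
    updated[idx] = new_value
    return updated
```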

  14. Neutronic optimization in high conversion Th-233U fuel assembly with simulated annealing

    SciTech Connect

    Kotlyar, D.; Shwageraus, E.

    2012-07-01

    This paper reports on fuel design optimization of a PWR operating in a self-sustainable Th-233U fuel cycle. The Monte Carlo simulated annealing method was used in order to identify the fuel assembly configuration with the most attractive breeding performance. In previous studies, it was shown that breeding may be achieved by employing a heterogeneous Seed-Blanket fuel geometry. The arrangement of seed and blanket pins within the assemblies may be determined by varying the design parameters based on basic reactor physics phenomena which affect breeding. However, the number of free parameters may still prove to be prohibitively large for systematically exploring the design space for the optimal solution. Therefore, the Monte Carlo annealing algorithm for neutronic optimization is applied in order to identify the most favorable design. The objective of simulated annealing optimization is to find a set of design parameters which maximizes some given performance function (such as relative period of net breeding) under specified constraints (such as fuel cycle length). The first objective of the study was to demonstrate that the simulated annealing optimization algorithm would lead to the same fuel pin arrangement as was obtained in the previous studies, which used only basic physics phenomena as guidance for optimization. In the second part of this work, the simulated annealing method was used to optimize the fuel pin arrangement in a much larger fuel assembly, where basic physics intuition does not yield a clearly optimal configuration. The simulated annealing method was found to be very efficient in selecting the optimal design in both cases. In the future, this method will be used for optimization of fuel assembly designs with a larger number of free parameters in order to determine the most favorable trade-off between the breeding performance and core average power density. (authors)

  15. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.

  17. Sequential annealing gradient Gamma-Knife radiosurgery optimization

    NASA Astrophysics Data System (ADS)

    Ove, Roger; Popple, Richard

    2003-07-01

    Simulated annealing and gradient methods are commonly employed for inverse planning of radiotherapy delivery schemes. Annealing is effective in finding an approximation of the global solution, suffering from slow late convergence and in some cases poor dose homogeneity. Gradient methods converge well but not necessarily to the global minimum. We explored simulated annealing followed by gradient optimization to improve on either method alone, using radiosurgery as the model system. Simulated annealing and gradient inverse planning programs using the same objective function were adapted for radiosurgical optimization. The objective function chosen is a least-squares dose-matching function, with differential weighting of tissues. A simple test target allowing local minima in the objective function was evaluated. Two hundred trials using the gradient method were done. The gradient method approximated the global solution only 12% of the time, commonly finding a local minimum. The annealing-gradient technique converged to the global minimum in 78 out of 80 trials, more efficiently than annealing alone. Dose homogeneity was improved. In conclusion, sequential annealing-gradient optimization can improve on either method alone. The technique may be extensible to radiotherapy inverse planning in general, with benefit expected for problems characterized by slow gradient method convergence and local minima.
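
    A compact way to picture this two-stage strategy is an annealing pass followed by a gradient polish started from the annealed point. The Python sketch below uses a simple linearly cooled SA stage and then hands the result to SciPy's L-BFGS-B minimizer; the cooling rule, step size and the choice of L-BFGS-B (rather than the authors' own gradient routine) are assumptions, and the toy least-squares objective only mimics the dose-matching flavour of the problem.

```python
import random
import numpy as np
from scipy.optimize import minimize

def anneal_then_gradient(objective, x0, n_sa_iter=2000, t0=1.0, step=0.3, seed=0):
    """Stage 1: coarse simulated annealing to approximate the global minimum.
    Stage 2: local gradient-based refinement of the annealed solution."""
    rng = random.Random(seed)
    x = np.asarray(x0, dtype=float)
    f = objective(x)
    best_x, best_f = x.copy(), f
    for k in range(n_sa_iter):
        t = t0 * (1.0 - k / n_sa_iter) + 1e-6           # simple linear cooling (assumed)
        cand = x + step * np.array([rng.gauss(0, 1) for _ in x])
        fc = objective(cand)
        if fc < f or rng.random() < np.exp(-(fc - f) / t):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x.copy(), f
    # Polish the annealing result with a gradient method.
    result = minimize(objective, best_x, method="L-BFGS-B")
    return result.x, result.fun

# Example: a least-squares "dose matching" style objective with local minima.
if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    obj = lambda w: float(np.sum((w - target) ** 2) + np.sum(np.sin(5 * w) ** 2))
    w_opt, f_opt = anneal_then_gradient(obj, x0=np.zeros(3))
    print(w_opt, f_opt)
```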

  18. Improving the apatite fission-track annealing algorithm

    NASA Astrophysics Data System (ADS)

    Luijendijk, Elco; Andriessen, Paul; ter Voorde, Marlies; van Balen, Ronald

    2017-04-01

    Low-temperature thermochronology is a key tool for quantifying the thermal history and exhumation of the crust. The interpretation of one of the most widely used thermochronometers, apatite fission-track analysis, relies on models that relate fission track density to temperature history. These models have been calibrated to fission-track data from the Otway basin, Australia. We discuss geological evidence that the current benchmark dataset is located in a basin in which rocks may have been warmer in the past than previously assumed. We recalibrate the apatite fission-track annealing algorithm to a dataset from Southern Texas with a well-constrained thermal history. We show that current models underestimate the temperature at which fission tracks anneal completely by 19 ˚C to 34 ˚C. Exhumation rates derived from fission-track data have been underestimated; at normal geothermal gradients, estimates may have to be revised upward by 500 to 2000 m. The results also have implications for the (U-Th)/He thermochronometer, because radiation damage influences the diffusivity of helium in apatites. The difference in modelled (U-Th)/He ages is approximately 10% for samples that have undergone a long cooling history. We also present a new Python code that can be used for forward or inverse modelling of fission track data using the new annealing algorithm.

  19. An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.

    PubMed

    Bechikh, Slim; Chaabani, Abir; Ben Said, Lamjed

    2015-10-01

    Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This fact has made it, nowadays, one of the most powerful search algorithms in solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quasi-linear average time complexity quick nondominated sorting algorithm; thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front.

  20. GPU-Accelerated Population Annealing Algorithm: Frustrated Ising Antiferromagnet on the Stacked Triangular Lattice

    NASA Astrophysics Data System (ADS)

    Borovský, Michal; Weigel, Martin; Barash, Lev Yu.; Žukovič, Milan

    2016-02-01

    The population annealing algorithm is a novel approach to studying systems with rough free-energy landscapes, such as spin glasses. It combines the power of simulated annealing, Boltzmann-weighted differential reproduction and a sequential Monte Carlo process to bring the population of replicas to equilibrium even in the low-temperature region. Moreover, it provides a very good estimate of the free energy. The fact that the population annealing algorithm is performed over a large number of replicas with many spin updates makes it a good candidate for massive parallelism. We chose GPU programming using a CUDA implementation to create a highly optimized simulation. It has been previously shown for the frustrated Ising antiferromagnet on the stacked triangular lattice with a ferromagnetic interlayer coupling that standard Markov chain Monte Carlo simulations fail to equilibrate at low temperatures due to the effect of kinetic freezing of the ferromagnetically ordered chains. We applied population annealing to study the case with isotropic intra- and interlayer antiferromagnetic coupling (J2/|J1| = -1). The reached ground states correspond to non-magnetic degenerate states, where chains are antiferromagnetically ordered but there is no long-range ordering between them, which is analogous to the Wannier phase of the 2D triangular Ising antiferromagnet.
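
    The reweight-resample-equilibrate cycle at the heart of population annealing is compact enough to sketch. The Python routine below is a generic, CPU-only illustration (not the GPU/CUDA implementation discussed above): the user-supplied energy, proposal and initialization callables, the fixed inverse-temperature ladder and the number of Metropolis sweeps are all assumptions.

```python
import math
import random

def population_annealing(energy, propose, init, n_replicas=200,
                         betas=(0.1, 0.2, 0.4, 0.8, 1.6), sweeps=10, seed=1):
    """Generic population annealing sketch.

    `energy(x)` returns the energy of a state, `propose(x, rng)` returns a NEW
    candidate state (it must not mutate its argument), and `init(rng)` draws an
    initial state; all three are hypothetical callables supplied by the user."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(n_replicas)]
    beta_old = 0.0
    for beta in betas:
        dbeta = beta - beta_old
        energies = [energy(x) for x in pop]
        e_min = min(energies)                                 # stabilize the exponentials
        weights = [math.exp(-dbeta * (e - e_min)) for e in energies]
        # Boltzmann-weighted differential reproduction (multinomial resampling).
        pop = rng.choices(pop, weights=weights, k=n_replicas)
        # Equilibrate each replica at the new temperature with Metropolis sweeps.
        new_pop = []
        for x in pop:
            e = energy(x)
            for _ in range(sweeps):
                cand = propose(x, rng)
                de = energy(cand) - e
                if de <= 0 or rng.random() < math.exp(-beta * de):
                    x, e = cand, e + de
            new_pop.append(x)
        pop = new_pop
        beta_old = beta
    return pop
```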

  1. Quantum versus simulated annealing in wireless interference network optimization

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking, more specifically the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is hoped that this could become a real-world application niche where the potential benefits of quantum annealing could be objectively assessed.

  2. Quantum versus simulated annealing in wireless interference network optimization.

    PubMed

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-16

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking, more specifically the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is hoped that this could become a real-world application niche where the potential benefits of quantum annealing could be objectively assessed.

  3. Quantum versus simulated annealing in wireless interference network optimization

    PubMed Central

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-01-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking, more specifically the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is hoped that this could become a real-world application niche where the potential benefits of quantum annealing could be objectively assessed. PMID:27181056

  4. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods of diagnosing learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. In order to meet the needs of the test, the system must be implemented with relatively high efficiency. To solve this problem, we propose a novel method for a web-based testing environment based on the simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method with those of other methods. The experimental results show that this method ensures that nearly optimal items are chosen from the item bank for learners, meets a variety of assessment needs, and provides reliable and valid judgments of learners' ability. In addition, using the simulated annealing algorithm to manage the computational complexity of the system greatly improves the efficiency of selecting items from the system and yields near-optimal solutions. PMID:24959600

  5. Optimized dynamical decoupling via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory; Lidar, Daniel A.

    2013-11-01

    We utilize genetic algorithms aided by simulated annealing to find optimal dynamical decoupling (DD) sequences for a single-qubit system subjected to a general decoherence model under a variety of control pulse conditions. We focus on the case of sequences with equal pulse intervals and perform the optimization with respect to pulse type and order. In this manner, we obtain robust DD sequences, first in the limit of ideal pulses, then when including pulse imperfections such as finite-pulse duration and qubit rotation (flip-angle) errors. Although our optimization is numerical, we identify a deterministic structure that underlies the top-performing sequences. We use this structure to devise DD sequences which outperform previously designed concatenated DD (CDD) and quadratic DD (QDD) sequences in the presence of pulse errors. We explain our findings using time-dependent perturbation theory and provide a detailed scaling analysis of the optimal sequences.

  6. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue

    2016-01-01

    A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.

  7. An enhanced simulated annealing routing algorithm for semi-diagonal torus network

    NASA Astrophysics Data System (ADS)

    Adzhar, Noraziah; Salleh, Shaharuddin

    2017-09-01

    Multiprocessing is a key technology for meeting the high demand for solving complex problems. A multiprocessing system can have many replicated processor-memory pairs (henceforth regarded as nets), also called processing nodes. Each of these nodes is connected to the others through an interconnection network and passes messages using a standard message-passing mechanism. In this paper, we present a routing algorithm based on an enhanced simulated annealing technique to provide the connections between nodes in a semi-diagonal torus (SD-Torus) network. This network is both symmetric and regular, which makes it very beneficial in the implementation process. The main objective is to maximize the number of established connections between nodes in the SD-Torus network. In order to achieve this objective, each net must be connected by as short a path as possible. We start our algorithm by designing a shortest-path algorithm based on Dijkstra's method. While this algorithm is guaranteed to find the shortest path for each single net, if it exists, each routed net forms an obstacle for later paths. This increases the complexity of routing later nets and makes routing longer than optimal, or sometimes impossible to complete. The solution is further refined by re-routing all nets in different orders using the simulated annealing method. Through a simulation program, our proposed algorithm succeeded in performing complete routing of up to 81 nodes with 40 nets in a 9×9 SD-Torus network.

  8. Optimal temperature profiles for annealing of GaAs-crystals

    NASA Astrophysics Data System (ADS)

    Metzger, Michael; Backofen, Rainer

    2000-11-01

    The modelling and optimisation of the thermal post-processing of bulk GaAs crystals is described. The annealing takes place in an approximately axisymmetric tube furnace and improves the crystal quality due to a more homogeneous distribution of defects and dopants. For this purpose the crystal has to be heated up to a certain temperature, as fast as possible for economic reasons. The crucial point is to minimise thermally induced stresses during heating. For this optimisation, an algorithm based on a reduced order model (ROM) of the heating process is developed. With the aid of this model-predictive control (MPC) algorithm, the required time to achieve the annealing temperature was decreased by 30% while the thermoelastic stress in the crystal was reduced by 10% compared to a standard procedure.

  9. Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)

    2001-01-01

    Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent, and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3

  10. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimal design of the additional drilling pattern, or defining the optimum number and locations of additional boreholes. Quite a lot of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization as a criterion for uncertainty assessment is defined as the objective function and the problem is solved through optimization methods. Although the use of the kriging variance in the definition of the objective function is known to have many advantages, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the assessment of boundary uncertainty, the application of the combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of the results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have caused the algorithm output to be sensitive to the variations of grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms proved that, for the presented case, the application of particle swarm optimization is more appropriate than simulated annealing.

  11. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision under certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  12. Forecasting Nonlinear Chaotic Time Series with Function Expression Method Based on an Improved Genetic-Simulated Annealing Algorithm

    PubMed Central

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision under certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior. PMID:26000011

  13. Use of a simulated annealing algorithm to fit compartmental models with an application to fractal pharmacokinetics.

    PubMed

    Marsh, Rebeccah E; Riauka, Terence A; McQuarrie, Steve A

    2007-01-01

    Increasingly, fractals are being incorporated into pharmacokinetic models to describe transport and chemical kinetic processes occurring in confined and heterogeneous spaces. However, fractal compartmental models lead to differential equations with power-law time-dependent kinetic rate coefficients that currently are not accommodated by common commercial software programs. This paper describes a parameter optimization method for fitting individual pharmacokinetic curves based on a simulated annealing (SA) algorithm, which always converged towards the global minimum and was independent of the initial parameter values and parameter bounds. In a comparison using a classical compartmental model, similar fits by the Gauss-Newton and Nelder-Mead simplex algorithms required stringent initial estimates and ranges for the model parameters. The SA algorithm is ideal for fitting a wide variety of pharmacokinetic models to clinical data, especially those for which there is weak prior knowledge of the parameter values, such as the fractal models.
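
    To illustrate how an SA-based fit of a compartmental model with a power-law time-dependent rate coefficient might look, the Python sketch below fits three parameters of a one-compartment elimination model C(t) = C0 exp(-k t^(1-h)/(1-h)) to observed concentration data by minimizing the sum of squared errors. The model form, parameter bounds and cooling schedule are illustrative assumptions, not the paper's specific formulation.

```python
import math
import random

def fractal_concentration(t, c0, k, h):
    """One-compartment model with a power-law rate coefficient k(t) = k * t**(-h),
    which integrates to the expression below for h != 1 (assumed, illustrative
    model; keep h < 1 in the bounds)."""
    return c0 * math.exp(-k * t ** (1.0 - h) / (1.0 - h))

def sse(params, times, observed):
    c0, k, h = params
    return sum((fractal_concentration(t, c0, k, h) - y) ** 2
               for t, y in zip(times, observed))

def fit_with_sa(times, observed, bounds, n_iter=20000, t0=1.0, seed=0):
    """Fit (c0, k, h) by simulated annealing: bounded random perturbations with
    Metropolis acceptance and a simple geometric cooling schedule."""
    rng = random.Random(seed)
    params = [rng.uniform(lo, hi) for lo, hi in bounds]
    energy = sse(params, times, observed)
    best, best_e = list(params), energy
    temp = t0
    for _ in range(n_iter):
        i = rng.randrange(len(params))
        lo, hi = bounds[i]
        cand = list(params)
        cand[i] = min(hi, max(lo, cand[i] + rng.gauss(0, 0.05 * (hi - lo))))
        de = sse(cand, times, observed) - energy
        if de < 0 or rng.random() < math.exp(-de / temp):
            params, energy = cand, energy + de
            if energy < best_e:
                best, best_e = list(params), energy
        temp *= 0.9997                       # slow geometric cooling
    return best, best_e
```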

  14. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets was employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.

  15. Multilevel Algorithms for Nonlinear Optimization

    DTIC Science & Technology

    1994-06-01

    NASA Contractor Report 194940; ICASE Report No. 94-53; AD-A284 318. Multilevel Algorithms for Nonlinear Optimization, Natalia Alexandrov, ICASE. ABSTRACT: Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that

  16. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  17. Optimal estuarine sediment monitoring network design with simulated annealing.

    PubMed

    Nunes, L M; Caeiro, S; Cunha, M C; Ribeiro, L

    2006-02-01

    An objective function based on geostatistical variance reduction, constrained to the reproduction of the probability distribution functions of selected physical and chemical sediment variables, is applied to the selection of the best set of compliance monitoring stations in the Sado river estuary in Portugal. These stations were to be selected from a large set of sampling stations from a prior field campaign. Simulated annealing was chosen to solve the optimisation function model. Both the combinatorial problem structure and the resulting candidate sediment monitoring networks are discussed, and the optimal dimension and spatial distribution are proposed. An optimal network of sixty stations was obtained from an original 153-station sampling campaign.

  18. A deterministic annealing algorithm for approximating a solution of the min-bisection problem.

    PubMed

    Dang, Chuangyin; Ma, Wei; Liang, Jiye

    2009-01-01

    The min-bisection problem is an NP-hard combinatorial optimization problem. In this paper an equivalent linearly constrained continuous optimization problem is formulated and an algorithm is proposed for approximating its solution. The algorithm is derived from the introduction of a logarithmic-cosine barrier function, where the barrier parameter behaves as temperature in an annealing procedure and decreases from a sufficiently large positive number to zero. The algorithm searches for a better solution in a feasible descent direction, which has a desired property that lower and upper bounds are always satisfied automatically if the step length is a number between zero and one. We prove that the algorithm converges to at least a local minimum point of the problem if a local minimum point of the barrier problem is generated for a sequence of descending values of the barrier parameter with a limit of zero. Numerical results show that the algorithm is much more efficient than two of the best existing heuristic methods for the min-bisection problem, Kernighan-Lin method with multiple starting points (MSKL) and multilevel graph partitioning scheme (MLGP).

  19. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises nowadays. The challenge of solving it grows as the complexity of the preferences, the real-world constraints and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it using a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the studied setting, it is essential for law graduates to complete chambering before they are qualified to become legal counsel. Thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic in order to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
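
    A minimal version of this construct-then-improve strategy can be sketched as follows: a greedy pass assigns each case to the cheapest student with spare capacity, and simulated annealing then refines the assignment with move and swap neighbourhood operations. The cost matrix, capacity constraint, move set and cooling schedule in the Python sketch below are all illustrative assumptions rather than the authors' actual problem data or operators.

```python
import math
import random

def greedy_initial(cost, cap):
    """Minimum-cost greedy construction: each case goes to the cheapest student
    that still has spare capacity. `cost[s][c]` is an assumed student-by-case
    completion-time matrix, and cap * len(cost) must cover all cases."""
    n_students, n_cases = len(cost), len(cost[0])
    load = [0] * n_students
    assign = []
    for c in range(n_cases):
        s = min((s for s in range(n_students) if load[s] < cap),
                key=lambda s: cost[s][c])
        assign.append(s)
        load[s] += 1
    return assign

def total_time(assign, cost):
    return sum(cost[s][c] for c, s in enumerate(assign))

def improve_with_sa(assign, cost, cap, n_iter=50000, t0=5.0, seed=0):
    """Refine the greedy assignment with simulated annealing using simple
    'move one case' and 'swap two cases' neighbourhood moves."""
    rng = random.Random(seed)
    n_students, n_cases = len(cost), len(cost[0])
    load = [assign.count(s) for s in range(n_students)]
    energy = total_time(assign, cost)
    temp = t0
    for _ in range(n_iter):
        cand = list(assign)
        if rng.random() < 0.5:                       # move one case to another student
            c = rng.randrange(n_cases)
            s_new = rng.randrange(n_students)
            if load[s_new] >= cap or s_new == cand[c]:
                temp *= 0.9999
                continue
            s_old = cand[c]
            cand[c] = s_new
            delta_load = [(s_old, -1), (s_new, +1)]
        else:                                        # swap the students of two cases
            c1, c2 = rng.sample(range(n_cases), 2)
            cand[c1], cand[c2] = cand[c2], cand[c1]
            delta_load = []
        de = total_time(cand, cost) - energy
        if de < 0 or rng.random() < math.exp(-de / temp):
            assign, energy = cand, energy + de
            for s, d in delta_load:                  # loads only change on acceptance
                load[s] += d
        temp *= 0.9999
    return assign, energy
```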

  20. Quantum simulated annealing

    NASA Astrophysics Data System (ADS)

    Boixo, Sergio; Somma, Rolando; Barnum, Howard

    2008-03-01

    We develop a quantum algorithm to solve combinatorial optimization problems through quantum simulation of a classical annealing process. Our algorithm combines techniques from quantum walks and quantum phase estimation, and can be viewed as the quantum analogue of the discrete-time Markov Chain Monte Carlo implementation of classical simulated annealing.

  1. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval Arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach that combines the efficient strategy from Interval Global Optimization Methods and the robustness of Evolutionary Algorithms is proposed. In the proposed approach, the search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the centers of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.

  2. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  3. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  4. Optimization algorithm performance in determining optimal controls in human movement analyses.

    PubMed

    Neptune, R R

    1999-04-01

    The objective of this study was to evaluate the performance of different multivariate optimization algorithms by solving a "tracking" problem using a forward dynamic model of pedaling. The tracking problem was defined as solving for the muscle controls (muscle stimulation onset, offset, and magnitude) that minimized the error between experimentally collected kinetic and kinematic data and the simulation results of pedaling at 90 rpm and 250 W. Three different algorithms were evaluated: a downhill simplex method, a gradient-based sequential quadratic programming algorithm, and a simulated annealing global optimization routine. The results showed that the simulated annealing algorithm performed far superior to the conventional routines by converging more rapidly and avoiding local minima.

  5. Driver Hamiltonians for constrained optimization in quantum annealing

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Sarandy, Marcelo S.

    2016-06-01

    One of the current major challenges surrounding the use of quantum annealers for solving practical optimization problems is their inability to encode even moderately sized problems, the main reason for this being the rigid layout of their quantum bits as well as their sparse connectivity. In particular, the implementation of constraints has become a major bottleneck in the embedding of practical problems, because the latter is typically achieved by adding harmful penalty terms to the problem Hamiltonian, a technique that often requires an all-to-all connectivity between the qubits. Recently, a novel technique designed to obviate the need for penalty terms was suggested; it is based on the construction of driver Hamiltonians that commute with the constraints of the problem, rendering the latter constants of motion. In this work we propose general guidelines for the construction of such driver Hamiltonians given an arbitrary set of constraints. We illustrate the broad applicability of our method by analyzing several diverse examples, namely, graph isomorphism, not-all-equal three-satisfiability, and the so-called Lechner-Hauke-Zoller constraints. We also discuss the significance of our approach in the context of current and future experimental quantum annealers.

  6. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using information/knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
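
    A minimal generational GA, with illustrative values for the parameters (population size, crossover and mutation probabilities) that such a preprocessor would tune, might look as follows; this is a sketch, not the authors' implementation.

        import random

        def genetic_algorithm(fitness, dim, pop_size=50, generations=200,
                              p_cross=0.8, p_mut=0.05, lo=-5.0, hi=5.0):
            pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=fitness)          # minimization
                parents = scored[:pop_size // 2]           # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, dim) if dim > 1 else 0
                    child = a[:cut] + b[cut:] if random.random() < p_cross else a[:]
                    child = [g + random.gauss(0, 0.1) if random.random() < p_mut else g
                             for g in child]               # Gaussian mutation
                    children.append(child)
                pop = children
            return min(pop, key=fitness)

        # Toy usage: minimize a 3-dimensional sphere function.
        best = genetic_algorithm(lambda v: sum(x * x for x in v), dim=3)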

  7. Optimization of Temperatures Heating Melt and Annealing Soft Magnetic Alloys

    NASA Astrophysics Data System (ADS)

    Tsepelev, Vladimir; Starodubtsev, Yuri

    2017-05-01

    Based on the quasi-chemical model of the micro-non-uniform composition of the liquid and on research into the physical properties of crystallizing Fe-based melts, a unique melt time-temperature treatment technology has been developed. Amorphous ribbons produced using this technology require the optimal annealing temperatures to be specifically selected. Temperature dependences of the kinematic viscosity of a multicomponent Fe72.5Cu1Nb2Mo1.5Si14B9 melt have been studied. A critical temperature is detected above which the activation energy of viscous flow of the melt changes. Upon cooling the overheated melt, the temperature curves of the kinematic viscosity become linear within the given coordinates. In amorphous ribbon produced in the regime where the melt is overheated above the critical temperature, the enthalpy of crystallization grows, and subsequent heat treatment results in an increase in magnetic permeability.

  8. Algorithms for optimal redundancy allocation

    SciTech Connect

    Vandenkieboom, J.; Youngblood, R.

    1993-01-01

    Heuristic and exact methods for solving the redundancy allocation problem are compared to an approach based on genetic algorithms. The various methods are applied to the bridge problem, which has been used as a benchmark in earlier work on optimization methods. Comparisons are presented in terms of the best configuration found by each method and the computational effort necessary to find it.

  9. Evaluation of a particle swarm algorithm for biomechanical optimization.

    PubMed

    Schutte, Jaco F; Koh, Byung-Il; Reinbolt, Jeffrey A; Haftka, Raphael T; George, Alan D; Fregly, Benjamin J

    2005-06-01

    Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm's global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms--a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units.
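
    A bare-bones global-best PSO of the type evaluated here can be sketched as follows; the inertia and acceleration coefficients are common textbook defaults, not the exact settings of the algorithm tested in this study.

        import random

        def pso(cost, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            v = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in x]
            pbest_f = [cost(p) for p in x]
            g = min(range(n_particles), key=lambda i: pbest_f[i])
            gbest, gbest_f = pbest[g][:], pbest_f[g]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        # Velocity update: inertia + cognitive pull + social pull.
                        v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                                   + c2 * r2 * (gbest[d] - x[i][d]))
                        x[i][d] += v[i][d]
                    f = cost(x[i])
                    if f < pbest_f[i]:
                        pbest[i], pbest_f[i] = x[i][:], f
                        if f < gbest_f:
                            gbest, gbest_f = x[i][:], f
            return gbest, gbest_f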

  10. Obstacle Bypassing in Optimal Ship Routing Using Simulated Annealing

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.

    2008-11-06

    In this paper we discuss a variation on the problem of finding the shortest path between two points in optimal ship routing problems in the presence of obstacles that the path is not allowed to cross. Our main goal is the construction of an appropriate algorithm, based on earlier work, for computing the shortest path between two points in the plane while avoiding a set of polygonal obstacles.

  11. Designing a diffractive optical element for controlling the beam profile in a three-dimensional space using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Wen-xi; Zhang, Jing-juan; Lü, Jun-feng; Liao, Rui

    2001-12-01

    We have designed a spatially quantized diffractive optical element (DOE) for controlling the beam profile in a three-dimensional space with the help of the simulated annealing (SA) algorithm. In this paper, we investigate the annealing schedule and the neighbourhood, the deterministic parameters of the process that govern the quality of the SA algorithm. The algorithm is employed to solve the discrete stochastic optimization problem posed by the design of a DOE. The objective function that constrains the optimization is also studied. The computed results demonstrate that the algorithm converges stably, within acceptable computing time, to a solution close to the global optimum. The results meet the design requirements well and are applicable in practice.

  12. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches outperform it in terms of both solution quality and execution time.

  13. Simulated annealing versus quantum annealing

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    Based on simulated classical annealing and simulated quantum annealing using quantum Monte Carlo (QMC) simulations, I will explore the question of where physical or simulated quantum annealers may outperform classical optimization algorithms. Although the stochastic dynamics of QMC simulations is not the same as the unitary dynamics of a quantum system, I will first show that for the problem of quantum tunneling between two local minima, both QMC simulations and a physical system exhibit the same scaling of tunneling times with barrier height. The scaling in both cases is O(Δ²), where Δ is the tunneling splitting. An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, one obtains a quadratic speedup for QMC, achieving linear scaling in Δ. I will then address the apparent contradiction between experiments on a D-Wave 2 system that failed to see evidence of quantum speedup and previous QMC results that indicated an advantage of quantum annealing over classical annealing for spin glasses. We find that this contradiction is resolved by taking the continuous time limit in the QMC simulations, which then agree with the experimentally observed behavior and show no speedup for 2D spin glasses. However, QMC simulations with large time steps gain a further advantage: they "cheat" by ignoring what happens during a (large) time step, and can thus outperform both simulated quantum annealers and classical annealers. I will then address the question of how to optimally run a simulated or physical quantum annealer. Investigating the behavior of the tails of the distribution of runtimes for very hard instances, we find that adiabatically slow annealing is far from optimal. On the contrary, many repeated, relatively fast annealing runs can be orders of magnitude faster for

  14. Comparative Analysis of Simulated Annealing (SA) and Simplified Generalized SA (SGSA) for Estimation Optimal of Parametric Functional in CATIVIC

    SciTech Connect

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    2009-08-13

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.

  15. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
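
    Schematically, simulated annealing can thin a dense candidate set by swapping points in and out of the retained subset; the prediction-error callback below stands in for the accuracy criterion described in the paper and is purely illustrative.

        import math, random

        def select_points(candidates, k, error_of_subset, t0=1.0, cooling=0.99, steps=5000):
            """Pick k monitoring points from `candidates` by SA; `error_of_subset`
            scores the prediction error of a subset of indices (lower is better)."""
            current = random.sample(range(len(candidates)), k)
            cost = error_of_subset(current)
            best, best_cost, t = current[:], cost, t0
            for _ in range(steps):
                trial = current[:]
                # Swap one selected point for an unselected one.
                out_idx = random.randrange(k)
                trial[out_idx] = random.choice(
                    [i for i in range(len(candidates)) if i not in trial])
                c = error_of_subset(trial)
                if c < cost or random.random() < math.exp((cost - c) / t):
                    current, cost = trial, c
                    if cost < best_cost:
                        best, best_cost = current[:], cost
                t *= cooling
            return best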

  16. Morphological Optimization of Perovskite Thin Films via Dynamic Zone Annealing

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Wang, Kai; Gong, Xiong; Karim, Alamgir

    2015-03-01

    Organolead halide perovskites have proved to be excellent candidates for low-cost, high-efficiency solar cells owing to their desirable optical and electrical properties, as well as their compatibility with low-temperature solution-processed manufacturing. However, most perovskite applications in photovoltaics require high-quality perovskite films. Although tremendous work on tuning perovskite film morphology has been reported previously, it is still a challenge to realize high-quality perovskite films with controllable uniformity and surface coverage, and the mechanisms of perovskite film formation remain poorly understood. To address these issues, we demonstrate here the effect of Dynamic Zone Annealing (DZA) on perovskite morphologies, a method that has proved efficient for controlling structure and morphology in crystalline polymers and block copolymers. By applying the DZA method, the mechanism of perovskite film formation is studied. Furthermore, by optimizing DZA parameters such as maximum temperature, temperature gradient, and zone velocity to control dendritic morphology and grain growth, enhanced device performance was eventually realized.

  17. Approximate algorithms for fast optimal attitude computation

    NASA Technical Reports Server (NTRS)

    Shuster, M. D.

    1978-01-01

    Fast accurate algorithms are presented for computing an optimal attitude which minimizes a quadratic loss function. These algorithms compute an optimal rotation which carries a set of reference vectors into a set of corresponding observation vectors. Simplifications of these algorithms are obtained for the case of small rotation angles. Applications to the Magsat mission are discussed.
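
    The quadratic attitude loss referred to here is the classical Wahba problem; a compact sketch of Davenport's q-method, a standard solution (not necessarily the specific fast algorithms of this report), is:

        import numpy as np

        def davenport_q_method(refs, obs, weights):
            """Solve Wahba's problem: find the rotation (as quaternion [qx, qy, qz, qw])
            minimizing sum_i w_i * ||b_i - A r_i||^2.  refs, obs: (n, 3) arrays."""
            B = sum(w * np.outer(b, r) for w, b, r in zip(weights, obs, refs))
            S = B + B.T
            sigma = np.trace(B)
            z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
            K = np.zeros((4, 4))
            K[:3, :3] = S - sigma * np.eye(3)
            K[:3, 3] = z
            K[3, :3] = z
            K[3, 3] = sigma
            vals, vecs = np.linalg.eigh(K)
            return vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue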

  18. Designing a binary Fourier-phase-only correlation filter using a simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Nomura, Takanori; Nagase, Takamitsu; Itoh, Kazuyoshi; Ichioka, Yoshiki

    1990-11-01

    A method of designing a multiple-object recognition filter using a simulated annealing algorithm is presented. This filter has only binary phase information and is correlated with a binary Fourier-phase component of a test object. The filter is used in an incoherent-optical/digital-electric hybrid system. Computer simulations confirm the superior performance of this filter.

  19. Optimal Multistage Algorithm for Adjoint Computation

    SciTech Connect

    Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves

    2016-01-01

    We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.

  20. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.

  1. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  2. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
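
    The quadratic objectives used in fluence optimization can be illustrated with a tiny projected-gradient solver for min ||Dx - d||² subject to x ≥ 0, where D is a dose-influence matrix and d the prescribed dose; this is a generic sketch, not CERR's built-in optimizer.

        import numpy as np

        def fluence_qp(D, d, iters=500, lr=None):
            """Minimize ||D x - d||^2 subject to x >= 0 by projected gradient descent."""
            if lr is None:
                lr = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)   # step 1/L, L = 2*sigma_max(D)^2
            x = np.zeros(D.shape[1])
            for _ in range(iters):
                grad = 2.0 * D.T @ (D @ x - d)
                x = np.maximum(x - lr * grad, 0.0)             # project onto the nonnegative orthant
            return x

        # Toy example: 4 voxels, 3 beamlets.
        D = np.array([[1.0, 0.2, 0.0], [0.5, 1.0, 0.3], [0.0, 0.4, 1.0], [0.2, 0.2, 0.2]])
        d = np.array([1.0, 1.0, 1.0, 0.2])
        x = fluence_qp(D, d)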

  3. Privacy Preservation in Distributed Subgradient Optimization Algorithms.

    PubMed

    Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng

    2017-07-31

    In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most existing distributed algorithms focus mainly on algorithm design and convergence analysis, but not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. Then a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is proposed, and its convergence and optimality are established. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms of projection operation and asynchronous heterogeneous-stepsize optimization guarantee that agents' privacy can be effectively protected.
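
    The projected subgradient updates with agent-specific stepsizes can be sketched as below; for simplicity the sketch runs in synchronous rounds, unlike the asynchronous protocol of the paper, and all names are illustrative.

        import numpy as np

        def distributed_subgradient(subgrads, W, project, x0, steps=200, alphas=None):
            """Each agent i mixes neighbours' estimates with weights W[i] and then
            takes a projected subgradient step with its own stepsize alphas[i]."""
            n = len(subgrads)
            x = [np.array(x0, dtype=float) for _ in range(n)]
            if alphas is None:
                alphas = [0.1 / (i + 1) for i in range(n)]   # heterogeneous stepsizes
            for _ in range(steps):
                mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
                x = [project(mixed[i] - alphas[i] * subgrads[i](mixed[i]))
                     for i in range(n)]
            return x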

  4. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization on the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in scene set. By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low rank Hessian matrix. This facilitates the use of large scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.

  5. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process searching for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
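
    The division of labour described here (an outer SA accept/reject loop with GA operators generating candidate solutions) can be sketched generically; the crossover and mutation callbacks below are placeholders for the route-level operators used in the paper.

        import math, random

        def sa_with_ga_moves(cost, initial, crossover, mutate, t0=1.0, cooling=0.995, steps=5000):
            """Outer SA loop; each candidate is produced by GA-style operators applied
            to the current solution and the best solution found so far."""
            current, best = initial, initial
            f_cur = f_best = cost(initial)
            t = t0
            for _ in range(steps):
                child = mutate(crossover(current, best))      # GA sub-process generates the move
                f_child = cost(child)
                if f_child < f_cur or random.random() < math.exp((f_cur - f_child) / t):
                    current, f_cur = child, f_child
                    if f_cur < f_best:
                        best, f_best = current, f_cur
                t *= cooling
            return best, f_best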

  6. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could

  7. An Optimal Class Association Rule Algorithm

    NASA Astrophysics Data System (ADS)

    Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie

    Classification and association rule mining are two important aspects of data mining. Class association rule mining is a promising approach because it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results on eight UCI data sets show that OCARA outperforms C4.5, CBA, and RMR.

  8. Optimal Fungal Space Searching Algorithms.

    PubMed

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  9. Intelligent perturbation algorithms to space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1991-01-01

    The limited availability and high cost of crew time and scarce resources make optimization of space operations critical. Advances in computer technology coupled with new iterative search techniques permit the near optimization of complex scheduling problems that were previously considered computationally intractable. Described here is a class of search techniques called Intelligent Perturbation Algorithms. Several scheduling systems which use these algorithms to optimize the scheduling of space crew, payload, and resource operations are also discussed.

  10. Engineering phase shifter domains for multiple QPM using simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Siva, Chellappa; Sunder Meetei, Toijam; Shiva, Prabhakar; Narayanan, Balaji; Arvind, Ganesh; Boomadevi, Shanmugam; Pandiyan, Krishnamoorthy

    2017-10-01

    We have utilized the general algorithm of simulated annealing (SA) to engineer the phase shifter domains in a quasi-phase-matching (QPM) device to generate multiple frequency conversion. SA is an algorithm generally used to find the global maximum or minimum of a given random function. Here, we have utilized this algorithm to generate multiple QPM second harmonic generation (SHG) by distributing phase shifters suitably. In general, phase shifters are distributed in a QPM device with some specific profile along the length to generate multiple QPM SHG. Using the SA algorithm, the locations of these phase shifters can be easily identified to obtain the desired multiple QPM with higher conversion efficiency. The methodology for generating the desired multiple QPM SHG using the SA algorithm is discussed in detail.

  11. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.

  12. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  13. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered.

  14. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate an overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655

  15. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied in different computational settings that involve path-based problems. Implementation of the algorithm can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful for solving NP-hard problems related to path discovery. It is also useful for solving many practical optimization problems. Extensions of the algorithm can be applied to shortest path problems more generally.

  16. Optimal actuator placement on an active reflector using a modified simulated annealing technique

    NASA Technical Reports Server (NTRS)

    Kuo, Chin-Po; Bruno, Robin

    1991-01-01

    The development of a lightweight actuation system for maintaining the surface accuracy of a composite honeycomb panel using piezoelectric actuators is discussed. A modified simulated annealing technique is used to optimize the problem with both combinatorial and continuous criteria and with inequality constraints. Near optimal solutions for the location of the actuators, using combinatorial optimization, and for the required actuator forces, employing continuous optimization, are sought by means of the modified simulated annealing technique. The actuator locations are determined by first seeking a near optimum solution using the modified simulated annealing technique. The final actuator configuration consists of an arrangement wherein the piezoelectric actuators are placed along six radial lines. Numerical results showing the achievable surface correction by means of this configuration are presented.
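
    The mixed nature of the design (discrete actuator sites plus continuous actuator forces) can be handled by alternating two move types inside one annealing loop; the sketch below uses generic placeholder moves and is not the panel model or code of this work.

        import math, random

        def mixed_sa(cost, sites, forces, swap_site, perturb_forces,
                     t0=1.0, cooling=0.998, steps=20000):
            """SA over a mixed design: `sites` is a discrete actuator layout,
            `forces` a continuous force vector; each step proposes one move type."""
            f_cur = cost(sites, forces)
            best = (list(sites), list(forces), f_cur)
            t = t0
            for _ in range(steps):
                if random.random() < 0.5:
                    new_sites, new_forces = swap_site(sites), forces          # combinatorial move
                else:
                    new_sites, new_forces = sites, perturb_forces(forces, t)  # continuous move
                f_new = cost(new_sites, new_forces)
                if f_new < f_cur or random.random() < math.exp((f_cur - f_new) / t):
                    sites, forces, f_cur = new_sites, new_forces, f_new
                    if f_cur < best[2]:
                        best = (list(sites), list(forces), f_cur)
                t *= cooling
            return best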

  17. Stochastic optimization algorithms for barrier dividend strategies

    NASA Astrophysics Data System (ADS)

    Yin, G.; Song, Q. S.; Yang, H.

    2009-01-01

    This work focuses on finding optimal barrier policy for an insurance risk model when the dividends are paid to the share holders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms are developed. Convergence of the algorithm is analyzed; rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.

  18. TOPICAL REVIEW: Optimization using quantum mechanics: quantum annealing through adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Santoro, Giuseppe E.; Tosatti, Erio

    2006-09-01

    We review here some recent work in the field of quantum annealing, alias adiabatic quantum computation. The idea of quantum annealing is to perform optimization by a quantum adiabatic evolution which tracks the ground state of a suitable time-dependent Hamiltonian, where ℏ is slowly switched off. We illustrate several applications of quantum annealing strategies, starting from textbook toy-models—double-well potentials and other one-dimensional examples, with and without disorder. These examples display in a clear way the crucial differences between classical and quantum annealing. We then discuss applications of quantum annealing to challenging hard optimization problems, such as the random Ising model, the travelling salesman problem and Boolean satisfiability problems. The techniques used to implement quantum annealing are either deterministic Schrödinger's evolutions, for the toy models, or path-integral Monte Carlo and Green's function Monte Carlo approaches, for the hard optimization problems. The crucial role played by disorder and the associated non-trivial Landau-Zener tunnelling phenomena is discussed and emphasized.
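
    For orientation, the time-dependent Hamiltonians used in transverse-field quantum annealing are commonly written in the standard Ising form (a textbook expression, not a formula quoted from this review):

        H(t) = -\Gamma(t) \sum_i \sigma_i^x \;-\; \sum_{i<j} J_{ij}\, \sigma_i^z \sigma_j^z \;-\; \sum_i h_i \sigma_i^z ,

    where the transverse field \Gamma(t) is slowly reduced from a large initial value to zero, so that the system adiabatically tracks the instantaneous ground state toward that of the classical cost Hamiltonian.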

  19. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. Traditional imaging algorithms achieve the best focusing effect but introduce decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes adopt consistent imaging parameters during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  20. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. Traditional imaging algorithms achieve the best focusing effect but introduce decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes adopt consistent imaging parameters during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and a high-quality interferogram is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application.

  1. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
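
    Forward-mode AD can be illustrated with a tiny dual-number class; this is a generic sketch of the technique, not the particular AD tools discussed in the presentation.

        class Dual:
            """Dual number val + der*eps with eps**2 == 0; carries value and derivative."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der

            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__

            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def f(x):
            return 3 * x * x + 2 * x + 1

        x = Dual(2.0, 1.0)          # seed derivative dx/dx = 1
        y = f(x)
        print(y.val, y.der)         # 17.0 and f'(2) = 14.0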

  2. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  3. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  4. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  5. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  6. 2D Ultrasound Sparse Arrays Multi-Depth Radiation Optimization Using Simulated Annealing and Spiral-Array Inspired Energy Functions.

    PubMed

    Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc; Liebgott, Herve

    2016-08-24

    Full matrix arrays are excellent tools for 3D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by sparse array techniques, but the positions of the remaining elements must be carefully optimized. This issue is addressed here by introducing novel energy functions into the simulated annealing algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. These energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points, and depths, as well as the influence of the energy function definition, on the optimized layout. It is also shown that performance close to, or even better than, that provided by a spiral array, here assumed as the reference, may be obtained. The finite-time convergence properties of simulated annealing allow the duration of the optimization process to be set in advance.

  7. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases.
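
    A minimal cuckoo search loop with a decaying step size illustrates the kind of adaptive step adjustment proposed; the exponential schedule and the Mantegna Lévy step below are common textbook choices, not the paper's exact rule.

        import math, random

        def levy_step(beta=1.5):
            """Approximate Levy-distributed step via Mantegna's algorithm."""
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u, v = random.gauss(0, sigma), random.gauss(0, 1)
            return u / abs(v) ** (1 / beta)

        def cuckoo_search(cost, dim, n=15, iters=300, pa=0.25, alpha0=0.5):
            nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
            fit = [cost(x) for x in nests]
            for t in range(iters):
                alpha = alpha0 * math.exp(-4.0 * t / iters)      # adaptive (decaying) step size
                best = nests[min(range(n), key=lambda i: fit[i])]
                for i in range(n):
                    new = [x + alpha * levy_step() * (x - b) for x, b in zip(nests[i], best)]
                    f_new = cost(new)
                    j = random.randrange(n)
                    if f_new < fit[j]:                           # replace a random nest if better
                        nests[j], fit[j] = new, f_new
                # Abandon a fraction pa of the worst nests.
                order = sorted(range(n), key=lambda i: fit[i], reverse=True)
                for i in order[:int(pa * n)]:
                    nests[i] = [random.uniform(-5, 5) for _ in range(dim)]
                    fit[i] = cost(nests[i])
            k = min(range(n), key=lambda i: fit[i])
            return nests[k], fit[k]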

  8. Angelic Hierarchical Planning: Optimal and Online Algorithms

    DTIC Science & Technology

    2008-12-06

    describe an alternative "satisficing" algorithm, AHSS. 4.1 Abstract Lookahead Trees. Our ALT data structures support our search algorithms by efficiently ... Angelic Hierarchical Satisficing Search (AHSS), which attempts to find a plan that reaches the goal with at most some pre-specified cost α. AHSS can be ... much more efficient than AHA*, since it can commit to a plan without first proving its optimality. At each step, AHSS (see Algorithm 3) begins by

  9. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to establish the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to establish parameters of the Lorenz chaotic system and Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the compared results demonstrate that the method is energy-efficient and superior.

  10. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to establish the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to establish parameters of the Lorenz chaotic system and Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the compared results demonstrate that the method is energy-efficient and superior. PMID:26880874

  11. Global Optimality of the Successive Maxbet Algorithm.

    ERIC Educational Resources Information Center

    Hanafi, Mohamed; ten Berge, Jos M. F.

    2003-01-01

    It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)

  12. Optimizing connected component labeling algorithms

    SciTech Connect

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2005-01-16

    This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among them to reduce the number of neighbors examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of the pointer based rooted trees speeds up the connected component labeling algorithms by a factor of 5 ~ 100 in our tests on random binary images.
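
    The array-based equivalence bookkeeping described here is essentially union-find over integer labels; a generic sketch (not the authors' exact data structure) is:

        def find(parent, i):
            """Find the representative label for i, flattening the chain as we go."""
            root = i
            while parent[root] != root:
                root = parent[root]
            while parent[i] != root:            # path compression
                parent[i], i = root, parent[i]
            return root

        def union(parent, a, b):
            """Record that labels a and b belong to the same connected component."""
            ra, rb = find(parent, a), find(parent, b)
            if ra != rb:
                parent[max(ra, rb)] = min(ra, rb)   # keep the smaller label as representative

        # parent[i] == i marks a root; labels are consecutive integers assigned during the scan.
        parent = list(range(6))
        union(parent, 3, 5)
        union(parent, 1, 3)
        print([find(parent, i) for i in range(6)])   # components: {0} {1,3,5} {2} {4}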

  13. Optimizing connected component labeling algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2005-04-01

    This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among them to reduce the number of neighbors examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of the pointer based rooted trees speeds up the connected component labeling algorithms by a factor of 5 ~ 100 in our tests on random binary images.

  14. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was first estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  15. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was first estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  16. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don; Porter, Reid

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant trees sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  17. An algorithm for online optimization of accelerators

    SciTech Connect

    Huang, Xiaobiao; Corbett, Jeff; Safranek, James; Wu, Juhao

    2013-10-01

    We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses a parabolic fit of data points that uniformly sample the bracketed zone. The method is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
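
    A minimal sketch of the kind of noise-tolerant line optimizer the abstract describes (uniform sampling of a bracketed interval followed by a least-squares parabolic fit) is given below; it is an illustration under assumed parameter names and defaults, not the published RCDS code:

      import numpy as np

      def robust_line_min(f, x, d, bracket=(-1.0, 1.0), n_samples=11):
          # Minimize the noisy objective f along direction d, starting from point x.
          a, b = bracket
          alphas = np.linspace(a, b, n_samples)          # uniform samples of the bracketed zone
          values = np.array([f(x + s * d) for s in alphas])
          c2, c1, _ = np.polyfit(alphas, values, 2)      # least-squares parabola through the samples
          if c2 <= 0:                                    # non-convex fit: fall back to the best sample
              step = alphas[np.argmin(values)]
          else:
              step = np.clip(-c1 / (2.0 * c2), a, b)     # vertex of the fitted parabola
          return x + step * d

    In RCDS proper, a line search of this kind is applied successively along a conjugate direction set that is updated as in Powell's method.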

  18. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  19. Effective 3D protein structure prediction with local adjustment genetic-annealing algorithm.

    PubMed

    Zhang, Xiao-Long; Lin, Xiao-Li

    2010-09-01

    The protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing the energy function. Protein folding structure prediction is computationally challenging and has been shown to be an NP-hard problem when the 3D off-lattice AB model is employed. In this paper, the local adjustment genetic-annealing (LAGA) algorithm was used to search for the ground state of the 3D off-lattice AB model for protein folding structure. The algorithm included an improved crossover strategy and an improved mutation strategy, and a local adjustment strategy was also used to enhance the searching ability. The experiments were carried out with the Fibonacci sequences. The experimental results demonstrate that the LAGA algorithm has better performance and accuracy compared to previous methods.

  20. Optimal Hops-Based Adaptive Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

    This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the clusters form so that the nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance the load, and optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the life of the network, improves the energy utilization rate and transmits more data because of energy balancing.

  1. An algorithm for LQ optimal actuator location

    NASA Astrophysics Data System (ADS)

    Darivandi, Neda; Morris, Kirsten; Khajepour, Amir

    2013-03-01

    The locations of the control hardware are typically a design variable in controller design for distributed parameter systems. In order to obtain the most efficient control system, the locations of control hardware as well as the feedback gain should be optimized. These optimization problems are generally non-convex. In addition, the models for these systems typically have a large number of degrees of freedom. Consequently, existing optimization schemes for optimal actuator placement may be inaccurate or computationally impractical. In this paper, the feedback control is chosen to be an optimal linear quadratic regulator. The optimal actuator location problem is reformulated as a convex optimization problem. A subgradient-based optimization scheme which leads to the global solution of the problem is used to optimize actuator locations. The optimization algorithm is applied to optimize the placement of piezoelectric actuators in vibration control of flexible structures. This method is compared with a genetic algorithm, and is observed to be faster and more accurate. Experiments are performed to verify the efficacy of optimal actuator placement.

  2. Particle Swarm Optimization algorithms for geophysical inversion, practical hints

    NASA Astrophysics Data System (ADS)

    Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.

    2008-12-01

    PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of PSO parameters (inertia, local and global accelerations and discretization step), both in convergence curves and in the a posteriori sampling of the depth of an intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given to select the correct algorithm and to tune the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
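
    For readers unfamiliar with the update rule being analyzed, a bare-bones PSO loop on the Rosenbrock test function might look like the sketch below; the parameter values are arbitrary illustrations, not the tuned settings discussed by the authors:

      import numpy as np

      def rosenbrock(x):
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

      def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5.0, 5.0, (n_particles, dim))       # positions
          v = np.zeros_like(x)                                  # velocities
          pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              # inertia (damping) term plus local and global "spring" accelerations
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = x + v
              vals = np.array([f(p) for p in x])
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, pbest_val.min()

      print(pso(rosenbrock))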

  3. Reference energy extremal optimization: a stochastic search algorithm applied to computational protein design.

    PubMed

    Zhang, Naigong; Zeng, Chen

    2008-08-01

    We adapt a combinatorial optimization algorithm, extremal optimization (EO), for the search problem in computational protein design. This algorithm takes advantage of the knowledge of local energy information and systematically improves on the residues that have high local energies. Power-law probability distributions are used to select the backbone sites to be improved on and the rotamer choices to be changed to. We compare this method with simulated annealing (SA) and motivate and present an improved method, which we call reference energy extremal optimization (REEO). REEO uses reference energies to convert a problem with a structured local-energy profile to one with a more random profile, and extremal optimization proves to be extremely efficient for the latter problem. We show in detail the large improvement we have achieved using REEO as compared to simulated annealing and discuss a number of other heuristics we have attempted to date. 2008 Wiley Periodicals, Inc.
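
    The extremal-optimization move described here (rank the variables by local fitness, pick a rank from a power law, and change that variable unconditionally) can be sketched on a toy Ising spin glass; this stand-in is only an illustration of the EO mechanism, not the protein-design REEO implementation:

      import numpy as np

      def tau_eo(J, tau=1.4, steps=20000, seed=0):
          # tau-EO on an Ising spin glass with symmetric couplings J (zero diagonal).
          rng = np.random.default_rng(seed)
          n = J.shape[0]
          s = rng.choice([-1, 1], size=n)
          ranks = np.arange(1, n + 1, dtype=float) ** (-tau)    # P(rank k) ~ k^(-tau)
          ranks /= ranks.sum()
          best_s, best_e = s.copy(), -0.5 * s @ J @ s
          for _ in range(steps):
              lam = s * (J @ s)                 # local fitness: how well each spin is satisfied
              order = np.argsort(lam)           # worst spin first
              k = rng.choice(n, p=ranks)        # power-law pick over the ranked spins
              s[order[k]] *= -1                 # unconditional flip of the selected spin
              e = -0.5 * s @ J @ s
              if e < best_e:
                  best_s, best_e = s.copy(), e
          return best_s, best_e

      # Example: random symmetric couplings
      rng = np.random.default_rng(1)
      J = rng.normal(size=(64, 64))
      J = (J + J.T) / 2.0
      np.fill_diagonal(J, 0.0)
      print(tau_eo(J, steps=5000)[1])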

  4. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    Optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this comparison.

  5. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration.

  6. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration. PMID:25147850

  7. Investigation of intensity-modulated radiotherapy optimization with gEUD-based objectives by means of simulated annealing.

    PubMed

    Hartmann, Matthias; Bogner, Ludwig

    2008-05-01

    Inverse treatment planning of intensity-modulated radiation therapy (IMRT) is complicated by several sources of error, which can cause deviations of optimized plans from the true optimal solution. These errors include the systematic and convergence error, the local minima error, and the optimizer convergence error. We minimize these errors by developing an inverse IMRT treatment planning system with a Monte Carlo based dose engine and a simulated annealing search engine as well as a deterministic search engine. In addition, different generalized equivalent uniform dose (gEUD)-based and hybrid objective functions were implemented and investigated with simulated annealing. By means of a head-and-neck IMRT case we have analyzed the properties of these gEUD-based objective functions, including their search space and the existence of local optima errors. We found evidence that the gEUD-based objective function used in a previously published investigation results in an uncommon search space with a golf-hole structure. This special search space structure leads to trapping in local minima, making it extremely difficult to identify the true global minimum, even when using stochastic search engines. Moreover, for the same IMRT case several local optima have been detected by comparing the solutions of 100 different trials using a gradient optimization algorithm with the global optimum computed by simulated annealing. We have demonstrated that the hybrid objective function, which includes dose-based objectives for the target and gEUD-based objectives for normal tissue, results in sparing of the critical structures that is as good as with the pure gEUD objective function, together with lower target dose maxima.
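
    The generalized equivalent uniform dose at the core of these objectives has a simple closed form, gEUD = ((1/N) sum_i d_i^a)^(1/a) over the N voxel doses d_i; a small illustrative helper (not the planning system's code, with a hypothetical one-sided penalty) is:

      import numpy as np

      def geud(dose, a):
          # Generalized equivalent uniform dose of a voxel-dose array;
          # a = 1 gives the mean dose, large positive a approaches the maximum dose.
          dose = np.asarray(dose, dtype=float)
          return np.mean(dose ** a) ** (1.0 / a)

      def geud_overdose_penalty(dose, a, limit):
          # One-sided quadratic penalty of the kind used for normal-tissue objectives (illustrative).
          return max(0.0, geud(dose, a) - limit) ** 2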

  8. An eco-environmental water demand based model for optimising water resources using hybrid genetic simulated annealing algorithms. Part I. Model development.

    PubMed

    Wang, Xiaoling; Sun, Yuefeng; Song, Lingguang; Mei, Chuanshu

    2009-06-01

    We propose here an improved multi-objective optimisation model that considers eco-environmental water demand (EWD) for allocating water resources in a river basin over the long term. The model considers economic, social, and environmental objectives, and it improves on traditional optimisation methods by emphasizing not only the water demand of the artificial ecosystem but also that of the natural ecosystem. Water resource constraints are considered. The hybrid genetic simulated annealing algorithms (HGSAA) technique incorporates a genetic algorithm (GA) and a simulated annealing (SA) algorithm, which have strong local and global searching abilities, in order to solve the highly non-linear model and avoid local and pre-mature convergence. In the method, the water demands of users in the planning year serve as the basis for long-term optimisation using a forecasting procedure. In this study, the combined forecasting method based on the principle of optimal combination is built to forecast domestic and industrial water demands. The proposed model and method are subsequently used in a companion paper to optimise water allocation in the Haihe River basin in China [An eco-environmental water demand based model for optimising water resources using hybrid genetic simulated annealing algorithms. Part II. Model application and results 90 (8), 2612-2619].

  9. Firefly Mating Algorithm for Continuous Optimization Problems

    PubMed Central

    Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of other algorithms and the proposed algorithm also required fewer iterations to reach the global optima. PMID:28808442

  10. Optimization of Cold Rolling and Subsequent Annealing Treatment on Mechanical Properties of TWIP Steel

    NASA Astrophysics Data System (ADS)

    Zamani, D.; Golshan, A.; Dini, G.; Ismarrubie, Z. N.; Azmah Hanim, M. A.; Sajuri, Z.

    2017-08-01

    This research work studied the effect of cold rolling reduction and subsequent annealing temperature on the microstructural evolution and the mechanical properties of Fe-32Mn-4Si-2Al twinning-induced plasticity steel plates. For this, uniaxial tensile tests were carried out for three cold rolling reductions (50, 65 and 80%) and subsequent annealing treatment at 550-750 °C for 1.8 ks. The results were discussed in terms of the yield strength, ultimate tensile strength and total elongation and their dependence on the resulting microstructure. Regression analysis was used to develop mathematical models of the mechanical properties, and analysis of variance was employed to verify the precision of these models. Finally, a desirability function was used as an effective approach for multi-objective optimization of the cold rolling reduction and annealing temperature; notably, no previous research had attempted to find optimum mechanical properties of these steels using this approach. The results indicated that applying a large cold rolling reduction (above 75%) followed by annealing in the recovery region, or a large cold rolling reduction followed by annealing at the lower limit of the partial recrystallization region, were effective ways to obtain an excellent combination of mechanical properties.

  11. A deterministic annealing algorithm for approximating a solution of the linearly constrained nonconvex quadratic minimization problem.

    PubMed

    Dang, Chuangyin; Liang, Jianqing; Yang, Yang

    2013-03-01

    A deterministic annealing algorithm is proposed for approximating a solution of the linearly constrained nonconvex quadratic minimization problem. The algorithm is derived from applications of a Hopfield-type barrier function in dealing with box constraints and Lagrange multipliers in handling linear equality constraints, and attempts to obtain a solution of good quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the box constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem. Preliminary numerical results show that the algorithm seems effective and efficient. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Comparison between simulated annealing algorithms and rapid chain delineation in the construction of genetic maps

    PubMed Central

    2010-01-01

    The efficiency of simulated annealing algorithms and rapid chain delineation in establishing the best linkage order, when constructing genetic maps, was evaluated. Linkage refers to the phenomenon by which two or more genes, or even molecular markers, can be present on the same chromosome or linkage group. In order to evaluate the capacity of the algorithms, four F2 co-dominant populations, 50, 100, 200 and 1000 in size, were simulated. For each population, a genome with four linkage groups (100 cM) was generated. The linkage groups possessed 51, 21, 11 and 6 marks, respectively, and a corresponding distance of 2, 5, 10 and 20 cM between adjacent marks, thereby causing various degrees of saturation. For very saturated groups, with an adjacent distance between marks of 2 cM and a greater number of marks, i.e., 51, the method based upon stochastic simulation by simulated annealing presented orders with distances equivalent to or lower than those of rapid chain delineation. Otherwise, the two methods were comparable, presenting the same SARF distance. PMID:21637501
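
    The simulated annealing search over marker orders can be pictured with the following toy sketch, which anneals a permutation to minimize the sum of adjacent pairwise distances (a SARF-style criterion); the distance matrix, the segment-reversal move and the cooling schedule are illustrative assumptions, not the settings used in the study:

      import math
      import random

      def sarf(order, dist):
          # Sum of adjacent pairwise distances (SARF-style criterion) for a given marker order.
          return sum(dist[order[i]][order[i + 1]] for i in range(len(order) - 1))

      def anneal_order(dist, t0=1.0, cooling=0.9995, steps=20000, seed=0):
          # dist: symmetric matrix of pairwise recombination distances between markers.
          rng = random.Random(seed)
          n = len(dist)
          order = list(range(n))
          rng.shuffle(order)
          cost = sarf(order, dist)
          best, best_cost, t = order[:], cost, t0
          for _ in range(steps):
              i, j = sorted(rng.sample(range(n), 2))
              cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # reverse a segment
              c = sarf(cand, dist)
              if c < cost or rng.random() < math.exp((cost - c) / t):   # Metropolis acceptance
                  order, cost = cand, c
                  if c < best_cost:
                      best, best_cost = cand[:], c
              t *= cooling
          return best, best_cost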

  13. Reconstruction of the vertical electron density profile based on vertical TEC using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Chunhua; Yang, Guobin; Zhu, Peng; Nishioka, Michi; Yokoyama, Tatsuhiro; Zhou, Chen; Song, Huan; Lan, Ting; Zhao, Zhengyu; Zhang, Yuannong

    2016-05-01

    This paper presents a new method to reconstruct the vertical electron density profile based on vertical Total Electron Content (TEC) using the simulated annealing algorithm. The present technique used quasi-parabolic segments (QPS) to model the bottomside ionosphere. The initial parameters of the ionosphere model were determined from both the International Reference Ionosphere (IRI) (Bilitza et al., 2014) and vertical TEC (vTEC). Then, the simulated annealing algorithm was used to search for the best-fit parameters of the ionosphere model by comparison with the GPS-TEC. The performance and robustness of this technique were verified with ionosonde data. The critical frequency (foF2) and peak height (hmF2) of the F2 layer obtained from ionograms recorded at different locations and on different days were compared with those calculated by the proposed method. The analysis of the results shows that the present method is promising for obtaining foF2 from vTEC. However, the accuracy of hmF2 needs to be improved in future work.

  14. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.

  15. Protein structure optimization with a "Lamarckian" ant colony algorithm.

    PubMed

    Oakley, Mark T; Richardson, E Grace; Carr, Harriet; Johnston, Roy L

    2013-01-01

    We describe the LamarckiAnt algorithm: a search algorithm that combines the features of a "Lamarckian" genetic algorithm and ant colony optimization. We have implemented this algorithm for the optimization of BLN model proteins, which have frustrated energy landscapes and represent a challenge for global optimization algorithms. We demonstrate that LamarckiAnt performs competitively with other state-of-the-art optimization algorithms.

  16. Comparison of several stochastic parallel optimization algorithms for adaptive optics system without a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Li, Xinyang

    2011-04-01

    Optimizing the system performance metric directly is an important method for correcting wavefront aberrations in an adaptive optics (AO) system where wavefront sensing methods are unavailable or ineffective. An appropriate deformable mirror (DM) control algorithm is the key to successful wavefront correction. Based on several stochastic parallel optimization control algorithms, an adaptive optics system with a 61-element DM is simulated. Genetic Algorithm (GA), Stochastic Parallel Gradient Descent (SPGD), Simulated Annealing (SA) and the Algorithm Of Pattern Extraction (Alopex) are compared in convergence speed and correction capability. The results show that all these algorithms are able to correct for atmospheric turbulence and, compared with least-squares fitting, they almost obtain the best correction achievable for the 61-element DM. SA is the fastest and GA the slowest of these algorithms; the number of perturbations required by GA is almost 20 times larger than that of SA, 15 times larger than that of SPGD and 9 times larger than that of Alopex.
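
    Of the algorithms compared, SPGD has the simplest update rule; a hedged sketch of a single two-sided iteration (our function and parameter names, not the values used in the paper) is:

      import numpy as np

      def spgd_step(u, metric, gain=0.3, delta=0.05, rng=None):
          # u: control vector of the deformable mirror; metric(): system performance to maximize.
          rng = np.random.default_rng() if rng is None else rng
          du = delta * rng.choice([-1.0, 1.0], size=u.shape)   # random Bernoulli perturbation
          dJ = metric(u + du) - metric(u - du)                 # two-sided metric difference
          return u + gain * dJ * du                            # move along the estimated gradient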

  17. Enhanced magnetic refrigeration properties in Mn-rich Ni-Mn-Sn ribbons by optimal annealing

    PubMed Central

    Zhang, Yu; Zhang, Linlin; Zheng, Qiang; Zheng, Xinqi; Li, Ming; Du, Juan; Yan, Aru

    2015-01-01

    The influence of annealing time on the temperature range of the martensitic phase transition (ΔTA-M), thermal hysteresis (ΔThys), magnetic hysteresis loss (ΔMhys), magnetic entropy change (ΔSM) and relative refrigeration capacity (RC) of Mn-rich Ni43Mn46Sn11 melt-spun ribbons has been systematically studied. With optimal annealing, an extremely large ΔSM of 43.2 J.kg−1K−1 and a maximum RC of 221.0 J.kg−1 could be obtained in a field change of 5 T. Both ΔTA-M and ΔThys decrease after annealing, while ΔMhys and ΔSM first increase dramatically to a maximum and then degrade as the annealing time increases. A large effective cooling capacity (RCeff) of 115.4 J.kg−1 was achieved in ribbons annealed for 60 min, a 75% increase compared with the unannealed ribbons. The evolution of the magnetic properties and the magnetocaloric effect is discussed and supported by analysis of the atomic ordering degree, microstructure and composition. PMID:26055884

  18. Combinatorial Multiobjective Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Martin, Eric T.

    2002-01-01

    The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete / continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50-seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.

  19. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem, and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand to a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid and converters, which can be inverters and/or rectifiers. The methodology developed is explained as well as the combinatorial formulation. In addition, two case studies of a single-objective version of the problem are presented, one minimizing cost and one minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.

  20. Optimization of a chemical identification algorithm

    NASA Astrophysics Data System (ADS)

    Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren

    2010-04-01

    A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
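
    For reference, the Matthews Correlation Coefficient used as one figure of merit is computed directly from the 2-category confusion counts; the snippet below is a generic implementation, not the JCSD code:

      import math

      def matthews_cc(tp, fp, tn, fn):
          # Matthews correlation coefficient: +1 perfect, 0 no better than chance, -1 total disagreement.
          num = tp * tn - fp * fn
          den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
          return 0.0 if den == 0 else num / den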

  1. Algorithm for fixed-range optimal trajectories

    NASA Technical Reports Server (NTRS)

    Lee, H. Q.; Erzberger, H.

    1980-01-01

    An algorithm for synthesizing optimal aircraft trajectories for specified range was developed and implemented in a computer program written in FORTRAN IV. The algorithm, its computer implementation, and a set of example optimum trajectories for the Boeing 727-100 aircraft are described. The algorithm optimizes trajectories with respect to a cost function that is the weighted sum of fuel cost and time cost. The optimum trajectory consists of at most three segments: climb, cruise, and descent. The climb and descent profiles are generated by integrating a simplified set of kinematic and dynamic equations wherein the total energy of the aircraft is the independent or time-like variable. At each energy level the optimum airspeeds and thrust settings are obtained as the values that minimize the variational Hamiltonian. Although the emphasis is on an off-line, open-loop computation, eventually the most important application will be in an on-board flight management system.

  2. Optimized TRIAD Algorithm for Attitude Determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    1996-01-01

    TRIAD is a well-known simple algorithm that generates the attitude matrix between two coordinate systems when the components of two abstract vectors are given in the two systems. TRIAD, however, is sensitive to the order in which the algorithm handles the vectors, such that the resulting attitude matrix is influenced more by the vector processed first. In this work we present a new algorithm, which we call Optimized TRIAD, that blends in a specified manner the two matrices generated by TRIAD when processing one vector first, and then when processing the other vector first. On the average, Optimized TRIAD yields a matrix which is better than either one of the two matrices in that it is the closest to the correct matrix. This result is demonstrated through simulation.
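
    The classic TRIAD construction that Optimized TRIAD builds on can be written in a few lines; the sketch below computes the two order-dependent estimates, while the specific blending rule of Optimized TRIAD follows the report and is not reproduced here:

      import numpy as np

      def triad(b1, b2, r1, r2):
          # Attitude matrix A with b = A r; the vector pair processed first dominates the solution.
          def frame(v1, v2):
              t1 = v1 / np.linalg.norm(v1)
              t2 = np.cross(v1, v2)
              t2 /= np.linalg.norm(t2)
              return np.column_stack((t1, t2, np.cross(t1, t2)))
          return frame(b1, b2) @ frame(r1, r2).T

      # The two TRIAD estimates that Optimized TRIAD blends:
      #   A1 = triad(b1, b2, r1, r2)   (first vector processed first)
      #   A2 = triad(b2, b1, r2, r1)   (second vector processed first)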

  3. Alpha-plane based automatic general type-2 fuzzy clustering based on simulated annealing meta-heuristic algorithm for analyzing gene expression data.

    PubMed

    Doostparast Torshizi, Abolfazl; Fazel Zarandi, Mohammad Hossein

    2015-09-01

    This paper considers microarray gene expression data clustering using a novel two stage meta-heuristic algorithm based on the concept of α-planes in general type-2 fuzzy sets. The main aim of this research is to present a powerful data clustering approach capable of dealing with highly uncertain environments. In this regard, first, a new objective function using α-planes for general type-2 fuzzy c-means clustering algorithm is represented. Then, based on the philosophy of the meta-heuristic optimization framework 'Simulated Annealing', a two stage optimization algorithm is proposed. The first stage of the proposed approach is devoted to the annealing process accompanied by its proposed perturbation mechanisms. After termination of the first stage, its output is inserted to the second stage where it is checked with other possible local optima through a heuristic algorithm. The output of this stage is then re-entered to the first stage until no better solution is obtained. The proposed approach has been evaluated using several synthesized datasets and three microarray gene expression datasets. Extensive experiments demonstrate the capabilities of the proposed approach compared with some of the state-of-the-art techniques in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. An efficient algorithm for numerical airfoil optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1979-01-01

    A new optimization algorithm is presented. The method is based on sequential application of a second-order Taylor's series approximation to the airfoil characteristics. Compared to previous methods, design efficiency improvements of more than a factor of 2 are demonstrated. If multiple optimizations are performed, the efficiency improvements are more dramatic due to the ability of the technique to utilize existing data. The method is demonstrated by application to subsonic and transonic airfoil design but is a general optimization technique and is not limited to a particular application or aerodynamic analysis.

  5. Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.

    PubMed

    Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A

    2016-04-12

    The machine learning method kriging is an attractive tool to construct next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely, particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that with the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum with whichever precision is desired.
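
    A toy version of this two-stage search, with SciPy's differential evolution standing in for the global stage (PSO is not part of SciPy) and L-BFGS-B refining the result, might look as follows; the simple Gaussian-process likelihood here is only a stand-in for the concentrated log-likelihood of the article:

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      def neg_log_likelihood(log_params, X, y):
          # Negative log marginal likelihood of a toy Gaussian-process (kriging) model
          # with kernel exp(-theta * |x - x'|^2) plus a nugget term.
          theta, nugget = np.exp(log_params)
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          K = np.exp(-theta * d2) + nugget * np.eye(len(X))
          L = np.linalg.cholesky(K)
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
          return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

      rng = np.random.default_rng(0)
      X = rng.uniform(-3.0, 3.0, (40, 2))
      y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.normal(size=40)

      bounds = [(-6.0, 4.0), (-10.0, 0.0)]                 # search over log(theta), log(nugget)
      coarse = differential_evolution(neg_log_likelihood, bounds, args=(X, y), seed=1)
      refined = minimize(neg_log_likelihood, coarse.x, args=(X, y),
                         method="L-BFGS-B", bounds=bounds)
      print("global stage:", coarse.x, "refined:", refined.x)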

  6. An infrared achromatic quarter-wave plate designed based on simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Huang, Zhanhua; Yang, Huaidong

    2017-03-01

    Quarter-wave plates are primarily used to change the polarization state of light. Their retardation usually varies depending on the wavelength of the incident light. In this paper, the design and characteristics of an achromatic quarter-wave plate, which is formed by a cascaded system of birefringent plates, are studied. For the analysis of the combination, we use the Jones matrix method to derive the general expressions of the equivalent retardation and the equivalent azimuth. The infrared achromatic quarter-wave plate is designed based on the simulated annealing (SA) algorithm. The maximum retardation variation and the maximum azimuth variation of this achromatic waveplate are only about 1.8° and 0.5°, respectively, over the entire wavelength range of 1250-1650 nm. This waveplate can change linearly polarized light into circularly polarized light with a degree of linear polarization (DOLP) of less than 3.2% over that wide wavelength range.

  7. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.

  8. Anatomy-Based Inverse Planning Simulated Annealing Optimization in High-Dose-Rate Prostate Brachytherapy: Significant Dosimetric Advantage Over Other Optimization Techniques

    SciTech Connect

    Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry

    2008-11-01

    Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V75%) for the rectum and bladder, volume receiving at least 125% of the dose (V125%) for the urethra, and total volume receiving the reference dose (V100%) and volume receiving 150% of the dose (V150%) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.

  9. Assessment of a fuzzy based flood forecasting system optimized by simulated annealing

    NASA Astrophysics Data System (ADS)

    Reyhani Masouleh, Aida; Pakosch, Sabine; Disse, Markus

    2010-05-01

    Flood forecasting is an important tool to mitigate the harmful effects of floods. Among the many different approaches to forecasting, Fuzzy Logic (FL) is one that has been increasingly applied over the last decade. This method is principally based on the linguistic description of Rule Systems (RS). An RS is a specific combination of membership functions of input and output variables. Setting up the RS can be done either automatically or manually, and the choice can strongly influence the resulting rule systems. It is therefore the objective of this study to assess the influence that the parameters of an automated rule generation based on Simulated Annealing (SA) have on the resulting RS. The study area is the upper Main River area, located in the northern part of Bavaria, Germany. Data from the Mainleus gauge, with a catchment area of 1165 km2, were investigated for the whole period from 1984 to 2004. The highest observed discharge of 357 m3/s was recorded in 1995. The input arguments of the FL model were daily precipitation, forecasted precipitation, antecedent precipitation index, temperature and melting rate. The FL model of this study has one output variable, daily discharge, and was independently set up for three different forecast lead times, namely one, two and three days ahead. In total, each RS comprised 55 rules and all input and output variables were represented by five sets of trapezoidal and triangular fuzzy numbers. Simulated Annealing, an algorithm that converges towards an optimal solution, was applied to optimize the RSs in this study. In order to assess the influence of its parameters (number of iterations, temperature decrease rate, initial value for generating random numbers, initial temperature and two other parameters), they were individually varied while keeping the others fixed. With each of the resulting parameter sets, a full-automatic SA was applied to gain optimized fuzzy rule systems for flood forecasting. Evaluation of the performance of the

  10. 2-D Ultrasound Sparse Arrays Multidepth Radiation Optimization Using Simulated Annealing and Spiral-Array Inspired Energy Functions.

    PubMed

    Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc C; Liebgott, Herve

    2016-12-01

    Full matrix arrays are excellent tools for 3-D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by the sparse array techniques, but the position of the remaining elements must be carefully optimized. This issue is faced here by introducing novel energy functions in the simulated annealing (SA) algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. Such energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points, and depths, as well as the influence of the energy function definition on the optimized layout. It is also shown that performance close to or even better than the one provided by a spiral array, here assumed as reference, may be obtained. The finite-time convergence properties of SA allow the duration of the optimization process to be set in advance.

  11. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and, in determining whether a project is economically feasible or not. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) A modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind-farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions and, (iii) A hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe their proper behaviors to account for the stochastic comportment of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  12. Quantum Simulations of Classical Annealing Processes

    NASA Astrophysics Data System (ADS)

    Somma, R. D.; Boixo, S.; Barnum, H.; Knill, E.

    2008-09-01

    We describe a quantum algorithm that solves combinatorial optimization problems by quantum simulation of a classical simulated annealing process. Our algorithm exploits quantum walks and the quantum Zeno effect induced by evolution randomization. It requires order 1/√δ steps to find an optimal solution with bounded error probability, where δ is the minimum spectral gap of the stochastic matrices used in the classical annealing process. This is a quadratic improvement over the order 1/δ steps required by the latter.

  13. Application of heuristic optimization techniques and algorithm tuning to multilayered sorptive barrier design.

    PubMed

    Matott, L Shawn; Bartelt-Hunt, Shannon L; Rabideau, Alan J; Fowler, K R

    2006-10-15

    Although heuristic optimization techniques are increasingly applied in environmental engineering applications, algorithm selection and configuration are often approached in an ad hoc fashion. In this study, the design of a multilayer sorptive barrier system served as a benchmark problem for evaluating several algorithm-tuning procedures, as applied to three global optimization techniques (genetic algorithms, simulated annealing, and particle swarm optimization). Each design problem was configured as a combinatorial optimization in which sorptive materials were selected for inclusion in a landfill liner to minimize the transport of three common organic contaminants. Relative to multilayer sorptive barrier design, study results indicate (i) the binary-coded genetic algorithm is highly efficient and requires minimal tuning, (ii) constraint violations must be carefully integrated to avoid poor algorithm convergence, and (iii) search algorithm performance is strongly influenced by the physical-chemical properties of the organic contaminants of concern. More generally, the results suggest that formal algorithm tuning, which has not been widely applied to environmental engineering optimization, can significantly improve algorithm performance and provide insight into the physical processes that control environmental systems.

  14. Polynomial Local Improvement Algorithms in Combinatorial Optimization.

    DTIC Science & Technology

    1981-11-01

    Report SOL 81-21, Stanford, CA 94305 (Office of Naval Research, Dept. of the Navy, November 1981). ...corresponds to a node of the tree. (ii) The father of a vertex is its optimal adjacent vertex; if a vertex is a local optimum, it has no father. The tree is

  15. FOGSAA: Fast Optimal Global Sequence Alignment Algorithm

    NASA Astrophysics Data System (ADS)

    Chakraborty, Angana; Bandyopadhyay, Sanghamitra

    2013-04-01

    In this article we propose a Fast Optimal Global Sequence Alignment Algorithm, FOGSAA, which aligns a pair of nucleotide/protein sequences faster than any optimal global alignment method including the widely used Needleman-Wunsch (NW) algorithm. FOGSAA is applicable for all types of sequences, with any scoring scheme, and with or without affine gap penalty. Compared to NW, FOGSAA achieves a time gain of (70-90)% for highly similar nucleotide sequences (> 80% similarity), and (54-70)% for sequences having (30-80)% similarity. For other sequences, it terminates with an approximate score. For protein sequences, the average time gain is between (25-40)%. Compared to three heuristic global alignment methods, the quality of alignment is improved by about 23%-53%. FOGSAA is, in general, suitable for aligning any two sequences defined over a finite alphabet set, where the quality of the global alignment is of supreme importance.

  16. Intelligent perturbation algorithms for space scheduling optimization

    NASA Technical Reports Server (NTRS)

    Kurtzman, Clifford R.

    1990-01-01

    The optimization of space operations is examined in the light of optimization heuristics for computer algorithms and iterative search techniques. Specific attention is given to the search concepts known collectively as intelligent perturbation algorithms (IPAs) and their application to crew/resource allocation problems. IPAs iteratively examine successive schedules which become progressively more efficient, and the characteristics of good perturbation operators are listed. IPAs can be applied to aerospace systems to efficiently utilize crews, payloads, and resources in the context of systems such as Space-Station scheduling. A program is presented called the MFIVE Space Station Scheduling Worksheet which generates task assignments and resource usage structures. The IPAs can be used to develop flexible manifesting and scheduling for the Industrial Space Facility.

  17. Genetic algorithm optimization of atomic clusters

    SciTech Connect

    Morris, J.R.; Deaven, D.M.; Ho, K.M.; Wang, C.Z.; Pan, B.C.; Wacker, J.G.; Turner, D.E.

    1996-12-31

    The authors have been using genetic algorithms to study the structures of atomic clusters and related problems. This is a problem where local minima are easy to locate, but barriers between the many minima are large, and the number of minima prohibit a systematic search. They use a novel mating algorithm that preserves some of the geometrical relationship between atoms, in order to ensure that the resultant structures are likely to inherit the best features of the parent clusters. Using this approach, they have been able to find lower energy structures than had been previously obtained. Most recently, they have been able to turn around the building block idea, using optimized structures from the GA to learn about systematic structural trends. They believe that an effective GA can help provide such heuristic information, and (conversely) that such information can be introduced back into the algorithm to assist in the search process.

  18. Optical flow optimization using parallel genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe

    2011-06-01

    A new approach to optimize the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters which conform the number of channels, the orientations required, the length and shape of the kernel functions used in the convolution stage, among many more. The GA is used to find a set of parameters which improve the accuracy of the optical flow on inputs where the ground-truth data is available. This set of parameters helps to understand which of them are better suited for each type of inputs and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speedup the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.

  19. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GAs) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared

  20. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GAs) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
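
    The following is a minimal sketch of the BCB offspring construction described above: a weighted point on the line joining two parents, perturbed parallel and orthogonal to that line with normally distributed deviations. The weighting factor and the two standard deviations are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the Bell-Curve Based offspring construction.
import numpy as np

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_orth=0.1, rng=None):
    """Generate one child from two parent design vectors."""
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    length = np.linalg.norm(d)
    u = d / length if length > 0 else d          # unit vector along the parent line
    child = p1 + w * d                           # weighted point on the connecting line
    # deviation parallel to the line, normally distributed
    child = child + rng.normal(0.0, sigma_par * length) * u
    # deviation orthogonal to the line: random direction with the parallel part removed
    r = rng.normal(size=p1.shape)
    r -= r.dot(u) * u
    nr = np.linalg.norm(r)
    if nr > 0:
        child = child + rng.normal(0.0, sigma_orth * length) * (r / nr)
    return child

# example: one child from two 3-D parents
print(bcb_child([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))
```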

  1. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
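
    The sketch below shows a plain simulated-annealing loop augmented with a small memory of elite solutions, in the spirit of the memory-structure adaptation described above; the cost function, the neighborhood move, and the memory policy are generic placeholders rather than the paper's G-VRPTW formulation.

```python
# Hedged sketch: simulated annealing with a small memory of elite solutions.
# `cost` and `neighbor` stand in for the routing-specific cost (fuel/CO2)
# and the move operators used in the paper.
import math
import random

def sa_with_memory(initial, cost, neighbor, t0=100.0, alpha=0.98,
                   iters_per_temp=50, t_min=1e-3, memory_size=5):
    current, current_cost = initial, cost(initial)
    memory = [(current_cost, current)]           # elite list, best first
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbor(current)
            cand_cost = cost(cand)
            if cand_cost < current_cost or \
               random.random() < math.exp((current_cost - cand_cost) / t):
                current, current_cost = cand, cand_cost
                memory.append((cand_cost, cand))
                memory = sorted(memory, key=lambda x: x[0])[:memory_size]
        # occasionally restart the search from a remembered elite solution
        if random.random() < 0.1:
            current_cost, current = random.choice(memory)
        t *= alpha
    return memory[0]

# toy usage: a permutation "route" whose cost is its number of inversions
route = list(range(20))
random.shuffle(route)

def swap_neighbor(r):
    r = r[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r

best_cost, best_route = sa_with_memory(
    route,
    cost=lambda r: sum(r[i] > r[j] for i in range(len(r)) for j in range(i + 1, len(r))),
    neighbor=swap_neighbor)
print(best_cost)
```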

  2. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).

  3. Optimizing the natural connectivity of scale-free networks using simulated annealing

    NASA Astrophysics Data System (ADS)

    Duan, Boping; Liu, Jing; Tang, Xianglong

    2016-09-01

    In real-world networks, the path between two nodes always plays a significant role in communication and transportation. In some cases, when one path fails, the two nodes can no longer communicate, so it is necessary to increase the number of alternative paths between nodes. In recent work, Wu et al. (2011) proposed the natural connectivity as a novel robustness measure of complex networks. The natural connectivity captures the redundancy of alternative paths in a network by computing the number of closed paths of all lengths. To enhance the robustness of networks in terms of the natural connectivity, in this paper we propose a simulated annealing method to optimize the natural connectivity of scale-free networks without changing the degree distribution. The experimental results show that the simulated annealing method clearly outperforms other local search methods.
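
    A minimal sketch of the approach described above: simulated annealing over degree-preserving double edge swaps that tries to maximize the natural connectivity, computed here as the logarithm of the average of exp(λi) over the adjacency eigenvalues. The cooling schedule, move counts, and test network are illustrative assumptions.

```python
# Hedged sketch: anneal over degree-preserving double edge swaps, maximizing
# the natural connectivity ln((1/N) * sum_i exp(lambda_i)) of the adjacency
# spectrum. Cooling schedule and move counts are illustrative.
import math
import random
import networkx as nx
import numpy as np

def natural_connectivity(g):
    lam = np.linalg.eigvalsh(nx.to_numpy_array(g))
    return math.log(np.mean(np.exp(lam)))

def anneal_connectivity(g, t0=1.0, alpha=0.95, steps_per_temp=20, t_min=1e-3):
    g = g.copy()
    score = natural_connectivity(g)
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            h = g.copy()
            try:
                nx.double_edge_swap(h, nswap=1, max_tries=100)  # keeps the degree sequence
            except nx.NetworkXException:
                continue
            new_score = natural_connectivity(h)
            if new_score > score or random.random() < math.exp((new_score - score) / t):
                g, score = h, new_score
        t *= alpha
    return g, score

g0 = nx.barabasi_albert_graph(50, 2, seed=1)        # scale-free test network
g_opt, s = anneal_connectivity(g0)
print(natural_connectivity(g0), "->", s)
```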

  4. Extended Information Ratio for Portfolio Optimization Using Simulated Annealing with Constrained Neighborhood

    NASA Astrophysics Data System (ADS)

    Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi

    Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. It is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed in a past period to keep its performance in a future period. In order to maintain good portfolio performance, we propose the extended information ratio as an objective function, built from the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio employing the proposed ratio. For the SA, we generate a neighbor by an operation that changes the structure of the weights in the portfolio. In the numerical experiments, we show that our portfolios keep good performance even when the market trend of the future period becomes different from that of the past period.

  5. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: A performance evaluation study.

    PubMed

    Vaitheeswaran, Ranganathan; Sathiya, Narayanan V K; Bhangle, Janhavi R; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-04-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both the algorithms can be combined, which will lead to an efficient global optimizer solving the problem at a very fast rate. Our hybrid approach combines Gaussian elimination algorithm (exact optimizer) with fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of FSA algorithm; (ii) the convergence (percentage reduction in the cost function) in hybrid algorithm is about 20% improved as compared to that in GEM algorithm; (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) [~ 2% - 5% improvement] and Homogeneity Index (HI) [~ 4% - 10% improvement] as compared to GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are

  6. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.

  8. Unification of algorithms for minimum mode optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Yi; Xiao, Penghao; Henkelman, Graeme

    2014-01-01

    Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.
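
    The sketch below illustrates the matrix-free idea these methods share: the lowest curvature mode is obtained from Hessian-vector products (here finite differences of the gradient) fed to a Lanczos-type eigensolver, without ever forming the Hessian. The quartic test potential is an illustrative stand-in for a chemical or material system.

```python
# Hedged sketch: find the lowest-curvature mode of a potential without
# forming the Hessian, using finite-difference Hessian-vector products inside
# SciPy's Lanczos-based symmetric eigensolver.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def gradient(x):
    # gradient of f(x) = sum((x_i^2 - 1)^2 + 0.5 * x_i^2), a toy double-well
    return 4.0 * x * (x**2 - 1.0) + x

def hessian_vector_product(x, v, eps=1e-5):
    # central finite difference of the gradient along direction v
    return (gradient(x + eps * v) - gradient(x - eps * v)) / (2.0 * eps)

x0 = np.array([0.1, -0.2, 0.05, 0.3])            # point where curvature is probed
n = x0.size
H = LinearOperator((n, n), matvec=lambda v: hessian_vector_product(x0, v))

# 'SA' asks for the smallest algebraic eigenvalue, i.e. the minimum mode
eigval, eigvec = eigsh(H, k=1, which='SA')
print("lowest curvature:", eigval[0])
print("minimum mode:", eigvec[:, 0])
```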

  9. Unification of algorithms for minimum mode optimization.

    PubMed

    Zeng, Yi; Xiao, Penghao; Henkelman, Graeme

    2014-01-28

    Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.

  10. An organizational evolutionary algorithm for numerical optimization.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2007-08-01

    Taking inspiration from the interacting process among organizations in human societies, this correspondence designs a kind of structured population and corresponding evolutionary operators to form a novel algorithm, Organizational Evolutionary Algorithm (OEA), for solving both unconstrained and constrained optimization problems. In OEA, a population consists of organizations, and an organization consists of individuals. All evolutionary operators are designed to simulate the interaction among organizations. In experiments, 15 unconstrained functions, 13 constrained functions, and 4 engineering design problems are used to validate the performance of OEA, and thorough comparisons are made between the OEA and the existing approaches. The results show that the OEA obtains good performances in both the solution quality and the computational cost. Moreover, for the constrained problems, the good performances are obtained by only incorporating two simple constraints handling techniques into the OEA. Furthermore, systematic analyses have been made on all parameters of the OEA. The results show that the OEA is quite robust and easy to use.

  11. Optimization of silver nanowire-based transparent electrodes: effects of density, size and thermal annealing.

    PubMed

    Lagrange, M; Langley, D P; Giusti, G; Jiménez, C; Bréchet, Y; Bellet, D

    2015-11-07

    Silver nanowire (AgNW) networks are efficient as flexible transparent electrodes, and are cheaper to fabricate than ITO (Indium Tin Oxide). Hence they are a serious competitor as an alternative to ITO in many applications such as solar cells, OLEDs and transparent heaters. Electrical and optical properties of AgNW networks deposited on glass are investigated in this study and an efficient method to optimize them is proposed. This paper relates network density, nanowire dimensions and thermal annealing directly to the physical properties of the nanowire networks using original physical models. A fair agreement is found between experimental data and the proposed models. Moreover, thermal stability of the nanowires is a key issue in thermal optimization of such networks and needs to be studied. In this work the impact of these four parameters on the networks' physical properties is thoroughly investigated via in situ measurements and modelling, such a method being also applicable to other metallic nanowire networks. We demonstrate that this approach enables the optimization of both optical and electrical properties through modification of the junction resistance by thermal annealing, and a suitable choice of nanowire dimensions and network density. This work reports excellent optical and electrical properties of electrodes fabricated from AgNW networks, with a transmittance T = 89.2% (at 550 nm) and a sheet resistance of Rs = 2.9 Ω □⁻¹, leading to the highest reported figure of merit.

  12. Optimization of silver nanowire-based transparent electrodes: effects of density, size and thermal annealing

    NASA Astrophysics Data System (ADS)

    Lagrange, M.; Langley, D. P.; Giusti, G.; Jiménez, C.; Bréchet, Y.; Bellet, D.

    2015-10-01

    Silver nanowire (AgNW) networks are efficient as flexible transparent electrodes, and are cheaper to fabricate than ITO (Indium Tin Oxide). Hence they are a serious competitor as an alternative to ITO in many applications such as solar cells, OLEDs and transparent heaters. Electrical and optical properties of AgNW networks deposited on glass are investigated in this study and an efficient method to optimize them is proposed. This paper relates network density, nanowire dimensions and thermal annealing directly to the physical properties of the nanowire networks using original physical models. A fair agreement is found between experimental data and the proposed models. Moreover, thermal stability of the nanowires is a key issue in thermal optimization of such networks and needs to be studied. In this work the impact of these four parameters on the networks' physical properties is thoroughly investigated via in situ measurements and modelling, such a method being also applicable to other metallic nanowire networks. We demonstrate that this approach enables the optimization of both optical and electrical properties through modification of the junction resistance by thermal annealing, and a suitable choice of nanowire dimensions and network density. This work reports excellent optical and electrical properties of electrodes fabricated from AgNW networks, with a transmittance T = 89.2% (at 550 nm) and a sheet resistance of Rs = 2.9 Ω □⁻¹, leading to the highest reported figure of merit.

  13. An optimal annealing technique for ohmic contacts to ion-implanted n-layers in semi-insulating indium phosphide

    NASA Astrophysics Data System (ADS)

    Pande, K. P.; Martin, E.; Gutierrez, D.; Aina, O.

    1987-03-01

    An optimal annealing process was developed for sintering AuGe ohmic contacts to ion-implanted semi-insulating InP substrates. Contacts were annealed using a standard furnace, a graphite strip heater, and a lamp annealer. Alloying at 375°C for 3 min was found to be most suitable for achieving good contact morphology and the lowest contact resistivity. Of the three techniques, lamp annealing was found to give the best results when contacts were annealed under a SiO2 cap. Contact resistivity as low as 8 × 10⁻⁶ Ω cm² was obtained for ion-implanted n+ layers in semi-insulating InP.

  14. Lunar Habitat Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as meteoroid impacts and radiation. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy in which the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.

  15. OPC recipe optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asthana, Abhishek; Wilkinson, Bill; Power, Dave

    2016-03-01

    Optimization of OPC recipes is not trivial due to multiple parameters that need tuning and their correlation. Usually, no standard methodologies exist for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen either based on previous learning, vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of parameters on other or possible new designs, and to an extent are based on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, which make it more complex. The present work demonstrates the application of Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. In this case, GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters and, (b) application of GA to improve printing and via coverage at line end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b) line end for various features showed significant printing and filling improvement.

  16. Using and comparing metaheuristic algorithms for optimizing bidding strategy viewpoint of profit maximization of generators

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan

    2015-12-01

    With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market from the viewpoint of profit maximization given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria computed by GA vary less from one another than those computed by the other algorithms.

  17. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  18. Optimizing doped libraries by using genetic algorithms.

    PubMed

    Tomandl, D; Schober, A; Schwienhorst, A

    1997-01-01

    The insertion of random sequences into protein-encoding genes in combination with biological selection techniques has become a valuable tool in the design of molecules that have useful and possibly novel properties. By employing highly effective screening protocols, a functional and unique structure that had not been anticipated can be distinguished among a huge collection of inactive molecules that together represent all possible amino acid combinations. This technique is severely limited by its restriction to a library of manageable size. One approach for limiting the size of a mutant library relies on 'doping schemes', where subsets of amino acids are generated that reveal only certain combinations of amino acids in a protein sequence. Three mononucleotide mixtures for each codon concerned must be designed, such that the resulting codons that are assembled during chemical gene synthesis represent the desired amino acid mixture on the level of the translated protein. In this paper we present a doping algorithm that 'reverse translates' a desired mixture of certain amino acids into three mixtures of mononucleotides. The algorithm is designed to optimally bias these mixtures towards the codons of choice. This approach combines a genetic algorithm with local optimization strategies based on the downhill simplex method. Disparate relative representations of all amino acids (and stop codons) within a target set can be generated. Optional weighing factors are employed to emphasize the frequencies of certain amino acids and their codon usage, and to compensate for reaction rates of different mononucleotide building blocks (synthons) during chemical DNA synthesis. The effect of statistical errors that accompany an experimental realization of calculated nucleotide mixtures on the generated mixtures of amino acids is simulated. These simulations show that the robustness of different optima with respect to small deviations from calculated values depends on their concomitant

  19. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm developed by drawing inspiration from the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding the Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.

  20. Simulation of the post-implantation anneal for emitter profile optimization in high efficiency c-Si solar cells

    NASA Astrophysics Data System (ADS)

    Florakis, A.; Vandervorst, W.; Janssens, T.; Rosseel, E.; Douhard, B.; Delmotte, J.; Cornagliotti, E.; Baert, K.; Posthuma, N.; Poortmans, J.

    2012-11-01

    The use of ion implantation to form local doping profiles in solar cells has regained interest, as it simplifies the manufacturing process, is litho-free, and high efficiency levels can be obtained [1]. However, residual damage after post annealing can lead to increased saturation current values (J0e), so optimization of the annealing process is required. A full simulation of the anneal treatment and its impact on profile and oxide formation was developed, based on the Synopsys Process TCAD software, for the boron doping case. The validity of the models and parameter set was experimentally verified through SIMS (Secondary Ion Mass Spectroscopy) profiles, sheet resistance and ellipsometry measurements.

  1. Application of a simulated annealing optimization to a physically based erosion model.

    PubMed

    Santos, C A G; Freire, P K M M; Arruda, P M

    2012-01-01

    A major difficulty in the calibration of physically based erosion models has been the lack of robust optimization tools. This paper presents the essential concepts and an application of a global optimization method known as simulated annealing (SA), which is suitable for solving large-scale optimization problems, to optimize the parameters of an erosion model using data collected in an experimental basin. The physically based erosion model chosen to be optimized here is the Watershed Erosion Simulation Program (WESP), which was developed for small basins to generate the hydrograph and the respective sedigraph. The field data were collected in an experimental basin located in a semiarid region of Brazil. On the basis of these results, the following erosion parameters were optimized: the soil moisture-tension parameter (Ns), which also depends on the initial moisture content; the channel erosion parameter (a); the soil detachability factor (KR); and the sediment entrainment parameter for rainfall impact (KI), whose values could serve as initial estimates for semiarid regions within northeastern Brazil.
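
    As a generic illustration of calibrating model parameters with an annealing-type global optimizer, the sketch below fits a toy three-parameter model to noisy observations using SciPy's dual_annealing; the model, the bounds, and the sum-of-squared-errors objective are placeholders for the WESP parameters and sedigraph data described above.

```python
# Hedged sketch: parameter calibration against observed data with an
# annealing-type global optimizer (scipy.optimize.dual_annealing).
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 200)
true_params = np.array([1.5, 0.3, 2.0])

def model(params, t):
    a, b, c = params
    return a * np.exp(-b * t) * np.sin(c * t)    # stand-in for a sedigraph model

observed = model(true_params, t) + rng.normal(0.0, 0.02, t.size)

def objective(params):
    # sum of squared errors between simulated and observed series
    return float(np.sum((model(params, t) - observed) ** 2))

bounds = [(0.1, 5.0), (0.01, 2.0), (0.5, 5.0)]
result = dual_annealing(objective, bounds)
print("calibrated parameters:", result.x)
```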

  2. Simulated annealing and metaheuristic for randomized priority search algorithms for the aerial refuelling parallel machine scheduling problem with due date-to-deadline windows and release times

    NASA Astrophysics Data System (ADS)

    Kaplan, Sezgin; Rabadi, Ghaith

    2013-01-01

    This article addresses the aerial refuelling scheduling problem (ARSP), where a set of fighter jets (jobs) with certain ready times must be refuelled from tankers (machines) by their due dates; otherwise, they reach a low fuel level (deadline) incurring a high cost. ARSP is an identical parallel machine scheduling problem with release times and due date-to-deadline windows to minimize the total weighted tardiness. A simulated annealing (SA) and metaheuristic for randomized priority search (Meta-RaPS) with the newly introduced composite dispatching rule, apparent piecewise tardiness cost with ready times (APTCR), are applied to the problem. Computational experiments compared the algorithms' solutions to optimal solutions for small problems and to each other for larger problems. To obtain optimal solutions, a mixed integer program with a piecewise weighted tardiness objective function was solved for up to 12 jobs. The results show that Meta-RaPS performs better in terms of average relative error but SA is more efficient.

  3. Expedite Particle Swarm Optimization Algorithm (EPSO) for Optimization of MSA

    NASA Astrophysics Data System (ADS)

    Rathi, Amit; Vijay, Ritu

    This paper presents a new design method for a rectangular patch microstrip antenna (MSA) using an artificial search algorithm with some constraints. The design requires two stages. In the first stage, the bandwidth of the MSA is modeled using a benchmark function. In the second stage, the output of the first stage is given as input to a modified artificial search algorithm, Particle Swarm Optimization (PSO), which returns five parameters: patch width, frequency range, dielectric loss tangent, length over a ground plane with a substrate thickness, and electrical thickness. In PSO, the cognition factor and the social learning factor have a very important effect on balancing local and global search. Based on modifying these two factors, this paper presents a strategy in which the cognition factor dominates at the start of the search and the social learning factor gradually gains more influence as the search for the global best proceeds. The aim is to determine whether, under these circumstances, the modified PSO gives better results for the optimization of the microstrip antenna (MSA).
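
    A minimal sketch of a PSO in which the cognition factor decreases and the social learning factor increases over the run, so that personal experience dominates early and the global best dominates late, as the strategy above describes; the coefficient ranges, bounds, and sphere test function are illustrative assumptions rather than the antenna model.

```python
# Hedged sketch: PSO with time-varying cognition (c1) and social (c2) factors.
import numpy as np

def pso_tvac(fitness, dim, n_particles=30, iters=200, lb=-5.0, ub=5.0,
             w=0.7, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(fitness, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for it in range(iters):
        frac = it / (iters - 1)
        c1 = c1_start + (c1_end - c1_start) * frac   # cognition fades out
        c2 = c2_start + (c2_end - c2_start) * frac   # social influence grows
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        vals = np.apply_along_axis(fitness, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best, best_val = pso_tvac(lambda z: float(np.sum(z ** 2)), dim=5)
print(best_val)
```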

  4. PDE Nozzle Optimization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Billings, Dana; Turner, James E. (Technical Monitor)

    2000-01-01

    Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a genetic algorithm (GA) to find an optimum fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is certainly a candidate for the optimal solution. The constraints on the generality of this possible solution remain to be clarified.

  5. Optimizing doped libraries by using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Tomandl, Dirk; Schober, Andreas; Schwienhorst, Andreas

    1997-01-01

    The insertion of random sequences into protein-encoding genes in combination with biological selection techniques has become a valuable tool in the design of molecules that have useful and possibly novel properties. By employing highly effective screening protocols, a functional and unique structure that had not been anticipated can be distinguished among a huge collection of inactive molecules that together represent all possible amino acid combinations. This technique is severely limited by its restriction to a library of manageable size. One approach for limiting the size of a mutant library relies on 'doping schemes', where subsets of amino acids are generated that reveal only certain combinations of amino acids in a protein sequence. Three mononucleotide mixtures for each codon concerned must be designed, such that the resulting codons that are assembled during chemical gene synthesis represent the desired amino acid mixture on the level of the translated protein. In this paper we present a doping algorithm that 'reverse translates' a desired mixture of certain amino acids into three mixtures of mononucleotides. The algorithm is designed to optimally bias these mixtures towards the codons of choice. This approach combines a genetic algorithm with local optimization strategies based on the downhill simplex method. Disparate relative representations of all amino acids (and stop codons) within a target set can be generated. Optional weighing factors are employed to emphasize the frequencies of certain amino acids and their codon usage, and to compensate for reaction rates of different mononucleotide building blocks (synthons) during chemical DNA synthesis. The effect of statistical errors that accompany an experimental realization of calculated nucleotide mixtures on the generated mixtures of amino acids is simulated. These simulations show that the robustness of different optima with respect to small deviations from calculated values depends on their concomitant fitness. Furthermore

  6. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find the generation scheduling such that the total operating cost can be minimized, when subjected to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols. An initial population of parent solutions is generated at random, with each schedule formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and SA improves the resulting schedules. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results are shown comparing the cost solutions and computation time obtained using the Genetic Algorithm method and other conventional methods.

  8. Theory and Algorithms for Global/Local Design Optimization

    DTIC Science & Technology

    2005-09-29

    algorithm with memory for optimal design of laminated sandwich composite panels", Composite Structures, 58 (2002) 513-520. V. B. Gantovnik, Z. Gürdal, L...34, AIAA J., 43 (2005) 1844-1849. D. B. Adams, L. T. Watson, and Z. Gürdal, "Optimization and blending of composite laminates using genetic algorithms ..." Anderson-Cook, "Genetic algorithm optimization and blending of composite laminates by locally

  9. Optimal PID Controller Design Using Adaptive VURPSO Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system to reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster and in less computation time to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.

  10. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    NASA Astrophysics Data System (ADS)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a given algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error metric for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.

  11. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
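
    The sketch below gives the flavor of the parameter search: a kernel extreme learning machine in its kernel ridge form (output weights (I/C + K)⁻¹ y) whose regularization parameter C and RBF kernel width are tuned by a Nelder-Mead simplex search on a validation error. The coupled simulated annealing stage is omitted for brevity, and the toy data stand in for the pressure/temperature calibration measurements.

```python
# Hedged sketch: KELM-style kernel ridge regression with (C, gamma) tuned by
# a Nelder-Mead simplex search over their logarithms.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (120, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.05, 120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

def rbf_kernel(A, B, gamma):
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

def kelm_fit_predict(log_c, log_gamma):
    c, gamma = np.exp(log_c), np.exp(log_gamma)
    K = rbf_kernel(X_tr, X_tr, gamma)
    beta = np.linalg.solve(np.eye(len(X_tr)) / c + K, y_tr)   # output weights
    return rbf_kernel(X_va, X_tr, gamma) @ beta

def validation_error(params):
    pred = kelm_fit_predict(*params)
    return float(np.mean((pred - y_va) ** 2))

res = minimize(validation_error, x0=[0.0, 0.0], method="Nelder-Mead")
print("optimal (C, gamma):", np.exp(res.x), "validation MSE:", res.fun)
```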

  12. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a kind of random linear method; and finally, the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm, giving full play to the advantages of each algorithm. The method is validated on the commonly used standard sequences: Fibonacci sequences and real protein sequences. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which proves it to be an effective way to predict the structure of proteins.

  13. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
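
    A minimal sketch of spatial simulated annealing under the MMSD criterion: sample locations are jittered one at a time and a move is accepted if it lowers the mean distance from a dense grid of prediction points to the nearest sample, or with the usual Boltzmann probability otherwise. The unit-square field, grid resolution, and cooling schedule are illustrative assumptions.

```python
# Hedged sketch: spatial simulated annealing minimizing the mean of the
# shortest distances (MMSD) from a prediction grid to the sampling points.
import numpy as np

rng = np.random.default_rng(3)
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])       # prediction locations

def mmsd(samples):
    d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).mean()                         # mean shortest distance

def spatial_sa(n_samples=15, t0=0.05, alpha=0.95, steps=40, t_min=1e-4):
    s = rng.uniform(0, 1, (n_samples, 2))
    cost = mmsd(s)
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = s.copy()
            i = rng.integers(n_samples)
            cand[i] = np.clip(cand[i] + rng.normal(0, 0.1, 2), 0, 1)  # jitter one point
            c = mmsd(cand)
            if c < cost or rng.random() < np.exp((cost - c) / t):
                s, cost = cand, c
        t *= alpha
    return s, cost

samples, final_cost = spatial_sa()
print("optimized MMSD:", final_cost)
```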

  14. A New Multimodal Multi-Criteria Route Planning Model by Integrating a Fuzzy-AHP Weighting Method and a Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Ghaderi, F.; Pahlavani, P.

    2015-12-01

    A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, where this route can be a combination of public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. In this regard, firstly, the criteria that are significant for users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers, which helps users to express their uncertainty in the pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination; one of the most important problems in a metaheuristic algorithm, becoming trapped in local minima, is thereby addressed. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes. Also, the fare, the time, the user's bother, and the length of the path were considered as the criteria for solving the problem. The proposed model was implemented for an area in the centre of Tehran with a MATLAB graphical user interface. The results showed the high efficiency and speed of the proposed algorithm, which supports our analyses.

  15. Determining residual reduction algorithm kinematic tracking weights for a sidestep cut via numerical optimization.

    PubMed

    Samaan, Michael A; Weinhandl, Joshua T; Bawab, Sebastian Y; Ringleb, Stacie I

    2016-12-01

    Musculoskeletal modeling allows for the determination of various parameters during dynamic maneuvers by using in vivo kinematic and ground reaction force (GRF) data as inputs. Differences between experimental and model marker data and inconsistencies in the GRFs applied to these musculoskeletal models may not produce accurate simulations. Therefore, residual forces and moments are applied to these models in order to reduce these differences. Numerical optimization techniques can be used to determine optimal tracking weights of each degree of freedom of a musculoskeletal model in order to reduce differences between the experimental and model marker data as well as residual forces and moments. In this study, the particle swarm optimization (PSO) and simplex simulated annealing (SIMPSA) algorithms were used to determine optimal tracking weights for the simulation of a sidestep cut. The PSO and SIMPSA algorithms were able to produce model kinematics that were within 1.4° of experimental kinematics with residual forces and moments of less than 10 N and 18 Nm, respectively. The PSO algorithm was able to replicate the experimental kinematic data more closely and produce more dynamically consistent kinematic data for a sidestep cut compared to the SIMPSA algorithm. Future studies should use external optimization routines to determine dynamically consistent kinematic data and report the differences between experimental and model data for these musculoskeletal simulations.

  16. Honey Bees Inspired Optimization Method: The Bees Algorithm

    PubMed Central

    Yuce, Baris; Packianather, Michael S.; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo

    2013-01-01

    Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex; it emerges from the behaviors of the individuals of that swarm. Researchers have developed computational optimization methods based on biology, such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm combines an exploitative neighborhood search with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and are implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem. PMID:26462528
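
    A compact sketch of the basic Bees Algorithm described above: scout bees sample the space at random, the best sites are searched locally with more recruited bees around the elite sites, and the remaining scouts keep exploring; the parameter values and the sphere test function are illustrative assumptions.

```python
# Hedged sketch of the basic Bees Algorithm (minimization).
import numpy as np

def bees_algorithm(fitness, dim, lb=-5.0, ub=5.0, n=30, m=8, e=3,
                   nep=12, nsp=6, patch=0.5, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    scouts = rng.uniform(lb, ub, (n, dim))
    for _ in range(iters):
        scores = np.apply_along_axis(fitness, 1, scouts)
        order = np.argsort(scores)
        scouts, scores = scouts[order], scores[order]
        new_scouts = []
        for i in range(m):                        # selected sites
            recruits = nep if i < e else nsp      # more bees around elite sites
            cand = np.clip(scouts[i] + rng.uniform(-patch, patch, (recruits, dim)), lb, ub)
            cand_scores = np.apply_along_axis(fitness, 1, cand)
            j = np.argmin(cand_scores)
            # keep the better of the site centre and its best neighbour
            new_scouts.append(cand[j] if cand_scores[j] < scores[i] else scouts[i])
        new_scouts.extend(rng.uniform(lb, ub, (n - m, dim)))   # fresh random scouts
        scouts = np.array(new_scouts)
        patch *= 0.98                             # shrink neighbourhoods over time
    scores = np.apply_along_axis(fitness, 1, scouts)
    return scouts[np.argmin(scores)], scores.min()

best, val = bees_algorithm(lambda x: float(np.sum(x ** 2)), dim=4)
print(val)
```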

  17. Algorithm for correcting optimization convergence errors in Eclipse.

    PubMed

    Zacarias, Albert S; Mills, Michael D

    2009-10-14

    IMRT plans generated in Eclipse use a fast algorithm to evaluate dose for optimization and a more accurate algorithm, the Analytical Anisotropic Algorithm, for the final dose calculation. The use of a fast optimization algorithm introduces optimization convergence errors into an IMRT plan. Eclipse has a feature whereby optimization may be performed on top of an existing base plan. This feature allows for the possibility of arriving at a recursive solution to optimization that relies on the accuracy of the final dose calculation algorithm and not the optimizer algorithm. When an IMRT plan is used as the base plan for a second optimization, the second optimization can compensate for heterogeneity and modulator errors in the original base plan. Plans with the same field arrangement as the initial base plan may be added together by adding the initial plan's optimal fluence to the dose-correcting plan's optimal fluence. A simple procedure to correct for optimization errors is presented that may be implemented in the Eclipse treatment planning system, along with an Excel spreadsheet to add optimized fluence maps together.
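
    A trivial sketch of the fluence-summing step described above: for each field, the optimal fluence of the base plan and that of the dose-correcting plan are added element by element and clipped at zero, since fluence cannot be negative. The random 10x10 maps are placeholders for fluences exported from the planning system.

```python
# Hedged sketch: per-field addition of base-plan and correction-plan fluences.
import numpy as np

rng = np.random.default_rng(7)
n_fields, shape = 5, (10, 10)
base_fluences = [rng.uniform(0, 1, shape) for _ in range(n_fields)]
correction_fluences = [rng.uniform(-0.1, 0.1, shape) for _ in range(n_fields)]

# element-wise sum per field, clipped at zero
combined = [np.clip(b + c, 0.0, None)
            for b, c in zip(base_fluences, correction_fluences)]
print(combined[0].shape, float(combined[0].min()))
```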

  18. A simulated annealing approach to schedule optimization for the SES facility

    NASA Technical Reports Server (NTRS)

    Mcmahon, Mary Beth; Dean, Jack

    1992-01-01

    The Shuttle Engineering Simulator (SES) is a facility which houses the software and hardware for a variety of simulation systems. The simulators include the Autonomous Remote Manipulator, the Manned Maneuvering Unit, Orbiter/Space Station docking, and shuttle entry and landing. The SES simulators are used by various groups throughout NASA. For example, astronauts use the SES to practice maneuvers with the shuttle equipment; programmers use the SES to test flight software; and engineers use the SES for design and analysis studies. Due to its high demand, the SES is busy twenty-four hours a day and seven days a week. Scheduling the facility is a problem that is constantly growing and changing with the addition of new equipment. Currently a number of small independent programs have been developed to help solve the problem, but the long-term answer lies in finding a flexible, integrated system that provides the user with the ability to create, optimize, and edit the schedule. COMPASS is an interactive and highly flexible scheduling system. However, until recently COMPASS did not provide any optimization features. This paper describes the simulated annealing extension to COMPASS. It now allows the user to interweave schedule creation, revision, and optimization. This practical approach was necessary in order to satisfy the operational requirements of the SES.
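
    COMPASS and the SES schedule format are not described in enough detail here to reproduce, so the sketch below only shows the generic simulated-annealing loop such an extension relies on, applied to a toy single-machine scheduling problem: propose a swap of two activities, accept it according to the Metropolis criterion, and cool the temperature. All data and parameter values are made up for illustration.

```python
# Generic simulated-annealing sketch for a toy scheduling problem
# (illustrative only; not the COMPASS/SES implementation).
import math
import random

def total_tardiness(order, durations, deadlines):
    t, cost = 0, 0
    for j in order:
        t += durations[j]
        cost += max(0, t - deadlines[j])
    return cost

def anneal(durations, deadlines, t0=10.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    order = list(range(len(durations)))
    cost = total_tardiness(order, durations, deadlines)
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]          # propose a swap of two activities
        new_cost = total_tardiness(order, durations, deadlines)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                              # accept the move
        else:
            order[i], order[j] = order[j], order[i]      # reject: undo the swap
        temp *= cooling
    return order, cost

durations = [4, 2, 7, 3, 5]
deadlines = [6, 4, 18, 9, 12]
print(anneal(durations, deadlines))
```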

  19. Optimal pumping from Palmela water supply wells (Portugal) using simulated annealing

    NASA Astrophysics Data System (ADS)

    Fragoso, Teresa; Cunha, Maria Da Conceição; Lobo-Ferreira, João P.

    2009-12-01

    Aquifer systems are an important part of an integrated water resources management plan as foreseen in the European Union’s Water Framework Directive (2000). The sustainable development of these systems demands the use of all available techniques capable of handling the multidisciplinary features of the problems involved. The formulation and resolution of an optimization model is described for a planning and management problem based on the Palmela aquifer (Portugal), developed to supply a given number of demand centres. This problem is solved using one of the latest optimization techniques, the simulated annealing heuristic method, designed to find the optimal solutions while avoiding falling into local optima. The solution obtained, giving the well locations and the corresponding pumped flows to supply each centre, is analysed taking into account the objective function components and the constraints. It was found that the operation cost accounts for the biggest share of the final cost, and the choice of wells is greatly affected by this fact. Another conclusion is that the solution takes advantage of economies of scale, that is, it points toward drilling a large-capacity well even if this increases the investment cost, rather than drilling several wells, which together will increase the operation costs.

  20. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear arrays so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking along with results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance.

  1. Experimental demonstration of a quantum annealing algorithm for the traveling salesman problem in a nuclear-magnetic-resonance quantum simulator

    SciTech Connect

    Chen Hongwei; Kong Xi; Qin Gan; Zhou Xianyi; Peng Xinhua; Du Jiangfeng; Chong Bo

    2011-03-15

    The method of quantum annealing (QA) is a promising way for solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schroedinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.

  2. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that the HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  3. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that the HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.

  4. Specific optimization of genetic algorithm on special algebras

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Novak, Vilem; Dyba, Martin; Schenk, Jiri

    2016-06-01

    Searching for complex finite algebras can be done successfully by means of a genetic algorithm, as we showed in previous work. This genetic algorithm needs specific optimization of its crossover and mutation operators. We present details of these optimizations, which are already implemented in the software application developed for this task, EQCreator.

  5. Multicycle rapid thermal annealing optimization of Mg-implanted GaN: Evolution of surface, optical, and structural properties

    NASA Astrophysics Data System (ADS)

    Greenlee, Jordan D.; Feigelson, Boris N.; Anderson, Travis J.; Tadjer, Marko J.; Hite, Jennifer K.; Mastro, Michael A.; Eddy, Charles R.; Hobart, Karl D.; Kub, Francis J.

    2014-08-01

    The first step of a multi-cycle rapid thermal annealing process was systematically studied. The surface, structural, and optical properties of Mg-implanted GaN thin films annealed at temperatures ranging from 900 to 1200 °C were investigated by Raman spectroscopy, photoluminescence, UV-visible spectroscopy, atomic force microscopy, and Nomarski microscopy. The GaN thin films are capped with two layers of in situ metal organic chemical vapor deposition-grown AlN and annealed in 24 bar of N2 overpressure to avoid GaN decomposition. The crystal quality of the GaN improves with increasing annealing temperature, as confirmed by UV-visible spectroscopy and the full widths at half maximum of the E2 and A1(LO) Raman modes. The crystal quality of films annealed above 1100 °C exceeds the quality of the as-grown films. At 1200 °C, Mg is optically activated, as determined by photoluminescence measurements. However, at 1200 °C the GaN begins to decompose, as evidenced by pit formation on the surface of the samples. Therefore, it was determined that the first step of a multi-cycle rapid thermal anneal process should be conducted at 1150 °C, based on crystal quality and surface morphology considerations.

  6. Multicycle rapid thermal annealing optimization of Mg-implanted GaN: Evolution of surface, optical, and structural properties

    SciTech Connect

    Greenlee, Jordan D.; Feigelson, Boris N.; Anderson, Travis J.; Hite, Jennifer K.; Mastro, Michael A.; Eddy, Charles R.; Hobart, Karl D.; Kub, Francis J.; Tadjer, Marko J.

    2014-08-14

    The first step of a multi-cycle rapid thermal annealing process was systematically studied. The surface, structural, and optical properties of Mg-implanted GaN thin films annealed at temperatures ranging from 900 to 1200 °C were investigated by Raman spectroscopy, photoluminescence, UV-visible spectroscopy, atomic force microscopy, and Nomarski microscopy. The GaN thin films are capped with two layers of in situ metal organic chemical vapor deposition-grown AlN and annealed in 24 bar of N2 overpressure to avoid GaN decomposition. The crystal quality of the GaN improves with increasing annealing temperature, as confirmed by UV-visible spectroscopy and the full widths at half maximum of the E2 and A1(LO) Raman modes. The crystal quality of films annealed above 1100 °C exceeds the quality of the as-grown films. At 1200 °C, Mg is optically activated, as determined by photoluminescence measurements. However, at 1200 °C the GaN begins to decompose, as evidenced by pit formation on the surface of the samples. Therefore, it was determined that the first step of a multi-cycle rapid thermal anneal process should be conducted at 1150 °C, based on crystal quality and surface morphology considerations.

  7. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  8. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  9. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this

  10. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solution obtained by PSO.
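
    A minimal sketch of the PSOGA idea follows: a standard PSO velocity/position update augmented with a GA-style mutation operator applied to the particles. The cost function in the demo is a simple quadratic stand-in; encoding the balanced transportation constraints (for example through penalties) is left out, and all names and parameter values are illustrative rather than taken from the paper.

```python
# Sketch of a PSO step followed by a GA-style mutation (PSOGA flavour).
import numpy as np

rng = np.random.default_rng(42)

def ga_mutation(x, rate=0.1, sigma=0.5, lo=0.0, hi=10.0):
    """Randomly perturb a fraction of a particle's entries (GA mutation)."""
    mask = rng.random(x.shape) < rate
    return np.clip(x + mask * rng.normal(0.0, sigma, x.shape), lo, hi)

def psoga_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 10.0)
    x = np.array([ga_mutation(p) for p in x])   # extra GA mutation to escape stagnation
    return x, v

# demo: minimize a quadratic as a stand-in for a penalized transportation cost
cost = lambda p: float(np.sum((p - 3.0) ** 2))
x = rng.uniform(0, 10, (20, 4))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(100):
    x, v = psoga_step(x, v, pbest, gbest)
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
print(gbest, pbest_f.min())
```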

  11. Applying fuzzy clustering optimization algorithm to extracting traffic spatial pattern

    NASA Astrophysics Data System (ADS)

    Hu, Chunchun; Shi, Wenzhong; Meng, Lingkui; Liu, Min

    2009-10-01

    Traditional analytical methods for traffic information cannot meet the needs of intelligent traffic systems. Mining value-added information can address more traffic problems. The paper presents a new clustering optimization algorithm to extract useful spatial clustered patterns for predicting long-term traffic flow from a macroscopic view. Considering the sensitivity of the FCM algorithm to its initial parameters and its tendency to fall into local extrema, the new algorithm applies the Particle Swarm Optimization method, which can discover the globally optimal result, to the FCM algorithm. The algorithm uses the combination of a clustering validity index and the objective function of the FCM algorithm as the fitness function of the PSO algorithm. The experimental results indicate that it is effective and efficient. For fuzzy clustering of road traffic data, it can produce useful spatial clustered patterns, and the cluster centers represent the locations that have heavy traffic flow. Moreover, the parameters of the patterns can provide intelligent traffic systems with decision support.

  12. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  13. Coupling ant colony optimization and the extended great deluge algorithm for the discrete facility layout problem

    NASA Astrophysics Data System (ADS)

    Nourelfath, M.; Nahas, N.; Montreuil, B.

    2007-12-01

    This article uses a hybrid optimization approach to solve the discrete facility layout problem (FLP), modelled as a quadratic assignment problem (QAP). The design of this approach is inspired by the ant colony meta-heuristic optimization method, combined with the extended great deluge (EGD) local search technique. Comparative computational experiments are carried out on benchmarks taken from the QAP-library and from real life problems. The performance of the proposed algorithm is compared to construction and improvement heuristics such as H63, HC63-66, CRAFT and Bubble Search, as well as other existing meta-heuristics developed in the literature based on simulated annealing (SA), tabu search and genetic algorithms (GAs). The algorithm is also compared to other ant colony implementations for the QAP. The experimental results show that the proposed ant colony optimization/extended great deluge (ACO/EGD) performs significantly better than the existing construction and improvement algorithms. The experimental results also indicate that the ACO/EGD heuristic methodology offers advantages over other algorithms based on meta-heuristics in terms of solution quality.

  14. Transonic Wing Shape Optimization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.

  15. Experimental signature of programmable quantum annealing.

    PubMed

    Boixo, Sergio; Albash, Tameem; Spedalieri, Federico M; Chancellor, Nicholas; Lidar, Daniel A

    2013-01-01

    Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Current engineered quantum annealing devices have a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. Do they effectively perform classical thermalization when coupled to a decohering thermal environment? Here we present an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization. Our experiment uses groups of eight superconducting flux qubits with programmable spin-spin couplings, embedded on a commercially available chip with >100 functional qubits. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.

  16. Abstract models for the synthesis of optimization algorithms.

    NASA Technical Reports Server (NTRS)

    Meyer, G. G. L.; Polak, E.

    1971-01-01

    Systematic approach to the problem of synthesis of optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms which may consist of operations that are inadmissible in a practical method. Once the abstract models are established, a set of methods for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures is presented.

  17. Genetic-Algorithm Tool For Search And Optimization

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven

    1995-01-01

    SPLICER computer program used to solve search and optimization problems. Genetic algorithms adaptive search procedures (i.e., problem-solving methods) based loosely on processes of natural selection and Darwinian "survival of fittest." Algorithms apply genetically inspired operators to populations of potential solutions in iterative fashion, creating new populations while searching for optimal or nearly optimal solution to problem at hand. Written in Think C.

  18. Fabrication and optimization of transparent conductive films using laser annealing and picosecond laser patterning

    NASA Astrophysics Data System (ADS)

    Lee, Keunhee; Ki, Hyungson

    2017-10-01

    In this article, we propose a systematic method of optimizing the properties of transparent conductive films that initially possess high electrical conductivity but low optical transparency, using laser patterning and doping. Prediction maps were constructed, which show the effects of patterning and doping for all possible combinations of initial film conditions (in terms of sheet resistance and transparency) and the degrees of patterning. Using these maps, the properties of transparent conductive films can be easily optimized. We first fabricated graphene-based transparent conductive films on fused silica glass by laser annealing of diamond-like carbon films, and then picosecond laser patterning and doping were successively conducted employing the processing conditions suggested by the maps. For patterning, two types of patterns, circular and square, were considered and prediction maps were separately constructed for both patterns. In this study, a film originally having a sheet resistance of 578 Ω/sq and a transparency of 25% was transformed to a 2823 Ω/sq and 80.6% film when 73% of the film was removed using square patterns and doped by nitric acid. Experimental data agreed well with predicted values.

  19. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    PubMed Central

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding the Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416

  20. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
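
    For reference, the baseline Gerchberg-Saxton iteration that the SPP GS and GS/HIO variants modify alternates between the object and Fourier domains, imposing the known amplitude in each while keeping the current phase estimate. The sketch below assumes both amplitude constraints are available and uses a synthetic test object; it does not implement the phase-perturbation or hybrid input-output steps of the paper.

```python
# Minimal Gerchberg-Saxton iteration (baseline only, not the paper's variants).
import numpy as np

def gerchberg_saxton(obj_amp, fourier_amp, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, obj_amp.shape)     # random initial phase guess
    field = obj_amp * np.exp(1j * phase)
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = fourier_amp * np.exp(1j * np.angle(F))       # impose Fourier-domain magnitude
        field = np.fft.ifft2(F)
        field = obj_amp * np.exp(1j * np.angle(field))   # impose object-domain magnitude
    return np.angle(field)                               # retrieved phase estimate

# synthetic test: build a known complex object, keep only its amplitudes
x = np.linspace(-1, 1, 64)
obj = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2)) * np.exp(1j * 2 * np.pi * x[None, :])
phase = gerchberg_saxton(np.abs(obj), np.abs(np.fft.fft2(obj)))
```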

  1. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms exhibit a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through their own danger signals and then triggers immune responses of self-regulation. In this way, population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions that meet the required accuracies within the specified number of function evaluations. PMID:23483853

  2. A Monte Carlo/simulated annealing algorithm for sequential resonance assignment in solid state NMR of uniformly labeled proteins with magic-angle spinning

    NASA Astrophysics Data System (ADS)

    Tycko, Robert; Hu, Kan-Nian

    2010-08-01

    We describe a computational approach to sequential resonance assignment in solid state NMR studies of uniformly 15N, 13C-labeled proteins with magic-angle spinning. As input, the algorithm uses only the protein sequence and lists of 15N/13Cα crosspeaks from 2D NCACX and NCOCX spectra that include possible residue-type assignments of each crosspeak. Assignment of crosspeaks to specific residues is carried out by a Monte Carlo/simulated annealing algorithm, implemented in the program MC_ASSIGN1. The algorithm tolerates substantial ambiguity in residue-type assignments and coexistence of visible and invisible segments in the protein sequence. We use MC_ASSIGN1 and our own 2D spectra to replicate and extend the sequential assignments for uniformly-labeled HET-s(218-289) fibrils previously determined manually by Siemer et al. (J. Biomol. NMR, 34 (2006) 75-87) from a more extensive set of 2D and 3D spectra. Accurate assignments by MC_ASSIGN1 do not require data that are of exceptionally high quality. Use of MC_ASSIGN1 (and its extensions to other types of 2D and 3D data) is likely to alleviate many of the difficulties and uncertainties associated with manual resonance assignments in solid state NMR studies of uniformly labeled proteins, where spectral resolution and signal-to-noise ratios are often sub-optimal.

  3. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  4. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
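
    The exact modified methods of the paper are not reproduced here, but the classical PRP scheme they build on is easy to sketch: the new search direction is d_k = -g_k + β_k d_{k-1} with β_k = g_k^T(g_k - g_{k-1}) / ||g_{k-1}||^2, clipped at zero so that β_k ≥ 0 (PRP+). The code below pairs this update with a simple Armijo backtracking line search purely for illustration; the paper's line-search-free properties are not demonstrated.

```python
# Generic PRP+ conjugate-gradient sketch (not the paper's modified methods).
import numpy as np

def prp_plus(f, grad, x0, iters=200, tol=1e-8):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the current direction
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # PRP+ coefficient (beta >= 0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(prp_plus(f, grad, np.array([5.0, 5.0])))
```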

  5. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  6. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
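
    The adaptive control-parameter scheme of the paper is not reproduced here, but the flavour of a single unified mutation expression can be sketched as v = x_i + F1(x_best - x_i) + F2(x_r1 - x_i) + F3(x_r2 - x_r3), which reduces to familiar strategies such as DE/rand/1 or DE/best/1 for particular coefficient choices. The code below shows one generation of such a differential evolution step; the exact uDE formula and its self-adaptation are assumptions, not the published algorithm.

```python
# One generation of differential evolution with a single "unified" mutation form.
import numpy as np

rng = np.random.default_rng(3)

def de_step(pop, fit, f, F1=0.5, F2=0.5, F3=0.5, CR=0.9):
    n, dim = pop.shape
    best = pop[np.argmin(fit)]
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        v = pop[i] + F1 * (best - pop[i]) + F2 * (pop[r1] - pop[i]) + F3 * (pop[r2] - pop[r3])
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # keep at least one mutant gene
        trial = np.where(cross, v, pop[i])
        ft = f(trial)
        if ft < fit[i]:                                  # greedy selection
            new_pop[i], new_fit[i] = trial, ft
    return new_pop, new_fit

f = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5, 5, (30, 10))
fit = np.array([f(p) for p in pop])
for _ in range(200):
    pop, fit = de_step(pop, fit, f)
print(fit.min())
```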

  7. Modernizing quantum annealing using local searches

    NASA Astrophysics Data System (ADS)

    Chancellor, Nicholas

    2017-02-01

    I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques.

  8. Laser pulse design using optimal control theory-based adaptive simulated annealing technique: vibrational transitions and photo-dissociation

    NASA Astrophysics Data System (ADS)

    Nath, Bikram; Mondal, Chandan Kumar

    2014-08-01

    We have designed and optimised a combined laser pulse using an optimal control theory-based adaptive simulated annealing technique for selective vibrational excitations and photo-dissociation. Since the proper choice of pulses for specific excitation and dissociation phenomena is very difficult, we have designed a linearly combined pulse for such processes and optimised the different parameters involved in those pulses so that we can obtain an efficient combined pulse. The technique frees us from choosing an arbitrary type of pulse and provides a basis for checking pulse suitability. We have also emphasised how the performance of the simulated annealing technique can be improved by introducing an adaptive step length for the different variables during the optimisation process. We have also pointed out how the initial temperature for the optimisation process can be chosen by introducing a heating/cooling step to reduce the number of annealing steps, so that the method becomes cost effective.
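
    The specific pulse parametrization and quantum dynamics are beyond a short sketch, but the adaptive-step-length idea can be illustrated generically: widen the proposal step when too many moves are accepted and shrink it when too few are, so the acceptance ratio stays near a target while the temperature is lowered. The scheme and all values below are generic assumptions, not the authors' pulse optimiser.

```python
# Simulated annealing with an adaptive step length (generic illustration).
import math
import random

def adaptive_sa(cost, x0, step0=1.0, t0=1.0, cooling=0.98,
                sweeps=200, moves_per_sweep=50, target_acc=0.4, seed=0):
    rng = random.Random(seed)
    x, fx, step, temp = list(x0), cost(x0), step0, t0
    for _ in range(sweeps):
        accepted = 0
        for _ in range(moves_per_sweep):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = cost(cand)
            if fc <= fx or rng.random() < math.exp((fx - fc) / temp):
                x, fx, accepted = cand, fc, accepted + 1
        ratio = accepted / moves_per_sweep
        step *= 1.1 if ratio > target_acc else 0.9      # adapt the step length
        temp *= cooling                                 # cool the temperature
    return x, fx

cost = lambda p: sum((pi - 1.0) ** 2 for pi in p)
print(adaptive_sa(cost, [4.0, -3.0, 2.0]))
```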

  9. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of the ABC's local search process and its bee movement (solution improvement) equation still has some weaknesses. The ABC is good at avoiding being trapped in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used in order to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
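
    A hedged sketch of the hybrid idea is shown below: the employed-bee update keeps the classic ABC perturbation v_ij = x_ij + φ(x_ij − x_kj) on one random dimension and adds a PSO-like pull toward the best food source found so far, followed by greedy selection. The exact HPABC movement equation is not given in the abstract, so the coefficients and structure here are illustrative assumptions.

```python
# Employed-bee phase of a hybrid particle-movement ABC (illustrative, not HPABC's exact equations).
import numpy as np

rng = np.random.default_rng(7)

def hybrid_employed_phase(foods, fit, f, best, c=1.5):
    n, dim = foods.shape
    for i in range(n):
        k = rng.choice([m for m in range(n) if m != i])
        j = rng.integers(dim)
        phi = rng.uniform(-1, 1)
        cand = foods[i].copy()
        cand[j] += phi * (foods[i, j] - foods[k, j])          # classic ABC perturbation
        cand += c * rng.random(dim) * (best - foods[i])       # PSO-style pull toward the best source
        fc = f(cand)
        if fc < fit[i]:                                       # greedy selection
            foods[i], fit[i] = cand, fc
    return foods, fit

f = lambda x: float(np.sum(x ** 2))
foods = rng.uniform(-5, 5, (20, 8))
fit = np.array([f(x) for x in foods])
for _ in range(300):
    best = foods[np.argmin(fit)]
    foods, fit = hybrid_employed_phase(foods, fit, f, best)
print(fit.min())
```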

  10. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both functions already proven empirically to perform well and new functions that may be worth trying.

  11. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  12. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
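
    The single-objective FPA rules that the multiobjective extension builds on are compact enough to sketch: with a switch probability p, a flower is updated either by global (biotic) pollination, x_i ← x_i + L·(g* − x_i) with a Lévy-distributed step L, or by local (abiotic) pollination, x_i ← x_i + ε·(x_j − x_k). The sketch below uses Mantegna's approximation for the Lévy step; the Pareto-handling machinery of the multiobjective version is omitted and parameter values are illustrative.

```python
# Single-objective flower pollination sketch (Pareto handling of the multiobjective FPA omitted).
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(5)

def levy(dim, beta=1.5):
    # Mantegna's approximation to a Levy-stable step
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def fpa(f, dim, n=25, iters=500, p=0.8, lo=-5, hi=5):
    pop = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:                      # global (biotic) pollination
                cand = pop[i] + levy(dim) * (best - pop[i])
            else:                                     # local (abiotic) pollination
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
        best = pop[np.argmin(fit)].copy()
    return best, fit.min()

print(fpa(lambda x: float(np.sum(x ** 2)), dim=10))
```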

  13. A Comparative Study of Optimization Algorithms for Engineering Synthesis.

    DTIC Science & Technology

    1983-03-01

    A Comparative Study of Optimization Algorithms for Engineering Synthesis. Master's thesis by Chester Michael Sprague, Naval Postgraduate School, Monterey, CA, March 1983. Thesis advisor: G. Vanderplaats. Approved for public release.

  14. Investigation of the optimal annealing temperature for the enhanced thermoelectric properties of MOCVD-grown ZnO films

    NASA Astrophysics Data System (ADS)

    Mahmood, K.; Ali, A.; Arshad, M. I.; Ajaz un Nabi, M.; Amin, N.; Faraz Murtaza, S.; Rabia, S.; Azhar Khan, M.

    2017-04-01

    In this study, we demonstrate the optimization of the annealing temperature for enhanced thermoelectric properties of ZnO. Thin films of ZnO are grown on a sapphire substrate using the metal organic chemical vapor deposition (MOCVD) technique. The grown films are annealed in an oxygen environment at 600-1000°C, in steps of 100°C, for one hour. Seebeck measurements at room temperature revealed that the Seebeck coefficient of the unannealed sample was 152 μV/K, with a carrier concentration N_D of 1.46 × 10¹⁸ cm⁻³. The Seebeck coefficient of the annealed films increased from 212 to 415 μV/K as the annealing temperature was raised to 900°C and then decreased at 1000°C. The power factor is calculated and found to have an increasing trend with the annealing temperature. This observation is explained by the theory of Johnson and Lark-Horovitz that thermoelectric properties are enhanced by improving the structure of ZnO thin films. The Hall measurements and PL data strongly justify the proposed argument.

  15. Optimization of long-range order in solvent-annealed polystyrene-b-polylactide block polymer thin films for nanolithography

    NASA Astrophysics Data System (ADS)

    Baruth, A.; Seo, M.; Lin, C.-H.; Walster, K.; Shankar, A.; Hillmyer, M. A.; Leighton, C.

    2014-03-01

    We demonstrate long-range order in solvent-annealed polystyrene-b-polylactide block polymer thin films for nanolithographic applications. This is accomplished via climate-controlled solvent vapor annealing, in situ solvent concentration measurements, and small angle x-ray scattering. By connecting the properties of swollen and dried films, we identify "best practices" for solvent-annealing, including that exposing block polymer films to a neutral solvent concentration just below the identified (via x-ray scattering) order-disorder transition, at low pressures, with fast solvent evaporation rates, will consistently yield large lateral correlation lengths (>6.9 μm) of hexagonally-packed cylinders that span the entire thickness of the film with center-to-center spacing ranging from 43-59 nm. The resultant films have sufficient fidelity for pattern transfer to an inorganic material, as evidenced by patterning of Ni metal nanodots using a damascene-type approach. We argue that our results can be qualitatively understood by analogy to thermal annealing of a single-component solid, where annealing just below the melting point leads to optimal recrystallization. Such reliability, combined with recently developed pattern-transfer techniques, places this cheap and rapid method of nanolithography in competition with conventional lithography schemes. Funded by NSF MRSEC and Creighton University Summer Research Award.

  16. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice. The quality of the plan heavily depends on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches for the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), optimizes the three-dimensional dose distribution quickly by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The percentage of the PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.

  17. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using a nonlinear conjugate gradient method and the second using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The solution of the direct problem (a coupled problem consisting of the magnetic and heat fields) is obtained using our own code Agros2D. It uses higher-order finite elements, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and easily incorporate various types of optimization algorithms.

  18. Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin

    2017-09-01

    The Firefly Algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research, the algorithm is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function from previous research is used to optimize the machining parameters in a turning operation. The optimal machining cutting parameters estimated by FA that lead to a minimum surface roughness are validated using an ANOVA test.

  19. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  20. Kidney-inspired algorithm for optimization problems

    NASA Astrophysics Data System (ADS)

    Jaddi, Najmeh Sadat; Alvankarian, Jafar; Abdullah, Salwani

    2017-01-01

    In this paper, a population-based algorithm inspired by the kidney process in the human body is proposed. In this algorithm the solutions are filtered at a rate that is calculated from the mean of the objective function values of all solutions in the current population of each iteration. The filtered solutions, being the better solutions, are moved to the filtered blood and the rest are transferred to the waste, representing the worse solutions. This is a simulation of the glomerular filtration process in the kidney. The waste solutions are reconsidered in later iterations if, after applying a defined movement operator, they satisfy the filtration rate; otherwise they are expelled from the waste solutions, simulating the reabsorption and excretion functions of the kidney. In addition, a solution assigned as a better solution is secreted if it is not better than the worst solutions, simulating the secretion process of blood in the kidney. After placement of all the solutions in the population, the best of them is ranked, the waste and filtered blood are merged to become a new population, and the filtration rate is updated. Filtration provides the required exploitation while generating a new solution, and reabsorption gives the necessary exploration for the algorithm. The algorithm is assessed by applying it to eight well-known benchmark test functions and comparing the results with other algorithms in the literature. The performance of the proposed algorithm is better on seven out of eight test functions when compared with the most recent research in the literature. The proposed kidney-inspired algorithm is able to find the global optimum with fewer function evaluations on six out of eight test functions. A statistical analysis further confirms the ability of this algorithm to produce good-quality results.
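
    One filtration/reabsorption cycle of the kidney-inspired idea can be sketched for a minimization problem as follows: compute the filtration rate as the mean objective value of the population, keep solutions at or below it as filtered blood, give the remaining (waste) solutions one random move and reabsorb them only if they then pass the rate, replacing the rest with fresh random solutions. The secretion and ranking details are simplified, and all parameter values are assumptions rather than those of the paper.

```python
# Simplified filtration/reabsorption cycle of a kidney-inspired search (minimization).
import numpy as np

rng = np.random.default_rng(11)

def kidney_cycle(pop, f, lo=-5.0, hi=5.0, sigma=0.3):
    fit = np.array([f(x) for x in pop])
    fr = fit.mean()                                   # filtration rate = mean objective value
    filtered = [x for x, v in zip(pop, fit) if v <= fr]
    waste = [x for x, v in zip(pop, fit) if v > fr]
    reabsorbed = []
    for x in waste:
        moved = np.clip(x + rng.normal(0, sigma, x.shape), lo, hi)
        if f(moved) <= fr:                            # reabsorption: the moved solution passes the rate
            reabsorbed.append(moved)
        else:                                         # excretion: replace by a fresh random solution
            reabsorbed.append(rng.uniform(lo, hi, x.shape))
    return np.array(filtered + reabsorbed)

f = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5, 5, (30, 6))
for _ in range(200):
    pop = kidney_cycle(pop, f)
print(min(f(x) for x in pop))
```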

  1. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  2. Global convergence analysis of fast multiobjective gradient-based dose optimization algorithms for high-dose-rate brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S

    2003-03-07

    We consider the problem of the global convergence of gradient-based optimization algorithms for interstitial high-dose-rate (HDR) brachytherapy dose optimization using variance-based objectives. Possible local minima could lead to only sub-optimal solutions. We perform a configuration space analysis using a representative set of the entire non-dominated solution space. A set of three prostate implants is used in this study. We compare the results obtained by conjugate gradient algorithms, two variable metric algorithms and fast simulated annealing. For the variable metric algorithm BFGS from Numerical Recipes, large fluctuations are observed. The limited memory L-BFGS algorithm and the conjugate gradient algorithm FRPR are globally convergent. Local minima or degenerate states are not observed. We study the possibility of obtaining a representative set of non-dominated solutions using optimal solution rearrangement and a warm start mechanism. For the surface and volume dose variance and their derivatives, a method is proposed which significantly reduces the number of required operations. The optimization time, ignoring a preprocessing step, is independent of the number of sampling points in the planning target volume. Multiobjective dose optimization in HDR brachytherapy using L-BFGS and a new modified computation method for the objectives and derivatives has been accelerated, depending on the number of sampling points, by a factor in the range 10-100.
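
    The clinical dose model is not available here, but the kind of computation involved can be illustrated with a toy variance-style objective, a quadratic in the dwell times through a dose-rate matrix, minimized with the limited-memory L-BFGS-B routine in SciPy and an analytic gradient. The dose matrix and prescription below are random stand-ins, not clinical data, and the objective is only an analogue of the surface and volume dose variances used in the study.

```python
# Toy variance-style dose objective minimized with L-BFGS-B (illustrative, not clinical).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, (500, 40))      # dose-rate kernel: sampling points x dwell positions
d_presc = np.full(500, 10.0)              # prescribed dose at each sampling point

def objective(t):
    r = A @ t - d_presc
    return float(np.mean(r ** 2))

def gradient(t):
    r = A @ t - d_presc
    return 2.0 / len(r) * (A.T @ r)

t0 = np.ones(40)
res = minimize(objective, t0, jac=gradient, method="L-BFGS-B",
               bounds=[(0.0, None)] * 40)  # dwell times must be non-negative
print(res.fun, res.x[:5])
```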

  3. Fast-convergence superpixel algorithm via an approximate optimization

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2016-09-01

    We propose an optimization scheme that achieves fast yet accurate computation of superpixels from an image. Our optimization is designed to improve the efficiency and robustness for the minimization of a composite energy functional in the expectation-minimization (EM) framework where we restrict the update of an estimate to avoid redundant computations. We consider a superpixel energy formulation that consists of L2-norm for the spatial regularity and L1-norm for the data fidelity in the demonstration of the robustness of the proposed algorithm. The quantitative and qualitative evaluations indicate that our superpixel algorithm outperforms SLIC and SEEDS algorithms. It is also demonstrated that our algorithm guarantees the convergence with less computational cost by up to 89% on average compared to the SLIC algorithm while preserving the accuracy. Our optimization scheme can be easily extended to other applications in which the alternating minimization is applicable in the EM framework.

  4. Path Optimization for Single and Multiple Searchers: Models and Algorithms

    DTIC Science & Technology

    2008-09-01

    The available excerpts describe an iterative decomposition scheme (Algorithm 11) in which a master problem MP4(k) is solved at the k-th iteration to obtain its optimal value z(k) and optimal solution y(k), with U cuts generated per iteration and the bound q updated whenever z(k) > q; the full formulation of the master problem is not reproduced here.

  5. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  6. An algorithm for the systematic disturbance of optimal rotational solutions

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kaiser, Mary K.

    1989-01-01

    An algorithm for introducing a systematic rotational disturbance into an optimal (i.e., single axis) rotational trajectory is described. This disturbance introduces a motion vector orthogonal to the quaternion-defined optimal rotation axis. By altering the magnitude of this vector, the degree of non-optimality can be controlled. The metric properties of the distortion parameter are described, with analogies to two-dimensional translational motion. This algorithm was implemented in a motion-control program on a three-dimensional graphic workstation. It supports a series of human performance studies on the detectability of rotational trajectory optimality by naive observers.

  7. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.

  8. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  9. PCB drill path optimization by combinatorial cuckoo search algorithm.

    PubMed

    Lim, Wei Chen Esmonde; Kanagaraj, G; Ponnambalam, S G

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process.

  10. PCB Drill Path Optimization by Combinatorial Cuckoo Search Algorithm

    PubMed Central

    Lim, Wei Chen Esmonde; Kanagaraj, G.; Ponnambalam, S. G.

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process. PMID:24707198

  11. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural-network training and fuzzy-system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness so that the optimization problem is considered as a whole, and error back-propagation gradient descent is used to train the BP neural network. Each particle updates its velocity and position according to its individual optimum and the global optimum, learning more from the social (global) optimum and less from its individual optimum, which helps the particles avoid local optima; the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and then stays close to it; within the same running time it achieves faster convergence and better search performance, with particularly improved search efficiency in the later stage.
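
    As a rough illustration of the kind of hybrid update described above, the sketch below combines a standard PSO velocity/position update, weighted toward the social (global-best) term, with a gradient-descent correction; the function hybrid_pso_step, its parameter values, and the grad_f callback are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def hybrid_pso_step(x, v, pbest, gbest, grad_f,
                            w=0.7, c1=1.0, c2=2.0, eta=0.01):
            """One illustrative PSO update with a gradient-descent correction.

            x, v, pbest are (n_particles, dim) arrays; gbest is a (dim,) array.
            Setting c2 > c1 weights the social (global-best) term more heavily,
            mirroring the 'learn more socially, less individually' idea above.
            """
            r1 = np.random.rand(*x.shape)
            r2 = np.random.rand(*x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            # Gradient information sharpens the local search around each particle.
            x = x - eta * np.apply_along_axis(grad_f, 1, x)
            return x, v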

  12. Superscattering of light optimized by a genetic algorithm

    SciTech Connect

    Mirzaei, Ali Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.

    2014-07-07

    We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.

  13. Shape Optimization of Cochlear Implant Electrode Array Using Genetic Algorithms

    DTIC Science & Technology

    2007-11-02

    Shape Optimization of Cochlear Implant Electrode Array using Genetic Algorithms Charles T.M. Choi, Ph.D., senior member, IEEE Department of...c.t.choi@ieee.org Abstract−Finite element analysis is used to compute the current distribution of the human cochlea during cochlear implant electrical...stimulation. Genetic algorithms are then applied in conjunction with the finite element analysis to optimize the shape of cochlear implant electrode array

  14. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, the magnetic imperfection of each magnet piece causes field errors. These field errors can be reduced or compensated by sorting the magnet pieces in the proper order. We have shown that a genetic algorithm has good properties for this sorting scheme. In this paper, this optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The results show that the genetic algorithm is superior to other algorithms.

  15. Differential evolution algorithm for global optimizations in nuclear physics

    NASA Astrophysics Data System (ADS)

    Qi, Chong

    2017-04-01

    We explore the applicability of the differential evolution algorithm in finding the global minima of three typical nuclear structure physics problems: the global deformation minimum in the nuclear potential energy surface, the optimization of mass model parameters and the lowest eigenvalue of a nuclear Hamiltonian. The algorithm works very effectively and efficiently in identifying the minima in all problems we have tested. We also show that the algorithm can be parallelized in a straightforward way.

  16. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  17. Relaxed controls and the convergence of optimal control algorithms

    NASA Technical Reports Server (NTRS)

    Williamson, L. J.; Polak, E.

    1976-01-01

    This paper presents a framework for the study of the convergence properties of optimal control algorithms and illustrates its use by means of two examples. The framework consists of an algorithm prototype with a convergence theorem, together with some results in relaxed controls theory.

  18. Applying new optimization algorithms to more predictive control

    SciTech Connect

    Wright, S.J.

    1996-03-01

    The connections between optimization and control theory have been explored by many researchers and optimization algorithms have been applied with success to optimal control. The rapid pace of developments in model predictive control has given rise to a host of new problems to which optimization has yet to be applied. Concurrently, developments in optimization, and especially in interior-point methods, have produced a new set of algorithms that may be especially helpful in this context. In this paper, we reexamine the relatively simple problem of control of linear processes subject to quadratic objectives and general linear constraints. We show how new algorithms for quadratic programming can be applied efficiently to this problem. The approach extends to several more general problems in straightforward ways.

  19. Genetic algorithm for neural networks optimization

    NASA Astrophysics Data System (ADS)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e., Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  20. Automated medial axis seeding and guided evolutionary simulated annealing for optimization of gamma knife radiosurgery treatment plans

    NASA Astrophysics Data System (ADS)

    Zhang, Pengpeng

    The Leksell Gamma Knife® (LGK) is a tool for providing accurate stereotactic radiosurgical treatment of brain lesions, especially tumors. Currently, the treatment planning team "forward" plans radiation treatment parameters while viewing a series of 2D MR scans. This primarily manual process is cumbersome and time consuming because of the difficulty in visualizing the large search space for the radiation parameters (i.e., shot overlap, number, location, size, and weight). I hypothesize that a computer-aided "inverse" planning procedure that utilizes tumor geometry and treatment goals could significantly improve the planning process and therapeutic outcome of LGK radiosurgery. My basic observation is that the treatment team is best at identification of the location of the lesion and prescribing a lethal, yet safe, radiation dose. The treatment planning computer is best at determining both the 3D tumor geometry and the optimal LGK shot parameters necessary to deliver a desirable dose pattern to the tumor while sparing adjacent normal tissue. My treatment planning procedure asks the neurosurgeon to identify the tumor and critical structures in MR images and the oncologist to prescribe a tumoricidal radiation dose. Computer assistance begins with geometric modeling of the 3D tumor's medial axis properties, starting with a new algorithm, a Gradient-Phase Plot (G-P Plot) decomposition of the tumor object's medial axis. I have found that medial axis seeding, while insufficient in most cases to produce an acceptable treatment plan, greatly reduces the solution space for Guided Evolutionary Simulated Annealing (GESA) treatment plan optimization by specifying an initial estimate for shot number, size, and location, but not weight. These estimates are used to generate multiple initial plans which become initial seed plans for GESA. The shot location and weight parameters evolve and compete in the GESA procedure. The GESA objective function optimizes tumor irradiation (i.e., as close to

  1. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distributions algorithms (EDA) that build a statistical model of promising regions of the design space based on sets of good points, and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.
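
    To make the estimation-of-distribution idea concrete, the sketch below shows a plain univariate-marginal EDA over discrete ply angles; it illustrates only the family of algorithms DDOA belongs to and omits the auxiliary lamination-parameter distribution, and all names, angle sets, and settings are illustrative assumptions rather than the dissertation's method.

        import numpy as np

        def simple_eda(fitness, angle_set=(0, 45, -45, 90), n_vars=8,
                       pop_size=60, n_select=20, n_gens=50, rng=None):
            """Univariate-marginal EDA over discrete ply angles (illustrative only).

            `fitness(individual)` is user-supplied; higher values are better.
            """
            rng = rng or np.random.default_rng(0)
            probs = np.full((n_vars, len(angle_set)), 1.0 / len(angle_set))
            best, best_fit = None, -np.inf
            for _ in range(n_gens):
                # Sample a population from the current marginal distributions.
                idx = np.array([rng.choice(len(angle_set), size=pop_size, p=p)
                                for p in probs]).T
                pop = np.asarray(angle_set)[idx]
                fits = np.array([fitness(ind) for ind in pop])
                if fits.max() > best_fit:
                    best_fit, best = fits.max(), pop[fits.argmax()]
                # Re-estimate the marginals from the selected (elite) individuals.
                elite = idx[np.argsort(fits)[-n_select:]]
                for j in range(n_vars):
                    counts = np.bincount(elite[:, j], minlength=len(angle_set))
                    probs[j] = counts / counts.sum()
            return best, best_fit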

  2. Global search algorithm for optimal control

    NASA Technical Reports Server (NTRS)

    Brocker, D. H.; Kavanaugh, W. P.; Stewart, E. C.

    1970-01-01

    Random-search algorithm employs local and global properties to solve two-point boundary value problem in Pontryagin maximum principle for either fixed or variable end-time problems. Mixed boundary value problem is transformed to an initial value problem. Mapping between initial and terminal values utilizes hybrid computer.

  3. Speech Algorithm Optimization at 16 KBPS.

    DTIC Science & Technology

    1980-09-30

    spectrum. However, this original ATC algorithm, as proposed by Zelinski and Noll, suffers from a "burbling" characteristic at lower data rates. To...transmission of the LPC and pitch parameters does in fact remove the "burbling" sound and improve the overall signal-to-noise ratio. Figure F-2

  4. Optimization of deep learning algorithms for object classification

    NASA Astrophysics Data System (ADS)

    Horváth, András.

    2017-02-01

    Deep learning currently provides the state-of-the-art algorithms for image classification. The complexity of these feedforward neural networks has passed a critical point, resulting in algorithmic breakthroughs in various fields. On the other hand, their complexity makes them executable only in tasks where high-throughput computing power is available. The optimization of these networks, considering computational complexity and applicability on embedded systems, has not yet been studied and investigated in detail. In this paper I show some examples of how these algorithms can be optimized and accelerated on embedded systems.

  5. Evolutionary algorithms for multiobjective and multimodal optimization of diagnostic schemes.

    PubMed

    de Toro, Francisco; Ros, Eduardo; Mota, Sonia; Ortega, Julio

    2006-02-01

    This paper addresses the optimization of noninvasive diagnostic schemes using evolutionary algorithms in medical applications based on the interpretation of biosignals. A general diagnostic methodology using a set of definable characteristics extracted from the biosignal source followed by the specific diagnostic scheme is presented. In this framework, multiobjective evolutionary algorithms are used to meet not only classification accuracy but also other objectives of medical interest, which can be conflicting. Furthermore, the use of both multimodal and multiobjective evolutionary optimization algorithms provides the medical specialist with different alternatives for configuring the diagnostic scheme. Some application examples of this methodology are described in the diagnosis of a specific cardiac disorder, paroxysmal atrial fibrillation.

  6. Automated discrete element method calibration using genetic and optimization algorithms

    NASA Astrophysics Data System (ADS)

    Do, Huy Q.; Aragón, Alejandro M.; Schott, Dingena L.

    2017-06-01

    This research aims at developing a universal methodology for automated calibration of microscopic properties of modelled granular materials. The proposed calibrator can be applied for different experimental set-ups. Two optimization approaches: (1) a genetic algorithm and (2) DIRECT optimization, are used to identify discrete element method input model parameters, e.g., coefficients of sliding and rolling friction. The algorithms are used to minimize the objective function characterized by the discrepancy between the experimental macroscopic properties and the associated numerical results. Two test cases highlight the robustness, stability, and reliability of the two algorithms used for automated discrete element method calibration with different set-ups.

  7. Imperialist competitive algorithm combined with chaos for global optimization

    NASA Astrophysics Data System (ADS)

    Talatahari, S.; Farahmand Azar, B.; Sheikholeslami, R.; Gandomi, A. H.

    2012-03-01

    A novel chaotic improved imperialist competitive algorithm (CICA) is presented for global optimization. The ICA is a new meta-heuristic optimization method developed based on a socio-politically motivated strategy and contains two main steps: the movement of the colonies and the imperialistic competition. Here different chaotic maps are utilized to improve the movement step of the algorithm. Seven different chaotic maps are investigated, and the Logistic and Sinusoidal maps are found to be the best choices. Comparing the new algorithm with the other ICA-based methods demonstrates the superiority of the CICA for the benchmark functions.
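
    The chaotic maps mentioned above can be as simple as the logistic map; the snippet below (an illustrative sketch, not the paper's code) shows how such a sequence could replace the uniform random factor in the colony-assimilation move.

        def logistic_map(x0=0.7, n=1000, r=4.0):
            """Generate a chaotic sequence in (0, 1) with the logistic map."""
            seq, x = [], x0
            for _ in range(n):
                x = r * x * (1.0 - x)
                seq.append(x)
            return seq

        # Illustrative use: a chaotic factor scales the colony-to-imperialist step,
        # e.g. new_colony = colony + beta * chaos_value * (imperialist - colony)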

  8. A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Uttara

    2012-10-01

    This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to find the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering challenges and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and a (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.

  9. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  10. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.

  12. Simulated quantum annealing of double-well and multiwell potentials.

    PubMed

    Inack, E M; Pilati, S

    2015-11-01

    We analyze the performance of quantum annealing as a heuristic optimization method to find the absolute minimum of various continuous models, including landscapes with only two wells and also models with many competing minima and with disorder. The simulations performed using a projective quantum Monte Carlo (QMC) algorithm are compared with those based on the finite-temperature path-integral QMC technique and with classical annealing. We show that the projective QMC algorithm is more efficient than the finite-temperature QMC technique, and that both are inferior to classical annealing if this is performed with appropriate long-range moves. However, as the difficulty of the optimization problem increases, classical annealing loses efficiency, while the projective QMC algorithm keeps stable performance and is finally the most effective optimization tool. We discuss the implications of our results for the outstanding problem of testing the efficiency of adiabatic quantum computers using stochastic simulations performed on classical computers.
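
    For reference, the classical-annealing baseline discussed above can be written as a short Metropolis loop on a one-dimensional double-well potential; the potential, move size, and cooling schedule below are illustrative choices, not the simulations performed in the paper.

        import math, random

        def classical_annealing(V=lambda x: (x**2 - 1.0)**2 + 0.1 * x,
                                x0=1.0, t0=1.0, t_min=1e-3, cooling=0.999,
                                step=0.5):
            """Metropolis simulated annealing of a 1-D tilted double-well potential."""
            x, t = x0, t0
            best_x, best_v = x, V(x)
            while t > t_min:
                x_new = x + random.uniform(-step, step)   # candidate move
                dv = V(x_new) - V(x)
                if dv < 0 or random.random() < math.exp(-dv / t):
                    x = x_new                             # Metropolis acceptance
                    if V(x) < best_v:
                        best_x, best_v = x, V(x)
                t *= cooling                              # geometric cooling schedule
            return best_x, best_v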

  13. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but defining its parameters properly is a difficult task in applications of PCNN; so far, determining the parameters of the model has required a lot of experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts the bacterial foraging optimization algorithm to search for the optimal parameters, and thus eliminates the need to set the experimental parameters manually. Experimental results show that the proposed algorithm can effectively perform document segmentation, and its segmentation results are better than those of the comparison algorithms.

  14. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869
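
    The core firefly move used inside such hybrids is compact; the sketch below shows the standard attractiveness-based update (the DE component of HFA is omitted), with illustrative parameter values rather than those of the paper.

        import numpy as np

        def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
            """Move firefly i toward a brighter firefly j (standard FA update)."""
            rng = rng or np.random.default_rng()
            x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
            r2 = np.sum((x_i - x_j) ** 2)              # squared distance between fireflies
            beta = beta0 * np.exp(-gamma * r2)         # attractiveness decays with distance
            noise = alpha * (rng.random(x_i.shape) - 0.5)
            return x_i + beta * (x_j - x_i) + noise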

  15. Artificial bee colony algorithm for solving optimal power flow problem.

    PubMed

    Le Dinh, Luong; Vo Ngoc, Dieu; Vasant, Pandian

    2013-01-01

    This paper proposes an artificial bee colony (ABC) algorithm for solving optimal power flow (OPF) problem. The objective of the OPF problem is to minimize total cost of thermal units while satisfying the unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltages limits, and transformer tap settings limits. The ABC algorithm is an optimization method inspired from the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results have indicated that the proposed algorithm can find high quality solution for the problem in a fast manner via the result comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem.

  16. A general Monte Carlo/simulated annealing algorithm for resonance assignment in NMR of uniformly labeled biopolymers

    PubMed Central

    Hu, Kan-Nian; Qiang, Wei; Tycko, Robert

    2011-01-01

    We describe a general computational approach to site-specific resonance assignments in multidimensional NMR studies of uniformly 15N,13C-labeled biopolymers, based on a simple Monte Carlo/simulated annealing (MCSA) algorithm contained in the program MCASSIGN2. Input to MCASSIGN2 includes lists of multidimensional signals in the NMR spectra with their possible residue-type assignments (which need not be unique), the biopolymer sequence, and a table that describes the connections that relate one signal list to another. As output, MCASSIGN2 produces a high-scoring sequential assignment of the multidimensional signals, using a score function that rewards good connections (i.e., agreement between relevant sets of chemical shifts in different signal lists) and penalizes bad connections, unassigned signals, and assignment gaps. Examination of a set of high-scoring assignments from a large number of independent runs allows one to determine whether a unique assignment exists for the entire sequence or parts thereof. We demonstrate the MCSA algorithm using two-dimensional (2D) and three-dimensional (3D) solid state NMR spectra of several model protein samples (α-spectrin SH3 domain and protein G/B1 microcrystals, HET-s218–289 fibrils), obtained with magic-angle spinning and standard polarization transfer techniques. The MCSA algorithm and MCASSIGN2 program can accommodate arbitrary combinations of NMR spectra with arbitrary dimensionality, and can therefore be applied in many areas of solid state and solution NMR. PMID:21710190
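
    In the same spirit, a toy Monte Carlo/simulated annealing assignment loop can be sketched as below; the move set, cooling schedule, and user-supplied score function are placeholders, not the MCASSIGN2 implementation.

        import math, random

        def mcsa_assign(signals, n_residues, score, n_steps=20000,
                        t0=5.0, t_min=0.05):
            """Toy MC/simulated annealing over a signal-to-residue assignment.

            assignment[k] is the residue index given to signal k (or None).
            score(assignment) should reward consistent connections and penalize
            gaps and unassigned signals (placeholder, user-supplied).
            """
            assignment = [None] * len(signals)
            current = score(assignment)
            for step in range(n_steps):
                t = max(t_min, t0 * (1.0 - step / n_steps))   # linear cooling
                k = random.randrange(len(signals))
                old = assignment[k]
                assignment[k] = random.choice([None] + list(range(n_residues)))
                new = score(assignment)
                if new >= current or random.random() < math.exp((new - current) / t):
                    current = new            # accept the move
                else:
                    assignment[k] = old      # reject and restore
            return assignment, current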

  17. Optimizing Variational Quantum Algorithms Using Pontryagin's Minimum Principle

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Rahmani, Armin; Shabani, Alireza; Neven, Hartmut; Chamon, Claudio

    2017-04-01

    We use Pontryagin's minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Furthermore, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  18. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to the results obtained by conventional gradient search method.

  19. A Discrete Lagrangian Algorithm for Optimal Routing Problems

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.

    2008-11-06

    The ideas of discrete Lagrangian methods for conservative systems are exploited for the construction of algorithms applicable to optimal ship routing problems. The algorithm presented here is based on the discretization of Hamilton's principle of stationary action, and specifically on the direct discretization of the Lagrange-Hamilton principle for a conservative system. Since, in contrast to the differential equations, the discrete Euler-Lagrange equations serve as constraints for the optimization of a given cost functional, in the present work we utilize this feature in order to minimize the cost function for optimal ship routing.

  20. Optimal Configuration of a Square Array Group Testing Algorithm

    PubMed Central

    Hudgens, Michael G.; Kim, Hae-Young

    2009-01-01

    We consider the optimal configuration of a square array group testing algorithm (denoted A2) to minimize the expected number of tests per specimen. For prevalence greater than 0.2498, individual testing is shown to be more efficient than A2. For prevalence less than 0.2498, closed form lower and upper bounds on the optimal group sizes for A2 are given. Arrays of dimension 2 × 2, 3 × 3, and 4 × 4 are shown to never be optimal. The results are illustrated by considering the design of a specimen pooling algorithm for detection of recent HIV infections in Malawi. PMID:21218195

  1. Adaptive MANET multipath routing algorithm based on the simulated annealing approach.

    PubMed

    Kim, Sungwook

    2014-01-01

    Mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in a hostile, dynamic, real-world network situation and is therefore a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with the existing schemes.

  2. Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach

    PubMed Central

    Kim, Sungwook

    2014-01-01

    Mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in a hostile, dynamic, real-world network situation and is therefore a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with the existing schemes. PMID:25032241

  3. Multidisciplinary Optimization of Airborne Radome Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Tang, Xinggang; Zhang, Weihong; Zhu, Jihong

    A multidisciplinary optimization scheme of airborne radome is proposed. The optimization procedure takes into account the structural and the electromagnetic responses simultaneously. The structural analysis is performed with the finite element method using Patran/Nastran, while the electromagnetic analysis is carried out using the Plane Wave Spectrum and Surface Integration technique. The genetic algorithm is employed for the multidisciplinary optimization process. The thicknesses of multilayer radome wall are optimized to maximize the overall transmission coefficient of the antenna-radome system under the constraint of the structural failure criteria. The proposed scheme and the optimization approach are successfully assessed with an illustrative numerical example.

  4. OPTIMIZATION OF LONG RURAL FEEDERS USING A GENETIC ALGORITHM

    SciTech Connect

    Wishart, Michael; Ledwich, Gerard; Ghosh, Arindam; Ivanovich, Grujica

    2010-06-15

    This paper describes the optimization of conductor size and the voltage regulator location and magnitude of long rural distribution lines. The optimization minimizes the lifetime cost of the lines, including capital costs and losses while observing voltage drop and operational constraints using a Genetic Algorithm (GA). The GA optimization is applied to a real Single Wire Earth Return (SWER) network in regional Queensland and results are presented.

  5. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution in the primal and dual spaces of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  6. Comparison of evolutionary algorithms for LPDA antenna optimization

    NASA Astrophysics Data System (ADS)

    Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.

    2016-08-01

    A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are the Differential Evolution (DE), Particle Swarm (PSO), Taguchi, Invasive Weed (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.

  7. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of the optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  8. A Hybrid Ant Colony Algorithm for Loading Pattern Optimization

    NASA Astrophysics Data System (ADS)

    Hoareau, F.

    2014-06-01

    Electricité de France (EDF) operates 58 nuclear power plants (NPPs) of the Pressurized Water Reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts. The latter can resort, for instance, to loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm based on ant colony optimization and a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.

  9. Algorithm and performance of a clinical IMRT beam-angle optimization system.

    PubMed

    Djajaputra, David; Wu, Qiuwen; Wu, Yan; Mohan, Radhe

    2003-10-07

    This paper describes the algorithm and examines the performance of an intensity-modulated radiation therapy (IMRT) beam-angle optimization (BAO) system. In this algorithm successive sets of beam angles are selected from a set of predefined directions using a fast simulated annealing (FSA) algorithm. An IMRT beam-profile optimization is performed on each generated set of beams. The IMRT optimization is accelerated by using a fast dose calculation method that utilizes a precomputed dose kernel. A compact kernel is constructed for each of the predefined beams prior to starting the FSA algorithm. The IMRT optimizations during the BAO are then performed using these kernels in a fast dose calculation engine. This technique allows the IMRT optimization to be performed more than two orders of magnitude faster than a similar optimization that uses a convolution dose calculation engine. Any type of optimization criterion present in the IMRT system can be used in this BAO system. An objective function based on clinically relevant dose-volume (DV) criteria is used in this study. This facilitates the comparison between a BAO plan and the corresponding plan produced by a planner since the latter is usually optimized using a DV-based objective function. A simple prostate case and a complex head-and-neck (HN) case were used to evaluate the usefulness and performance of this BAO method. For the prostate case we compared the BAO results for three, five and seven coplanar beams with those of the same number of equispaced coplanar beams. For the HN case we compared the BAO results for seven and nine non-coplanar beams with that for nine equispaced coplanar beams. In each case the BAO algorithm was allowed to search up to 1000 different sets of beams. The BAO runs for the prostate case were finished in about 1-2 h on a moderate 400 MHz workstation while those for the head-and-neck case were completed in 13-17 h on a 750 MHz machine. No a priori beam-selection criteria have been used in
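
    A skeletal version of the beam-angle search described above might look like the sketch below, where the kernel-based IMRT optimization is reduced to a user-supplied plan_score callback; the function name, cooling schedule, and parameters are illustrative assumptions, not the clinical system's code.

        import math, random

        def fsa_beam_angle_search(candidate_angles, n_beams, plan_score,
                                  n_iter=1000, t0=1.0):
            """Pick a beam-angle set by fast simulated annealing (illustrative).

            plan_score(beams) stands in for the kernel-accelerated IMRT
            optimization and should return a lower-is-better objective value.
            """
            beams = random.sample(candidate_angles, n_beams)
            cost = plan_score(beams)
            best_beams, best_cost = list(beams), cost
            for k in range(1, n_iter + 1):
                t = t0 / k                               # fast (1/k) cooling
                trial = list(beams)
                trial[random.randrange(n_beams)] = random.choice(candidate_angles)
                trial_cost = plan_score(trial)
                if (trial_cost < cost or
                        random.random() < math.exp((cost - trial_cost) / t)):
                    beams, cost = trial, trial_cost
                    if cost < best_cost:
                        best_beams, best_cost = list(beams), cost
            return best_beams, best_cost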

  10. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering.

    PubMed

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.

  11. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
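
    A compact classical population-annealing loop for a generic spin system is sketched below: replicas are reweighted and resampled as the inverse temperature increases, then equilibrated with Metropolis sweeps. The energy function, sweep routine, population size, and schedule are illustrative placeholders, not the implementation used in the paper.

        import numpy as np

        def population_annealing(energy, spin_flip_sweep, n_spins, n_replicas=100,
                                 betas=np.linspace(0.1, 3.0, 30), rng=None):
            """Illustrative population annealing for a spin system.

            energy(s)             -> float energy of configuration s
            spin_flip_sweep(s, b) -> configuration after one Metropolis sweep at beta b
            """
            rng = rng or np.random.default_rng(0)
            pop = [rng.choice([-1, 1], size=n_spins) for _ in range(n_replicas)]
            for i in range(1, len(betas)):
                d_beta = betas[i] - betas[i - 1]
                weights = np.array([np.exp(-d_beta * energy(s)) for s in pop])
                weights /= weights.sum()
                # Resample replicas in proportion to their Boltzmann reweighting factors.
                counts = rng.multinomial(n_replicas, weights)
                pop = [s.copy() for s, c in zip(pop, counts) for _ in range(c)]
                # Equilibrate the resampled population at the new temperature.
                pop = [spin_flip_sweep(s, betas[i]) for s in pop]
            energies = [energy(s) for s in pop]
            return pop[int(np.argmin(energies))], min(energies)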

  12. Optical network unit placement in Fiber-Wireless (FiWi) access network by Moth-Flame optimization algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Puja; Prakash, Shashi

    2017-07-01

    Hybrid wireless-optical broadband access network (WOBAN) or Fiber-Wireless (FiWi) is the integration of a wireless access network and an optical network. This hybrid multi-domain network adopts the advantages of the wireless and optical domains and serves the demands of technology-savvy users. FiWi exhibits cost effectiveness, robustness, flexibility, high capacity, reliability, and self-organization. The Optical Network Unit (ONU) placement problem in FiWi contributes to simplifying the network design and enhances the performance in terms of cost efficiency and increased throughput. Several individual-based algorithms, such as Simulated Annealing (SA) and Tabu Search, have been suggested for ONU placement, but these algorithms suffer from premature convergence (trapping in local optima). The present research work undertakes the deployment of FiWi and proposes a novel nature-inspired heuristic paradigm called the Moth-Flame Optimization (MFO) algorithm for the placement of multiple optical network units. MFO is a population-based algorithm, and population-based algorithms are better at avoiding local optima. The simulation results are compared with the existing Greedy and Simulated Annealing algorithms for optimizing the positions of the ONUs. To the best of our knowledge, the MFO algorithm has been used for the first time in this domain; moreover, it has been able to provide very promising and competitive results. The performance of the MFO algorithm has been analyzed by varying the 'b' parameter. The MFO algorithm results in faster convergence than the existing Greedy and SA strategies and returns a lower value of the overall cost function. The results also exhibit the dependence of the objective function on the distribution of wireless users.

  13. Compressive Sensing Image Fusion Based on Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Li, X.; Lv, J.; Jiang, S.; Zhou, H.

    2017-09-01

    In order to solve the problems that spatial matching is difficult and spectral distortion is large in traditional pixel-level image fusion algorithms, we propose a new image fusion method, called HIS-CS image fusion, that utilizes the HIS transformation and the recently developed theory of compressive sensing. In this algorithm, the particle swarm optimization algorithm is used to select the fusion coefficient ω: in the iterative process, the image fusion coefficient ω is taken as the particle, and the optimal value is obtained by optimizing the objective function. We then use the compression-aware weighted fusion algorithm for remote sensing image fusion, taking the coefficient ω as the weight value. The algorithm thus selects the fusion weight with a certain degree of self-adaptability. To evaluate the fused images, this paper uses five index parameters: Entropy, Standard Deviation, Average Gradient, Degree of Distortion and Peak Signal-to-Noise Ratio. The experimental results show that the image fusion effect of the proposed algorithm is better than that of traditional methods.

  14. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems; this greatly limits the application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," the "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions were clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relevant knowledge from statistics, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  15. A Solution Quality Assessment Method for Swarm Intelligence Optimization Algorithms

    PubMed Central

    Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, and there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems; this greatly limits the application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of “value performance,” the “ordinal performance” is used as the evaluation criterion in this method. The feasible solutions were clustered according to distance to divide the solution samples into several parts. Then, the solution space and the “good enough” set can be decomposed based on the clustering results. Last, using relevant knowledge from statistics, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845

  16. Sequential unconstrained minimization algorithms for constrained optimization

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2008-02-01

    The problem of minimizing a function $f(x):\mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subseteq \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f:\mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \overline{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \le g_k(x) \le G_{k-1}(x) - G_{k-1}(x^{k-1})$ for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \rightarrow \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
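
    Since barrier-function methods are cited above as a particular case of SUMMA, a tiny numerical illustration is sketched below for a one-dimensional problem: minimize f(x) = (x - 3)^2 over C = {x <= 1} by minimizing G_k(x) = f(x) + (1/k) b(x) with a log barrier b. The example problem, grid-based minimization, and iteration count are illustrative choices, not from the paper.

        import numpy as np

        def summa_barrier_demo(n_iters=8):
            """Barrier-function instance of the SUMMA scheme (illustrative).

            Minimize f(x) = (x - 3)^2 over C = {x <= 1} by minimizing
            G_k(x) = f(x) + g_k(x), with g_k(x) = (1/k) * b(x) and
            b(x) = -log(1 - x) a log barrier for the interior D = {x < 1}.
            """
            f = lambda x: (x - 3.0) ** 2
            b = lambda x: -np.log(1.0 - x)
            xs = np.linspace(-2.0, 1.0 - 1e-9, 20001)   # dense grid inside D
            history = []
            for k in range(1, n_iters + 1):
                x_k = xs[np.argmin(f(xs) + b(xs) / k)]  # minimize G_k on the grid
                history.append((k, x_k, f(x_k)))
            return history  # f(x_k) decreases toward f(1) = 4 as k grows

        for k, x_k, f_k in summa_barrier_demo():
            print(f"k={k}  x_k={x_k:.4f}  f(x_k)={f_k:.4f}")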

  17. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  18. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  19. Multimodal optimization using a bi-objective evolutionary algorithm.

    PubMed

    Deb, Kalyanmoy; Saha, Amit

    2012-01-01

    In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can have better knowledge about different optimal solutions in the search space and as and when needed, the current solution may be switched to another suitable optimum solution. To this end, evolutionary optimization algorithms (EA) stand as viable methodologies mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. With the preselection method suggested in 1970, there has been a steady suggestion of new algorithms. Most of these methodologies employed a niching scheme in an existing single-objective evolutionary algorithm framework so that similar solutions in a population are deemphasized in order to focus and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem so that all optimal solutions become members of the resulting weak Pareto-optimal set. With the modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as large as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved up to 16-variable test problems having as many as 48 optimal solutions and for the first time suggested multimodal constrained test problems which are scalable in terms of number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting, and more importantly opens up further avenues for research and application.
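
    One simple way to realize the single- to bi-objective conversion described above is to pair the original objective with an auxiliary objective that vanishes at every optimum; the sketch below uses the norm of a numerical gradient for that auxiliary objective as an illustrative stand-in, which differs in detail from the formulations proposed in the paper.

        import numpy as np

        def bi_objective_view(f, x, h=1e-6):
            """Map a single-objective point to (f(x), ||grad f(x)||), so that
            every local optimum (zero gradient) becomes weakly non-dominated."""
            x = np.asarray(x, dtype=float)
            grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                             for e in np.eye(len(x))])
            return f(x), np.linalg.norm(grad)

        # Example: the two minima of f(x) = (x^2 - 1)^2 both map to gradient norm ~0.
        f = lambda x: (x[0] ** 2 - 1.0) ** 2
        print(bi_objective_view(f, [1.0]), bi_objective_view(f, [-1.0]))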

  20. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm.

    PubMed

    Wang, Jiaxi; Lin, Boliang; Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality.

  1. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  2. Benchmarking derivative-free optimization algorithms.

    SciTech Connect

    Moré, J. J.; Wild, S. M.; Mathematics and Computer Science; Cornell Univ.

    2009-01-01

    We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested.
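
    A small sketch of how such data profiles can be computed, assuming t[p, s] already stores the number of function evaluations solver s needed on problem p to satisfy the paper's convergence test f(x) <= f_L + tau*(f(x0) - f_L) (infinity if it never did), and n_vars[p] is the number of variables of problem p; the array names and toy numbers are illustrative only.

      import numpy as np

      def data_profile(t, n_vars, alphas):
          """t[p, s]: evaluations needed by solver s on problem p (np.inf if unsolved).
          n_vars[p]: dimension of problem p.  Returns d[a, s], the fraction of
          problems solved within alphas[a] "simplex gradients", i.e. within
          alphas[a] * (n_vars[p] + 1) function evaluations."""
          t = np.asarray(t, dtype=float)
          n_vars = np.asarray(n_vars, dtype=float)
          budgets = np.outer(np.asarray(alphas, dtype=float), n_vars + 1.0)  # (A, P)
          solved = t[None, :, :] <= budgets[:, :, None]                      # (A, P, S)
          return solved.mean(axis=1)                                         # (A, S)

      # Toy example: 3 problems, 2 solvers.
      t = [[10, 40], [25, np.inf], [12, 30]]
      print(data_profile(t, n_vars=[2, 4, 3], alphas=[5, 10, 20]))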

  3. A multi-group firefly algorithm for numerical optimization

    NASA Astrophysics Data System (ADS)

    Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun

    2017-08-01

    To solve the problem of premature convergence in the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). A firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the optimal firefly is responsible for leading the other fireflies to implement the early global evolution and for establishing the mutual information system among the fireflies. Each firefly then achieves local search by following the brighter fireflies in its neighborhood. At the same time, a learning mechanism among the best fireflies of the various subgroups, used to exchange information, helps the population reach the global optimization goals more effectively. Experimental results verify the effectiveness of the proposed algorithm.
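
    For reference, a minimal sketch of the basic firefly move that variants such as IMGFA build on, assuming the standard attractiveness model beta0*exp(-gamma*r^2) plus a small random walk; the multi-group bookkeeping of the proposed algorithm is not reproduced here, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def firefly_step(pop, fit, beta0=1.0, gamma=1.0, alpha=0.2):
          """One sweep of the basic firefly algorithm (minimization).
          Each firefly moves toward every brighter (better) firefly with an
          attractiveness that decays with distance, plus a small random walk."""
          n, d = pop.shape
          new = pop.copy()
          for i in range(n):
              for j in range(n):
                  if fit[j] < fit[i]:                      # firefly j is brighter
                      r2 = np.sum((pop[j] - new[i]) ** 2)
                      beta = beta0 * np.exp(-gamma * r2)
                      new[i] += beta * (pop[j] - new[i]) + alpha * (rng.random(d) - 0.5)
          return new

      sphere = lambda x: np.sum(x ** 2, axis=-1)
      pop = rng.uniform(-5, 5, size=(20, 2))
      for _ in range(50):
          pop = firefly_step(pop, sphere(pop))
      print(sphere(pop).min())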

  4. Optimization of computer-generated binary holograms using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Cojoc, Dan; Alexandrescu, Adrian

    1999-11-01

    The aim of this paper is to compare genetic algorithms against direct point-oriented coding in the design of computer-generated binary phase Fourier holograms. These are used as fan-out elements for free-space optical interconnection. Genetic algorithms are optimization methods which model the natural process of genetic evolution. The configuration of the hologram is encoded to form a chromosome. To start the optimization, a population of different, randomly generated chromosomes is considered. The chromosomes compete, mate and mutate until the best chromosome is obtained according to a cost function. After explaining the operators that are used by genetic algorithms, this paper presents two examples with 32 X 32 genes in a chromosome. The crossover type and the number of mutations are shown to be important factors which influence the convergence of the algorithm. The GA is demonstrated to be a useful tool for designing binary phase holograms with complicated structures.
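
    A compact sketch of a GA loop of this kind, assuming a toy cost function that rates a 32 x 32 binary-phase aperture by the efficiency and uniformity of a small fan-out pattern taken from its FFT; the diffraction model, chromosome encoding and operator settings of the actual study may differ.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 32                                                # 32 x 32 genes per chromosome
      targets = [(4, 4), (4, 28), (28, 4), (28, 28)]        # desired fan-out orders (toy)

      def cost(chrom):
          """Lower is better: phase is 0 or pi, far field approximated by an FFT."""
          field = np.exp(1j * np.pi * chrom.reshape(N, N))
          inten = np.abs(np.fft.fft2(field)) ** 2
          wanted = np.array([inten[i, j] for i, j in targets])
          return -(wanted.sum() / inten.sum()) + wanted.std() / (wanted.mean() + 1e-12)

      pop = rng.integers(0, 2, size=(40, N * N))
      for gen in range(100):
          scores = np.array([cost(c) for c in pop])
          pop = pop[np.argsort(scores)]                     # elitist sort, best first
          children = []
          while len(children) < len(pop) // 2:
              a, b = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]
              cut = rng.integers(1, N * N)                  # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              flips = rng.random(N * N) < 0.002             # mutation
              child[flips] ^= 1
              children.append(child)
          pop[len(pop) // 2:] = children                    # replace worst half
      print("best cost so far:", cost(pop[0]))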

  5. Optimized Algorithms for Prediction within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; SunSpiral, Vytas; Allan, Mark B.

    2006-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center serves as a testbed for human-robot collaboration research and development efforts. One of the primary efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods. Judicious feature selection also plays a significant role in the conclusions drawn.

  6. Improved Clonal Selection Algorithm Combined with Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng

    Both the clonal selection algorithm (CSA) and ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can exploit and explore the solution space in parallel and effectively. However, it cannot use enough environmental feedback information and thus performs a large amount of redundant repetition during the search. On the other hand, ACO is based on the concept of an indirect cooperative foraging process via secreted pheromones. Its positive feedback ability is good, but its convergence speed is slow because of the small amount of initial pheromone. In this paper, we propose a pheromone-linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) reasonably utilizes the superiorities of both algorithms and also overcomes their inherent disadvantages. Simulation results based on traveling salesman problems have demonstrated the merit of the proposed algorithm over some traditional techniques.

  7. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  8. A New Magnetotactic Bacteria Optimization Algorithm Based on Moment Migration.

    PubMed

    Mo, Hongwei; Liu, Lili; Zhao, Jiao

    2017-01-01

    Magnetotactic bacteria are a polyphyletic group of prokaryotes with the characteristic of magnetotaxis, which makes them orient and swim along geomagnetic field lines. Their distinctive biological characteristics are useful for designing new optimization techniques. In this paper, a new bionic optimization algorithm named the Magnetotactic Bacteria Moment Migration Algorithm (MBMMA) is proposed. In the proposed algorithm, the moments of a chain of magnetosomes are considered as solutions. The moments of relatively good solutions can migrate to each other to enhance the diversity of the MBMMA. It is compared with variants of PSO on standard function problems. The experimental results show that the MBMMA is effective in solving optimization problems. It shows better or competitive performance compared with the variants of PSO on most of the tested functions in this paper.

  9. An algorithm for optimal structural design with frequency constraints

    NASA Technical Reports Server (NTRS)

    Kiusalaas, J.; Shaw, R. C. J.

    1978-01-01

    The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) a small number of design iterations are needed to reach optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm will work for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.

  10. Swarm algorithms with chaotic jumps for optimization of multimodal functions

    NASA Astrophysics Data System (ADS)

    Krohling, Renato A.; Mendel, Eduardo; Campos, Mauro

    2011-11-01

    In this article, the use of some well-known versions of particle swarm optimization (PSO), namely the canonical PSO, the bare bones PSO (BBPSO) and the fully informed particle swarm (FIPS), is investigated on multimodal optimization problems. A hybrid approach, which consists of swarm algorithms combined with a jump strategy in order to escape from local optima, is developed and tested. The jump strategy is based on the chaotic logistic map. The hybrid algorithm was tested for all three versions of PSO, and simulation results show that the addition of the jump strategy improves the performance of the swarm algorithms for most of the investigated optimization problems. A comparison with the off-the-shelf PSO with local topology (lbest model) has also been performed and indicates the superior performance of the PSO with chaotic jumps over the standard version, both using the local topology (lbest model).
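
    A minimal sketch of the chaotic-jump ingredient alone, assuming the jump is triggered when a particle's personal best stagnates and that the new position is drawn from a logistic-map sequence rescaled to the search bounds; the way the jump is actually combined with the PSO variants in the article may differ, and all parameter values are illustrative.

      import numpy as np

      def chaotic_jump(pbest, lo, hi, z, stale, patience=10):
          """If a particle's personal best has not improved for `patience`
          iterations, relocate it using the chaotic logistic map z <- 4 z (1 - z)."""
          pbest = pbest.copy()
          for i in range(len(pbest)):
              if stale[i] >= patience:
                  z[i] = 4.0 * z[i] * (1.0 - z[i])   # logistic map, fully chaotic regime
                  pbest[i] = lo + z[i] * (hi - lo)   # rescale chaos to the search box
                  stale[i] = 0
          return pbest, z, stale

      # Toy usage on a 1-D problem with 3 particles.
      lo, hi = -5.0, 5.0
      pbest = np.array([0.5, -1.0, 2.0])
      z = np.array([0.13, 0.47, 0.91])       # per-particle chaotic state in (0, 1)
      stale = np.array([12, 3, 15])          # iterations without improvement
      print(chaotic_jump(pbest, lo, hi, z, stale))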

  11. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.

  12. Multi-objective Optimization on Helium Liquefier Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, H. R.; Xiong, L. Y.; Peng, N.; Meng, Y. R.; Liu, L. Q.

    2017-02-01

    Research on optimization of helium liquefier is limited at home and abroad, and most of the optimization is single-objective based on Collins cycle. In this paper, a multi-objective optimization is conducted using genetic algorithm (GA) on the 40 L/h helium liquefier developed by Technical Institute of Physics and Chemistry of the Chinese Academy of Science (TIPC, CAS), steady solutions are obtained in the end. In addition, the exergy loss of the optimized system is studied in the case of with and without liquid nitrogen pre-cooling. The results have guiding significance for the future design of large helium liquefier.

  13. Optimal Design of RF Energy Harvesting Device Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Mori, T.; Sato, Y.; Adriano, R.; Igarashi, H.

    2015-11-01

    This paper presents the optimal design of an RF energy harvesting device using a genetic algorithm (GA). In the present RF harvester, a planar spiral antenna (PSA) is loaded with matching and rectifying circuits. In the first stage of the optimal design, the shape parameters of the PSA are optimized. Then, the equivalent circuit of the optimized PSA is derived for optimization of the circuits. Finally, the parameters of the RF energy harvesting circuit are optimized to maximize the output power using GA. It is shown that the present optimization increases the output power by a factor of five. The manufactured energy harvester starts working when the input electric field is greater than 0.5 V/m.

  14. A limited-memory algorithm for bound-constrained optimization

    SciTech Connect

    Byrd, R.H.; Peihuang, L.; Nocedal, J. |

    1996-03-01

    An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited-memory BFGS matrix to approximate the Hessian of the objective function. We show how to take advantage of the form of the limited-memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
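
    The algorithm described here underlies the widely used L-BFGS-B code; a minimal usage sketch through SciPy's interface, which wraps that code, is shown below for a bound-constrained Rosenbrock problem (the problem data and option values are illustrative).

      import numpy as np
      from scipy.optimize import minimize

      def rosen(x):
          # Generalized Rosenbrock function.
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

      x0 = np.array([0.5, 0.5, 0.5, 0.5])
      bounds = [(0.0, 0.9)] * len(x0)          # simple bound constraints on every variable

      res = minimize(rosen, x0, method="L-BFGS-B", bounds=bounds,
                     options={"maxcor": 10})   # maxcor = number of limited-memory pairs
      print(res.x, res.fun)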

  15. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    SciTech Connect

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

    This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It settles the BOA into the framework of problem decomposition used frequently in order to model and understand the behavior of simple genetic algorithms. The growth of the population size and the number of generations until convergence with respect to the size of a problem is theoretically analyzed. The theoretical results are supported by a number of experiments.

  16. Seven-Spot Ladybird Optimization: A Novel and Efficient Metaheuristic Algorithm for Numerical Optimization

    PubMed Central

    Zhu, Zhouquan

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879

  17. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective, genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electronic propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight time optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method that assigns a better fitness value to the solutions that are dominated by a fewer number of other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.

  18. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
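
    A minimal sketch of the asynchronous-evaluation idea using Python's standard concurrent.futures pool, in which each particle is updated and resubmitted as soon as its own evaluation returns rather than waiting for the whole swarm; the paper's implementation targets a heterogeneous cluster and a multidisciplinary wing analysis, so the objective, update rule and parameters below are stand-ins.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED

      def expensive(x):                          # stand-in for a long-running analysis
          return float(np.sum(x ** 2))

      def async_pso(n_particles=8, dim=4, evals=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.full(n_particles, np.inf)
          gbest, gval = x[0].copy(), np.inf
          with ProcessPoolExecutor() as pool:
              pending = {pool.submit(expensive, x[i]): i for i in range(n_particles)}
              done = 0
              while done < evals:
                  finished, _ = wait(list(pending), return_when=FIRST_COMPLETED)
                  for fut in finished:
                      i = pending.pop(fut)
                      f = fut.result()
                      done += 1
                      if f < pval[i]:
                          pval[i], pbest[i] = f, x[i].copy()
                      if f < gval:
                          gval, gbest = f, x[i].copy()
                      # update and resubmit this particle immediately (asynchronous step)
                      r1, r2 = rng.random(dim), rng.random(dim)
                      v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
                      x[i] = x[i] + v[i]
                      if done < evals:
                          pending[pool.submit(expensive, x[i])] = i
          return gbest, gval

      if __name__ == "__main__":
          print(async_pso())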

  19. Multifrequency and multidirection optimizations of antenna arrays using heuristic algorithms and the multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Önol, Can; Alkış, Sena; Gökçe, Özer; Ergül, Özgür

    2016-07-01

    We consider fast and efficient optimizations of arrays involving three-dimensional antennas with arbitrary shapes and geometries. Heuristic algorithms, particularly genetic algorithms, are used for optimizations, while the required solutions are carried out accurately and efficiently via the multilevel fast multipole algorithm (MLFMA). The superposition principle is employed to reduce the number of MLFMA solutions to the number of array elements per frequency. The developed mechanism is used to optimize arrays for multifrequency and/or multidirection operations, i.e., to find the most suitable set of antenna excitations for desired radiation characteristics simultaneously at different frequencies and/or directions. The capabilities of the optimization environment are demonstrated on arrays of bowtie and Vivaldi antennas.

  20. [Optimizing algorithm design of piecewise linear classifier for spectra].

    PubMed

    Lan, Tian-Ge; Fang, Yong-Hua; Xiong, Wei; Kong, Chao; Li, Da-Cheng; Dong, Da-Ming

    2008-11-01

    Being able to identify pollutant gases quickly and accurately is a basic requirement of spectroscopic techniques for environmental monitoring, and hence of any spectral classifier. A piecewise linear classifier is simple, needs little computational time, and approximates nonlinear boundaries well. Combining the piecewise linear classifier with the linear support vector machine, which is based on the principle of margin maximization, an optimizing algorithm for a single-sided piecewise linear classifier was devised. Experimental results indicate that the piecewise linear classifier trained by the optimizing algorithm proposed in this paper can approximate a nonlinear boundary with fewer hyperplanes and has higher accuracy for classification and recognition.

  1. Shape optimization of rubber bushing using differential evolution algorithm.

    PubMed

    Kaya, Necmettin

    2014-01-01

    The objective of this study is to design rubber bushing at desired level of stiffness characteristics in order to achieve the ride quality of the vehicle. A differential evolution algorithm based approach is developed to optimize the rubber bushing through integrating a finite element code running in batch mode to compute the objective function values for each generation. Two case studies were given to illustrate the application of proposed approach. Optimum shape parameters of 2D bushing model were determined by shape optimization using differential evolution algorithm.
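
    A compact sketch of the classic DE/rand/1/bin loop that such an approach builds on, with a cheap analytic objective standing in for the batch-mode finite element evaluation of the bushing; the population size, F, CR, bounds and target values are illustrative.

      import numpy as np

      def de_rand_1_bin(obj, bounds, np_=20, F=0.6, CR=0.9, gens=200, seed=0):
          """Classic differential evolution; `obj` would normally call the
          external finite element code for each candidate shape."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          d = lo.size
          pop = rng.uniform(lo, hi, (np_, d))
          fit = np.array([obj(x) for x in pop])
          for _ in range(gens):
              for i in range(np_):
                  a, b, c = pop[rng.choice([k for k in range(np_) if k != i], 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)
                  cross = rng.random(d) < CR
                  cross[rng.integers(d)] = True          # ensure at least one gene from mutant
                  trial = np.where(cross, mutant, pop[i])
                  f = obj(trial)
                  if f <= fit[i]:                        # greedy selection
                      pop[i], fit[i] = trial, f
          return pop[fit.argmin()], fit.min()

      # Stand-in objective: match a target stiffness vector with 3 shape parameters.
      target = np.array([1.0, 2.5, 4.0])
      obj = lambda x: float(np.sum((x - target) ** 2))
      print(de_rand_1_bin(obj, bounds=[(0, 5)] * 3))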

  2. New near-optimal feedback guidance algorithms for space missions

    NASA Astrophysics Data System (ADS)

    Hawkins, Matthew Jay

    This dissertation describes several different spacecraft guidance algorithms, with applications including asteroid intercept and rendezvous, planetary landing, and orbital transfer. A comprehensive review of spacecraft guidance algorithms for asteroid intercept and rendezvous is presented. Zero-Effort-Miss/Zero-Effort-Velocity (ZEM/ZEV) guidance is introduced and applied to asteroid intercept and rendezvous, and to a wealth of different example problems, including missile intercept, planetary landing, and orbital transfer. It is seen that the ZEM/ZEV guidance law can be used in many different scenarios and that it provides near-optimal performance where an analytical optimal guidance law does not exist, such as in a non-linear gravity field.
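
    A minimal sketch of a ZEM/ZEV-style feedback law, assuming the commonly quoted gains 6/t_go^2 and -2/t_go and a gravity vector held constant over the remaining flight time; the dissertation treats more general (non-linear) gravity fields, so the numbers and frames below are illustrative only.

      import numpy as np

      def zem_zev_accel(r, v, r_f, v_f, t_go, g):
          """Zero-Effort-Miss / Zero-Effort-Velocity feedback acceleration command.
          ZEM and ZEV are the predicted final position and velocity errors if no
          further control is applied (constant gravity assumed over t_go)."""
          zem = r_f - (r + v * t_go + 0.5 * g * t_go ** 2)
          zev = v_f - (v + g * t_go)
          return 6.0 * zem / t_go ** 2 - 2.0 * zev / t_go

      # Toy planetary-landing step (Mars-like gravity, SI units).
      r = np.array([1000.0, 500.0, 2000.0])
      v = np.array([-20.0, -5.0, -60.0])
      g = np.array([0.0, 0.0, -3.71])
      print(zem_zev_accel(r, v, r_f=np.zeros(3), v_f=np.zeros(3), t_go=40.0, g=g))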

  3. Shape Optimization of Rubber Bushing Using Differential Evolution Algorithm

    PubMed Central

    2014-01-01

    The objective of this study is to design rubber bushing at desired level of stiffness characteristics in order to achieve the ride quality of the vehicle. A differential evolution algorithm based approach is developed to optimize the rubber bushing through integrating a finite element code running in batch mode to compute the objective function values for each generation. Two case studies were given to illustrate the application of proposed approach. Optimum shape parameters of 2D bushing model were determined by shape optimization using differential evolution algorithm. PMID:25276848

  4. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤ 6, initial prostate-specific antigen (PSA) ≤ 10 and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast simulated annealing inverse planning algorithm with high activity seeds (> 0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition, and 91.4% (94.6%) in the low-risk group (initial PSA ≤ 10, Gleason ≤ 6 and stage ≤ T2a). Risk factors statistically associated with bFFS were: initial PSA > 10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5% with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were free of GUs, 1.4% had Grade 3 GUs; 95.8% were free of GIs. Conclusion: Inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS that is comparable with the best published series, with a low toxicity profile.

  5. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
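
    A minimal sketch of the adaptive-step idea on top of a basic cuckoo search skeleton, assuming the step size simply shrinks linearly from a_max to a_min over the iterations (the paper's adjustment rule may differ) and using Mantegna's recipe for the Levy-distributed step lengths; all parameter values are illustrative.

      import numpy as np
      from math import gamma, sin, pi

      def levy(rng, dim, beta=1.5):
          """Mantegna's algorithm for Levy-distributed step lengths."""
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0, sigma, dim)
          v = rng.normal(0, 1, dim)
          return u / np.abs(v) ** (1 / beta)

      def cuckoo_search(obj, lo, hi, dim, n=15, pa=0.25, iters=300,
                        a_max=1.0, a_min=0.01, seed=0):
          rng = np.random.default_rng(seed)
          nests = rng.uniform(lo, hi, (n, dim))
          fit = np.array([obj(x) for x in nests])
          for t in range(iters):
              alpha = a_max - (a_max - a_min) * t / (iters - 1)   # adaptive step size
              best = nests[fit.argmin()]
              for i in range(n):
                  new = np.clip(nests[i] + alpha * levy(rng, dim) * (nests[i] - best), lo, hi)
                  f = obj(new)
                  if f < fit[i]:
                      nests[i], fit[i] = new, f
              worst = fit.argsort()[-int(pa * n):]        # abandon a fraction pa of nests
              nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
              fit[worst] = [obj(x) for x in nests[worst]]
          return nests[fit.argmin()], fit.min()

      sphere = lambda x: float(np.sum(x ** 2))
      print(cuckoo_search(sphere, -5, 5, dim=5))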

  6. Auto-score system to optimize OPC recipe parameters using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Asthana, Abhishek; Ning, Guoxiang; Feng, Jui-Hsuan; Zhang, Jie; Wilkinson, William

    2016-10-01

    The ever increasing pattern densities and design complexities make the tuning of optical proximity correction (OPC) recipes more challenging. There are various recipe tuning methods to meet the challenge, such as genetic algorithm (GA), simulated annealing, and OPC software vendor provided recipe optimizers. However, these methodologies usually only consider edge placement errors (EPEs). Therefore, these techniques may not provide adequate freedom to solve unique problems at special geometries, for example bridge, pinch, and process variation band related violations at complex 2D geometries. This paper introduces a general methodology to fix specific problems identified at the OPC verification stage and demonstrates its successful application to two test-cases. The algorithm and method of the automatic scoring system is introduced in order to identify and prioritize the problems that need to be fixed based on severity, with the POR recipe score used as the baseline reference. A GA optimizer, whose objective function is based on the scoring system, is applied to tune the OPC recipe parameters to optimum condition after generations of selections. The GA optimized recipe would be compared to existing recipe to quantify the amount of improvement. This technique was subsequently applied to eliminate certain chronic OPC verification problems which were encountered in the past. Though the benefits have been demonstrated for limited test cases, employing this technique more universally will enable users to efficiently reduce the number of OPC verification violations and provide robust OPC solutions.

  7. Effective and efficient algorithm for multiobjective optimization of hydrologic models

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Gupta, Hoshin V.; Bastidas, Luis A.; Bouten, Willem; Sorooshian, Soroosh

    2003-08-01

    Practical experience with the calibration of hydrologic models suggests that any single-objective function, no matter how carefully chosen, is often inadequate to properly measure all of the characteristics of the observed data deemed to be important. One strategy to circumvent this problem is to define several optimization criteria (objective functions) that measure different (complementary) aspects of the system behavior and to use multicriteria optimization to identify the set of nondominated, efficient, or Pareto optimal solutions. In this paper, we present an efficient and effective Markov Chain Monte Carlo sampler, entitled the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm, which is capable of solving the multiobjective optimization problem for hydrologic models. MOSCEM is an improvement over the Shuffled Complex Evolution Metropolis (SCEM-UA) global optimization algorithm, using the concept of Pareto dominance (rather than direct single-objective function evaluation) to evolve the initial population of points toward a set of solutions stemming from a stable distribution (Pareto set). The efficacy of the MOSCEM-UA algorithm is compared with the original MOCOM-UA algorithm for three hydrologic modeling case studies of increasing complexity.

  8. Optimization Algorithm for the Generation of ONCV Pseudopotentials

    NASA Astrophysics Data System (ADS)

    Schlipf, Martin; Gygi, Francois

    2015-03-01

    We present an optimization algorithm to construct pseudopotentials and use it to generate a set of Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials for elements up to Z=83 (Bi) (excluding Lanthanides). We introduce a quality function that assesses the agreement of a pseudopotential calculation with all-electron FLAPW results, and the necessary plane-wave energy cutoff. This quality function allows us to use a Nelder-Mead optimization algorithm on a training set of materials to optimize the input parameters of the pseudopotential construction for most of the periodic table. We control the accuracy of the resulting pseudopotentials on a test set of materials independent of the training set. We find that the automatically constructed pseudopotentials provide a good agreement with the all-electron results obtained using the FLEUR code with a plane-wave energy cutoff of approximately 60 Ry. Supported by DOE/BES Grant DE-SC0008938.

  9. Optimization algorithm for the generation of ONCV pseudopotentials

    NASA Astrophysics Data System (ADS)

    Schlipf, Martin; Gygi, François

    2015-11-01

    We present an optimization algorithm to construct pseudopotentials and use it to generate a set of Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials for elements up to Z = 83 (Bi) (excluding Lanthanides). We introduce a quality function that assesses the agreement of a pseudopotential calculation with all-electron FLAPW results, and the necessary plane-wave energy cutoff. This quality function allows us to use a Nelder-Mead optimization algorithm on a training set of materials to optimize the input parameters of the pseudopotential construction for most of the periodic table. We control the accuracy of the resulting pseudopotentials on a test set of materials independent of the training set. We find that the automatically constructed pseudopotentials provide a good agreement with the all-electron results obtained using the FLEUR code with a plane-wave energy cutoff of approximately 60 Ry.

  10. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  11. Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction

    NASA Astrophysics Data System (ADS)

    Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.

    2017-04-01

    We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path in its way to finding the optimal solution for the maximum likelihood problem in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.

  12. A Global Optimization Algorithm Using Stochastic Differential Equations.

    DTIC Science & Technology

    1985-02-01

    Dipartimento di Matematica, Università di Bari, 70125 Bari (Italy); Istituto di Fisica, Università di Roma "Tor Vergata", Via Orazio Raimondo, 00173 (La Romanina) Roma (Italy). Keywords: Optimization, Stochastic Differential Equations. Work Unit Number 5 (Optimization and Large Scale Systems).

  13. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    NASA Astrophysics Data System (ADS)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

    Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets from a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
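
    A minimal sketch of an asexual, mutation-only GA of this flavor applied to chi-square model fitting, assuming offspring are Gaussian perturbations of the best current parents within a shrinking scale; the published AGA's exact reproduction and box-shrinking rules may differ, and the straight-line fit below is only a toy stand-in for orbit or SED fitting.

      import numpy as np

      def aga_fit(chi2, lo, hi, n_pop=100, n_parents=10, gens=200, shrink=0.95, seed=0):
          """Asexual GA: keep the best parents, clone them with Gaussian
          mutations whose scale shrinks each generation; no crossover, no encoding."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          scale = (hi - lo) / 4.0
          pop = rng.uniform(lo, hi, (n_pop, lo.size))
          for _ in range(gens):
              fit = np.array([chi2(p) for p in pop])
              parents = pop[np.argsort(fit)[:n_parents]]
              children = np.repeat(parents, n_pop // n_parents, axis=0)
              children += rng.normal(0.0, scale, children.shape)   # asexual reproduction
              pop = np.clip(children, lo, hi)
              scale *= shrink                                       # tighten the search
          fit = np.array([chi2(p) for p in pop])
          return pop[fit.argmin()], fit.min()

      # Toy chi-square fit of a straight line y = a*x + b to noisy data.
      rng = np.random.default_rng(1)
      x = np.linspace(0, 10, 50)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
      chi2 = lambda p: float(np.sum((y - (p[0] * x + p[1])) ** 2 / 0.5 ** 2))
      print(aga_fit(chi2, lo=[-10, -10], hi=[10, 10]))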

  14. A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2014-12-01

    Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper, Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First, we have framed an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.

  15. Optimization of Power Coefficient of Wind Turbine Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Rajakumar, Sappani; Ravindran, Durairaj; Sivakumar, Mahalingam; Venkatachalam, Gopalan; Muthukumar, Shunmugavelu

    2016-06-01

    In the design of a wind turbine, the goal is to attain the highest possible power output under specified atmospheric conditions. The optimization of the power coefficient of a horizontal axis wind turbine has been carried out by integrating the blade element momentum method and a genetic algorithm (GA). The design variables considered are wind velocity, angle of attack and tip speed ratio. The objective function is the power coefficient of the wind turbine. The different combinations of design variables are optimized using GA and then the power coefficient is optimized. The optimized design variables are validated against the experimental results available in the literature. Through this optimization work, the optimum design variables of the wind turbine can be found more economically than through experimental work. NACA44XX series airfoils are considered for this optimization work.

  16. Optimization of Power Coefficient of Wind Turbine Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Rajakumar, Sappani; Ravindran, Durairaj; Sivakumar, Mahalingam; Venkatachalam, Gopalan; Muthukumar, Shunmugavelu

    2017-04-01

    In the design of a wind turbine, the goal is to attain the highest possible power output under specified atmospheric conditions. The optimization of the power coefficient of a horizontal axis wind turbine has been carried out by integrating the blade element momentum method and a genetic algorithm (GA). The design variables considered are wind velocity, angle of attack and tip speed ratio. The objective function is the power coefficient of the wind turbine. The different combinations of design variables are optimized using GA and then the power coefficient is optimized. The optimized design variables are validated against the experimental results available in the literature. Through this optimization work, the optimum design variables of the wind turbine can be found more economically than through experimental work. NACA44XX series airfoils are considered for this optimization work.

  17. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  18. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  19. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that the fusion rules cannot be self-adaptively adjusted by the available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes this objective function with GSDA so as to obtain an RS image of higher resolution. As discussed above, the main points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules. • This text proposes the model operator and the observed operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  20. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as ''chains-on-chains partitioning'' problem. Despite extensive research efforts, heuristics are still used in parallel computing community with the ''hope'' of good decompositions and the ''myth'' of exact algorithms being hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of literature on chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms, which are asymptotically and runtime efficient. We experimented with data sets from two different applications: Sparse matrix computations and Direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse-matrix vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with efficient implementations discussed in this paper can effectively replace heuristics.
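
    For intuition, a small sketch of one exact approach to the chains-on-chains problem: binary-search the bottleneck value and test feasibility with a greedy probe, which is exact when the task weights are integers; the algorithms in the paper are considerably more refined, so this is illustrative only.

      def probe(w, bottleneck):
          """Greedy feasibility test: number of contiguous parts needed so that
          no part's total workload exceeds `bottleneck`."""
          parts, load = 1, 0
          for t in w:
              if load + t <= bottleneck:
                  load += t
              else:
                  parts += 1
                  load = t
          return parts

      def optimal_bottleneck(w, p):
          """Smallest achievable bottleneck when partitioning chain `w` into at
          most `p` contiguous parts (exact for integer workloads)."""
          lo, hi = max(w), sum(w)
          while lo < hi:
              mid = (lo + hi) // 2
              if probe(w, mid) <= p:
                  hi = mid          # feasible: try a smaller bottleneck
              else:
                  lo = mid + 1      # infeasible: bottleneck must grow
          return lo

      # 1D decomposition of a nonuniform workload array over 4 processors.
      w = [7, 2, 5, 10, 1, 8, 4, 6, 3, 9]
      print(optimal_bottleneck(w, 4))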

  1. Constrained Multiobjective Optimization Algorithm Based on Immune System Model.

    PubMed

    Qian, Shuqu; Ye, Yongqiang; Jiang, Bin; Wang, Jianhong

    2016-09-01

    An immune optimization algorithm, based on the model of biological immune system, is proposed to solve multiobjective optimization problems with multimodal nonlinear constraints. First, the initial population is divided into feasible nondominated population and infeasible/dominated population. The feasible nondominated individuals focus on exploring the nondominated front through clone and hypermutation based on a proposed affinity design approach, while the infeasible/dominated individuals are exploited and improved via the simulated binary crossover and polynomial mutation operations. And then, to accelerate the convergence of the proposed algorithm, a transformation technique is applied to the combined population of the above two offspring populations. Finally, a crowded-comparison strategy is used to create the next generation population. In numerical experiments, a series of benchmark constrained multiobjective optimization problems are considered to evaluate the performance of the proposed algorithm and it is also compared to several state-of-art algorithms in terms of the inverted generational distance and hypervolume indicators. The results indicate that the new method achieves competitive performance and even statistically significant better results than previous algorithms do on most of the benchmark suite.

  2. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  3. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
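
    For comparison, a short sketch of the standard SVD solution to Wahba's problem, assuming unit reference and body vectors with nonnegative weights; the paper's direct matrix method computes the same optimal attitude by a faster route, so this is only a reference implementation of the loss function, not the paper's algorithm.

      import numpy as np

      def wahba_svd(b, r, a):
          """Attitude matrix A minimizing 0.5 * sum_i a_i * ||b_i - A r_i||^2.
          b, r: (n, 3) unit vectors in the body and reference frames; a: (n,) weights."""
          B = (np.asarray(a, float)[:, None, None] *
               np.einsum('ni,nj->nij', np.asarray(b, float), np.asarray(r, float))).sum(axis=0)
          U, _, Vt = np.linalg.svd(B)
          d = np.linalg.det(U) * np.linalg.det(Vt)
          return U @ np.diag([1.0, 1.0, d]) @ Vt     # guarantees a proper rotation

      # Two-observation example with a known rotation about z by 30 degrees.
      c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
      A_true = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
      r = np.array([[1, 0, 0], [0, 0, 1]], dtype=float)
      b = r @ A_true.T                                # b_i = A_true r_i
      print(np.allclose(wahba_svd(b, r, a=[0.6, 0.4]), A_true))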

  4. Environmental Optimization Using the WAste Reduction Algorithm (WAR)

    EPA Science Inventory

    Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...

  5. Environmental Optimization Using the WAste Reduction Algorithm (WAR)

    EPA Science Inventory

    Traditionally chemical process designs were optimized using purely economic measures such as rate of return. EPA scientists developed the WAste Reduction algorithm (WAR) so that environmental impacts of designs could easily be evaluated. The goal of WAR is to reduce environme...

  6. Optimal pulse shaping for coherent control by the penalty algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Hai; Dussault, Jean-Pierre; Bandrauk, André D.

    1994-04-01

    We use penalty methods coupled with unitary exponential operator methods to solve the optimal control problem for molecular time-dependent Schrödinger equations involving laser pulse excitations. A stable numerical algorithm is presented which propagates directly from initial states to given final states. Results are reported for an analytically solvable model for the complete inversion of a three-state system.

  7. Numerical Optimization Algorithms and Software for Systems Biology

    SciTech Connect

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  8. Optimization algorithms for large-scale multireservoir hydropower systems

    SciTech Connect

    Hiew, K.L.

    1987-01-01

    Five optimization algorithms were vigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of the resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% difference from one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.

  9. Experimental implementation of an adiabatic quantum optimization algorithm

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias; van Dam, Wim; Hogg, Tad; Breyta, Greg; Chuang, Isaac

    2003-03-01

    A novel quantum algorithm using adiabatic evolution was recently presented by Ed Farhi [1] and Tad Hogg [2]. This algorithm represents a remarkable discovery because it offers new insights into the usefulness of quantum resources. An experimental demonstration of an adiabatic algorithm has remained beyond reach because it requires an experimentally accessible Hamiltonian which encodes the problem and which must also be smoothly varied over time. We present tools to overcome these difficulties by discretizing the algorithm and extending average Hamiltonian techniques [3]. We used these techniques in the first experimental demonstration of an adiabatic optimization algorithm: solving an instance of the MAXCUT problem using three qubits and nuclear magnetic resonance techniques. We show that there exists an optimal run-time of the algorithm which can be predicted using a previously developed decoherence model. [1] E. Farhi et al., quant-ph/0001106 (2000) [2] T. Hogg, PRA, 61, 052311 (2000) [3] W. Rhim, A. Pines, J. Waugh, PRL, 24,218 (1970)

  10. Enhancing particle swarm optimization algorithm using two new strategies for optimizing design of truss structures

    NASA Astrophysics Data System (ADS)

    Lu, Y. C.; Jan, J. C.; Hung, S. L.; Hung, G. H.

    2013-10-01

    This work develops an augmented particle swarm optimization (AugPSO) algorithm using two new strategies: boundary-shifting and particle-position-resetting. The purpose of the algorithm is to optimize the design of truss structures. Inspired by a heuristic, the boundary-shifting approach forces particles to move to the boundary between feasible and infeasible regions in order to increase the convergence rate in searching. The purpose of the particle-position-resetting approach, motivated by the mutation scheme in genetic algorithms (GAs), is to increase the diversity of particles and to prevent the solutions of particles from falling into local minima. The performance of the AugPSO algorithm was tested on four benchmark truss design problems involving 10, 25, 72 and 120 bars. The convergence rates and final solutions achieved were compared among the simple PSO, the PSO with passive congregation (PSOPC) and the AugPSO algorithms. The numerical results indicate that the new AugPSO algorithm outperforms the simple PSO and PSOPC algorithms. The AugPSO achieved a new and superior optimal solution to the 120-bar truss design problem. Numerical analyses showed that the AugPSO algorithm is more robust than the PSO and PSOPC algorithms.

  11. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilted grain boundaries and the other is the Ag/Au induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding lowest-energy structures in both cases. Not only can existing structures from experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. It is thus shown that the Genetic Algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.

  12. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark improves further.
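    The following sketch shows, under stated assumptions, what a permutation-encoded GA over join orders can look like. The left-deep cost model, the random cardinalities and selectivities, and the GA parameters are hypothetical stand-ins; they are not the cost function or settings used by RCQ-GA.

```python
# Hypothetical GA over join orders in the spirit of a chain-query optimizer;
# the left-deep cost model, cardinalities, selectivities and GA settings are
# illustrative assumptions, not the cost function or parameters of RCQ-GA.
import random

random.seed(1)
n = 8                                                   # joins in the chain
sel = [random.uniform(0.01, 0.5) for _ in range(n)]     # assumed selectivities
card = [random.randint(10_000, 100_000) for _ in range(n)]

def plan_cost(order):
    """Sum of intermediate-result sizes of a left-deep plan (toy model)."""
    size, cost = card[order[0]], 0.0
    for idx in order[1:]:
        size = size * card[idx] * sel[idx]
        cost += size
    return cost

def order_crossover(a, b):
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    return [fill.pop(0) if c is None else c for c in child]

def swap_mutation(p, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(n), 2)
        p[i], p[j] = p[j], p[i]
    return p

pop = [random.sample(range(n), n) for _ in range(40)]
for _ in range(200):
    pop.sort(key=plan_cost)
    elite = pop[:10]                                    # keep the best plans
    pop = elite + [swap_mutation(order_crossover(*random.sample(elite, 2)))
                   for _ in range(30)]

pop.sort(key=plan_cost)
print("cheapest join order found:", pop[0], "cost:", round(plan_cost(pop[0])))
```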

  13. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed to solve reliability problems in this paper. The IPSO employs two position-updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from being trapped in local optima, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than four other particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature.

  14. Model updating based on an affine scaling interior optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Y. X.; Jia, C. X.; Li, Jian; Spencer, B. F.

    2013-11-01

    Finite element model updating is usually considered as an optimization process. Affine scaling interior algorithms are powerful optimization algorithms that have been developed over the past few years. A new finite element model updating method based on an affine scaling interior algorithm and a minimization of modal residuals is proposed in this article, and a general finite element model updating program is developed based on the proposed method. The performance of the proposed method is studied through numerical simulation and experimental investigation using the developed program. The results of the numerical simulation verified the validity of the method. Subsequently, the natural frequencies obtained experimentally from a three-dimensional truss model were used to update a finite element model using the developed program. After updating, the natural frequencies of the truss and finite element model matched well.

  15. Endgame implementations for the Efficient Global Optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Kaanta, Bryan

    2009-05-01

    Efficient Global Optimization (EGO) is a competent evolutionary algorithm which can be useful for problems with expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither algorithm always drills down to the absolute minimum; therefore, the addition of a final local search technique is indicated. In this paper, we introduce three "endgame" techniques. The techniques can improve optimization efficiency (fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We also report results using a different cost function than the one previously used [2,3].
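    The "endgame" idea can be sketched in a few lines: once a global, sample-efficient phase has produced an incumbent, a cheap local optimizer polishes it. In the sketch below the global phase is mocked by coarse random sampling rather than a true EGO loop, and the test function and evaluation budgets are assumptions.

```python
# Sketch of the "endgame" step: a cheap local optimizer polishes the incumbent
# found by a global phase.  The coarse random sampling stands in for EGO itself
# (not reimplemented here); the test function and budgets are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def cost(x):                                  # stand-in for an expensive cost
    return (x[0] - 1.3) ** 2 + 5 * (x[1] + 0.7) ** 2 + 0.2 * np.sin(3 * x[0])

# "global" phase: a handful of samples, as if chosen by EGO's infill criterion
samples = rng.uniform(-3, 3, size=(25, 2))
values = np.array([cost(s) for s in samples])
incumbent = samples[np.argmin(values)]

# "endgame" phase: local refinement starting from the incumbent
result = minimize(cost, incumbent, method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-8})
print("incumbent:", incumbent, "refined:", result.x, "f:", result.fun)
```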

  16. Optimization of reinforced soil embankments by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ponterosso, P.; Fox, D. St. J.

    2000-04-01

    A Genetic Algorithm (GA) is described, which produces solutions to the cost optimization problem of reinforcement layout for reinforced soil slopes. These solutions incorporate different types of reinforcement within a single slope. The GA described is implemented with the aim of optimizing the cost of materials for the preliminary layout of reinforced soil embankments. The slope design method chosen is the U.K. Department of Transport HA 68/94 Design Methods for the Reinforcement of Highway Slopes by Reinforced Soil and Soil Nailing Techniques. The results confirm that there is a role for the GA in optimization of reinforced soil design.

  17. Optimal brushless DC motor design using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.

    2010-11-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.

  18. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse" which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required discretization of releases for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic knapsack method in the case of nonlinear problems. The novel idea is to include the nested
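    For the linear case, the inner ("nested") allocation step can be written as a small linear program. The sketch below solves one such allocation with scipy's linprog under assumed demands, weights and release; the outer DP/SDP/RL loop that would call this routine at every storage transition is deliberately omitted.

```python
# Hedged sketch of the inner ("nested") allocation for the linear case: split a
# given reservoir release among downstream users to minimize weighted deficits.
# Demands, weights and the release value are assumptions; the outer DP/SDP/RL
# loop that would call this at every storage transition is omitted.
import numpy as np
from scipy.optimize import linprog

def allocate(release, demands, weights):
    """Maximize weighted deliveries (= minimize weighted deficits) within the release."""
    n = len(demands)
    c = -np.asarray(weights, dtype=float)      # linprog minimizes, so negate
    A_ub = [np.ones(n)]                        # total allocation <= release
    b_ub = [release]
    bounds = [(0.0, d) for d in demands]       # cannot deliver above demand
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x

demands = [30.0, 20.0, 50.0]                   # assumed user demands
weights = [3.0, 1.0, 2.0]                      # assumed user priorities
alloc = allocate(70.0, demands, weights)
print("allocations:", alloc.round(1), "deficits:", (np.array(demands) - alloc).round(1))
```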

  19. Asymptotics in Time, Temperature and Size for Optimization by Simulated Annealing: Theory, Practice and Applications

    DTIC Science & Technology

    1990-01-19

    and studying the growth of this bound as the temperature approaches zero asymptotically. Simulated annealing with a time-varying temperature gives ... rise to a time-inhomogeneous Markov chain. This Markov chain is difficult to analyze and study due to the time-inhomogeneity. We have been able to ... problem. Moreover, we can study the growth of this bound as the temperature approaches zero or skewness becomes arbitrarily large; thereby, providing

  20. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the option and application of the thresholds in image thresholding fields have become an NP problem at the same time. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves on the optimal solution updating mechanism of the search agent by the weights. Taking Kapur's entropy as the optimized function and based on the discreteness of threshold in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy by using the weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, which are very close to the results obtained by exhaustive searches. In comparison with the electromagnetism optimization (EMO), the differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality and objective function values and their stability. PMID:28127305

  1. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding.

    PubMed

    Li, Linguo; Sun, Lijuan; Guo, Jian; Qi, Jin; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the option and application of the thresholds in image thresholding fields have become an NP problem at the same time. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves on the optimal solution updating mechanism of the search agent by the weights. Taking Kapur's entropy as the optimized function and based on the discreteness of threshold in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy by using the weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, which are very close to the results obtained by exhaustive searches. In comparison with the electromagnetism optimization (EMO), the differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality and objective function values and their stability.

  2. Optimization of urea-EnFET based on Ta2O5 layer with post annealing.

    PubMed

    Lue, Cheng-En; Yu, Ting-Chun; Yang, Chia-Ming; Pijanowska, Dorota G; Lai, Chao-Sung

    2011-01-01

    In this study, urea-enzymatic field effect transistors (EnFETs) based on pH ion-sensitive field effect transistors (ISFETs) with tantalum pentoxide (Ta2O5) sensing membranes were investigated. In addition, a post N2 annealing was used to improve the sensing properties. First, the pH sensitivity, hysteresis, drift, and light-induced drift of the ISFETs were evaluated. After the covalent bonding process and urease immobilization, the urea sensitivity of the EnFETs was also investigated and compared with the conventional Si3N4 sensing layer. The ISFETs and EnFETs with annealed Ta2O5 sensing membranes showed the best responses, including the highest pH sensitivity (56.9 mV/pH, from pH 2 to pH 12), which also corresponded to the highest urea sensitivity (61 mV/pCurea, from 1 mM to 7.5 mM). In addition, the non-ideal factors of pH hysteresis, time drift, and light-induced drift of the annealed samples were also lower than those of the control Ta2O5 and Si3N4 sensing membranes.

  3. Combinatorial optimization problem solution based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Peng

    2017-08-01

    The traveling salesman problem (TSP) is a classic combinatorial optimization problem and a simplified form of many complex problems. It is understood from previous studies that the parameters affecting the performance of a genetic algorithm mainly include the quality of the initial population, the population size, and the crossover and mutation probabilities. Accordingly, an improved genetic algorithm for solving the TSP is put forward. The population is graded according to individual similarity, and different operations are performed on different grades of individuals. In addition, an elitist retention strategy is adopted at each grade, and the crossover and mutation operators are improved. Several experiments are designed to verify the feasibility of the algorithm. Analysis of the experimental results shows that the improved algorithm increases the accuracy and efficiency of the solution.
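    The similarity-based grading can be illustrated compactly: score each tour by the fraction of edges it shares with the current best tour and assign a grade-dependent mutation rate. The thresholds and rates below are illustrative assumptions, not the values used in the paper.

```python
# Sketch of similarity-based grading: tours are scored by the fraction of edges
# shared with the current best tour and given a grade-dependent mutation rate.
# The thresholds and rates are illustrative assumptions, not the paper's values.
def edge_set(tour):
    return {frozenset((tour[i], tour[(i + 1) % len(tour)]))
            for i in range(len(tour))}

def similarity(tour, best):
    return len(edge_set(tour) & edge_set(best)) / len(tour)

def mutation_rate(tour, best):
    s = similarity(tour, best)
    if s > 0.8:            # nearly a clone of the best tour: mutate aggressively
        return 0.30
    if s > 0.5:            # moderately similar
        return 0.10
    return 0.02            # already diverse: mutate gently

best = list(range(10))
candidate = best[:]
candidate[2], candidate[7] = candidate[7], candidate[2]
print("similarity:", similarity(candidate, best),
      "mutation rate:", mutation_rate(candidate, best))
```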

  4. Optimization of circuits using a constructive learning algorithm

    SciTech Connect

    Beiu, V.

    1997-05-01

    The paper presents an application of a constructive learning algorithm to the optimization of circuits. For a given Boolean function f, a fresh constructive learning algorithm builds circuits belonging to the smallest F_{n,m} class of functions (n inputs and m groups of ones in their truth table). The constructive proofs, which show how arbitrary Boolean functions can be implemented by this algorithm, are briefly enumerated. An interesting aspect is that the algorithm can be used for generating both classical Boolean circuits and threshold gate circuits (i.e. analogue inputs and digital outputs), or a mixture of them, thus taking advantage of mixed analogue/digital technologies. One illustrative example is detailed. The size and the area of the different circuits are compared (special cost functions can be used to more closely estimate the area and the delay of VLSI implementations). Conclusions and further directions of research close the paper.

  5. An active set algorithm for treatment planning optimization.

    PubMed

    Hristov, D H; Fallone, B G

    1997-09-01

    An active set algorithm for optimization of radiation therapy dose planning by intensity modulated beams has been developed. The algorithm employs a conjugate-gradient routine for subspace minimization in order to achieve a higher rate of convergence than the widely used constrained steepest-descent method at the expense of a negligible amount of overhead calculations. The performance of the new algorithm has been compared to that of the constrained steepest-descent method for various treatment geometries and two different objectives. The active set algorithm is found to be superior to the constrained steepest descent, both in terms of its convergence properties and the residual value of the cost functions at termination. Its use can significantly accelerate the design of conformal plans with intensity modulated beams by decreasing the number of time-consuming dose calculations.

  6. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    NASA Astrophysics Data System (ADS)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)

  7. A novel algorithm for spectral interval combination optimization.

    PubMed

    Song, Xiangzhong; Huang, Yue; Yan, Hong; Xiong, Yanmei; Min, Shungeng

    2016-12-15

    In this study, a new wavelength interval selection algorithm named interval combination optimization (ICO) was proposed under the framework of model population analysis (MPA). In this method, the full spectrum is first divided into a fixed number of equal-width intervals. Then the optimal interval combination is searched iteratively under the guidance of MPA in a soft shrinkage manner, with weighted bootstrap sampling (WBS) employed as the random sampling method. Finally, local search is conducted to optimize the widths of the selected intervals. Three NIR datasets were used to validate the performance of the ICO algorithm. Results show that ICO can select fewer wavelengths with better prediction performance when compared with four other wavelength selection methods, including VISSA, VISSA-iPLS, iVISSA and GA-iPLS. In addition, the computational cost of ICO is modest, benefiting from fewer tuning parameters and faster convergence.

  8. Genetic algorithms for the construction of D-optimal designs

    SciTech Connect

    Heredia-Langner, Alejandro; Carlyle, W M.; Montgomery, D C.; Borror, Connie M.; Runger, George C.

    2003-01-01

    Computer-generated designs are useful for situations where standard factorial, fractional factorial or response surface designs cannot be easily employed. Alphabetically-optimal designs are the most widely used type of computer-generated designs, and of these, the D-optimal (or D-efficient) class of designs is extremely popular. D-optimal designs are usually constructed by algorithms that sequentially add and delete points from a potential design using a candidate set of points spaced over the region of interest. We present a technique to generate D-efficient designs using genetic algorithms (GA). This approach eliminates the need to explicitly consider a candidate set of experimental points and it can handle highly constrained regions while maintaining a level of performance comparable to more traditional design construction techniques.
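    The criterion such a GA optimizes is easy to state in code: a candidate design is scored by log det(X'X) for an assumed model. The sketch below uses a full quadratic model in two factors on [-1, 1]^2 and a crude mutate-and-accept loop as a stand-in for the genetic operators; the run size and model are assumptions.

```python
# Sketch of the D-criterion a GA would optimize: a candidate design is scored by
# log det(X'X) for an assumed full quadratic model in two factors on [-1, 1]^2.
# The mutate-and-accept loop is a crude stand-in for the genetic operators.
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_factors = 9, 2

def model_matrix(D):
    x1, x2 = D[:, 0], D[:, 1]
    return np.column_stack([np.ones(len(D)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def log_d_criterion(D):
    X = model_matrix(D)
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

design = rng.uniform(-1, 1, (n_runs, n_factors))
score = log_d_criterion(design)
for _ in range(5000):                           # crude evolutionary loop
    child = np.clip(design + rng.normal(0, 0.2, design.shape), -1, 1)
    if (s := log_d_criterion(child)) > score:
        design, score = child, s

print("log|X'X| of the best design found:", round(score, 3))
```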

  9. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  10. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.

  11. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy.

  12. A comparison of anatomy-based inverse planning with simulated annealing and graphical optimization for high-dose-rate prostate brachytherapy.

    PubMed

    Morton, Gerard C; Sankreacha, Raxa; Halina, Patrick; Loblaw, Andrew

    2008-01-01

    Dose distribution in a high-dose-rate (HDR) brachytherapy implant is optimized by adjusting source dwell positions and dwell times along the implanted catheters. Inverse planning with fast simulated annealing (IPSA) is a recently developed algorithm for anatomy-based inverse planning, capable of generating an optimized plan in less than 1 min. The purpose of this study is to compare dose distributions achieved using IPSA to those obtained with a graphical optimization (GrO) algorithm for prostate HDR brachytherapy. This is a retrospective study of 63 consecutive prostate HDR brachytherapy implants planned and treated using on-screen GrO to a dose of 10 Gy per implant. All plans were then recalculated using IPSA, without changing any parameters (contours, catheters, number, or location of dwell positions). The IPSA and GrO plans were compared with respect to target coverage, conformality, dose homogeneity, and normal tissue dose. The mean volume of target treated to 100% of prescription dose (V100) was 97.1% and 96.7%, and mean Conformal Index 0.71 and 0.68 with GrO and IPSA, respectively. IPSA plans had a higher mean homogeneity index (0.69 vs. 0.63, p<0.001) and lower volume of target receiving 150% (30.2% vs. 35.6%, p<0.001) and 200% (10.7% vs. 12.7%, p<0.001) of the prescription dose. Mean doses to the urethra, rectum, and bladder were all significantly lower with IPSA (p<0.001). IPSA plans tended to be more reproducible, with smaller standard deviations for all measured parameters. Plans generated using IPSA provide similar target coverage to those obtained using GrO but with lower dose to normal structures and greater dose homogeneity.
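    The coverage and homogeneity indices quoted above can be computed directly from sampled target doses. The sketch below uses fabricated dose samples and assumes the common definition HI = (V100 - V150)/V100; it illustrates the metrics only, not either planning algorithm.

```python
# Illustration of the coverage/homogeneity metrics quoted above, computed from
# fabricated dose samples; HI = (V100 - V150) / V100 is an assumed definition
# and nothing here reproduces either planning algorithm.
import numpy as np

rng = np.random.default_rng(5)
prescription = 10.0                                   # Gy per implant
doses = rng.gamma(shape=9.0, scale=1.4, size=20000)   # fake target dose samples

def v(doses, pct):
    """Fraction of sampled target volume receiving >= pct% of the prescription."""
    return float(np.mean(doses >= prescription * pct / 100.0))

v100, v150, v200 = v(doses, 100), v(doses, 150), v(doses, 200)
hi = (v100 - v150) / v100
print(f"V100={v100:.2f}  V150={v150:.2f}  V200={v200:.2f}  HI={hi:.2f}")
```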

  13. Layout Optimization Method for Magnetic Circuit using Multi-step Utilization of Genetic Algorithm Combined with Design Space Reduction

    NASA Astrophysics Data System (ADS)

    Okamoto, Yoshifumi; Tominaga, Yusuke; Sato, Shuji

    Layout optimization using the ON-OFF state of magnetic material in finite elements is one of the most attractive tools for the initial conceptual and practical design of electrical machinery. Heuristic algorithms based on random search, such as simulated annealing (SA) and the genetic algorithm (GA), allow engineers to define general-purpose objectives; however, they require many iterations of finite element analysis, and it is difficult for such direct search methods to reach a practical solution free of island and void distributions. This paper presents a layout optimization method based on the GA. The proposed method can arrive at a practical solution by means of multi-step utilization of the GA, and the convergence speed is considerably improved by combining it with a design-space reduction process.

  14. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and extend network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, the assignment of nodes to clusters and the selection of cluster heads are improved using the chicken swarm optimization algorithm; chickens that fall into a local optimum have their positions updated by Levy flight, which enhances population diversity and preserves the global search capability of the algorithm. By balancing the use of network nodes, the new protocol prevents heavily used nodes from dying prematurely and so extends the lifetime of the wireless sensor network. Simulation experiments show that the protocol outperforms the LEACH protocol in energy consumption and also outperforms a clustering routing protocol based on particle swarm optimization.
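    The Levy-flight escape move mentioned above can be sketched with Mantegna's algorithm for generating Levy-distributed steps. The step scale and the beta value below are assumptions, and the surrounding chicken swarm update rules and LEACH clustering logic are not reproduced.

```python
# Levy-flight step via Mantegna's algorithm, as a sketch of the escape move
# mentioned above; the step scale and beta are assumptions, and the chicken
# swarm update rules and LEACH clustering logic are not reproduced.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-distributed step (Mantegna's algorithm)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(8)
position = np.array([0.4, 0.6])                         # e.g. a node position
new_position = position + 0.01 * levy_step(2, rng=rng)  # kick out of a local optimum
print("old:", position, "new:", new_position.round(3))
```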

  15. Threshold optimization of adaptive template filtering for MRI based on intelligent optimization algorithm.

    PubMed

    Guo, Lei; Wu, Youxi; Liu, Xuena; Li, Ying; Xu, Guizhi; Yan, Weili

    2006-01-01

    Intelligent Optimization Algorithms (IOAs) mainly include the Immune Algorithm (IA) and the Genetic Algorithm (GA). One of the most important characteristics of MRI is its complicated variation in gray level, and traditional filtering algorithms are not well suited to it. The Adaptive Template Filtering Method (ATFM) is an appropriate denoising method for MRI. However, selecting the threshold for ATFM is a complicated problem that directly affects the denoising result; threshold selection has traditionally been based on experience and thus lacked a solid theoretical foundation. In this paper, two kinds of IOA are proposed for threshold optimization. As our experiments demonstrate, they can effectively solve the threshold-selection problem and perfect the ATFM. Algorithm analysis shows that the performance of the IA surpasses that of the GA. As a new kind of IOA, the IA shows great potential in image processing.

  16. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    PubMed

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.

  17. STP: A Stochastic Tunneling Algorithm for Global Optimization

    SciTech Connect

    Oblow, E.M.

    1999-05-20

    A stochastic approach to solving continuous function global optimization problems is presented. It builds on the tunneling approach to deterministic optimization presented by Barhen et al. by combining a series of local descents with stochastic searches. The method uses a rejection-based stochastic procedure to locate new local minima descent regions and a fixed Lipschitz-like constant to reject unpromising regions in the search space, thereby increasing the efficiency of the tunneling process. The algorithm is easily implemented in low-dimensional problems and scales easily to large problems. It is less effective without further heuristics in these latter cases, however. Several improvements to the basic algorithm, which make use of approximate estimates of the algorithm's parameters for implementation in high-dimensional problems, are also discussed. Benchmark results are presented, which show that the algorithm is competitive with the best previously reported global optimization techniques. A successful application of the approach to a large-scale seismology problem of substantial computational complexity using a low-dimensional approximation scheme is also reported.

  18. Feature selection for optimized skin tumor recognition using genetic algorithms.

    PubMed

    Handels, H; Ross, T; Kreusch, J; Wolff, H H; Pöppl, S J

    1999-07-01

    In this paper, a new approach to computer supported diagnosis of skin tumors in dermatology is presented. High resolution skin surface profiles are analyzed to recognize malignant melanomas and nevocytic nevi (moles), automatically. In the first step, several types of features are extracted by 2D image analysis methods characterizing the structure of skin surface profiles: texture features based on cooccurrence matrices, Fourier features and fractal features. Then, feature selection algorithms are applied to determine suitable feature subsets for the recognition process. Feature selection is described as an optimization problem and several approaches including heuristic strategies, greedy and genetic algorithms are compared. As quality measure for feature subsets, the classification rate of the nearest neighbor classifier computed with the leaving-one-out method is used. Genetic algorithms show the best results. Finally, neural networks with error back-propagation as learning paradigm are trained using the selected feature sets. Different network topologies, learning parameters and pruning algorithms are investigated to optimize the classification performance of the neural classifiers. With the optimized recognition system a classification performance of 97.7% is achieved.

  19. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms

    PubMed Central

    Vázquez, Roberto A.

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132

  20. Optimization in optical systems revisited: Beyond genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).

  1. Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng

    2009-10-01

    The optimal control problem for switched linear systems with internally forced switching has more constraints than that with externally forced switching. Heavy computation and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach for solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which a deterministic optimization algorithm and a stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.

  2. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions when compared to those obtained through other approaches.
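    The improvisation step of harmony search (memory consideration, pitch adjustment, random selection) is simple enough to sketch directly. The continuous toy objective and the HMCR/PAR/bandwidth settings below are assumptions; the binary and multi-state reliability encodings of the article are not shown.

```python
# Minimal continuous harmony search showing the improvisation step (memory
# consideration, pitch adjustment, random selection); the toy objective and the
# HMCR/PAR/bandwidth settings are assumptions, and the reliability-redundancy
# encoding of the article is not shown.
import numpy as np

rng = np.random.default_rng(11)

def objective(x):                         # toy stand-in (e.g. for -reliability)
    return float(np.sum((x - 0.3) ** 2))

dim, hms, iters = 4, 20, 2000
hmcr, par, bw = 0.9, 0.3, 0.05
lo, hi = 0.0, 1.0

memory = rng.uniform(lo, hi, (hms, dim))              # harmony memory
fitness = np.array([objective(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                       # memory consideration
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                    # pitch adjustment
                new[j] += rng.uniform(-bw, bw)
        else:                                         # random selection
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    worst = int(np.argmax(fitness))
    if (f := objective(new)) < fitness[worst]:        # replace the worst harmony
        memory[worst], fitness[worst] = new, f

print("best harmony value:", fitness.min())
```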

  3. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 IIC polypeptide 9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlation with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51; warfarin resistant: 0.96 vs 0.77; warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). In conclusion, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of suboptimal dosing.
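    The modelling approach, multiple linear regression of the therapeutic dose on genotype and vitamin K intake, can be illustrated with synthetic data. Every number, predictor coding and coefficient in the sketch below is hypothetical and carries no clinical meaning; it only shows how such an algorithm is fitted and applied.

```python
# Hypothetical illustration of fitting and using a dose regression; all data,
# codings and coefficients are synthetic and carry no clinical meaning.
import numpy as np

rng = np.random.default_rng(2024)
n = 200
vkorc1_1639GA = rng.integers(0, 3, n)     # assumed 0/1/2 variant-allele coding
cyp2c9_star23 = rng.integers(0, 3, n)     # assumed combined *2/*3 allele count
vit_k_intake = rng.normal(80, 25, n)      # assumed micrograms/day

# synthetic "observed" doses generated from made-up effects plus noise
dose = (35 - 6 * vkorc1_1639GA - 5 * cyp2c9_star23 + 0.05 * vit_k_intake
        + rng.normal(0, 3, n))

X = np.column_stack([np.ones(n), vkorc1_1639GA, cyp2c9_star23, vit_k_intake])
coef, *_ = np.linalg.lstsq(X, dose, rcond=None)
print("fitted coefficients (intercept, VKORC1, CYP2C9, vit K):", coef.round(2))

new_patient = np.array([1, 1, 0, 60])     # hypothetical predictor vector
print("predicted dose for a hypothetical patient:", round(float(new_patient @ coef), 1))
```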

  4. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  5. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    EPA Pesticide Factsheets

    With the development of the Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, the Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, the traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of the traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the analysis capability of capturing the impact of the ADAS on driving behaviors, and measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates the ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable th

  6. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE PAGES

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...

    2017-05-18

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  7. Fine-Tuning ADAS Algorithm Parameters for Optimizing Traffic ...

    EPA Pesticide Factsheets

    With the development of the Connected Vehicle technology that facilitates wireless communication among vehicles and road-side infrastructure, the Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at various highway facilities. To this end, the traffic management centers identify the optimal ADAS algorithm parameter set that enables the maximum improvement of the traffic safety and mobility performance, and broadcast the optimal parameter set wirelessly to individual ADAS-equipped vehicles. After adopting the optimal parameter set, the ADAS-equipped drivers become active agents in the traffic stream that work collectively and consistently to prevent traffic conflicts, lower the intensity of traffic disturbances, and suppress the development of traffic oscillations into heavy traffic jams. Successful implementation of this objective requires the analysis capability of capturing the impact of the ADAS on driving behaviors, and measuring traffic safety and mobility performance under the influence of the ADAS. To address this challenge, this research proposes a synthetic methodology that incorporates the ADAS-affected driving behavior modeling and state-of-the-art microscopic traffic flow modeling into a virtually simulated environment. Building on such an environment, the optimal ADAS algorithm parameter set is identified through an optimization programming framework to enable th

  8. Nonconvex compressed sensing by nature-inspired optimization algorithms.

    PubMed

    Liu, Fang; Lin, Leping; Jiao, Licheng; Li, Lingling; Yang, Shuyuan; Hou, Biao; Ma, Hongmei; Yang, Li; Xu, Jinghuan

    2015-05-01

    The l0-regularized problem in compressed sensing reconstruction is nonconvex with NP-hard computational complexity. Methods available for such problems fall into one of two types: greedy pursuit methods and thresholding methods, which are characterized by suboptimal fast search strategies. Nature-inspired algorithms for combinatorial optimization are famous for their efficient global search strategies and superior performance for nonconvex and nonlinear problems. In this paper, we study and propose nonconvex compressed sensing for natural images by nature-inspired optimization algorithms. We get measurements by the block-based compressed sampling and introduce an overcomplete dictionary of Ridgelet for image blocks. An atom of this dictionary is identified by the parameters of direction, scale and shift. Of them, the direction parameter is important for adapting to directional regularity. So we propose a two-stage reconstruction scheme (TS_RS) of nature-inspired optimization algorithms. In the first reconstruction stage, we design a genetic algorithm for a class of image blocks to acquire the estimation of atomic combinations in all directions; and in the second reconstruction stage, we adopt a clonal selection algorithm to search for better atomic combinations in the sub-dictionary resulting from the first stage for each image block, further on scale and shift parameters. In TS_RS, to reduce the uncertainty and instability of the reconstruction problems, we adopt novel and flexible heuristic searching strategies, which include delicately designing the initialization, operators, evaluating methods, and so on. The experimental results show the efficiency and stability of the proposed TS_RS of nature-inspired algorithms, which outperforms classic greedy and thresholding methods.

  9. Optimization of an antenna array using genetic algorithms

    SciTech Connect

    Kiehbadroudinezhad, Shahideh; Noordin, Nor Kamariah; Sali, A.; Abidin, Zamri Zainal

    2014-06-01

    An array of antennas is usually used in long distance communication. The observation of celestial objects necessitates a large array of antennas, such as the Giant Metrewave Radio Telescope (GMRT). Optimizing this kind of array is very important for obtaining a high-performance system. The genetic algorithm (GA) is an optimization solution for these kinds of problems that reconfigures the position of antennas to increase the u-v coverage plane or decrease the sidelobe levels (SLLs). This paper presents how to optimize a correlator antenna array using the GA. A brief explanation about the GA and operators used in this paper (mutation and crossover) is provided. Then, the results of optimization are discussed. The results show that the GA provides efficient and optimum solutions among a pool of candidate solutions in order to achieve the desired array performance for the purposes of radio astronomy. The proposed algorithm is able to distribute the u-v plane more efficiently than GMRT with a more than 95% distribution ratio at snapshot, and to fill the u-v plane from a 20% to more than 68% filling ratio as the number of generations increases in hour-tracking observations. Finally, the algorithm is able to reduce the SLL to –21.75 dB.

  10. Facial Skin Segmentation Using Bacterial Foraging Optimization Algorithm

    PubMed Central

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-01-01

    Nowadays, analyzing human facial images has gained ever-increasing importance due to its various applications. Image segmentation is required as a very important and fundamental operation for significant analysis and interpretation of images. Among the segmentation methods, the image thresholding technique is one of the most well-known methods due to its simplicity, robustness, and high precision. Thresholding based on optimization of the objective function is among the best methods. Numerous methods exist for the optimization process, and bacterial foraging optimization (BFO) is among the most efficient and novel ones. Using this method, the optimal threshold is extracted and then segmentation of facial skin is performed. In the proposed method, first, the color facial image is converted from RGB color space to Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS has a great mapping of the skin color. To perform thresholding, the entropy-based method is applied. In order to find the optimum threshold, BFO is used. In order to analyze the proposed algorithm, color images of the database of Sahand University of Technology of Tabriz, Iran were used. Then, using Otsu and Kapur methods, thresholding was performed. To gain a better understanding of the proposed algorithm, the genetic algorithm (GA) is also used for finding the optimum threshold. The proposed method shows better results than other thresholding methods. These results include misclassification error accuracy (88%), non-uniformity accuracy (89%), and the accuracy of the region's area error (89%). PMID:23724370

  11. Facial skin segmentation using bacterial foraging optimization algorithm.

    PubMed

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-10-01

    Nowadays, analyzing human facial images has gained ever-increasing importance due to its various applications. Image segmentation is required as a very important and fundamental operation for significant analysis and interpretation of images. Among the segmentation methods, the image thresholding technique is one of the most well-known methods due to its simplicity, robustness, and high precision. Thresholding based on optimization of the objective function is among the best methods. Numerous methods exist for the optimization process, and bacterial foraging optimization (BFO) is among the most efficient and novel ones. Using this method, the optimal threshold is extracted and then segmentation of facial skin is performed. In the proposed method, first, the color facial image is converted from RGB color space to Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS has a great mapping of the skin color. To perform thresholding, the entropy-based method is applied. In order to find the optimum threshold, BFO is used. In order to analyze the proposed algorithm, color images of the database of Sahand University of Technology of Tabriz, Iran were used. Then, using Otsu and Kapur methods, thresholding was performed. To gain a better understanding of the proposed algorithm, the genetic algorithm (GA) is also used for finding the optimum threshold. The proposed method shows better results than other thresholding methods. These results include misclassification error accuracy (88%), non-uniformity accuracy (89%), and the accuracy of the region's area error (89%).

  12. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness.

  13. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    The Luby Transform (LT) code is the first practical implementation of a digital fountain code. The coding behavior of the LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of the LT code. First, a selection scheme of sparse degrees for LT codes is introduced; then the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder; the proposed algorithm is therefore designed for the image transmission setting. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with the LT code with the robust soliton distribution, the proposed algorithm clearly improves the final quality of recovered images at the same overhead.
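    For reference, the robust soliton distribution that the optimized distributions are compared against can be computed in a few lines. The parameter choices (k, c, delta) below are illustrative assumptions, not values taken from the paper.

```python
# Robust soliton degree distribution for reference; k, c and delta are
# illustrative parameter choices, not values taken from the paper.
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    d = np.arange(1, k + 1, dtype=float)
    rho = np.zeros(k)
    rho[0] = 1.0 / k
    rho[1:] = 1.0 / (d[1:] * (d[1:] - 1.0))          # ideal soliton part

    S = c * np.log(k / delta) * np.sqrt(k)
    tau = np.zeros(k)
    pivot = int(round(k / S))                        # spike location d = k/S
    tau[: pivot - 1] = S / (k * d[: pivot - 1])
    tau[pivot - 1] = S * np.log(S / delta) / k

    mu = rho + tau
    return mu / mu.sum()                             # normalized degree pmf

mu = robust_soliton(k=1000)
print("P(degree=1):", round(float(mu[0]), 4),
      " mean degree:", round(float(np.dot(np.arange(1, 1001), mu)), 2))
```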

  14. Optimization of Circular Ring Microstrip Antenna Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Sathi, V.; Ghobadi, Ch.; Nourinia, J.

    2008-10-01

    Circular ring microstrip antennas have several interesting properties that make them attractive in wireless applications. Although several analysis techniques, such as the cavity model, the generalized transmission line model, the Fourier-Hankel transform domain, and the method of matched asymptotic expansion, have been studied by researchers, there is no efficient design tool that has been incorporated with a suitable optimization algorithm. In this paper, cavity model analysis along with a genetic optimization algorithm is presented for the design of circular ring microstrip antennas. The method studied here is based on the well-known cavity model, and the optimization of the dimensions and feed point location of the circular ring antenna is performed via the genetic optimization algorithm to achieve acceptable antenna operation around a desired resonance frequency. The antennas designed by this efficient procedure were realized experimentally, and the results are compared. In addition, these results are also compared to the results obtained with the commercial electromagnetic simulation tool HFSS by ANSOFT, an FEM-based software package.

  15. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    PubMed Central

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. At the bottom level, each subpopulation, employing the canonical ABC method, searches for a part-dimensional optimum in parallel; these partial solutions are then assembled into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most of the chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances of different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness. PMID:24592200

  16. Study of sequential optimal control algorithm smart isolation structure based on Simulink-S function

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohuan; Liu, Yanhui

    2017-01-01

    This paper focuses on a smart isolation structure; a method for realizing structural vibration control with Simulink simulation is proposed, based on the proposed sequential optimal control algorithm. In the Simulink simulation environment, a smart isolation structure is used to compare the control effect of three algorithms, i.e., the classical optimal control algorithm, the linear quadratic Gaussian control algorithm, and the sequential optimal control algorithm, under the condition of sensors contaminated with noise. Simulation results show that this method can be applied to simulating the sequential optimal control algorithm and that the proposed sequential optimal control algorithm offers good noise resistance and better control efficiency.

  17. Acceleration of quantum optimal control theory algorithms with mixing strategies.

    PubMed

    Castro, Alberto; Gross, E K U

    2009-05-01

    We propose the use of mixing strategies to accelerate the convergence of the common iterative algorithms utilized in quantum optimal control theory (QOCT). We show how the nonlinear equations of QOCT can be viewed as a "fixed-point" nonlinear problem. The iterative algorithms for this class of problems may benefit from mixing strategies, as it happens, e.g., in the quest for the ground-state density in Kohn-Sham density-functional theory. We demonstrate, with some numerical examples, how the same mixing schemes utilized in this latter nonlinear problem may significantly accelerate the QOCT iterative procedures.
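
    The fixed-point view can be illustrated with the simplest mixing scheme, plain linear mixing of successive iterates; the more elaborate schemes discussed in the record (e.g. Broyden/Pulay-type mixing familiar from Kohn-Sham SCF cycles) follow the same pattern. The map F, the starting point, and the mixing parameter below are all illustrative.

        import numpy as np

        def fixed_point_with_mixing(F, x0, alpha=0.3, tol=1e-10, max_iter=1000):
            # plain linear mixing of a fixed-point map x = F(x)
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                fx = F(x)
                if np.linalg.norm(fx - x) < tol:
                    break
                x = (1.0 - alpha) * x + alpha * fx
            return x

        # toy example: solve x = cos(x) component-wise
        root = fixed_point_with_mixing(np.cos, [0.5, 1.0])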

  18. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128 engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.

  19. A filter-based evolutionary algorithm for constrained optimization.

    SciTech Connect

    Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann

    2004-02-01

    We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
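
    The filter idea can be sketched independently of the evolutionary machinery: each trial point is reduced to a pair (objective value, total constraint violation), and the filter is the set of mutually non-dominated pairs seen so far. This is a generic, Fletcher-Leyffer-style filter sketch under those assumptions, not the specific FEA of the record.

        def violation(x, constraints):
            # total violation for inequality constraints written as g(x) <= 0
            return sum(max(0.0, g(x)) for g in constraints)

        def filter_accepts(filt, f_new, h_new):
            # a trial point is acceptable unless some filter entry dominates it
            return not any(f_old <= f_new and h_old <= h_new for f_old, h_old in filt)

        def filter_add(filt, f_new, h_new):
            # insert the accepted point and drop entries it now dominates
            filt[:] = [(f, h) for f, h in filt if not (f_new <= f and h_new <= h)]
            filt.append((f_new, h_new))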

  20. Optimization of multicast optical networks with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng

    2007-11-01

    In this letter, aiming to obtain the best multicast performance of an optical network in which video conference information is carried by a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problems of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing the novel numerical solution designed with an effective genetic algorithm. The result shows that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.

  1. Global structural optimizations of surface systems with a genetic algorithm

    SciTech Connect

    Chuang, Feng-Chuan

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems, including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Aln were performed. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms, and simulated STM images of the Si magic cluster exhibit a ring-like feature similar to the STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest-energy structures of high-index semiconductor surfaces. The lowest-energy structures of Si(105) and Si(114) were determined successfully; the results for Si(105) and Si(114) are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.

  2. Unraveling Quantum Annealers using Classical Hardness

    NASA Astrophysics Data System (ADS)

    Martin-Mayor, Victor; Hen, Itay

    2015-10-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.

  3. Unraveling Quantum Annealers using Classical Hardness.

    PubMed

    Martin-Mayor, Victor; Hen, Itay

    2015-10-20

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.

  4. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  5. Multi-objective nested algorithms for optimal reservoir operation

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Solomatine, Dimitri

    2016-04-01

    The optimal reservoir operation is in general a multi-objective problem, meaning that multiple objectives are to be considered at the same time. For solving multi-objective optimization problems there exists a large number of optimization algorithms, which generate a Pareto set of optimal solutions (typically containing a large number of them) or, more precisely, its approximation. At the same time, due to the complexity and computational cost of solving a full-fledged multi-objective optimization problem, some authors use a simplified approach generically called "scalarization". Scalarization transforms the multi-objective optimization problem into a single-objective optimization problem (or several of them), for example by (a) single-objective aggregated weighted functions, or (b) formulating some objectives as constraints. We use approach (a). A user can decide how many single-objective searches are generated, depending on the practical problem at hand, by choosing a particular number of weight vectors used to weigh the objectives. It is not guaranteed that these solutions are Pareto optimal, but they can be treated as a reasonably good and practically useful, albeit small, approximation of the Pareto set. It has to be mentioned that the weighted-sum approach has known shortcomings, because linear scalar weights fail to find Pareto-optimal policies that lie in the concave region of the Pareto front. In this context the considered approach is implemented as follows: there are m sets of weights {w_1^i, ..., w_n^i} (i = 1, ..., m), and the n objectives are applied to single-objective aggregated weighted-sum functions of nested dynamic programming (nDP), nested stochastic dynamic programming (nSDP) and nested reinforcement learning (nRL). By employing the multi-objective optimization by a sequence of single-objective optimization searches approach, these algorithms acquire the multi-objective properties
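
    A sketch of the weighted-sum scalarization used in approach (a): each weight vector turns the multi-objective problem into one single-objective search, and running several weight vectors gives a small approximation of the Pareto set (with the noted caveat for concave fronts). The toy objectives and the grid search below are purely illustrative; in the record each weighted sum would instead be optimized by nDP, nSDP or nRL.

        import numpy as np

        def scalarize(objectives, weights):
            # collapse several objective functions into one weighted-sum objective
            def f(x):
                return sum(w * g(x) for w, g in zip(weights, objectives))
            return f

        # m weight vectors over two objectives; each defines one single-objective
        # search, and the union of the m results approximates the Pareto set
        m = 5
        weight_sets = [(w, 1.0 - w) for w in np.linspace(0.0, 1.0, m)]
        objectives = [lambda x: (x - 1.0) ** 2, lambda x: (x + 1.0) ** 2]
        solutions = [min(np.linspace(-2.0, 2.0, 401), key=scalarize(objectives, ws))
                     for ws in weight_sets]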

  6. Integer programming model for optimizing bus timetable using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    Bus timetables give passengers information and ensure the availability of bus services. The timetable is optimal when the bus trip frequency can adapt to passenger demand: in peak time the number of bus trips is larger than in off-peak time. If the number of bus trips is higher than the optimal condition, the operating cost for the bus operator is high; conversely, if the number of trips is lower than the optimal condition, the quality of service for passengers is poor. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. The modifications concern the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The result of this model gives the optimal solution with an accuracy of 99.1%.

  7. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098

  8. All-Optical Implementation of the Ant Colony Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-05-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems.

  9. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and a comparison between these two models was then carried out. The results revealed that the proposed approach is practicable for optimizing the soil sampling scheme: the optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.

  10. Recent progress of quantum annealing

    SciTech Connect

    Suzuki, Sei

    2015-03-10

    We review the recent progress of quantum annealing. Quantum annealing was proposed as a method to solve generic optimization problems. Recently a Canadian company has drawn a great deal of attention, as it has commercialized a quantum computer based on quantum annealing. Although the performance of quantum annealing is not sufficiently understood, it is likely that quantum annealing will be a practical method both on a conventional computer and on a quantum computer.

  11. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
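
    The expected-total-cost idea can be illustrated with a simplified restart rule: given empirical samples of the objective value returned by independent runs of a randomized solver, estimate E[best of R runs] + R x (cost per call) and pick the R that minimizes it. This fixed-R rule is a simplification of the optimal-stopping formulation in the record, and all names and parameters are illustrative.

        import numpy as np

        def expected_cost_fixed_restarts(samples, cost_per_call, max_restarts=50,
                                         trials=2000, seed=0):
            # Monte-Carlo estimate of E[best of R runs] + R * cost_per_call,
            # minimized over a fixed restart count R (a simplified stopping rule)
            rng = np.random.default_rng(seed)
            samples = np.asarray(samples, dtype=float)
            costs = []
            for R in range(1, max_restarts + 1):
                draws = rng.choice(samples, size=(trials, R), replace=True)
                costs.append(draws.min(axis=1).mean() + R * cost_per_call)
            best_R = int(np.argmin(costs)) + 1
            return best_R, costs[best_R - 1]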

  12. Asymptotically optimal probes for noisy interferometry via quantum annealing to criticality

    NASA Astrophysics Data System (ADS)

    Durkin, Gabriel A.

    2016-10-01

    Quantum annealing is explored as a resource for quantum information beyond solution of classical combinatorial problems. Envisaged as a generator of robust interferometric probes, we examine a Hamiltonian of N ≫ 1 uniformly coupled spins subject to a transverse magnetic field. The discrete many-body problem is mapped onto dynamics of a single one-dimensional particle in a continuous potential. This reveals all the qualitative features of the ground state beyond typical mean-field or large classical spin models. It illustrates explicitly a graceful warping from an entangled unimodal to bimodal ground state in the phase transition region. The transitional "Goldilocks" probe has a component distribution of width N^{2/3} and exhibits characteristics for enhanced phase estimation in a decoherent environment. In the presence of realistic local noise and collective dephasing, we find this probe state asymptotically saturates ultimate precision bounds calculated previously. By reducing the transverse field adiabatically, the Goldilocks probe is prepared in advance of the minimum gap bottleneck, allowing the annealing schedule to be terminated "early." Adiabatic time complexity of probe preparation is shown to be linear in N.

  13. Bit patterned media optimization at 1 Tdot/in² by post-annealing

    SciTech Connect

    Hellwig, Olav; Marinero, Ernesto E.; Kercher, Dan; Hennen, Tyler; McCallum, Andrew; Dobisz, Elizabeth; Wu, Tsai-Wei; Lille, Jeff; Hirano, Toshiki; Ruiz, Ricardo; Grobis, Michael K.; Weller, Dieter; Albrecht, Thomas R.

    2014-09-28

    We report on the fabrication of 1 Tdot/in² bit patterned media with high coercivity (H_C) and narrow intrinsic switching field distribution (iSFD) based on nanoimprint from a master pattern formed by e-beam guided block copolymer assembly onto a carbon hard mask and subsequent pattern transfer via etching into a thin CoCrPt perpendicular anisotropy recording layer. We demonstrate that an additional vacuum annealing step after pattern transfer into the CoCrPt layer and after carbon hard mask removal not only yields recovery from undesired damage of the island edges, but actually transforms the islands into a magnetically more favorable compositional phase with higher H_C, lower iSFD/H_C, and three-fold increased thermal stability. Energy filtered transmission electron microscopy analysis reveals that the diffusion of Cr from the island cores to the periphery of the islands during post-annealing is responsible for the transformation of the magnetic bits into a more stable state.

  14. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET

    PubMed Central

    Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of the clusters and the number of CHs determine the efficiency of the network. In this paper a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of the nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  15. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.

    PubMed

    Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of the clusters and the number of CHs determine the efficiency of the network. In this paper a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of the nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO.

  16. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.

  17. Optimization of broadband semiconductor chirped mirrors with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Dems, Maciej; Wnuk, Paweł; Wasylczyk, Piotr; Zinkiewicz, Łukasz; Wójcik-Jedlińska, Anna; Regiński, Kazimierz; Hejduk, Krzysztof; Jasik, Agata

    2016-10-01

    Genetic algorithm was applied for optimization of dispersion properties in semiconductor Bragg reflectors for applications in femtosecond lasers. Broadband, large negative group-delay dispersion was achieved in the optimized design: The group-delay dispersion (GDD) as large as -3500 fs2 was theoretically obtained over a 10-nm bandwidth. The designed structure was manufactured and tested, providing GDD -3320 fs2 over a 7-nm bandwidth. The mirror performance was verified in semiconductor structures grown with molecular beam epitaxy. The mirror was tested in a passively mode-locked Yb:KYW laser.

  18. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.

  19. A versatile multi-objective FLUKA optimization using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Vlachoudis, Vasilis; Antoniucci, Guido Arnau; Mathot, Serge; Kozlowska, Wioletta Sandra; Vretenar, Maurizio

    2017-09-01

    Quite often, Monte Carlo simulation studies require a multi-phase-space optimization, a complicated task that relies heavily on the operator's experience and judgment. Examples of such calculations are shielding calculations with stringent conditions on cost, residual dose, material properties and available space, or, in the medical field, optimizing the dose delivered to a patient under hadron treatment. The present paper describes our implementation, inside flair [1], the advanced user interface of FLUKA [2,3], of a multi-objective Genetic Algorithm to facilitate the search for the optimum solution.

  20. An adaptive penalty method for DIRECT algorithm in engineering optimization

    NASA Astrophysics Data System (ADS)

    Vilaça, Rita; Rocha, Ana Maria A. C.

    2012-09-01

    The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method within the DIRECT algorithm, in which the constraints that are more difficult to satisfy receive relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
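
    A sketch of the penalty mechanism, assuming inequality constraints written as g_i(x) <= 0: the constrained problem becomes an unconstrained surrogate by adding weighted violations to the objective, and the weights of constraints that remain violated are raised between runs of the underlying solver. The update rule shown is an illustrative adaptive choice, not the specific rule analyzed in the record.

        def make_penalized(f, constraints, weights):
            # unconstrained surrogate f(x) + sum_i w_i * max(0, g_i(x)) for g_i(x) <= 0
            def phi(x):
                return f(x) + sum(w * max(0.0, g(x)) for w, g in zip(weights, constraints))
            return phi

        def update_weights(weights, x_best, constraints, factor=10.0):
            # illustrative adaptive rule: raise the weight of every constraint
            # that the current best point still violates
            return [w * factor if g(x_best) > 0.0 else w
                    for w, g in zip(weights, constraints)]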

  1. Genetic algorithm application in optimization of wireless sensor networks.

    PubMed

    Norouzi, Ali; Zaim, A Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and of the specific parameters. Notable parameters are the network lifetime and the energy consumption for routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively good option thanks to its efficiency for large-scale applications and the fact that the final formula can be modified by operators. The present survey aims at a comprehensive improvement of all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSN. Using the genetic algorithm and based on the results of simulations in NS, a specific fitness function was obtained, optimized, and customized for all the operational stages of WSNs.

  2. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many $\cal{NP}$-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.

  3. Quantum algorithm for molecular properties and geometry optimization.

    PubMed

    Kassal, Ivan; Aspuru-Guzik, Alán

    2009-12-14

    Quantum computers, if available, could substantially accelerate quantum simulations. We extend this result to show that the computation of molecular properties (energy derivatives) could also be sped up using quantum computers. We provide a quantum algorithm for the numerical evaluation of molecular properties, whose time cost is a constant multiple of the time needed to compute the molecular energy, regardless of the size of the system. Molecular properties computed with the proposed approach could also be used for the optimization of molecular geometries or other properties. For that purpose, we discuss the benefits of quantum techniques for Newton's method and Householder methods. Finally, global minima for the proposed optimizations can be found using the quantum basin hopper algorithm, which offers an additional quadratic reduction in cost over classical multi-start techniques.

  4. Genetic Algorithm Application in Optimization of Wireless Sensor Networks

    PubMed Central

    Norouzi, Ali; Zaim, A. Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and of the specific parameters. Notable parameters are the network lifetime and the energy consumption for routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively good option thanks to its efficiency for large-scale applications and the fact that the final formula can be modified by operators. The present survey aims at a comprehensive improvement of all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSN. Using the genetic algorithm and based on the results of simulations in NS, a specific fitness function was obtained, optimized, and customized for all the operational stages of WSNs. PMID:24693235

  5. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    NASA Astrophysics Data System (ADS)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

    In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on embedded GPU using OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantage when compared to embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as our example algorithms. Performance is evaluated in terms of the execution time and speed-up achieved in comparison with the implementation on embedded CPU.

  6. Optimization of telescope scheduling. Algorithmic research and scientific policy

    NASA Astrophysics Data System (ADS)

    Gómez de Castro, A. I.; Yáñez, J.

    2003-05-01

    The use of very expensive facilities in Modern Astronomy has demonstrated the importance of automatic modes in the operation of large telescopes. As a consequence, several mathematical tools have been applied and developed to solve the (NP-hard) scheduling optimization problem: from simple heuristics to the more complex genetic algorithms or neural networks. In this work, the basic scheduling problem is translated into mathematical language and two main methods are used to solve it: neighborhood search methods and genetic algorithms; both of them are analysed. It is shown that the algorithms are sensitive to the scientific policy by means of the definition of the objective function (F) and also by the assignment of scientific priorities to the projects. The definition of F is not trivial and requires a detailed discussion among the Astronomical Community.

  7. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with the Gradient Tabu Search and other algorithms for global optimization, such as Gravitational Search, Cuckoo Search, and Backtracking Search. Moreover, the GGS approach has also been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  8. Remote sensing of atmospheric duct parameters using simulated annealing

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-Feng; Huang, Si-Xun; Xiang, Jie; Shi, Wei-Lai

    2011-09-01

    Simulated annealing is one of the robust optimization schemes. Simulated annealing mimics the annealing process of the slow cooling of a heated metal to reach a stable minimum energy state. In this paper, we adopt simulated annealing to study the problem of the remote sensing of atmospheric duct parameters for two different geometries of propagation measurement. One is from a single emitter to an array of radio receivers (vertical measurements), and the other is from the radar clutter returns (horizontal measurements). Basic principles of simulated annealing and its applications to refractivity estimation are introduced. The performance of this method is validated using numerical experiments and field measurements collected at the East China Sea. The retrieved results demonstrate the feasibility of simulated annealing for near real-time atmospheric refractivity estimation. For comparison, the retrievals of the genetic algorithm are also presented. The comparisons indicate that the convergence speed of simulated annealing is faster than that of the genetic algorithm, while the anti-noise ability of the genetic algorithm is better than that of simulated annealing.
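
    A minimal simulated-annealing skeleton of the kind used for such retrievals: here cost(x) would measure the misfit between observed and simulated propagation data for a candidate duct-parameter vector x, and neighbour would perturb that vector. Both are placeholders, and the geometric cooling schedule and its parameters are illustrative choices rather than the schedule used in the record.

        import math
        import random

        def simulated_annealing(cost, x0, neighbour, t0=1.0, cooling=0.995,
                                steps=20000, seed=0):
            # minimal SA loop: accept worse candidates with probability
            # exp(-delta/T) while the temperature T is lowered geometrically
            rng = random.Random(seed)
            x, fx, t = x0, cost(x0), t0
            best, fbest = x, fx
            for _ in range(steps):
                y = neighbour(x, rng)
                fy = cost(y)
                if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                t *= cooling
            return best, fbest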

  9. A hierarchical evolutionary algorithm for multiobjective optimization in IMRT

    PubMed Central

    Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark H.

    2010-01-01

    Purpose: The current inverse planning methods for intensity modulated radiation therapy (IMRT) are limited because they are not designed to explore the trade-offs between the competing objectives of tumor and normal tissues. The goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: A hierarchical evolutionary multiobjective algorithm designed to quickly generate a small diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the optimal trade-offs in any radiation therapy plan was developed. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. The population size is not fixed, but a specialized niche effect, domination advantage, is used to control the population and plan diversity. The number of fitness objectives is kept to a minimum for greater selective pressure, but the number of genes is expanded for flexibility that allows a better approximation of the Pareto front. Results: The MOEA improvements were evaluated for two example prostate cases with one target and two organs at risk (OARs). The population of plans generated by the modified MOEA was closer to the Pareto front than populations of plans generated using a standard genetic algorithm package. Statistical significance of the method was established by compiling the results of 25 multiobjective optimizations using each method. From these sets of 12–15 plans, any random plan selected from a MOEA

  10. Optimizing phase-estimation algorithms for diamond spin magnetometry

    NASA Astrophysics Data System (ADS)

    Nusran, N. M.; Dutt, M. V. Gurudev

    2014-07-01

    We present a detailed theoretical and numerical study discussing the application and optimization of phase-estimation algorithms (PEAs) to diamond spin magnetometry. We compare standard Ramsey magnetometry, the nonadaptive PEA (NAPEA), and quantum PEA (QPEA) incorporating error checking. Our results show that the NAPEA requires lower measurement fidelity, has better dynamic range, and greater consistency in sensitivity. We elucidate the importance of dynamic range to Ramsey magnetic imaging with diamond spins, and introduce the application of PEAs to time-dependent magnetometry.

  11. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  12. Optimizing remediation of an unconfined aquifer using a hybrid algorithm.

    PubMed

    Hsiao, Chin-Tsai; Chang, Liang-Cheng

    2005-01-01

    We present a novel hybrid algorithm, integrating a genetic algorithm (GA) and constrained differential dynamic programming (CDDP), to achieve remediation planning for an unconfined aquifer. The objective function includes both fixed and dynamic operation costs. GA determines the primary structure of the proposed algorithm, and a chromosome therein implemented by a series of binary digits represents a potential network design. The time-varying optimal operation cost associated with the network design is computed by the CDDP, in which is embedded a numerical transport model. Several computational approaches, including a chromosome bookkeeping procedure, are implemented to alleviate computational loading. Additionally, case studies that involve fixed and time-varying operating costs for confined and unconfined aquifers, respectively, are discussed to elucidate the effectiveness of the proposed algorithm. Simulation results indicate that the fixed costs markedly affect the optimal design, including the number and locations of the wells. Furthermore, the solution obtained using the confined approximation for an unconfined aquifer may be infeasible, as determined by an unconfined simulation.

  13. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be developed incrementally and then ordered automatically for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach of exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to optimally order the rules so that inference cycles are minimized. The algorithm optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. It was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it produced a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
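
    A toy version of the producer/consumer ordering, assuming each rule is reduced to a set of antecedent facts and a set of consequent facts: a rule that produces a fact is placed before every rule that consumes it, so a single forward-chaining pass fires as many rules as possible. The dependency graph is assumed acyclic here, which the real algorithm need not require, and the data layout is hypothetical.

        from graphlib import TopologicalSorter  # Python 3.9+

        def order_rules(rules):
            # rules: dict name -> (antecedent_facts, consequent_facts);
            # place producers before consumers (assumes an acyclic dependency graph)
            deps = {name: set() for name in rules}
            for a, (_, conseq_a) in rules.items():
                for b, (ante_b, _) in rules.items():
                    if a != b and conseq_a & ante_b:
                        deps[b].add(a)        # rule b consumes what rule a produces
            return list(TopologicalSorter(deps).static_order())

        rules = {"r1": ({"a"}, {"b"}), "r2": ({"b"}, {"c"}), "r3": ({"a", "c"}, {"d"})}
        print(order_rules(rules))             # ['r1', 'r2', 'r3']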

  14. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used in both low- and high-resolution high-quality printing technologies. Our method is designed mainly for low-resolution ink jet marking machines producing both gray-tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this, the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of a genetic algorithm and a local backtracking search was developed, together with several fitness functions evaluating dot patterns on rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of the genetic algorithm, the backtracking, and the fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work focused on developing low-resolution marking technology, the resulting family of dot generators can also be applied in other halftoning application areas, including high-resolution printing technology.
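
    The way a threshold matrix is applied can be sketched independently of how it is generated: the matrix is tiled over the image and each pixel is compared against its local threshold (ordered dithering). The classic 4x4 Bayer matrix below is only a stand-in; the record's point is to evolve much larger blue-noise matrices with a genetic algorithm.

        import numpy as np

        def halftone(gray, threshold_matrix):
            # tile the threshold matrix over the image and binarize each pixel
            # against its local threshold (ordered dithering)
            h, w = gray.shape
            th, tw = threshold_matrix.shape
            tiled = np.tile(threshold_matrix, (h // th + 1, w // tw + 1))[:h, :w]
            return (gray > tiled).astype(np.uint8)

        # classic 4x4 Bayer matrix scaled to the 0..255 range as a stand-in
        bayer4 = (np.array([[ 0,  8,  2, 10],
                            [12,  4, 14,  6],
                            [ 3, 11,  1,  9],
                            [15,  7, 13,  5]]) + 0.5) * 16.0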

  15. Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework

    SciTech Connect

    Alicia Hofler, Pavel Evtushenko, Frank Marhauser

    2009-09-01

    Automation of DC photoinjector designs using a genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimum and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation of these two methods for generating field profiles for SRF/RF guns in a GA-based injector optimization scheme and provide preliminary results.

  16. Optimization of the deposition and annealing conditions of fluorine-doped indium oxide films for silicon solar cells

    SciTech Connect

    Untila, G. G. Kost, T. N.; Chebotareva, A. B.; Timofeyev, M. A.

    2013-03-15

    Fluorine-doped indium oxide (IFO) films are deposited onto (pp+)Si and (n+nn+)Si structures made of single-crystal silicon by ultrasonic spray pyrolysis. The effect of the IFO deposition time and annealing time in an argon atmosphere with methanol vapor on the IFO chemical composition, the photovoltage and fill factor of the Illumination-U_oc curves of IFO/(pp+)Si structures, and the sheet resistance of IFO/(n+nn+)Si structures, correlating with the IFO/(n+)Si contact resistance, is studied. The obtained features are explained by modification of the properties of the SiO_x transition layer at the IFO/Si interface during deposition and annealing. Analysis of the results made it possible to optimize the fabrication conditions of solar cells based on IFO/(pp+)Si heterostructures and to increase their efficiency from 17% to a record 17.8%.

  17. Applications of an MPI Enhanced Simulated Annealing Algorithm on nuSTORM and 6D Muon Cooling

    SciTech Connect

    Liu, A.

    2015-06-01

    The nuSTORM decay ring is a compact racetrack storage ring with a circumference of ~480 m using large aperture ($\phi$ = 60 cm) magnets. The design goal of the ring is to achieve a momentum acceptance of 3.8 $\pm$ 10% GeV/c and a phase space acceptance of 2000 $\mu$m·rad. The design has many challenges because the acceptance will be affected by many nonlinearity terms with large particle emittance and/or large momentum offset. In this paper, we present the application of a meta-heuristic optimization algorithm to the sextupole correction in the ring. The algorithm is capable of finding a balanced compromise among corrections of the nonlinearity terms and finding the largest acceptance. This technique can be applied to the design of similar storage rings that store beams with wide transverse phase space and momentum spectra. We also present the recent study on the application of this algorithm to a part of the 6D muon cooling channel. The technique and the cooling concept will be applied to design a cooling channel for the extracted muon beam at nuSTORM in a future study.

  18. Constrained Multi-Level Algorithm for Trajectory Optimization

    NASA Astrophysics Data System (ADS)

    Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi

    The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve with the direct and indirect shooting methods. The problem is made further complex when different phases of the trajectory have different optimization objectives and also have different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further sub-system optimizations with new coordination variables, and the process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local sub-system optimizations and which also removes the problem of nonlinear sensitivity inherent in the single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in a framework of an augmented Lagrangian. Han Tapia multiplier updates are used in view of their special role in

  19. Quantum-based algorithm for optimizing artificial neural networks.

    PubMed

    Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang

    2013-08-01

    This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses a quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region-by-region exploration and evolves gradually to find promising subspaces for further exploitation. This is helpful for providing a set of appropriate weights when evolving the network structure and for alleviating the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely the breast cancer, iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.
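
    A minimal sketch of the quantum-bit connectivity idea described above is given below: each gene stores an angle whose squared sine is the probability that a link exists, topologies are obtained by sampling (observation), and the angles are nudged towards the best observed topology. The class name, the rotation step and the update rule are illustrative assumptions, not the authors' exact operators.

      import math, random

      class QBitConnectivity:
          """Each gene holds an angle theta; P(link exists) = sin(theta)^2."""

          def __init__(self, n_connections):
              self.theta = [math.pi / 4] * n_connections     # start at P = 0.5 for every link

          def observe(self):
              # Collapse probabilities into a concrete connectivity mask (0/1 per link).
              return [1 if random.random() < math.sin(t) ** 2 else 0 for t in self.theta]

          def update_towards(self, best_mask, delta=0.05):
              # Rotate each q-bit slightly towards the best observed topology so far.
              for i, bit in enumerate(best_mask):
                  target = math.pi / 2 if bit else 0.0
                  self.theta[i] += delta if target > self.theta[i] else -delta
                  self.theta[i] = min(max(self.theta[i], 0.0), math.pi / 2)

      if __name__ == "__main__":
          chrom = QBitConnectivity(n_connections=8)
          # Pretend the fittest sampled topology kept only the first half of the links.
          best = [1, 1, 1, 1, 0, 0, 0, 0]
          for _ in range(20):
              chrom.update_towards(best)
          print("link probabilities:", [round(math.sin(t) ** 2, 2) for t in chrom.theta])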

  20. Optimization of Optical Systems Using Genetic Algorithms: a Comparison Among Different Implementations of The Algorithm

    NASA Astrophysics Data System (ADS)

    López-Medina, Mario E.; Vázquez-Montiel, Sergio; Herrera-Vázquez, Joel

    2008-04-01

    Genetic Algorithms (GAs) are a global optimization method that we use in the optimization stage of optical system design. In the case of optical design and optimization, the efficiency and convergence speed of GAs are related to the merit function, the crossover operator, and the mutation operator. In this study we present a comparison between several genetic algorithm implementations using different optical systems, such as an achromatic cemented doublet, an air-spaced doublet and telescopes. We carry out the comparison varying the type of design parameters and the number of parameters to be optimized. We also implement the GAs using discrete parameters with binary chains and continuous parameters using real numbers in the chromosome, analyzing the differences in the time taken to find the solution and the precision of the results between discrete and continuous parameters. Additionally, we use different merit functions to optimize the same optical system. We present the obtained results in tables, graphics and a detailed example, and from the comparison we conclude which is the best way to implement GAs for optical system design and optimization. The programs developed for this work were written in the C programming language, with OSLO used for the simulation of the optical systems.

  1. Optimal robust motion controller design using multiobjective genetic algorithm.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm-differential evolution.

  2. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, we compare it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
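
    The sketch below illustrates two ingredients the abstract relies on: the phase-space reconstruction with delay τ and embedding dimension m, and the three error measures used for comparison (NMSE, RMSE, MAPE). The LS-SVM regression and the membrane-computing search over (τ, m, γ, σ) are not reproduced; the sine series is a stand-in for band-occupancy data.

      import numpy as np

      def delay_embed(x, m, tau):
          """Reconstruct the phase space: rows are [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
          n = len(x) - (m - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

      def nmse(y, yhat):
          return np.mean((y - yhat) ** 2) / np.var(y)

      def rmse(y, yhat):
          return np.sqrt(np.mean((y - yhat) ** 2))

      def mape(y, yhat):
          return np.mean(np.abs((y - yhat) / y)) * 100.0

      if __name__ == "__main__":
          t = np.linspace(0, 20 * np.pi, 2000)
          series = np.sin(t) + 0.05 * np.random.randn(t.size)    # stand-in for occupancy data
          X = delay_embed(series, m=4, tau=10)
          print("embedded shape:", X.shape)
          y = series[:100] + 1.5
          yhat = y + 0.02 * np.random.randn(100)
          print(f"NMSE={nmse(y, yhat):.4f}  RMSE={rmse(y, yhat):.4f}  MAPE={mape(y, yhat):.2f}%")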

  3. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, for both arbitrary templates and template-free implants (such as robotic templates). Methods: Eight clinical cases, previously treated in our clinic, were chosen randomly from a bank of patients to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen from different dosimetry criteria and will automatically provide the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, it was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
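
    A minimal 2D Centroidal Voronoi Tessellation sketch in the spirit of the adaptation described above is shown below: catheter positions are moved to the centroids of their Voronoi regions, with the regions estimated by Monte Carlo sampling of points inside the target contour. The circular contour and all parameters are placeholders, and the IPSA dose-optimization step is not included.

      import random

      def inside_target(p, radius=1.0):
          # Placeholder planning-target contour: a unit disc centred at the origin.
          return p[0] ** 2 + p[1] ** 2 <= radius ** 2

      def cvt_catheters(n_catheters=12, n_samples=20000, iterations=30):
          # Start from random generator (catheter) positions inside the contour.
          gens = []
          while len(gens) < n_catheters:
              p = (random.uniform(-1, 1), random.uniform(-1, 1))
              if inside_target(p):
                  gens.append(p)
          for _ in range(iterations):
              sums = [[0.0, 0.0, 0] for _ in gens]          # x-sum, y-sum, count per region
              for _ in range(n_samples):
                  p = (random.uniform(-1, 1), random.uniform(-1, 1))
                  if not inside_target(p):
                      continue
                  # Assign the sample to its nearest generator (its Voronoi cell).
                  k = min(range(len(gens)),
                          key=lambda i: (p[0] - gens[i][0]) ** 2 + (p[1] - gens[i][1]) ** 2)
                  sums[k][0] += p[0]; sums[k][1] += p[1]; sums[k][2] += 1
              # Lloyd step: move every generator to the centroid of its cell.
              gens = [(s[0] / s[2], s[1] / s[2]) if s[2] else g for s, g in zip(sums, gens)]
          return gens

      if __name__ == "__main__":
          for x, y in cvt_catheters():
              print(f"catheter at ({x:+.3f}, {y:+.3f})")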

  4. An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel

    NASA Astrophysics Data System (ADS)

    Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.

    2015-01-01

    This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. The four machining parameters optimized using the APSO algorithm are pulse on-time, pulse off-time, gap voltage, and wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationships between control factors and output responses are established using linear regression models in Minitab software. Finally, the APSO algorithm, a nature-inspired metaheuristic technique, is used to optimize the WEDM machining parameters for higher material removal rate and lower kerf width with surface roughness as a constraint. Confirmation experiments carried out at the optimum conditions show that the proposed algorithm is effective in finding optimal input machining parameters that can fulfill the wide requirements of a process engineer working in the WEDM industry.
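
    The sketch below shows the accelerated PSO update in compact form: particles are drawn towards the global best with a shrinking random term instead of tracking personal bests. The objective function is a placeholder for the regression models relating the WEDM parameters to MRR, surface roughness and kerf width; all hyper-parameters are illustrative.

      import random

      def objective(x):
          # Placeholder for the regression model of the machining responses
          # (e.g., negative MRR plus penalties on roughness and kerf width).
          return sum((xi - 0.7) ** 2 for xi in x)

      def apso(dim=4, n_particles=25, iterations=200, alpha0=0.5, beta=0.5, decay=0.97):
          swarm = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
          gbest = min(swarm, key=objective)
          alpha = alpha0
          for _ in range(iterations):
              for i, x in enumerate(swarm):
                  # APSO update: pull towards the global best plus a shrinking random kick.
                  swarm[i] = [(1 - beta) * xi + beta * gi + alpha * random.gauss(0, 1)
                              for xi, gi in zip(x, gbest)]
              cand = min(swarm, key=objective)
              if objective(cand) < objective(gbest):
                  gbest = list(cand)
              alpha *= decay                      # reduce randomness as the search converges
          return gbest, objective(gbest)

      if __name__ == "__main__":
          best, val = apso()
          print("best parameters:", [round(b, 3) for b in best], "objective:", round(val, 5))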

  5. Resistive Network Optimal Power Flow: Uniqueness and Algorithms

    SciTech Connect

    Tan, CW; Cai, DWH; Lou, X

    2015-01-01

    The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincare-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.

  6. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of the unique feature of critical paths reserved in the process of evolving adaptive networks of the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted as PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP.

  7. GenAnneal: Genetically modified Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Lagaris, Isaac E.

    2006-05-01

    A modification of the standard Simulated Annealing (SA) algorithm is presented for finding the global minimum of a continuous multidimensional, multimodal function. We report results of computational experiments with a set of test functions and we compare to methods of similar structure. The accompanying software accepts objective functions coded both in Fortran 77 and C++. Program summary: Title of program: GenAnneal. Catalogue identifier: ADXI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXI_v1_0. Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: the tool is designed to be portable in all systems running the GNU C++ compiler. Installation: University of Ioannina, Greece, on Linux based machines. Programming language used: GNU-C++, GNU-C, GNU Fortran 77. Memory required to execute with typical data: 200 KB. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: No. No. of bytes in distributed program, including test data, etc.: 84 885. No. of lines in distributed program, including test data, etc.: 14 896. Distribution format: tar.gz. Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances that a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a non-linear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero). Typical running time: depending on the objective function. Method of solution: We modified the process of step selection that the traditional Simulated

  8. A heterogeneous algorithm for PDT dose optimization for prostate

    PubMed Central

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S.B.; Hahn, Stephen M.

    2015-01-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT. PMID:25914793
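
    A minimal sketch of a Cimmino-type simultaneous-projection iteration for box-bounded dose constraints with nonnegative source strengths is given below. The random matrix stands in for the light-fluence (or PDT dose) kernel, and the weights, relaxation parameter and bounds are illustrative assumptions rather than the clinical implementation.

      import numpy as np

      def cimmino(A, lower, upper, weights=None, relax=1.5, iterations=500):
          """Simultaneous-projection (Cimmino) iteration for lower <= A x <= upper, x >= 0."""
          m, n = A.shape
          w = np.full(m, 1.0 / m) if weights is None else weights / weights.sum()
          x = np.zeros(n)
          row_norm2 = np.sum(A * A, axis=1)
          for _ in range(iterations):
              r = A @ x
              # Signed violation of each dose constraint (zero when already satisfied).
              viol = np.where(r > upper, upper - r, np.where(r < lower, lower - r, 0.0))
              # Average the projections onto the violated half-spaces, take a relaxed step.
              x += relax * (A.T @ (w * viol / row_norm2))
              x = np.maximum(x, 0.0)              # source strengths cannot be negative
          return x

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          A = rng.uniform(0.1, 1.0, size=(40, 6))   # stand-in kernel (dose points x sources)
          x_true = rng.uniform(0.5, 2.0, size=6)
          dose = A @ x_true
          x = cimmino(A, lower=0.9 * dose, upper=1.1 * dose)
          print("max relative dose error:", float(np.max(np.abs(A @ x - dose) / dose)))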

  9. A heterogeneous algorithm for PDT dose optimization for prostate

    NASA Astrophysics Data System (ADS)

    Altschuler, Martin D.; Zhu, Timothy C.; Hu, Yida; Finlay, Jarod C.; Dimofte, Andreea; Wang, Ken; Li, Jun; Cengel, Keith; Malkowicz, S. B.; Hahn, Stephen M.

    2009-02-01

    The object of this study is to develop optimization procedures that account for both the optical heterogeneity as well as photosensitizer (PS) drug distribution of the patient prostate and thereby enable delivery of uniform photodynamic dose to that gland. We use the heterogeneous optical properties measured for a patient prostate to calculate a light fluence kernel (table). PS distribution is then multiplied with the light fluence kernel to form the PDT dose kernel. The Cimmino feasibility algorithm, which is fast, linear, and always converges reliably, is applied as a search tool to choose the weights of the light sources to optimize PDT dose. Maximum and minimum PDT dose limits chosen for sample points in the prostate constrain the solution for the source strengths of the cylindrical diffuser fibers (CDF). We tested the Cimmino optimization procedures using the light fluence kernel generated for heterogeneous optical properties, and compared the optimized treatment plans with those obtained using homogeneous optical properties. To study how different photosensitizer distributions in the prostate affect optimization, comparisons of light fluence rate and PDT dose distributions were made with three distributions of photosensitizer: uniform, linear spatial distribution, and the measured PS distribution. The study shows that optimization of individual light source positions and intensities are feasible for the heterogeneous prostate during PDT.

  10. Coil optimization for electromagnetic levitation using a genetic like algorithm

    NASA Astrophysics Data System (ADS)

    Royer, Z. L.; Tackes, C.; LeSar, R.; Napolitano, R. E.

    2013-06-01

    The technique of electromagnetic levitation (EML) provides a means for thermally processing an electrically conductive specimen in a containerless manner. For the investigation of metallic liquids and related melting or freezing transformations, the elimination of substrate-induced nucleation affords access to much higher undercooling than otherwise attainable. With heating and levitation both arising from the currents induced by the coil, the performance of any EML system depends on controlling the balance between lifting forces and heating effects, as influenced by the levitation coil geometry. In this work, a genetic algorithm is developed and utilized to optimize the design of electromagnetic levitation coils. The optimization is targeted specifically to reduce the steady-state temperature of the stably levitated metallic specimen. Reductions in temperature of nominally 70 K relative to that obtained with the initial design are achieved through coil optimization, and the results are compared with experiments for aluminum. Additionally, the optimization method is shown to be robust, generating a small range of converged results from a variety of initial starting conditions. While our optimization criterion was set to achieve the lowest possible sample temperature, the method is general and can be used to optimize for other criteria as well.

  11. Exploring photometric redshifts as an optimization problem: an ensemble MCMC and simulated annealing-driven template-fitting approach

    NASA Astrophysics Data System (ADS)

    Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.

    2016-10-01

    Using a 4D grid of ~2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384 662 objects, we show our approach samples ~40 times more efficiently compared to a `brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.

  12. [Research on and application of hybrid optimization algorithm in Brillouin scattering spectrum parameter extraction problem].

    PubMed

    Zhang, Yan-jun; Zhang, Shu-guo; Fu, Guang-wei; Li, Da; Liu, Yin; Bi, Wei-hong

    2012-04-01

    This paper presents a novel algorithm which blends the particle swarm optimization (PSO) algorithm and the Levenberg-Marquardt (LM) algorithm according to a switching probability. The algorithm can be used to fit the pseudo-Voigt profile of the Brillouin scattering spectrum, improving the goodness of fit and the precision of frequency-shift extraction. The algorithm uses PSO as the main frame. First, PSO performs a global search; after a fixed number of iterations, a random number rand(0, 1) is generated. If rand(0, 1) is less than or equal to a predetermined probability P, the optimal solution obtained by PSO is used as the initial value of the LM algorithm. LM then performs a local, in-depth search, and its solution replaces the previous PSO optimum. The PSO algorithm is then used again for the global search. If rand(0, 1) is greater than P, the PSO search simply continues until the next check. The two algorithms are used alternately to obtain an ideal global optimal solution. Simulation analysis and experimental results show that the new algorithm overcomes the shortcomings of a single algorithm and improves the goodness of fit and the precision of frequency-shift extraction in the Brillouin scattering spectrum, fully demonstrating that the new method is practical and feasible.
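
    A compact sketch of this PSO/LM switching scheme is given below, using a generic pseudo-Voigt peak model: with probability P the current PSO global best is refined by a Levenberg-Marquardt least-squares step and, if better, replaces it. SciPy's least_squares with method='lm' stands in for the LM step, and the swarm settings, peak model and synthetic data are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def pseudo_voigt(x, amp, x0, width, eta):
          lor = width ** 2 / ((x - x0) ** 2 + width ** 2)
          gau = np.exp(-np.log(2) * (x - x0) ** 2 / width ** 2)
          return amp * (eta * lor + (1 - eta) * gau)

      def residuals(p, x, y):
          return pseudo_voigt(x, *p) - y

      def hybrid_pso_lm(x, y, n_particles=20, iterations=60, switch_p=0.3, seed=1):
          rng = np.random.default_rng(seed)
          lo = np.array([0.1, x.min(), 1e-3, 0.0])
          hi = np.array([2.0 * y.max(), x.max(), x.max() - x.min(), 1.0])
          pos = rng.uniform(lo, hi, size=(n_particles, 4))
          vel = np.zeros_like(pos)
          cost = lambda p: float(np.sum(residuals(p, x, y) ** 2))
          pbest = pos.copy(); pbest_cost = np.array([cost(p) for p in pos])
          gbest = pbest[np.argmin(pbest_cost)].copy()
          for _ in range(iterations):
              r1, r2 = rng.random((n_particles, 4)), rng.random((n_particles, 4))
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
              c = np.array([cost(p) for p in pos])
              improved = c < pbest_cost
              pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
              gbest = pbest[np.argmin(pbest_cost)].copy()
              # With probability P, hand the current global best to LM for local refinement.
              if rng.random() < switch_p:
                  sol = least_squares(residuals, gbest, args=(x, y), method="lm")
                  if cost(sol.x) < cost(gbest):
                      gbest = sol.x
          return gbest

      if __name__ == "__main__":
          x = np.linspace(-5, 5, 400)
          true = (1.0, 0.4, 0.8, 0.35)
          y = pseudo_voigt(x, *true) + 0.01 * np.random.default_rng(2).normal(size=x.size)
          print("fitted (amp, centre, width, eta):", np.round(hybrid_pso_lm(x, y), 3))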

  13. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  14. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that this gives a large increase in the efficiency of the DMSO algorithm.

  15. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
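
    A toy sketch of the local-selection idea is given below: agents earn energy from finite shared resource bins, reproduce when their accumulated energy passes a fixed threshold, and are removed when it drops to zero. The resource model, thresholds and mutation-to-a-random-bin rule are illustrative assumptions and do not reproduce the ELSA experiments.

      import random

      def local_selection(n_agents=30, n_bins=10, steps=200, threshold=2.0, cost=0.1):
          """Toy local-selection loop over finite shared resource bins."""
          agents = [{"bin": random.randrange(n_bins), "energy": 1.0} for _ in range(n_agents)]
          for _ in range(steps):
              # Agents sharing a bin split that bin's replenished resource,
              # which maintains diversity much like fitness sharing.
              occupancy = [sum(1 for a in agents if a["bin"] == b) for b in range(n_bins)]
              next_gen = []
              for a in agents:
                  share = 1.0 / occupancy[a["bin"]]
                  a["energy"] += share - cost              # earn a share, pay a living cost
                  if a["energy"] >= threshold:             # compare to a fixed threshold only
                      a["energy"] /= 2.0
                      child = {"bin": random.randrange(n_bins), "energy": a["energy"]}
                      next_gen.append(child)               # offspring mutates to a random bin
                  if a["energy"] > 0:
                      next_gen.append(a)
              agents = next_gen
          return [sum(1 for a in agents if a["bin"] == b) for b in range(n_bins)]

      if __name__ == "__main__":
          print("agents per resource bin:", local_selection())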

  16. Optimal design of link systems using successive zooming genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Doo; Sohn, Chang-hyun; Kwon, Soon-Bum; Lim, Jae-gyoo

    2009-07-01

    Link systems have been around for a long time and are still used to control motion in diverse applications such as automobiles, robots and industrial machinery. This study presents a procedure involving the use of a genetic algorithm for the optimal design of single four-bar link systems and of a double four-bar link system used in a diesel engine. We adopted the Successive Zooming Genetic Algorithm (SZGA), which has one of the most rapid convergence rates among global search algorithms. The results are verified by experiment and by the RecurDyn dynamic motion analysis package. During the optimal design of the single four-bar link systems, we found, in the case of identical input/output (IO) angles, that the initial and final configurations show a certain symmetry. For the double link system, we introduced weighting factors for the multi-objective functions, which minimize the difference between output angles, providing balanced engine performance, as well as the difference between the final output angle and its desired magnitude. We adopted a graphical method to select a proper ratio between the weighting factors.

  17. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.

  18. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm or evolutionary approach is preferable, at least from the quality (accuracy) of the results point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time; there are no serious differences between them in terms of computational complexity. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We present here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function which is known in tabular form instead of analytical form, as is often the case in an experimental environment, we attempt to provide insight into the approaches, focusing our attention on accuracy. Such insight will help one decide which method, out of the several available, should be employed to obtain the best (least-error) output.

  19. Library design using genetic algorithms for catalyst discovery and optimization

    NASA Astrophysics Data System (ADS)

    Clerc, Frederic; Lengliz, Mourad; Farrusseng, David; Mirodatos, Claude; Pereira, Sílvia R. M.; Rakotomalala, Ricco

    2005-06-01

    This study reports a detailed investigation of catalyst library design by genetic algorithm (GA). A methodology for assessing GA configurations is described. Operators which promote optimization speed while being robust to noise and outliers are identified through statistical studies. The genetic algorithms were implemented in a GA platform software called OptiCat, which enables the construction of custom-made workflows using a toolbox of operators. Two separate studies were carried out (i) on a virtual benchmark and (ii) on a real response surface derived from HT screening. Additionally, we report a methodology to model a complex response surface by binning the search space into small zones that are then independently modeled by linear regression. In contrast to artificial neural networks, this approach allows one to obtain an explicit model in an analogical form that can be further used in Excel or entered in OptiCat to perform simulations. Finally, the implementation of a hybrid algorithm combining a GA with a knowledge-based extraction engine is described. While speeding up the optimization process by means of virtual prescreening, the hybrid GA enables one to open the "black box" by providing knowledge as a set of association rules.

  20. Effective multi-objective optimization with the coral reefs optimization algorithm

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Pastor-Sánchez, A.; Portilla-Figueras, J. A.; Prieto, L.

    2016-06-01

    In this article a new algorithm for multi-objective optimization is presented, the Multi-Objective Coral Reefs Optimization (MO-CRO) algorithm. The algorithm is based on the simulation of processes in coral reefs, such as corals' reproduction and fight for space in the reef. The adaptation to multi-objective problems is a process based on domination or non-domination during the process of fight for space in the reef. The final MO-CRO is an easily-implemented and fast algorithm, simple and robust, since it is able to keep diversity in the population of corals (solutions) in a natural way. The experimental evaluation of this new approach for multi-objective optimization problems is carried out on different multi-objective benchmark problems, where the MO-CRO has shown excellent performance in cases with limited computational resources, and in a real-world problem of wind speed prediction, where the MO-CRO algorithm is used to find the best set of features to predict the wind speed, taking into account two objective functions related to the performance of the prediction and the computation time of the regressor.
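
    The domination test that drives the multi-objective adaptation described above can be stated in a few lines; the sketch below extracts the non-dominated members of a population under minimization of two objectives. The example objective pairs (prediction error, computation time) are invented for illustration, and the reef dynamics of MO-CRO are not reproduced.

      def dominates(a, b):
          """True if objective vector a Pareto-dominates b (minimization of every objective)."""
          return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

      def non_dominated(population):
          """Return the corals (solutions) that no other member of the reef dominates."""
          return [p for p in population
                  if not any(dominates(q, p) for q in population if q is not p)]

      if __name__ == "__main__":
          # Each tuple is (prediction error, computation time) for a candidate feature set.
          reef = [(0.20, 5.0), (0.25, 3.0), (0.18, 9.0), (0.30, 2.5), (0.22, 8.0), (0.26, 3.5)]
          for sol in non_dominated(reef):
              print("non-dominated:", sol)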

  1. Low-Thrust Trajectory Optimization with Simplified SQP Algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, Nathan L.; Scheeres, Daniel J.

    2017-01-01

    The problem of low-thrust trajectory optimization in highly perturbed dynamics is a stressing case for many optimization tools. Highly nonlinear dynamics and continuous thrust are each, separately, non-trivial problems in the field of optimal control, and when combined, the problem is even more difficult. This paper describes a fast, robust method to design a trajectory in the CRTBP (circular restricted three body problem), beginning with no or very little knowledge of the system. The approach is inspired by the SQP (sequential quadratic programming) algorithm, in which a general nonlinear programming problem is solved via a sequence of quadratic problems. A few key simplifications make the algorithm presented fast and robust to initial guess: a quadratic cost function, neglecting the line search step when the solution is known to be far away, judicious use of end-point constraints, and mesh refinement on multiple shooting with fixed-step integration. In comparison to the traditional approach of plugging the problem into a “black-box” NLP solver, the methods shown converge even when given no knowledge of the solution at all. It was found that the only piece of information that the user needs to provide is a rough guess for the time of flight, as the transfer time guess will dictate which set of local solutions the algorithm could converge on. This robustness to initial guess is a compelling feature, as three-body orbit transfers are challenging to design with intuition alone. Of course, if a high-quality initial guess is available, the methods shown are still valid. We have shown that endpoints can be efficiently constrained to lie on 3-body repeating orbits, and that time of flight can be optimized as well. When optimizing the endpoints, we must make a trade between converging quickly on sub-optimal endpoints or converging more slowly on endpoints that are arbitrarily close to optimal. It is easy for the mission design engineer to adjust this trade based on

  2. A particle swarm optimization algorithm with random learning mechanism and Levy flight for optimization of atomic clusters

    NASA Astrophysics Data System (ADS)

    Yan, Bailu; Zhao, Zheng; Zhou, Yingcheng; Yuan, Wenyan; Li, Jian; Wu, Jun; Cheng, Daojian

    2017-10-01

    Swarm intelligence optimization algorithms are mainstream algorithms for solving complex optimization problems. Among these algorithms, the particle swarm optimization (PSO) algorithm has the advantages of fast computation speed and few parameters. However, PSO is prone to premature convergence. To address this problem, we develop a new PSO algorithm (RPSOLF) by combining a random learning mechanism with Levy flight. The RPSOLF algorithm increases the diversity of the population by learning from random particles and by random walks drawn from a Levy flight. On the one hand, we carry out a large number of numerical experiments on benchmark test functions and compare these results with the PSO algorithm with Levy flight (PSOLF) and other PSO variants from previous reports; the results show that the RPSOLF algorithm finds the optimal solution faster and more efficiently. On the other hand, the RPSOLF algorithm can also be applied to optimize Lennard-Jones clusters, and the results indicate that the algorithm obtains the optimal structures (2-60 atoms) with extraordinarily high efficiency. In summary, the RPSOLF algorithm proposed in this paper proves to be an extremely effective tool for global optimization.
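
    A minimal sketch of a PSO variant combining a random learning exemplar with occasional Levy-flight jumps (via Mantegna's algorithm) is given below. The sphere objective, the interpretation of "random learning" as learning from a random particle's personal best, and all coefficients are illustrative assumptions, not the authors' exact RPSOLF formulation.

      import math, random

      def levy_step(beta=1.5):
          # Mantegna's algorithm for a Levy-distributed step length.
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                   (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u, v = random.gauss(0, sigma), random.gauss(0, 1)
          return u / abs(v) ** (1 / beta)

      def sphere(x):
          # Placeholder objective (the paper targets benchmark functions and LJ clusters).
          return sum(xi * xi for xi in x)

      def rpsolf_like(dim=10, n_particles=30, iterations=300, w=0.7, c1=1.5, c2=1.5, levy_rate=0.2):
          pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
          vel = [[0.0] * dim for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          gbest = min(pbest, key=sphere)
          for _ in range(iterations):
              for i in range(n_particles):
                  mate = random.randrange(n_particles)        # random learning exemplar
                  for d in range(dim):
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * random.random() * (pbest[mate][d] - pos[i][d])
                                   + c2 * random.random() * (gbest[d] - pos[i][d]))
                      pos[i][d] += vel[i][d]
                  if random.random() < levy_rate:             # occasional Levy-flight jump
                      pos[i] = [x + 0.1 * levy_step() for x in pos[i]]
                  if sphere(pos[i]) < sphere(pbest[i]):
                      pbest[i] = pos[i][:]
              gbest = min(pbest, key=sphere)
          return gbest, sphere(gbest)

      if __name__ == "__main__":
          best, val = rpsolf_like()
          print("best objective:", round(val, 6))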

  3. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.

  4. Genetic algorithm optimized triply compensated pulses in NMR spectroscopy.

    PubMed

    Manu, V S; Veglia, Gianluigi

    2015-11-01

    Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (of both external and RF), errors in pulse calibration, and offset effects due to finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), which consists in a global optimization method inspired by Nature's evolutionary processes. The newly designed π and π/2 pulses belong to the 'type A' (or general rotors) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as refocusing pulses in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U-(13)C, (15)N NAVL peptide as well as U-(13)C, (15)N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences.

  5. Genetic Algorithm Optimized Triply Compensated Pulses in NMR Spectroscopy

    PubMed Central

    Manu, V. S.; Veglia, Gianluigi

    2015-01-01

    Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (of both external and RF), errors in pulse calibration, and offset effects due to finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), which consists in a global optimization method inspired by Nature’s evolutionary processes. The newly designed π and π/2 pulses belong to the ‘Type A’ (or general rotors) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as refocusing pulses in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U – 13C, 15N NAVL peptide as well as U – 13C, 15N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences. PMID:26473327

  6. Cardiopulmonary resuscitation algorithms, defibrillation and optimized ventilation during resuscitation.

    PubMed

    Samson, Ricardo A; Berg, Marc D; Berg, Robert A

    2006-04-01

    In 2005, the American Heart Association released its Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. This article reviews the treatment algorithms for Advanced Cardiac Life Support, citing the evidence on which the Guidelines are based. Additional focus is placed on defibrillation and optimized ventilation. Major changes include a reorganization of the algorithms for cardiac arrest. Emphasis on effective cardiopulmonary resuscitation is placed as the key to improved survival. Single defibrillation shocks are recommended (compared with three 'stacked' shocks) with immediate provision of cardiopulmonary resuscitation and minimal interruptions in chest compressions. The recommended chest compression : ventilation rate for single rescuers has been changed to 30:2. Despite advances in resuscitation science, basic life support remains the key to improving survival outcomes. Ultimately, as new knowledge is gained, we believe resuscitation therapies will be more individualized, on the basis of pathophysiology and etiology of the initial cardiac arrest.

  7. An optimal algorithm for computing all subtree repeats in trees

    PubMed Central

    Flouri, T.; Kobert, K.; Pissis, S. P.; Stamatakis, A.

    2014-01-01

    Given a labelled tree T, our goal is to group repeating subtrees of T into equivalence classes with respect to their topologies and the node labels. We present an explicit, simple and time-optimal algorithm for solving this problem for unrooted unordered labelled trees and show that the running time of our method is linear with respect to the size of T. By unordered, we mean that the order of the adjacent nodes (children/neighbours) of any node of T is irrelevant. An unrooted tree T does not have a node that is designated as root and can also be referred to as an undirected tree. We show how the presented algorithm can easily be modified to operate on trees that do not satisfy some or any of the aforementioned assumptions on the tree structure; for instance, how it can be applied to rooted, ordered or unlabelled trees. PMID:24751873
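
    The sketch below illustrates the grouping idea on the simpler rooted case: subtrees are assigned canonical strings bottom-up with children's forms sorted so that order is irrelevant, and identical strings define an equivalence class. String concatenation makes this illustration quadratic in the worst case, unlike the time-optimal algorithm of the paper, and the unrooted case is not handled here.

      from collections import defaultdict

      def subtree_classes(tree, labels, root=0):
          """Group subtrees of a rooted, unordered, labelled tree by canonical form.

          tree:   adjacency dict {node: [children]}, labels: dict {node: label}.
          Returns {canonical form: [roots of identical subtrees]} for repeated forms.
          """
          classes = defaultdict(list)

          def canon(node):
              # Sort the children's canonical forms so that child order is irrelevant.
              kids = sorted(canon(c) for c in tree.get(node, []))
              form = "(" + labels[node] + "".join(kids) + ")"
              classes[form].append(node)
              return form

          canon(root)
          return {f: nodes for f, nodes in classes.items() if len(nodes) > 1}

      if __name__ == "__main__":
          # Node 0 labelled 'a' has children 1 and 2 (both 'b'), whose children are 'c'/'d'.
          tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
          labels = {0: "a", 1: "b", 2: "b", 3: "c", 4: "d", 5: "d", 6: "c"}
          for form, nodes in subtree_classes(tree, labels).items():
              print(form, "repeats at nodes", nodes)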

  8. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.

  9. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design- optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  10. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    SciTech Connect

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages may make the particle swarm algorithm more suitable for many accelerator optimization applications.

  11. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    Athans, M. (Editor); Willsky, A. S. (Editor)

    1982-01-01

    The analysis and design of complex multivariable reliable control systems are considered. High performance and fault tolerant aircraft systems are the objectives. A preliminary feasibility study of the design of a lateral control system for a VTOL aircraft that is to land on a DD963 class destroyer under high sea state conditions is provided. Progress in the following areas is summarized: (1) VTOL control system design studies; (2) robust multivariable control system synthesis; (3) adaptive control systems; (4) failure detection algorithms; and (5) fault tolerant optimal control theory.

  12. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2014-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and later on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids vs the existing launch propulsion systems. This paper will review the known state of the art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.

  13. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2015-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.

  14. Population Induced Instabilities in Genetic Algorithms for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Vlachos, D. S.; Parousis-Orthodoxou, K. J.

    2013-02-01

    Evolutionary computation techniques, like genetic algorithms, have received a lot of attention as optimization techniques, but although they show very promising potential, they have not produced a significant breakthrough in the area of systematic treatment of constraints. There are two main ways of handling constraints: the first is to produce an infeasibility measure and add it to the general cost function (the well-known penalty methods), and the other is to modify the mutation and crossover operations so that they only produce feasible members. Both methods have their drawbacks and are strongly tied to the problem to which they are applied. In this work, we propose a different treatment of the constraints: we induce instabilities in the evolving population so that infeasible solutions cannot survive as they are. Preliminary results are presented on a set of constrained optimization problems that are well known from the literature.
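
    For reference, the sketch below shows the standard penalty approach that the abstract contrasts with (an infeasibility measure added to the cost), applied by a small real-coded genetic search to a toy constrained problem; it does not implement the proposed instability mechanism. The objective, constraint and penalty weight are illustrative assumptions.

      import random

      def objective(x):
          return (x[0] - 2) ** 2 + (x[1] - 1) ** 2          # toy cost to minimize

      def infeasibility(x):
          # Single constraint g(x) = x0 + x1 - 2 <= 0; positive value measures violation.
          return max(0.0, x[0] + x[1] - 2.0)

      def penalised_fitness(x, penalty=100.0):
          return objective(x) + penalty * infeasibility(x) ** 2

      def genetic_search(pop_size=40, generations=150, sigma=0.2):
          pop = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=penalised_fitness)
              parents = pop[: pop_size // 2]                # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  child = [(ai + bi) / 2 + random.gauss(0, sigma) for ai, bi in zip(a, b)]
                  children.append(child)                    # arithmetic crossover + mutation
              pop = parents + children
          return min(pop, key=penalised_fitness)

      if __name__ == "__main__":
          best = genetic_search()
          print("best:", [round(v, 3) for v in best],
                "objective:", round(objective(best), 4),
                "violation:", round(infeasibility(best), 4))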

  15. Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel

    2011-09-01

    The problem of tuning the parameters of nonlinear dynamical systems, such that the attained results are considered good ones, is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). The CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPGs which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.

  16. Parallel optimization algorithm for drone inspection in the building industry

    NASA Astrophysics Data System (ADS)

    Walczyński, Maciej; BoŻejko, Wojciech; Skorupka, Dariusz

    2017-07-01

    In this paper we present an approach to the Vehicle Routing Problem with Drones (VRPD) for the case of building inspection from the air. In an autonomous inspection process there is a need to determine the optimal route for the inspection drone. This is an especially important issue because of the very limited flight time of modern multicopters. The method of determining solutions for the Traveling Salesman Problem (TSP) described in this paper is based on a Parallel Evolutionary Algorithm (ParEA) with cooperative and independent approaches to communication between threads. This method, first described by Bożejko and Wodecki [1], is based on the observation that if some number of elements occupy certain positions in a number of permutations which are local minima, then those elements will be in the same positions in the optimal solution of the TSP. Numerical experiments were carried out on the BEM computational cluster using the MPI library.

  17. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule the power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem to search for feasible, preliminary solutions that construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663

  18. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    PubMed

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule the power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem to search for feasible, preliminary solutions that construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision.

  19. An optimized algorithm for detecting and annotating regional differential methylation

    PubMed Central

    2013-01-01

    Background: DNA methylation profiling reveals important differentially methylated regions (DMRs) of the genome that are altered during development or that are perturbed by disease. To date, few programs exist for regional analysis of enriched or whole-genome bisulfite conversion sequencing data, even though such data are increasingly common. Here, we describe an open-source, optimized method for determining empirically based DMRs (eDMR) from high-throughput sequence data that is applicable to enriched whole-genome methylation profiling datasets, as well as other globally enriched epigenetic modification data. Results: Here we show that our bimodal distribution model and weighted cost function for optimized regional methylation analysis provide accurate boundaries of regions harboring significant epigenetic modifications. Our algorithm takes the spatial distribution of CpGs into account for the enrichment assay, allowing for optimization of the definition of empirical regions for differential methylation. Combined with the dependent adjustment for regional p-value combination and DMR annotation, we provide a method that may be applied to a variety of datasets for rapid DMR analysis. Our method classifies both the directionality of DMRs and their genome-wide distribution, and we have observed that it shows clinical relevance through correct stratification of two Acute Myeloid Leukemia (AML) tumor sub-types. Conclusions: Our weighted optimization algorithm eDMR for calling DMRs extends an established DMR R pipeline (methylKit) and provides a needed resource in epigenomics. Our method enables an accurate and scalable way of finding DMRs in high-throughput methylation sequencing experiments. eDMR is available for download at http://code.google.com/p/edmr/. PMID:23735126

  20. An optimized algorithm for detecting and annotating regional differential methylation.

    PubMed

    Li, Sheng; Garrett-Bakelman, Francine E; Akalin, Altuna; Zumbo, Paul; Levine, Ross; To, Bik L; Lewis, Ian D; Brown, Anna L; D'Andrea, Richard J; Melnick, Ari; Mason, Christopher E

    2013-01-01

    DNA methylation profiling reveals important differentially methylated regions (DMRs) of the genome that are altered during development or that are perturbed by disease. To date, few programs exist for regional analysis of enriched or whole-genome bisulfite conversion sequencing data, even though such data are increasingly common. Here, we describe an open-source, optimized method for determining empirically based DMRs (eDMR) from high-throughput sequence data that is applicable to enriched whole-genome methylation profiling datasets, as well as other globally enriched epigenetic modification data. Here we show that our bimodal distribution model and weighted cost function for optimized regional methylation analysis provide accurate boundaries of regions harboring significant epigenetic modifications. Our algorithm takes the spatial distribution of CpGs into account for the enrichment assay, allowing for optimization of the definition of empirical regions for differential methylation. Combined with the dependent adjustment for regional p-value combination and DMR annotation, we provide a method that may be applied to a variety of datasets for rapid DMR analysis. Our method classifies both the directionality of DMRs and their genome-wide distribution, and we have observed that it shows clinical relevance through correct stratification of two Acute Myeloid Leukemia (AML) tumor sub-types. Our weighted optimization algorithm eDMR for calling DMRs extends an established DMR R pipeline (methylKit) and provides a needed resource in epigenomics. Our method enables an accurate and scalable way of finding DMRs in high-throughput methylation sequencing experiments. eDMR is available for download at http://code.google.com/p/edmr/.