Science.gov

Sample records for evolutionary model-based algorithm

  1. Multi-Dimensional Scaling and MODELLER-Based Evolutionary Algorithms for Protein Model Refinement

    PubMed Central

    Chen, Yan; Shang, Yi; Xu, Dong

    2015-01-01

    Protein structure prediction, i.e., computationally predicting the three-dimensional structure of a protein from its primary sequence, is one of the most important and challenging problems in bioinformatics. Model refinement is a key step in the prediction process, where improved structures are constructed based on a pool of initially generated models. Since the refinement category was added to the biennial Critical Assessment of Structure Prediction (CASP) in 2008, CASP results show that it is a challenge for existing model refinement methods to improve model quality consistently. This paper presents three evolutionary algorithms for protein model refinement, in which multidimensional scaling (MDS), the MODELLER software, and a hybrid of both are used as crossover operators, respectively. The MDS-based method takes a purely geometrical approach and generates a child model by combining the contact maps of multiple parents. The MODELLER-based method takes a statistical and energy minimization approach, and uses the remodeling module in the MODELLER program to generate new models from multiple parents. The hybrid method first generates models using the MDS-based method and then runs them through the MODELLER-based method, aiming to combine the strengths of both. Promising results have been obtained in experiments using CASP datasets. The MDS-based method improved the best of a pool of predicted models in terms of the global distance test score (GDT-TS) in 9 out of 16 test targets. PMID:25844403
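
    The MDS-based crossover lends itself to a compact illustration. The sketch below is a minimal reading of the idea, not the authors' code (function names are hypothetical): it averages the inter-residue distance matrices of the parents and recovers child coordinates by classical multidimensional scaling.

        import numpy as np

        def classical_mds(D, dim=3):
            # Recover point coordinates from a distance matrix (classical MDS).
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
            B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
            vals, vecs = np.linalg.eigh(B)
            idx = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
            scale = np.sqrt(np.clip(vals[idx], 0, None))
            return vecs[:, idx] * scale              # n x dim coordinates

        def mds_crossover(parents):
            # Combine parent C-alpha models (each an n x 3 array) into one child.
            dists = [np.linalg.norm(p[:, None] - p[None, :], axis=-1) for p in parents]
            return classical_mds(np.mean(dists, axis=0))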

  2. Multi-Dimensional Scaling and MODELLER-Based Evolutionary Algorithms for Protein Model Refinement.

    PubMed

    Chen, Yan; Shang, Yi; Xu, Dong

    2014-07-01

    Protein structure prediction, i.e., computationally predicting the three-dimensional structure of a protein from its primary sequence, is one of the most important and challenging problems in bioinformatics. Model refinement is a key step in the prediction process, where improved structures are constructed based on a pool of initially generated models. Since the refinement category was added to the biennial Critical Assessment of Structure Prediction (CASP) in 2008, CASP results show that it is a challenge for existing model refinement methods to improve model quality consistently. This paper presents three evolutionary algorithms for protein model refinement, in which multidimensional scaling (MDS), the MODELLER software, and a hybrid of both are used as crossover operators, respectively. The MDS-based method takes a purely geometrical approach and generates a child model by combining the contact maps of multiple parents. The MODELLER-based method takes a statistical and energy minimization approach, and uses the remodeling module in the MODELLER program to generate new models from multiple parents. The hybrid method first generates models using the MDS-based method and then runs them through the MODELLER-based method, aiming to combine the strengths of both. Promising results have been obtained in experiments using CASP datasets. The MDS-based method improved the best of a pool of predicted models in terms of the global distance test score (GDT-TS) in 9 out of 16 test targets.

  3. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
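
    The self-adaptation of the mutation step size can be sketched in a few lines. The expand/contract rule below (a 1/5th-success-rule flavor) is an illustrative assumption, not the exact rule analyzed in the paper:

        import numpy as np

        def epsa_step(f, x, step, grow=2.0, shrink=0.5):
            # Pattern-like mutation whose size adapts to the success of the move.
            trial = x + step * np.random.choice([-1.0, 1.0], size=x.shape)
            if f(trial) < f(x):
                return trial, step * grow    # success: accept and enlarge the step
            return x, step * shrink          # failure: keep x and contract the step

        # usage: minimize a quadratic from a random start
        f = lambda v: float(np.sum(v ** 2))
        x, step = np.random.randn(5), 1.0
        for _ in range(200):
            x, step = epsa_step(f, x, step)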

  4. The evolutionary forest algorithm.

    PubMed

    Leman, Scotland C; Uyenoyama, Marcy K; Lavine, Michael; Chen, Yuguo

    2007-08-01

    Gene genealogies offer a powerful context for inferences about the evolutionary process based on presently segregating DNA variation. In many cases, it is the distribution of population parameters, marginalized over the effectively infinite-dimensional tree space, that is of interest. Our evolutionary forest (EF) algorithm uses Monte Carlo methods to generate posterior distributions of population parameters. A novel feature is the updating of parameter values based on a probability measure defined on an ensemble of histories (a forest of genealogies), rather than a single tree. The EF algorithm generates samples from the correct marginal distribution of population parameters. Applied to actual data from closely related fruit fly species, it rapidly converged to posterior distributions that closely approximated the exact posteriors generated through massive computational effort. Applied to simulated data, it generated credible intervals that covered the actual parameter values in accordance with the nominal probabilities. A C++ implementation of this method is freely accessible at http://www.isds.duke.edu/~scl13

  5. A novel model-based evolutionary algorithm for multi-objective deformable image registration with content mismatch and large deformations: benchmarking efficiency and quality

    NASA Astrophysics Data System (ADS)

    Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.

    2017-02-01

    Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. To serve as an internal optimization engine, currently used multi-objective algorithms are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, which allows the impact of such updates on the objectives at hand to be exploited faster through partial evaluations. We performed experiments on three registration problems. In particular, an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position were considered. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
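
    The partial-evaluation idea behind the reported speed-up can be sketched on a separable objective (a toy stand-in for the registration objectives; all names here are illustrative): only the terms touched by an update are recomputed.

        import numpy as np

        def partial_update(x, idx, new_vals, terms):
            # Change a few variables and return the objective delta
            # without re-evaluating the full solution.
            delta = sum(terms[i](new_vals[j]) - terms[i](x[i])
                        for j, i in enumerate(idx))
            x[idx] = new_vals
            return delta

        # toy objective: f(x) = sum_i (x_i - i)^2, one term per variable
        terms = [lambda v, i=i: (v - i) ** 2 for i in range(10)]
        x = np.zeros(10)
        f_val = sum(t(xi) for t, xi in zip(terms, x))
        f_val += partial_update(x, np.array([2, 5]), np.array([2.0, 5.0]), terms)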

  6. Model Based Filtered Backprojection Algorithm: A Tutorial

    PubMed Central

    2014-01-01

    Purpose: People have been wondering for a long time whether a filtered backprojection (FBP) algorithm is able to incorporate measurement noise in image reconstruction. The purpose of this tutorial is to develop such an FBP algorithm that is able to minimize an objective function with an embedded noise model. Methods: An objective function is first set up to model measurement noise and to enforce some constraints so that the resultant image has some pre-specified properties. An iterative algorithm is used to minimize the objective function, and then the result of the iterative algorithm is converted into the Fourier domain, which in turn leads to an FBP algorithm. The model based FBP algorithm is almost the same as the conventional FBP algorithm, except for the filtering step. Results: The model based FBP algorithm has been applied to low-dose x-ray CT, nuclear medicine, and real-time MRI applications. Compared with the conventional FBP algorithm, the model based FBP algorithm is more effective in reducing noise. Even though an iterative algorithm can achieve the same noise-reducing performance, the model based FBP algorithm is much more computationally efficient. Conclusions: The model based FBP algorithm is an efficient and effective image reconstruction tool. In many applications, it can replace the state-of-the-art iterative algorithms, which usually have a heavy computational cost. The model based FBP algorithm is linear and it has advantages over a nonlinear iterative algorithm in parametric image reconstruction and noise analysis. PMID:25574421
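
    The difference from conventional FBP lies entirely in the filter. A one-dimensional sketch of the filtering step is given below; the beta-regularized ramp is an assumed stand-in for the filter the tutorial derives from its noise model, with beta = 0 recovering the conventional ramp:

        import numpy as np

        def model_based_filter(projection, beta=0.05):
            # Filter one projection row in the Fourier domain.
            w = np.fft.fftfreq(projection.size)              # normalized frequencies
            H = np.abs(w) / (1.0 + beta * np.abs(w) ** 2)    # noise-damped ramp filter
            return np.real(np.fft.ifft(np.fft.fft(projection) * H))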

  7. Evolutionary algorithms for the satisfiability problem.

    PubMed

    Gottlieb, Jens; Marchiori, Elena; Rossi, Claudio

    2002-01-01

    Several evolutionary algorithms have been proposed for the satisfiability problem. We review the solution representations suggested in the literature and choose the most promising one - the bit string representation - for further evaluation. An empirical comparison on commonly used benchmarks is presented for the most successful evolutionary algorithms and for WSAT, a prominent local search algorithm for the satisfiability problem. The key features of successful evolutionary algorithms are identified, thereby providing useful methodological guidelines for designing new heuristics. Our results indicate that evolutionary algorithms are competitive with WSAT.
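
    With the bit-string representation, fitness is naturally the number of satisfied clauses. A minimal (1+1)-style loop is sketched below purely for illustration; it is not one of the reviewed algorithms:

        import random

        def satisfied(clauses, bits):
            # Literal k refers to variable |k|-1, negated when k < 0.
            return sum(any((lit > 0) == bits[abs(lit) - 1] for lit in c)
                       for c in clauses)

        def evolve_sat(clauses, n_vars, steps=10000):
            bits = [random.random() < 0.5 for _ in range(n_vars)]
            best = satisfied(clauses, bits)
            for _ in range(steps):
                i = random.randrange(n_vars)
                bits[i] = not bits[i]              # one-bit mutation
                score = satisfied(clauses, bits)
                if score >= best:
                    best = score                   # accept equal moves to keep drifting
                else:
                    bits[i] = not bits[i]          # revert the mutation
            return bits, best

        # (x1 OR NOT x2) AND (x2 OR x3)
        print(evolve_sat([[1, -2], [2, 3]], 3)[1])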

  8. Evolving evolutionary algorithms using linear genetic programming.

    PubMed

    Oltean, Mihai

    2005-01-01

    A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches for several well-known benchmarking problems.

  9. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.
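
    At its combinatorial core, finding minimal diagnoses is a minimal hitting-set problem over the conflict sets derived from the SD and the observations. The brute-force sketch below makes the task concrete; the article's point is precisely that practical algorithms must avoid this kind of exhaustive search:

        from itertools import combinations

        def minimal_diagnoses(components, conflicts):
            # Smallest sets of components that intersect every conflict set.
            for size in range(1, len(components) + 1):
                hits = [set(c) for c in combinations(components, size)
                        if all(set(c) & conflict for conflict in conflicts)]
                if hits:
                    return hits    # all minimum-cardinality diagnoses

        # two conflicts between observed behavior and the system description
        print(minimal_diagnoses(['A', 'B', 'C'], [{'A', 'B'}, {'B', 'C'}]))  # [{'B'}]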

  10. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  11. Evolutionary algorithms dynamics represented by contact sequences

    NASA Astrophysics Data System (ADS)

    Skanderova, Lenka

    2017-07-01

    Population-based evolutionary algorithms are popular for solving difficult optimization problems in various areas of research as well as real-world problems. In this work, we discuss a new approach to the analysis of evolutionary algorithms, based on the idea of representing the evolution of the population as a contact sequence.

  12. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm.

  13. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate-satellite vs. integrated scheduling of a two-satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.

  14. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  15. Infrastructure system restoration planning using evolutionary algorithms

    USGS Publications Warehouse

    Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.

    2016-01-01

    This paper presents an evolutionary algorithm to address restoration issues for supply chain interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustain a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which the bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. When compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.

  16. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  17. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  18. A consensus opinion model based on the evolutionary game

    NASA Astrophysics Data System (ADS)

    Yang, Han-Xin

    2016-08-01

    We propose a consensus opinion model based on the evolutionary game. In our model, two connected agents both receive a benefit if they hold the same opinion; otherwise they both pay a cost. Agents update their opinions by comparing payoffs with neighbors. The opinion of an agent with a higher payoff is more likely to be imitated. We apply this model in scale-free networks with tunable degree distribution. Interestingly, we find that there exists an optimal ratio of cost to benefit, leading to the shortest consensus time. Qualitative analysis is obtained by examining the evolution of the opinion clusters. Moreover, we find that the consensus time decreases as the average degree of the network increases, but increases with the noise introduced to permit irrational choices. The consensus time is found to scale with the network size as a power law. For small or large ratios of cost to benefit, the consensus time decreases as the degree exponent increases. However, for a moderate ratio of cost to benefit, the consensus time increases with the degree exponent. Our results may provide new insights into opinion dynamics driven by evolutionary game theory.
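
    A minimal sketch of the update rule follows. The Fermi-style imitation probability with noise K is an assumption standing in for the paper's payoff-comparison rule; b and c are the benefit and cost:

        import random, math

        def payoff(i, opinion, G, b=1.0, c=0.4):
            return sum(b if opinion[i] == opinion[j] else -c for j in G[i])

        def update(opinion, G, K=0.1):
            # One asynchronous step: agent i imitates neighbor j with a
            # probability that grows with j's payoff advantage.
            i = random.randrange(len(G))
            j = random.choice(G[i])
            gap = payoff(i, opinion, G) - payoff(j, opinion, G)
            if random.random() < 1.0 / (1.0 + math.exp(gap / K)):
                opinion[i] = opinion[j]

        # ring of six agents with binary opinions, iterated to consensus
        G = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
        opinion = [random.randint(0, 1) for _ in range(6)]
        while len(set(opinion)) > 1:
            update(opinion, G)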

  1. Evolutionary development of path planning algorithms

    SciTech Connect

    Hage, M

    1998-09-01

    This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than that of a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.

  2. Synthesis of logic circuits with evolutionary algorithms

    SciTech Connect

    JONES,JAKE S.; DAVIDSON,GEORGE S.

    2000-01-26

    In the last decade there has been interest and research in the area of designing circuits with genetic algorithms, evolutionary algorithms, and genetic programming. However, the ability to design circuits of the size and complexity required by modern engineering design problems, simply by specifying required outputs for given inputs has as yet eluded researchers. This paper describes current research in the area of designing logic circuits using an evolutionary algorithm. The goal of the research is to improve the effectiveness of this method and make it a practical aid for design engineers. A novel method of implementing the algorithm is introduced, and results are presented for various multiprocessing systems. In addition to evolving standard arithmetic circuits, work in the area of evolving circuits that perform digital signal processing tasks is described.

  3. Evolutionary Algorithm for Calculating Available Transfer Capability

    NASA Astrophysics Data System (ADS)

    Šošić, Darko; Škokljev, Ivan

    2013-09-01

    The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any two buses in deregulated power systems without violating system constraints such as thermal, voltage, and stability constraints. The algorithm is applied to the IEEE 5-bus system and the IEEE 30-bus system.

  4. An organizational evolutionary algorithm for numerical optimization.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2007-08-01

    Taking inspiration from the interacting process among organizations in human societies, this correspondence designs a kind of structured population and corresponding evolutionary operators to form a novel algorithm, the Organizational Evolutionary Algorithm (OEA), for solving both unconstrained and constrained optimization problems. In the OEA, a population consists of organizations, and an organization consists of individuals. All evolutionary operators are designed to simulate the interaction among organizations. In experiments, 15 unconstrained functions, 13 constrained functions, and 4 engineering design problems are used to validate the performance of the OEA, and thorough comparisons are made between the OEA and the existing approaches. The results show that the OEA obtains good performance in both solution quality and computational cost. Moreover, for the constrained problems, good performance is obtained by incorporating only two simple constraint-handling techniques into the OEA. Furthermore, systematic analyses have been made on all parameters of the OEA. The results show that the OEA is quite robust and easy to use.

  5. Knowledge Guided Evolutionary Algorithms in Financial Investing

    ERIC Educational Resources Information Center

    Wimmer, Hayden

    2013-01-01

    A large body of literature exists on evolutionary computing, genetic algorithms, decision trees, codified knowledge, and knowledge management systems; however, the intersection of these computing topics has not been widely researched. Moving through the set of all possible solutions--or traversing the search space--at random exhibits no control…

  6. Protein Structure Prediction with Evolutionary Algorithms

    SciTech Connect

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluated the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving the protein structure prediction (PSP) problem on the HP model.

  7. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell-curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions: parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
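
    The geometric construct translates almost directly into code. In this minimal sketch (parameter names and distribution scales are assumptions consistent with the description above), the child starts at a weighted point on the line joining the parents and is displaced normally both along and orthogonal to that line:

        import numpy as np

        def bcb_child(p1, p2, w=0.5, s_par=0.1, s_orth=0.1):
            d = p2 - p1
            norm = np.linalg.norm(d)
            u = d / norm if norm > 0 else np.zeros_like(d)           # unit direction
            child = p1 + w * d                                       # weighted point
            child = child + np.random.normal(0.0, s_par) * norm * u  # parallel deviation
            r = np.random.normal(0.0, s_orth, size=d.shape)
            r -= u * (r @ u)            # project out the parallel component
            return child + r * norm     # orthogonal deviation, scaled to parent distance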

  8. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics runs, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  9. Model-based Hyperspectral Exploitation Algorithm Development

    DTIC Science & Technology

    2006-01-01

    pixels, and an iterative constrained optimization using generalized reduced gradients (GRG). Sample results are shown in Figure 5. Much progress has...in-water optical parameters from remote observations involved a non-linear optimization that required observations of several regions of interest...retrieval from long wave infrared airborne hyperspectral imagery. The optimized land surface temperature and emissivity retrieval (OLSTER) algorithm

  10. A Note on Evolutionary Algorithms and Its Applications

    ERIC Educational Resources Information Center

    Bhargava, Shifali

    2013-01-01

    This paper introduces evolutionary algorithms and their applications in multi-objective optimization. Elitist and non-elitist multiobjective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.

  11. Evolutionary algorithms and multi-agent systems

    NASA Astrophysics Data System (ADS)

    Oh, Jae C.

    2006-05-01

    This paper discusses how evolutionary algorithms are related to multi-agent systems and the possibility of military applications using the two disciplines. In particular, we present a game theoretic model for multi-agent resource distribution and allocation where agents in the environment must help each other to survive. Each agent maintains a set of variables representing actual friendship and perceived friendship. The model directly addresses problems in reputation management schemes in multi-agent systems and Peer-to-Peer distributed systems. We present algorithms based on an evolutionary game process for maintaining the friendship values as well as a utility equation used in each agent's decision making. For an application problem, we adapted our formal model to the military coalition support problem in peace-keeping missions. Simulation results show that efficient resource allocation and sharing with minimum communication cost is achieved without centralized control.

  12. Evolutionary approach for relative gene expression algorithms.

    PubMed

    Czajkowski, Marcin; Kretowski, Marek

    2014-01-01

    Relative Expression Analysis (RXA) uses ordering relationships in a small collection of genes and is successfully applied to classification using microarray data. As checking all possible subsets of genes is computationally infeasible, RXA algorithms require feature selection and multiple restrictive assumptions. Our main contribution is a specialized evolutionary algorithm (EA) for top-scoring pairs called EvoTSP which allows finding more advanced gene relations. We managed to unify the major variants of relative expression algorithms through the EA and introduce weights to the top-scoring pairs. Experimental validation of EvoTSP on publicly available microarray datasets showed that the proposed solution significantly outperforms other relative expression algorithms in terms of accuracy and allows exploring a much larger solution space.

  13. Evolutionary Approach for Relative Gene Expression Algorithms

    PubMed Central

    Czajkowski, Marcin

    2014-01-01

    Relative Expression Analysis (RXA) uses ordering relationships in a small collection of genes and is successfully applied to classification using microarray data. As checking all possible subsets of genes is computationally infeasible, RXA algorithms require feature selection and multiple restrictive assumptions. Our main contribution is a specialized evolutionary algorithm (EA) for top-scoring pairs called EvoTSP which allows finding more advanced gene relations. We managed to unify the major variants of relative expression algorithms through the EA and introduce weights to the top-scoring pairs. Experimental validation of EvoTSP on publicly available microarray datasets showed that the proposed solution significantly outperforms other relative expression algorithms in terms of accuracy and allows exploring a much larger solution space. PMID:24790574

  14. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  15. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.

  16. Predicting polymeric crystal structures by evolutionary algorithms.

    PubMed

    Zhu, Qiang; Sharma, Vinit; Oganov, Artem R; Ramprasad, Ramamurthy

    2014-10-21

    The recently developed evolutionary algorithm USPEX proved to be a tool that enables accurate and reliable prediction of structures. Here we extend this method to predict the crystal structure of polymers by constrained evolutionary search, where each monomeric unit is treated as a building block with fixed connectivity. This greatly reduces the search space and allows the initial structure generation with different sequences and packings of these blocks. The new constrained evolutionary algorithm is successfully tested and validated on a diverse range of experimentally known polymers, namely, polyethylene, polyacetylene, poly(glycolic acid), poly(vinyl chloride), poly(oxymethylene), poly(phenylene oxide), and poly(p-phenylene sulfide). By fixing the orientation of polymeric chains, this method can be further extended to predict the structures of complex linear polymers, such as all polymorphs of poly(vinylidene fluoride), nylon-6 and cellulose. The excellent agreement between predicted crystal structures and experimentally known structures assures a major role of this approach in the efficient design of the future polymeric materials.

  17. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval Arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to the classes of objective functions that are twice differentiable and use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach that combines the efficient strategy of Interval Global Optimization Methods and the robustness of Evolutionary Algorithms is proposed. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing helps in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.

  18. A theoretical comparison of evolutionary algorithms and simulated annealing

    SciTech Connect

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  19. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem, and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand to a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid and converters, which can be inverters and/or rectifiers. The methodology developed is explained as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, one minimizing cost and the other minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology utilizing a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization, and approximating the true Pareto front in the case of multiple-objective optimization of the Renewable Energy Integration Problem.

  1. Evolutionary algorithm for metabolic pathways synthesis.

    PubMed

    Gerard, Matias F; Stegmayer, Georgina; Milone, Diego H

    2016-06-01

    Metabolic pathway building is an active field of research, necessary to understand and manipulate the metabolism of organisms. There are different approaches, mainly based on classical search methods, to find linear sequences of reactions linking two compounds. However, an important limitation of these methods is the exponential increase of search trees when a large number of compounds and reactions is considered. Besides, such models do not take into account all substrates for each reaction during the search, leading to solutions that lack biological feasibility in many cases. This work proposes a new evolutionary algorithm that allows searching not only linear, but also branched metabolic pathways, formed by feasible reactions that relate multiple compounds simultaneously. Tests performed using several sets of reactions show that this algorithm is able to find feasible linear and branched metabolic pathways. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has scarcely been studied using clustering validation indexes. In this paper, the recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, fcm, som networks) have been used to cluster images and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  3. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and in determining whether a project is economically feasible or not. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind-farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe their proper behaviors to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  4. Barnacle cement: a polymerization model based on evolutionary concepts.

    PubMed

    Dickinson, Gary H; Vega, Irving E; Wahl, Kathryn J; Orihuela, Beatriz; Beyley, Veronica; Rodriguez, Eva N; Everett, Richard K; Bonaventura, Joseph; Rittschof, Daniel

    2009-11-01

    Enzymes and biochemical mechanisms essential to survival are under extreme selective pressure and are highly conserved through evolutionary time. We applied this evolutionary concept to barnacle cement polymerization, a process critical to barnacle fitness that involves aggregation and cross-linking of proteins. The biochemical mechanisms of cement polymerization remain largely unknown. We hypothesized that this process is biochemically similar to blood clotting, a critical physiological response that is also based on aggregation and cross-linking of proteins. Like key elements of vertebrate and invertebrate blood clotting, barnacle cement polymerization was shown to involve proteolytic activation of enzymes and structural precursors, transglutaminase cross-linking and assembly of fibrous proteins. Proteolytic activation of structural proteins maximizes the potential for bonding interactions with other proteins and with the surface. Transglutaminase cross-linking reinforces cement integrity. Remarkably, epitopes and sequences homologous to bovine trypsin and human transglutaminase were identified in barnacle cement with tandem mass spectrometry and/or western blotting. Akin to blood clotting, the peptides generated during proteolytic activation functioned as signal molecules, linking a molecular level event (protein aggregation) to a behavioral response (barnacle larval settlement). Our results draw attention to a highly conserved protein polymerization mechanism and shed light on a long-standing biochemical puzzle. We suggest that barnacle cement polymerization is a specialized form of wound healing. The polymerization mechanism common between barnacle cement and blood may be a theme for many marine animal glues.

  5. Evolutionary algorithm for vehicle driving cycle generation.

    PubMed

    Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott

    2011-09-01

    Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
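
    The encoding makes the evolutionary loop compact: a chromosome is a sequence of microtrip identifiers, and fitness is the error to the prescribed cycle parameters. The sketch below is illustrative only (one target parameter instead of several, and truncation selection in place of the paper's Roulette-Wheel operator):

        import random

        def avg_speed(chrom, microtrips):
            speeds = [v for m in chrom for v in microtrips[m]]
            return sum(speeds) / len(speeds)

        def fitness(chrom, microtrips, target):
            return abs(avg_speed(chrom, microtrips) - target)

        def mutate(chrom, n_trips):
            c = chrom[:]                       # replace one gene (microtrip id)
            c[random.randrange(len(c))] = random.randrange(n_trips)
            return c

        # toy microtrips: speed traces between consecutive vehicle stops
        microtrips = [[0, 5, 12, 7, 0], [0, 20, 35, 28, 0], [0, 10, 15, 0]]
        pop = [[random.randrange(3) for _ in range(4)] for _ in range(20)]
        for _ in range(200):                   # simple elitist loop
            pop.sort(key=lambda c: fitness(c, microtrips, 12.0))
            pop = pop[:10] + [mutate(c, 3) for c in pop[:10]]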

  6. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-06-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, in which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, with our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
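
    The core of the eligibility adjustment idea, a model-free temporal-difference learner whose trace is weighted by model-based information, might be sketched as follows for a two-stage task. The fixed weight w and the toy task parameters are assumptions for illustration, not the authors' exact equations:

        import random

        def trial(Q1, Q2, T, alpha=0.2, lam=0.9, w=0.8):
            # lam * w controls how much the second-stage prediction error
            # propagates back to the first-stage value (the weighted trace).
            a1 = max(Q1, key=Q1.get) if random.random() < 0.9 else random.choice(list(Q1))
            s2 = random.choices(['X', 'Y'], weights=T[a1])[0]  # common/rare transition
            r = float(random.random() < (0.7 if s2 == 'X' else 0.3))
            Q1[a1] += alpha * (Q2[s2] - Q1[a1])     # stage-1 prediction error
            d2 = r - Q2[s2]                         # stage-2 prediction error
            Q2[s2] += alpha * d2
            Q1[a1] += alpha * lam * w * d2          # eligibility-weighted backup

        Q1 = {'left': 0.0, 'right': 0.0}
        Q2 = {'X': 0.0, 'Y': 0.0}
        T = {'left': [0.7, 0.3], 'right': [0.3, 0.7]}   # environmental model
        for _ in range(1000):
            trial(Q1, Q2, T)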

  7. Superelement model based parallel algorithm for vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Agrawal, O. P.; Danhof, K. J.; Kumar, R.

    1994-05-01

    This paper presents a superelement model based parallel algorithm for a planar vehicle dynamics. The vehicle model is made up of a chassis and two suspension systems each of which consists of an axle-wheel assembly and two trailing arms. In this model, the chassis is treated as a Cartesian element and each suspension system is treated as a superelement. The parameters associated with the superelements are computed using an inverse dynamics technique. Suspension shock absorbers and the tires are modeled by nonlinear springs and dampers. The Euler-Lagrange approach is used to develop the system equations of motion. This leads to a system of differential and algebraic equations in which the constraints internal to superelements appear only explicitly. The above formulation is implemented on a multiprocessor machine. The numerical flow chart is divided into modules and the computation of several modules is performed in parallel to gain computational efficiency. In this implementation, the master (parent processor) creates a pool of slaves (child processors) at the beginning of the program. The slaves remain in the pool until they are needed to perform certain tasks. Upon completion of a particular task, a slave returns to the pool. This improves the overall response time of the algorithm. The formulation presented is general which makes it attractive for a general purpose code development. Speedups obtained in the different modules of the dynamic analysis computation are also presented. Results show that the superelement model based parallel algorithm can significantly reduce the vehicle dynamics simulation time.

  8. Evolutionary algorithm and modularity for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Bilal, Saoud; Abdelouahab, Moussaoui

    2017-05-01

    Evolutionary algorithms are widely used today to solve problems in many fields. Few community detection methods for networks, however, are based on evolutionary algorithms. In this paper, we develop a new approach to community detection in networks based on an evolutionary algorithm. In this approach we use an evolutionary algorithm to find an initial community structure that maximizes the modularity. We then improve the community structure by merging communities, to find the final community structure with the highest modularity value. We provide a general framework for implementing our approach. Compared with state-of-the-art algorithms, simulation results on computer-generated and real-world networks reflect the effectiveness of our approach.
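
    For readers who want the mechanics, here is a small Python sketch of the two ingredients named above: Newman modularity of a partition, and a greedy merge phase that keeps fusing community pairs while modularity improves. It is illustrative only and deliberately unoptimized; the evolutionary search that produces the initial partition is omitted.

        from itertools import combinations

        def modularity(adj, communities):
            # Newman modularity Q of a partition of an undirected graph.
            # adj: dict node -> set of neighbours; communities: list of node sets.
            m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
            q = 0.0
            for comm in communities:
                internal = sum(1 for u, v in combinations(comm, 2) if v in adj[u])
                degree = sum(len(adj[u]) for u in comm)
                q += internal / m - (degree / (2 * m)) ** 2
            return q

        def merge_phase(adj, communities):
            # Greedily merge community pairs while modularity keeps improving.
            improved = True
            while improved and len(communities) > 1:
                improved = False
                best = modularity(adj, communities)
                for i, j in combinations(range(len(communities)), 2):
                    merged = (communities[:i] + communities[i + 1:j]
                              + communities[j + 1:]
                              + [communities[i] | communities[j]])
                    if modularity(adj, merged) > best:
                        communities, improved = merged, True
                        break
            return communities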

  9. Comparing evolutionary strategies on a biobjective cultural algorithm.

    PubMed

    Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Soto, Ricardo; Rubio, José-Miguel; Paredes, Fernando

    2014-01-01

    Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP and none of those has focused on the impact of the evolutionary strategy on the algorithm's performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using a hypervolume S metric.

  10. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    PubMed Central

    Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando

    2014-01-01

    Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP and none of those has focused on the impact of the evolutionary strategy on the algorithm's performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using a hypervolume S metric. PMID:25254257

  11. A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms.

    PubMed

    Díaz-Manríquez, Alan; Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar

    2016-01-01

    Multiobjective evolutionary algorithms have incorporated surrogate models in order to reduce the number of required evaluations to approximate the Pareto front of computationally expensive multiobjective optimization problems. Currently, few works have reviewed the state of the art in this topic. However, the existing reviews have focused on classifying the evolutionary multiobjective optimization algorithms with respect to the type of underlying surrogate model. In this paper, we center our focus on classifying multiobjective evolutionary algorithms with respect to their integration with surrogate models. This interaction has led us to classify similar approaches and identify advantages and disadvantages of each class.

  12. A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms

    PubMed Central

    Díaz-Manríquez, Alan; Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar

    2016-01-01

    Multiobjective evolutionary algorithms have incorporated surrogate models in order to reduce the number of required evaluations to approximate the Pareto front of computationally expensive multiobjective optimization problems. Currently, few works have reviewed the state of the art in this topic. However, the existing reviews have focused on classifying the evolutionary multiobjective optimization algorithms with respect to the type of underlying surrogate model. In this paper, we center our focus on classifying multiobjective evolutionary algorithms with respect to their integration with surrogate models. This interaction has led us to classify similar approaches and identify advantages and disadvantages of each class. PMID:27382366

  13. Evolutionary algorithms for multiobjective and multimodal optimization of diagnostic schemes.

    PubMed

    de Toro, Francisco; Ros, Eduardo; Mota, Sonia; Ortega, Julio

    2006-02-01

    This paper addresses the optimization of noninvasive diagnostic schemes using evolutionary algorithms in medical applications based on the interpretation of biosignals. A general diagnostic methodology using a set of definable characteristics extracted from the biosignal source followed by the specific diagnostic scheme is presented. In this framework, multiobjective evolutionary algorithms are used to meet not only classification accuracy but also other objectives of medical interest, which can be conflicting. Furthermore, the use of both multimodal and multiobjective evolutionary optimization algorithms provides the medical specialist with different alternatives for configuring the diagnostic scheme. Some application examples of this methodology are described in the diagnosis of a specific cardiac disorder: paroxysmal atrial fibrillation.

  14. Model-based Bayesian signal extraction algorithm for peripheral nerves

    NASA Astrophysics Data System (ADS)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  15. Hybrid Tuning of an Evolutionary Algorithm for Sensor Allocation

    DTIC Science & Technology

    2011-06-01

    survey of tuning methods for evolutionary algorithms can be found in [10] where algorithmic and search approaches are distinguished. The main charac...Yilmaz, B. N. Mcquay, H. Yu, A. S. Wu, and J. C. Sciortino, “Evolving sensor suites for enemy radar detection,” in Genetic and Evolutionary Computation...GECCO, 2003. [3] T. Shima and C. Schumacher, “Assigning cooperating uavs to simultaneous tasks on consecutive targets using genetic algorithms

  16. Structure and stability prediction of compounds with evolutionary algorithms.

    PubMed

    Revard, Benjamin C; Tipton, William W; Hennig, Richard G

    2014-01-01

    Crystal structure prediction is a long-standing challenge in the physical sciences. In recent years, much practical success has been had by framing it as a global optimization problem, leveraging the existence of increasingly robust and accurate free energy calculations. This optimization problem has often been solved using evolutionary algorithms (EAs). However, many choices are possible when designing an EA for structure prediction, and innovation in the field is ongoing. We review the current state of evolutionary algorithms for crystal structure and composition prediction and discuss the details of methodological and algorithmic choices. Finally, we review the application of these algorithms to many systems of practical and fundamental scientific interest.

  17. Speeding up evolutionary algorithms through asymmetric mutation operators.

    PubMed

    Doerr, Benjamin; Hebbinghaus, Nils; Neumann, Frank

    2007-01-01

    Successful applications of evolutionary algorithms show that certain variation operators can lead to good solutions much faster than other ones. We examine this behavior observed in practice from a theoretical point of view and investigate the effect of an asymmetric mutation operator in evolutionary algorithms with respect to the runtime behavior. Considering the Eulerian cycle problem we present runtime bounds for evolutionary algorithms using an asymmetric operator which are much smaller than the best upper bounds for a more general one. In our analysis it turns out that a plateau which both algorithms have to cope with changes its structure in a way that allows the algorithm to obtain an improvement much faster. In addition, we present a lower bound for the general case which shows that the asymmetric operator speeds up computation by at least a linear factor.
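
    The paper's operator is defined for the Eulerian cycle problem and is not reproduced here; the sketch below instead shows a common bit-string form of asymmetric mutation, in which ones and zeros flip with different, count-dependent probabilities, to make the general idea concrete.

        import random

        def asymmetric_mutation(bits):
            # Flip each one-bit with probability 1/(2*|ones|) and each zero-bit
            # with probability 1/(2*|zeros|): in expectation, half a bit of each
            # kind flips per application, regardless of how unbalanced the
            # string is.
            ones = sum(bits)
            zeros = len(bits) - ones
            out = []
            for b in bits:
                p = 1.0 / (2 * ones) if b else 1.0 / (2 * zeros)
                out.append(1 - b if random.random() < p else b)
            return out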

  18. A nonlinear regression model-based predictive control algorithm.

    PubMed

    Dubay, R; Abu-Ayyad, M; Hernandez, J M

    2009-04-01

    This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function, providing a more accurate prediction of the plant. Computer simulations are carried out on nonlinear plants, demonstrating that the new approach is easily implemented and provides tight control. Also, the proposed algorithm is implemented on two real-time SISO applications, a DC motor and a plastic injection molding machine, and on a nonlinear MIMO thermal system comprising three temperature zones to be controlled with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to those of a multi-model dynamic matrix controller (MPC), with improved results for various set-point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set-point profiles in comparison to multi-model MPC.

  19. Biased Randomized Algorithm for Fast Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vartan, Farrokh

    2005-01-01

    A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding article "Fast Algorithms for Model-Based Diagnosis" and in "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appears elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis - the task of finding the faulty components - reduces to finding the components, the abnormalities of which could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow and yields a lower bound on the size of the

  20. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim to significantly shorten the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.

  1. Using Evolutionary Algorithms to Induce Oblique Decision Trees

    SciTech Connect

    Cantu-Paz, E.; Kamath, C.

    2000-01-21

    This paper illustrates the application of evolutionary algorithms (EAs) to the problem of oblique decision tree induction. The objectives are to demonstrate that EAs can find classifiers whose accuracy is competitive with other oblique tree construction methods, and that this can be accomplished in a shorter time. Experiments were performed with a (1+1) evolutionary strategy and a simple genetic algorithm on public domain and artificial data sets. The empirical results suggest that the EAs quickly find competitive classifiers, and that EAs scale up better than traditional methods to the dimensionality of the domain and the number of training instances.
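
    As a rough illustration of the (1+1) strategy applied to oblique splits, the following Python sketch evolves the coefficients of a single separating hyperplane by Gaussian mutation, accepting a child only if training accuracy does not drop. The encoding and fitness are assumptions for illustration; the paper's full tree-induction loop is more involved.

        import random

        def split_accuracy(w, b, data):
            # Fraction of labelled points (x, y), y in {0, 1}, that the oblique
            # hyperplane w.x + b >= 0 classifies correctly.
            correct = 0
            for x, y in data:
                side = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
                correct += (side == y)
            return correct / len(data)

        def one_plus_one_es(data, dim, steps=1000, sigma=0.1):
            # (1+1) strategy: mutate with Gaussian noise, keep the child only
            # if accuracy does not decrease.
            w = [random.gauss(0, 1) for _ in range(dim)]
            b = random.gauss(0, 1)
            best = split_accuracy(w, b, data)
            for _ in range(steps):
                w2 = [wi + random.gauss(0, sigma) for wi in w]
                b2 = b + random.gauss(0, sigma)
                f2 = split_accuracy(w2, b2, data)
                if f2 >= best:
                    w, b, best = w2, b2, f2
            return w, b, best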

  2. Evolutionary Algorithm Based Automated Reverse Engineering and Defect Discovery

    DTIC Science & Technology

    2007-09-21

    A data mining based procedure for automated reverse engineering and defect discovery has been developed. The data mining algorithm for reverse...engineering uses a genetic program (GP) as a data mining function. A GP is an evolutionary algorithm that automatically evolves populations of computer...are used to create a fitness function for the GP, allowing GP-based data mining. This procedure incorporates not only the experts’ rules into the

  3. Active Processor Scheduling Using Evolutionary Algorithms

    DTIC Science & Technology

    2002-12-01

    found a niche for themselves in solving NP-complete problems [73], including the protein structure prediction problem [43], wire antenna designs [76] [35...Appendix D. Coevolutionary Algorithms Coevolution in biology is the interaction of multiple species within their environment. This principle of...Peter J. Angeline, et al. 125–136. Indianapolis, Indiana, USA: Springer-Verlag, 1997. 43. Krasnogor, Natalio, et al. “Protein Structure Prediction

  4. Development of antibiotic regimens using graph based evolutionary algorithms.

    PubMed

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  5. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    PubMed Central

    Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi

    2014-01-01

    This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222

  6. Comparison of evolutionary algorithms for LPDA antenna optimization

    NASA Astrophysics Data System (ADS)

    Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.

    2016-08-01

    A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using the Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are Differential Evolution (DE), Particle Swarm Optimization (PSO), the Taguchi method, Invasive Weed Optimization (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.

  7. An efficient non-dominated sorting method for evolutionary algorithms.

    PubMed

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN²) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of the total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
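
    For reference, a compact Python version of the O(MN²) baseline being improved upon, NSGA-II's fast non-dominated sort, is sketched below; the paper's dominance-tree variant itself is not reproduced.

        def dominates(a, b):
            # True if objective vector a Pareto-dominates b (minimisation).
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def fast_nondominated_sort(objs):
            # Count how many solutions dominate each one, then peel off fronts.
            n = len(objs)
            dominated_by = [[] for _ in range(n)]   # S_p: solutions p dominates
            counts = [0] * n                        # n_p: how many dominate p
            for p in range(n):
                for q in range(p + 1, n):
                    if dominates(objs[p], objs[q]):
                        dominated_by[p].append(q); counts[q] += 1
                    elif dominates(objs[q], objs[p]):
                        dominated_by[q].append(p); counts[p] += 1
            fronts, current = [], [p for p in range(n) if counts[p] == 0]
            while current:
                fronts.append(current)
                nxt = []
                for p in current:
                    for q in dominated_by[p]:
                        counts[q] -= 1
                        if counts[q] == 0:
                            nxt.append(q)
                current = nxt
            return fronts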

  8. Models based on "out-of Kilter" algorithm

    NASA Astrophysics Data System (ADS)

    Adler, M. J.; Drobot, R.

    2012-04-01

    Where many water users draw from the same river stretches, it is very important in low-flow and drought periods to develop an optimization model for water allocation that covers all needs under predefined constraints, depending on the Contingency Plan for drought management. Such a program was developed during the implementation of the WATMAN Project in Romania (WATMAN Project, 2005-2006, USTDA) for the Arges-Dambovita-Ialomita Basins water transfers. This good practice was proposed for the WATER CoRe Project Good Practice Handbook for Drought Management (InterregIVC, 2011), to be applied to the European Regions. Two types of simulation-optimization models based on an improved version of the out-of-kilter algorithm as the optimization technique have been developed and used in Romania: models for founding the short-term operation of a WMS, and models generically named SIMOPT that aim at the analysis of long-term WMS operation and have as their main results the statistical WMS functional parameters. A real WMS is modeled by an arcs-nodes network, so the real WMS operation problem becomes a problem of flows in networks. The nodes and oriented arcs, as well as their characteristics such as lower and upper limits and associated costs, are the direct analog of the physical and operational WMS characteristics. Arcs represent both physical and conventional elements of the WMS, such as river branches, channels or pipes, water user demands or other water management requirements, tranches of water reservoir volumes, and water levels in channels or rivers; nodes are junctions of at least two arcs and stand for locations of lakes or water reservoirs and/or confluences of river branches, water withdrawal or wastewater discharge points, etc. Quantitative features of water resources, water users and water reservoirs or other water works are expressed as constraints of non-violating the lower and upper limits assigned on arcs. Options of WMS functioning, i.e. water retention/discharge in

  9. A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.

    PubMed

    Li, Shan; Kang, Liying; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. We focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks.

  10. General upper bounds on the runtime of parallel evolutionary algorithms.

    PubMed

    Lässig, Jörg; Sudholt, Dirk

    2014-01-01

    We present a general method for analyzing the runtime of parallel evolutionary algorithms with spatially structured populations. Based on the fitness-level method, it yields upper bounds on the expected parallel runtime. This allows for a rigorous estimate of the speedup gained by parallelization. Tailored results are given for common migration topologies: ring graphs, torus graphs, hypercubes, and the complete graph. Example applications for pseudo-Boolean optimization show that our method is easy to apply and that it gives powerful results. In our examples the performance guarantees improve with the density of the topology. Surprisingly, even sparse topologies such as ring graphs lead to a significant speedup for many functions while not increasing the total number of function evaluations by more than a constant factor. We also identify which number of processors leads to the best guaranteed speedups, thus giving hints on how to parameterize parallel evolutionary algorithms.
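
    The fitness-level reasoning behind such bounds can be made concrete. As a hedged illustration (not the paper's exact theorem): partition the search space into sets A_1, ..., A_m ordered by increasing fitness, with A_m containing only optima, and let s_i lower-bound the probability that the algorithm leaves level A_i upward in one generation. The classical single-population bound is then, in LaTeX notation,

        E[T] \le \sum_{i=1}^{m-1} \frac{1}{s_i},

    and the method described above derives analogous upper bounds on the expected parallel runtime, with per-level terms that shrink as the number of islands and the density of the migration topology grow, which is what yields the rigorous speedup estimates.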

  11. Available Transfer Capability Determination Using Hybrid Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Jirapong, Peeraool; Ongsakul, Weerakorn

    2008-10-01

    This paper proposes a new hybrid evolutionary algorithm (HEA) based on evolutionary programming (EP), tabu search (TS), and simulated annealing (SA) to determine the available transfer capability (ATC) of power transactions between different control areas in deregulated power systems. The optimal power flow (OPF)-based ATC determination is used to evaluate the feasible maximum ATC value within real and reactive power generation limits, line thermal limits, voltage limits, and voltage and angle stability limits. The HEA approach simultaneously searches for real power generations except slack bus in a source area, real power loads in a sink area, and generation bus voltages to solve the OPF-based ATC problem. Test results on the modified IEEE 24-bus reliability test system (RTS) indicate that ATC determination by the HEA could enhance ATC far more than those from EP, TS, hybrid TS/SA, and improved EP (IEP) algorithms, leading to an efficient utilization of the existing transmission system.

  12. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    PubMed Central

    Li, Shan; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for the bioinformatists to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, the hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction about the applications of hybrid intelligent methods, in particular those based on evolutionary algorithm, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks. PMID:24729969

  13. Evolutionary algorithm for optimization of nonimaging Fresnel lens geometry.

    PubMed

    Yamada, N; Nishikawa, T

    2010-06-21

    In this study, an evolutionary algorithm (EA), which consists of genetic and immune algorithms, is introduced to design the optical geometry of a nonimaging Fresnel lens; this lens generates the uniform flux concentration required for a photovoltaic cell. Herein, a design procedure that incorporates a ray-tracing technique in the EA is described, and the validity of the design is demonstrated. The results show that the EA automatically generated a unique geometry of the Fresnel lens; the use of this geometry resulted in better uniform flux concentration with high optical efficiency.

  14. A filter-based evolutionary algorithm for constrained optimization.

    SciTech Connect

    Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann

    2004-02-01

    We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
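
    A minimal sketch of the filter mechanism, assuming the usual two coordinates per point, objective value f and aggregate constraint violation h, both to be minimized: a trial point is acceptable only if no filter entry dominates it, and an accepted point evicts every entry it dominates. Function names are illustrative, not from the paper.

        def filter_accepts(filt, f, h):
            # Acceptable iff no stored (f_i, h_i) is at least as good in both
            # coordinates and strictly better in one.
            for f_i, h_i in filt:
                if f_i <= f and h_i <= h and (f_i < f or h_i < h):
                    return False
            return True

        def filter_add(filt, f, h):
            # Insert an accepted point; drop entries the new point dominates.
            filt[:] = [(f_i, h_i) for f_i, h_i in filt
                       if not (f <= f_i and h <= h_i and (f < f_i or h < h_i))]
            filt.append((f, h))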

  15. Supervised and unsupervised discretization methods for evolutionary algorithms

    SciTech Connect

    Cantu-Paz, E

    2001-01-24

    This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs. We expect that the proposed algorithms that use multivariate models will scale up better to the dimensionality of the problem than existing EAs.
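
    The basic idea translates into very little code. The Python sketch below, an illustration rather than the paper's algorithm, builds an unsupervised equal-width discretization of each variable from the selected solutions and then samples new solutions directly from the resulting histogram model.

        import random

        def build_model(selected, n_bins, lo, hi):
            # Per-variable histogram over equal-width bins, estimated from the
            # selected solutions (Laplace-smoothed counts).
            dim = len(selected[0])
            width = [(hi[d] - lo[d]) / n_bins for d in range(dim)]
            model = [[1] * n_bins for _ in range(dim)]
            for x in selected:
                for d, v in enumerate(x):
                    b = min(int((v - lo[d]) / width[d]), n_bins - 1)
                    model[d][b] += 1
            return model

        def sample(model, lo, hi):
            # Draw a new solution directly from the model: pick a bin per
            # variable with probability proportional to its count, then a
            # uniform point inside that bin.
            x = []
            for d, counts in enumerate(model):
                width = (hi[d] - lo[d]) / len(counts)
                b = random.choices(range(len(counts)), weights=counts, k=1)[0]
                x.append(lo[d] + (b + random.random()) * width)
            return x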

  16. Evolutionary algorithms applied to reliable communication network design

    NASA Astrophysics Data System (ADS)

    Nesmachnow, Sergio; Cancela, Hector; Alba, Enrique

    2007-10-01

    Several evolutionary algorithms (EAs) applied to a wide class of communication network design problems modelled under the generalized Steiner problem (GSP) are evaluated. In order to provide a fault-tolerant design, a solution to this problem consists of a preset number of independent paths linking each pair of potentially communicating terminal nodes. This usually requires considering intermediate non-terminal nodes (Steiner nodes), which are used to ensure path redundancy, while trying to minimize the overall cost. The GSP is an NP-hard problem for which few algorithms have been proposed. This article presents a comparative study of pure and hybrid EAs applied to the GSP, codified over MALLBA, a general purpose library for combinatorial optimization. The algorithms were tested on several GSPs, and a set of efficient numerical results is reported for both serial and distributed models of the evaluated algorithms.

  17. Receiver diversity combining using evolutionary algorithms in Rayleigh fading channel.

    PubMed

    Akbari, Mohsen; Manesh, Mohsen Riahi; El-Saleh, Ayman A; Reza, Ahmed Wasif

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary-based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods.

  18. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary-based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods. PMID:25045725

  19. Comparing Evolutionary Programs and Evolutionary Pattern Search Algorithms: A Drug Docking Application

    SciTech Connect

    Hart, W.E.

    1999-02-10

    Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs converge more robustly to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.

  20. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
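
    A hedged sketch of the local selection scheme: agents accumulate energy from a shared environment, pay a constant living cost, reproduce when their energy crosses a fixed threshold (splitting energy with the child), and die when it runs out. The agent representation, constants, and env_energy stand-in for the finite shared resource are illustrative assumptions.

        THETA = 1.0          # reproduction threshold
        COST = 0.2           # energy spent per time step

        def local_selection_step(population, env_energy, mutate):
            # One generation of local selection: comparison is against the
            # fixed threshold THETA, never against other agents.
            next_pop = []
            for agent in population:
                agent["energy"] += env_energy(agent["solution"])
                agent["energy"] -= COST
                if agent["energy"] >= THETA:
                    # Reproduce locally; parent and child split the energy.
                    child = {"solution": mutate(agent["solution"]),
                             "energy": agent["energy"] / 2}
                    agent["energy"] /= 2
                    next_pop.extend([agent, child])
                elif agent["energy"] > 0:
                    next_pop.append(agent)
                # Agents with non-positive energy die silently.
            return next_pop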

  1. An evolutionary algorithm for large traveling salesman problems.

    PubMed

    Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan

    2004-08-01

    This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSP). The strengths and limitations of numerous well-known genetic operators are first analyzed, along with local search methods for TSPs from their solution qualities and mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA is proposed which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search, through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selections are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the numbers of cities range from 318 to 13509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000 and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can find tours robustly with a fixed small population and a limited family competition length in reasonable time, when used to solve large TSPs.

  2. Multi-objective Job Shop Rescheduling with Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Hao, Xinchang; Gen, Mitsuo

    In current manufacturing systems, production processes and management are involved in many unexpected events, and new requirements emerge constantly. This dynamic environment implies that operation rescheduling is usually indispensable. A wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches are usually derived with respect to simplified assumptions. As a consequence, these approaches might be inconsistent with the actual requirements in a real production environment, i.e., they are often unsuitable and inflexible to respond efficiently to the frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical application of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed, in which a random key-based representation and interactive adaptive-weight (i-awEA) fitness assignment are embedded. To verify its effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison results show that iAWGA-A is better than the weighted fitness method in terms of effectiveness and stability. Similarly, iAWGA-A also outperforms other well-known approaches such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).

  3. On replacement strategies in steady state evolutionary algorithms.

    PubMed

    Smith, Jim

    2007-01-01

    Steady State models of Evolutionary Algorithms are widely used, yet surprisingly little attention has been paid to the effects arising from different replacement strategies. This paper explores the use of mathematical models to characterise the selection pressures arising in a selection-only environment. The first part brings together models for the behaviour of seven different replacement mechanisms and provides expressions for various proposed indicators of Evolutionary Algorithm behaviour. Some of these have been derived elsewhere, and are included for completeness, but the majority are new to this paper. These theoretical indicators are used to compare the behaviour of the different strategies. The second part of this paper examines the practical relevance of these indicators as predictors for algorithms' relative performance in terms of optimisation time and reliability. It is not the intention of this paper to come up with a "one size fits all" recommendation for choice of replacement strategy. Although some strategies may have little to recommend them, the relative ranking of others is shown to depend on the intended use of the algorithm to be implemented, as reflected in the choice of performance metrics.
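
    Three of the replacement mechanisms such studies compare can be stated in a few lines each; the Python below is illustrative (pop is a list of (genome, fitness) pairs), not the paper's exact formulations.

        import random

        def replace_worst(pop, child):
            # Replace the least-fit member, here with conditional acceptance.
            worst = min(range(len(pop)), key=lambda i: pop[i][1])
            if child[1] >= pop[worst][1]:
                pop[worst] = child

        def replace_oldest(pop, child):
            # FIFO replacement: pop is kept in insertion order.
            pop.pop(0)
            pop.append(child)

        def replace_random(pop, child):
            # Uniform-random replacement applies no selection pressure at all.
            pop[random.randrange(len(pop))] = child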

  4. Reservoir operation using a robust evolutionary optimization algorithm.

    PubMed

    Al-Jawad, Jafar Y; Tanyimboh, Tiku T

    2017-04-06

    In this research, a significant improvement in reservoir operation was achieved using a state-of-the-art evolutionary algorithm named Borg MOEA. A real-world multipurpose dam was used to test the algorithm's performance, and the target of the reservoir operation policy was to fulfil downstream water demands in drought condition while maintaining a sustainable quantity of water in the reservoir for the next year. The reservoir's performance was improved by increasing the maximum reservoir storage by 14.83 million m³. Furthermore, sustainable water storage in the reservoir was achieved for the next year, for the simulated low flow condition considered, while the total annual imbalance between the monthly reservoir releases and water demands was reduced by 64.7%. The algorithm converged quickly and reliably, and consistently good results were obtained. The methodology and results will be useful to decision makers and water managers for setting the policy to manage the reservoir efficiently and sustainably.

  5. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
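
    For context, the baseline the new method is compared against, the direct stochastic simulation algorithm (Gillespie's direct method), fits in a short sketch; the accelerated algorithm itself is not reproduced here. The events/state representation is an illustrative assumption.

        import math
        import random

        def gillespie_direct(events, state, t_end):
            # Direct SSA: draw an exponential waiting time from the total rate,
            # then pick one birth/death/mutation event with probability
            # proportional to its propensity.
            # events: list of (propensity_fn, update_fn); state: mutable dict.
            t = 0.0
            while t < t_end:
                props = [prop(state) for prop, _ in events]
                total = sum(props)
                if total == 0.0:
                    break  # no event can fire any more
                t += -math.log(1.0 - random.random()) / total
                r, acc = random.random() * total, 0.0
                for p, (_, update) in zip(props, events):
                    acc += p
                    if r < acc:
                        update(state)  # apply the chosen event
                        break
            return state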

  6. Evolutionary Algorithms for Boolean Functions in Diverse Domains of Cryptography.

    PubMed

    Picek, Stjepan; Carlet, Claude; Guilley, Sylvain; Miller, Julian F; Jakobovic, Domagoj

    2016-01-01

    The role of Boolean functions is prominent in several areas including cryptography, sequences, and coding theory. Therefore, various methods for the construction of Boolean functions with desired properties are of direct interest. New motivations on the role of Boolean functions in cryptography with attendant new properties have emerged over the years. There are still many combinations of design criteria left unexplored and in this matter evolutionary computation can play a distinct role. This article concentrates on two scenarios for the use of Boolean functions in cryptography. The first uses Boolean functions as the source of the nonlinearity in filter and combiner generators. Although relatively well explored using evolutionary algorithms, it still presents an interesting goal in terms of the practical sizes of Boolean functions. The second scenario appeared rather recently where the objective is to find Boolean functions that have various orders of the correlation immunity and minimal Hamming weight. In both these scenarios we see that evolutionary algorithms are able to find high-quality solutions where genetic programming performs the best.
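
    One fitness ingredient such searches rely on is directly computable: the nonlinearity of a Boolean function via the fast Walsh-Hadamard transform. The Python below is a standard textbook computation, shown as a plausible fitness component rather than the authors' exact setup.

        def walsh_hadamard(signs):
            # In-place fast Walsh-Hadamard transform of a +/-1 truth table
            # (length must be a power of two).
            w = list(signs)
            h = 1
            while h < len(w):
                for i in range(0, len(w), 2 * h):
                    for j in range(i, i + h):
                        w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
                h *= 2
            return w

        def nonlinearity(bits):
            # Nonlinearity of an n-variable Boolean function from its 0/1 truth
            # table: 2^(n-1) - max|W_f|/2.
            signs = [1 - 2 * b for b in bits]   # 0 -> +1, 1 -> -1
            w = walsh_hadamard(signs)
            return len(bits) // 2 - max(abs(v) for v in w) // 2

        # nonlinearity([0, 1, 1, 0]) -> 0 (XOR is affine on two variables)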

  7. Comparison of Model-Based Segmentation Algorithms for Color Images.

    DTIC Science & Technology

    1987-03-01

    image. Hunt and Kubler [Ref. 3] found that for image restoration, Karhunen-Loève transformation followed by single channel image processing worked...Algorithm for Segmentation of Multichannel Images. M.S. Thesis, Naval Postgraduate School, Monterey, California, December 1993. 3. Hunt, B.R., Kubler 0

  8. Evolving connectivity between genetic oscillators and switches using evolutionary algorithms.

    PubMed

    Thomas, Spencer Angus; Jin, Yaochu

    2013-06-01

    Although hypothesised, there has been little investigation into how complex gene regulatory networks can evolve from simple regulatory motifs through modularisation, duplication and specialisation processes. In order to simulate natural evolution in a computational environment, we evolve the connection between a genetic oscillator and a toggle switch motif using an evolutionary algorithm. We observe a connectivity preference between the motifs that is dependent on the coupling arrangement rather than on the objective set-up. In addition, our results indicate the existence of a threshold in the connection parameters for the resulting dynamics for a specific coupling arrangement and objective set-up. We demonstrate that simple motifs can successfully be coupled through artificial evolution to form more complex, modular regulatory networks. These findings support, in principle, the above-mentioned hypothesis on evolutionary mechanisms in biological systems.

  9. [A new algorithm for NIR modeling based on manifold learning].

    PubMed

    Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan

    2009-07-01

    Manifold learning is a new kind of algorithm originating from the field of machine learning to find the intrinsic dimensionality of numerous and complex data and to extract the most important information from the raw data to develop a regression or classification model. The basic assumption of manifold learning is that high-dimensional data measured from the same object using some devices must reside on a manifold with much lower dimensions determined by a few properties of the object. While NIR spectra are characterized by their high dimensions and complicated band assignment, the authors may assume that the NIR spectra of the same kind of substances with different chemical concentrations should reside on a manifold with much lower dimensions determined by the concentrations, according to the above assumption. As one of the best-known algorithms of manifold learning, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear. So, every data point in the manifold should be a linear combination of its neighbors. Based on the above assumptions, the present paper proposes a new algorithm named least square locally weighted regression (LS-LWR), which is a kind of LWR with weights determined by least squares instead of a predefined function. Then, the NIR spectra of glucose solutions with various concentrations are measured using a NIR spectrometer and LS-LWR is verified by predicting the concentrations of glucose solutions quantitatively. Compared with existing algorithms such as principal component regression (PCR) and partial least squares regression (PLSR), LS-LWR has better predictability, measured by the standard error of prediction (SEP), and generates an elegant model with good stability and efficiency.
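
    For orientation, the family LS-LWR belongs to can be sketched briefly: fit a weighted linear model on the calibration spectra nearest the query and predict from it. In the baseline below the weights come from a predefined inverse-distance function; the paper's contribution is precisely to determine them by least squares instead, which is not reproduced here. The sketch is numpy-based and illustrative only.

        import numpy as np

        def locally_weighted_predict(X, y, x_query, k=15):
            # X: (n, p) calibration spectra; y: (n,) concentrations.
            d = np.linalg.norm(X - x_query, axis=1)
            idx = np.argsort(d)[:k]                  # local neighbourhood
            w = 1.0 / (d[idx] + 1e-12)               # predefined weights
            Xl = np.hstack([np.ones((k, 1)), X[idx]])  # add intercept column
            sw = np.sqrt(w)
            # Weighted least squares via sqrt-weight row scaling.
            beta, *_ = np.linalg.lstsq(Xl * sw[:, None], y[idx] * sw, rcond=None)
            return np.hstack([1.0, x_query]) @ beta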

  10. Comparison of evolutionary algorithms in gene regulatory network model inference

    PubMed Central

    2010-01-01

    Background The evolution of high throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. Results This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. Conclusions Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established. PMID:20105328

  11. Comparison of evolutionary algorithms in gene regulatory network model inference.

    PubMed

    Sîrbu, Alina; Ruskin, Heather J; Crane, Martin

    2010-01-27

    The evolution of high throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.

  12. The algorithmic anatomy of model-based evaluation.

    PubMed

    Daw, Nathaniel D; Dayan, Peter

    2014-11-05

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review.

  13. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820

  14. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.

    PubMed

    Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil

    2006-06-01

    In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feedforward artificial neural network, and we discuss several variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and to find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems, but they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensionality is large and time complexity is critical; hence a hybrid model is a suitable option. In this paper we propose different fusion strategies for hybrid models that combine the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.

  15. Score-based resampling method for evolutionary algorithms.

    PubMed

    Park, Jonghwan; Jeon, Moongu; Pedrycz, Witold

    2008-10-01

    In this paper, a gene-handling method for evolutionary algorithms (EAs) is proposed. Such algorithms are characterized by a nonanalytic optimization process when dealing with complex systems in which multiple behavioral responses occur in the realization of intelligent tasks. In generic EAs that optimize internal parameters of a given system, evaluation and selection are performed at the chromosome level. When a surviving chromosome includes noneffective genes, the solution can become trapped in a local optimum during evolution, which increases the uncertainty of the results and reduces the quality of the overall system. This phenomenon also results in an unbalanced performance of partial behaviors. To alleviate this problem, a score-based resampling method is proposed, in which a score function of a gene is introduced as a criterion for handling genes in each allele. The proposed method was empirically evaluated with various test functions, and the results show its effectiveness.

  16. Using Entropy for Parameter Analysis of Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Smit, Selmar K.; Eiben, Agoston E.

    Evolutionary algorithms (EAs) form a rich class of stochastic search methods that share the basic principles of incrementally improving the quality of a set of candidate solutions by means of variation and selection (Eiben and Smith 2003, De Jong 2006). Such variation and selection operators often require parameters to be specified. Finding a good set of parameter values is a nontrivial problem in itself. Furthermore, some EA parameters are more relevant than others in the sense that choosing different values for them affects EA performance more than it does for the other parameters. In this chapter we explain the notion of entropy and discuss how entropy can disclose important information about EA parameters, in particular about their relevance. We describe an algorithm that is able to estimate the entropy of EA parameters, and we present a case study, based on extensive experimentation, to demonstrate the usefulness of this approach and some interesting insights that are gained.
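
    To make the entropy idea concrete, the following is a minimal sketch (not Smit and Eiben's actual estimator): parameter values sampled from successful runs are discretised into bins over the full tested range and their Shannon entropy is computed, so a tightly clustered parameter (low entropy) reads as highly relevant while a broadly spread one (high entropy) reads as tolerant. All data and names here are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(values, lo, hi, bins=8):
    """Shannon entropy (in bits) of sampled parameter values, discretised
    into equal-width bins over the full tested range [lo, hi]."""
    width = (hi - lo) / bins
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical values taken from runs that reached a target fitness.
good_mutation_rates = [0.010, 0.012, 0.011, 0.013, 0.012, 0.010]  # tested range [0, 0.5]
good_population_sizes = [20, 180, 75, 240, 55, 130]               # tested range [10, 250]
print(shannon_entropy(good_mutation_rates, 0.0, 0.5))   # low entropy: relevant, sensitive
print(shannon_entropy(good_population_sizes, 10, 250))  # high entropy: tolerant
```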

  17. Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.

    PubMed

    Friedrich, Tobias; Neumann, Frank

    2015-01-01

    Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a (1/(k + δ))-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
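
    As a concrete illustration of the simplest algorithm analysed, here is a minimal (1+1) EA maximising a monotone submodular coverage function under a uniform cardinality constraint; the toy instance and the rule of rejecting infeasible offspring are illustrative assumptions, not the paper's exact setup.

```python
import random

# Toy monotone submodular objective: coverage of a ground set by chosen sets.
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}, 4: {7}}
B = 2  # uniform cardinality constraint: at most B sets may be chosen

def coverage(bits):
    covered = set()
    for i, b in enumerate(bits):
        if b:
            covered |= SETS[i]
    return len(covered)

def one_plus_one_ea(n, steps=5000, seed=1):
    rng = random.Random(seed)
    x = [0] * n                  # start from the empty (feasible) solution
    fx = coverage(x)
    for _ in range(steps):
        # Standard bit mutation: flip each bit independently with prob. 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = coverage(y)
        if sum(y) <= B and fy >= fx:   # reject infeasible offspring, accept ties
            x, fx = y, fy
    return x, fx

print(one_plus_one_ea(len(SETS)))  # expect sets {0, 2}: 6 elements covered
```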

  18. Sequence-Based Deterministic Initialization for Evolutionary Algorithms.

    PubMed

    Elsayed, Saber; Sarker, Ruhul; Coello Coello, Carlos A

    2016-12-01

    It is well known that the performance of evolutionary algorithms is influenced by the quality of their initial populations. Over the years, many different techniques for generating an initial population that uniformly covers as much of the search space as possible have been proposed. However, none of these approaches considers any information from the function that is to be optimized using that population. In this paper, a new initialization technique, which can be considered a heuristic space-filling approach based on both the function to be optimized and the search space, is proposed. It was tested on two well-known unconstrained sets of benchmark problems using several computational intelligence algorithms. The results obtained reflected its benefits, as the performances of all these algorithms were significantly improved compared with those of the same algorithms using currently available initialization techniques. The new technique also proved its capability to provide useful information about the function's behavior and, for some test problems, the initial population produced high-quality solutions. This method was also tested on a few multiobjective problems, with the results demonstrating its benefits.

  19. Multimodal optimization using a bi-objective evolutionary algorithm.

    PubMed

    Deb, Kalyanmoy; Saha, Amit

    2012-01-01

    In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can gain better knowledge about the different optimal solutions in the search space and, as and when needed, switch the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, there has been a steady stream of new algorithms. Most of these methodologies employ a niching scheme in an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are deemphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With the modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved test problems with up to 16 variables and as many as 48 optimal solutions, and for the first time suggested multimodal constrained test problems that are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems is novel and interesting and, more importantly, opens up further avenues for research and application.

  20. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    NASA Astrophysics Data System (ADS)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision-making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at a stream location may degrade model predictions for sediments and/or nutrients at the same location or at other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were reconciled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination, and Nash-Sutcliffe efficiency coefficient for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
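
    For reference, the error statistics named above are short to state in code. The sketch below shows RMSE and the Nash-Sutcliffe efficiency as they would be used as calibration objectives; the streamflow numbers are made up for illustration.

```python
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

# Hypothetical daily streamflow (m^3/s): observed vs. simulated
obs = [3.1, 4.0, 9.5, 7.2, 5.0, 4.4]
sim = [2.8, 4.3, 8.1, 7.9, 5.2, 4.0]
print(rmse(obs, sim), nash_sutcliffe(obs, sim))
# A single-objective calibrator (e.g., SCE-UA) would maximise NSE for one output,
# while NSGA-II would trade off NSE for flow against NSE for sediment or nutrients.
```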

  1. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  2. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    PubMed

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
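
    The core loop of the strategy can be sketched as follows. This is not the authors' NBGA; for self-containment a 1-nearest-neighbour surrogate stands in for their progressively trained neural network, and the objective is a cheap stand-in for a simulation or experiment. The surrogate pre-screens several candidate offspring so that only the most promising one receives a true (expensive) evaluation, which then enlarges the surrogate's training archive.

```python
import numpy as np

def true_fitness(x):
    """Stand-in for an expensive simulation or experiment."""
    return -float(np.sum((x - 0.3) ** 2))

def surrogate_predict(archive_X, archive_y, x):
    """1-nearest-neighbour surrogate (substituting for the paper's neural net)."""
    return archive_y[np.argmin(np.linalg.norm(archive_X - x, axis=1))]

rng = np.random.default_rng(0)
pop = [rng.random(4) for _ in range(6)]
archive_X = np.array(pop)
archive_y = np.array([true_fitness(x) for x in pop])

for _ in range(20):
    new_pop = []
    for x in pop:
        # Several cheap candidates per slot; the surrogate ranks them...
        cands = [np.clip(x + rng.normal(0, 0.1, x.size), 0, 1) for _ in range(5)]
        best = max(cands, key=lambda c: surrogate_predict(archive_X, archive_y, c))
        # ...and only the surrogate's favourite gets a true evaluation.
        f = true_fitness(best)
        archive_X = np.vstack([archive_X, best])
        archive_y = np.append(archive_y, f)
        new_pop.append(best)
    pop = new_pop

print(max(true_fitness(x) for x in pop))  # approaches 0 as the search converges
```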

  3. A hierarchical evolutionary algorithm for multiobjective optimization in IMRT

    PubMed Central

    Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark H.

    2010-01-01

    Purpose: The current inverse planning methods for intensity modulated radiation therapy (IMRT) are limited because they are not designed to explore the trade-offs between the competing objectives of tumor and normal tissues. The goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: A hierarchical evolutionary multiobjective algorithm designed to quickly generate a small diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the optimal trade-offs in any radiation therapy plan was developed. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. The population size is not fixed, but a specialized niche effect, domination advantage, is used to control the population and plan diversity. The number of fitness objectives is kept to a minimum for greater selective pressure, but the number of genes is expanded for flexibility that allows a better approximation of the Pareto front. Results: The MOEA improvements were evaluated for two example prostate cases with one target and two organs at risk (OARs). The population of plans generated by the modified MOEA was closer to the Pareto front than populations of plans generated using a standard genetic algorithm package. Statistical significance of the method was established by compiling the results of 25 multiobjective optimizations using each method. From these sets of 12–15 plans, any random plan selected from a MOEA

  4. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian s efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. The first stage (from July 2003 to June 2004) Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. The second stage (from July 2004 to February 2005) Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian s numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  5. Optimum oil production planning using infeasibility driven evolutionary algorithm.

    PubMed

    Singh, Hemant Kumar; Ray, Tapabrata; Sarker, Ruhul

    2013-01-01

    In this paper, we discuss a practical oil production planning optimization problem. For oil wells with insufficient reservoir pressure, gas is usually injected to artificially lift oil, a practice commonly referred to as enhanced oil recovery (EOR). The total gas that can be used for oil extraction is constrained by daily availability limits. The oil extracted from each well is known to be a nonlinear function of the gas injected into the well and varies between wells. The problem is to identify the optimal amount of gas that needs to be injected into each well to maximize the amount of oil extracted subject to the constraint on the total daily gas availability. The problem has long been of practical interest to all major oil exploration companies as it has the potential to derive large financial benefit. In this paper, an infeasibility driven evolutionary algorithm is used to solve a 56 well reservoir problem which demonstrates its efficiency in solving constrained optimization problems. Furthermore, a multi-objective formulation of the problem is posed and solved using a number of algorithms, which eliminates the need for solving the (single objective) problem on a regular basis. Lastly, a modified single objective formulation of the problem is also proposed, which aims to maximize the profit instead of the quantity of oil. It is shown that even with a lesser amount of oil extracted, more economic benefits can be achieved through the modified formulation.
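
    A minimal sketch of the underlying single-objective problem follows: hypothetical saturating gas-to-oil response curves per well, a repair operator that scales injections back to the daily gas budget, and a plain (mu+lambda)-style loop (the paper's infeasibility-driven EA handles constraints quite differently). All parameters and units are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
N_WELLS, GAS_BUDGET = 8, 100.0
a = rng.uniform(5, 15, N_WELLS)      # hypothetical per-well response parameters
b = rng.uniform(0.05, 0.2, N_WELLS)

def oil_rate(gas):
    """Nonlinear, saturating oil response to injected gas (illustrative)."""
    return float(np.sum(a * (1.0 - np.exp(-b * gas))))

def repair(gas):
    """Scale injections down so the total respects the daily gas budget."""
    total = gas.sum()
    return gas * (GAS_BUDGET / total) if total > GAS_BUDGET else gas

pop = [repair(rng.uniform(0, 30, N_WELLS)) for _ in range(40)]
for _ in range(300):
    children = [repair(np.abs(p + rng.normal(0, 2, N_WELLS))) for p in pop]
    pop = sorted(pop + children, key=oil_rate, reverse=True)[:40]

best = pop[0]
print(round(oil_rate(best), 2), "units of oil from", round(best.sum(), 1), "units of gas")
```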

  6. Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.

    PubMed

    Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu

    2016-12-13

    The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases; the Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have been proved to be performance-guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there has been no theoretical work on the performance of EAs for STP. In this paper, we reveal that the (1 + 1) EA achieves a 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime O(r(r + s - 1) · w_max), where r, s, and w_max are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1 + 1) EA is better than two other heuristics on two GSTP instances, and that the (1 + 1) EA may be inefficient on a constructed GSTP instance.
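
    To make the analysed setting concrete, here is a minimal (1+1) EA on a toy graph-Steiner-tree instance: a bit string selects Steiner nodes, and the fitness is the weight of a minimum spanning tree over the terminals plus the selected Steiner nodes (infinite if they do not connect). The instance and parameters are invented for illustration.

```python
import math
import random

INF = math.inf
# Symmetric weight matrix; nodes 0-2 are terminals, 3-5 are Steiner nodes.
W = [
    [0,   INF, INF, 1,   4,   INF],
    [INF, 0,   INF, 1,   INF, 4],
    [INF, INF, 0,   1,   INF, INF],
    [1,   1,   1,   0,   2,   2],
    [4,   INF, INF, 2,   0,   INF],
    [INF, 4,   INF, 2,   INF, 0],
]
TERMINALS, STEINER = [0, 1, 2], [3, 4, 5]

def mst_weight(nodes):
    """Prim's algorithm on the induced subgraph; INF if disconnected."""
    in_tree, total = {nodes[0]}, 0.0
    while len(in_tree) < len(nodes):
        w, v = min(((W[u][x], x) for u in in_tree for x in nodes
                    if x not in in_tree), key=lambda t: t[0])
        if w == INF:
            return INF
        in_tree.add(v)
        total += w
    return total

def fitness(bits):  # lower is better
    return mst_weight(TERMINALS + [s for s, b in zip(STEINER, bits) if b])

rng = random.Random(0)
x = [rng.random() < 0.5 for _ in STEINER]
fx = fitness(x)
for _ in range(2000):
    y = [b ^ (rng.random() < 1.0 / len(STEINER)) for b in x]
    fy = fitness(y)
    if fy <= fx:
        x, fx = y, fy
print(x, fx)  # expect Steiner node 3 alone, giving MST weight 3
```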

  7. The hierarchical fair competition (HFC) framework for sustainable evolutionary algorithms.

    PubMed

    Hu, Jianjun; Goodman, Erik; Seo, Kisung; Fan, Zhun; Rosenberg, Rondal

    2005-01-01

    Many current Evolutionary Algorithms (EAs) suffer from a tendency to converge prematurely or stagnate without progress for complex problems. This may be due to the loss of or failure to discover certain valuable genetic material or the loss of the capability to discover new genetic material before convergence has limited the algorithm's ability to search widely. In this paper, the Hierarchical Fair Competition (HFC) model, including several variants, is proposed as a generic framework for sustainable evolutionary search by transforming the convergent nature of the current EA framework into a non-convergent search process. That is, the structure of HFC does not allow the convergence of the population to the vicinity of any set of optimal or locally optimal solutions. The sustainable search capability of HFC is achieved by ensuring a continuous supply and the incorporation of genetic material in a hierarchical manner, and by culturing and maintaining, but continually renewing, populations of individuals of intermediate fitness levels. HFC employs an assembly-line structure in which subpopulations are hierarchically organized into different fitness levels, reducing the selection pressure within each subpopulation while maintaining the global selection pressure to help ensure the exploitation of the good genetic material found. Three EAs based on the HFC principle are tested - two on the even-10-parity genetic programming benchmark problem and a real-world analog circuit synthesis problem, and another on the HIFF genetic algorithm (GA) benchmark problem. The significant gain in robustness, scalability and efficiency by HFC, with little additional computing effort, and its tolerance of small population sizes, demonstrates its effectiveness on these problems and shows promise of its potential for improving other existing EAs for difficult problems. A paradigm shift from that of most EAs is proposed: rather than trying to escape from local optima or delay convergence at a

  8. GX-Means: A model-based divide and merge algorithm for geospatial image clustering

    SciTech Connect

    Vatsavai, Raju; Symons, Christopher T; Chandola, Varun; Jun, Goo

    2011-01-01

    One of the practical issues in clustering is the specification of the appropriate number of clusters, which is not obvious when analyzing geospatial datasets, partly because they are huge (both in size and spatial extent) and high dimensional. In this paper we present a computationally efficient model-based split and merge clustering algorithm that incrementally finds model parameters and the number of clusters. Additionally, we attempt to provide insights into this problem and other data mining challenges that are encountered when clustering geospatial data. The basic algorithm we present is similar to the G-means and X-means algorithms; however, our proposed approach avoids certain limitations of these well-known clustering algorithms that are pertinent when dealing with geospatial data. We compare the performance of our approach with the G-means and X-means algorithms. Experimental evaluation on simulated data and on multispectral and hyperspectral remotely sensed image data demonstrates the effectiveness of our algorithm.
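
    The split decision at the heart of X-means-style algorithms is a model-selection test, which can be sketched with a Bayesian information criterion (BIC) comparison; scikit-learn's GaussianMixture is used here for brevity. This is only the divide step in its simplest form; the paper's GX-means adds merge steps and safeguards specific to geospatial data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def should_split(X):
    """Split a cluster if a 2-component Gaussian model has lower BIC
    than a 1-component one (lower BIC = better fit after complexity penalty)."""
    bic1 = GaussianMixture(n_components=1, random_state=0).fit(X).bic(X)
    bic2 = GaussianMixture(n_components=2, random_state=0).fit(X).bic(X)
    return bic2 < bic1

rng = np.random.default_rng(3)
one_blob = rng.normal(0, 1, size=(200, 2))
two_blobs = np.vstack([rng.normal(-3, 1, size=(100, 2)),
                       rng.normal(+3, 1, size=(100, 2))])
print(should_split(one_blob))   # expected: False
print(should_split(two_blobs))  # expected: True
```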

  9. An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy.

    PubMed

    Zhang, Yu-Xian; Qian, Xiao-Yi; Peng, Hui-Deng; Wang, Jian-Hui

    2016-01-01

    To improve the convergence rate and prevent premature convergence in quantum evolutionary algorithms, an allele real-coded quantum evolutionary algorithm based on a hybrid updating strategy is presented. The real variables are coded with a probability superposition of alleles. A hybrid updating strategy balancing global search and local search is presented, in which the superior allele is defined. On the basis of the superior allele and the inferior allele, a guided evolutionary process as well as updating of alleles with variable-scale contraction is adopted, and an Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of the proposed algorithm is proved via Markov chains. Finally, the proposed algorithm is compared with a genetic algorithm, a quantum evolutionary algorithm, and a double-chain quantum genetic algorithm on continuous optimization problems, and the experimental results verify its advantages in convergence rate and search accuracy.

  10. Weapon Release Scheduling from Multiple-Bay Aircraft using Multi-Objective Evolutionary Algorithms

    DTIC Science & Technology

    2005-03-01

  11. The concept of ageing in evolutionary algorithms: Discussion and inspirations for human ageing.

    PubMed

    Dimopoulos, Christos; Papageorgis, Panagiotis; Boustras, George; Efstathiades, Christodoulos

    2017-02-04

    This paper discusses the concept of ageing as this applies to the operation of Evolutionary Algorithms, and examines its relationship to the concept of ageing as this is understood for human beings. Evolutionary Algorithms constitute a family of search algorithms which base their operation on an analogy from the evolution of species in nature. The paper initially provides the necessary knowledge on the operation of Evolutionary Algorithms, focusing on the use of ageing strategies during the implementation of the evolutionary process. Background knowledge on the concept of ageing, as this is defined scientifically for biological systems, is subsequently presented. Based on this information, the paper provides a comparison between the two ageing concepts, and discusses the philosophical inspirations which can be drawn for human ageing based on the operation of Evolutionary Algorithms.

  12. A multiagent evolutionary algorithm for constraint satisfaction problems.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2006-02-01

    With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs with n for n-queens problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queens problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
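
    The minimum conflict idea is closely related to the classic min-conflicts heuristic, which the sketch below applies to n-queens: repeatedly pick a conflicted column and move its queen to the row with the fewest conflicts. This is a generic illustration of conflict-minimising search, not MAEA-CSPs itself.

```python
import random

def min_conflicts_nqueens(n, max_steps=20000, seed=0):
    """One queen per column; repeatedly repair a conflicted column by moving
    its queen to the least-conflicted row."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not bad:
            return rows                      # a solution: no attacking pair
        col = rng.choice(bad)
        rows[col] = min(range(n), key=lambda r: conflicts(col, r))
    return None                              # give up after max_steps

print(min_conflicts_nqueens(40))
```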

  13. Evaluation of a commercial Model Based Iterative reconstruction algorithm in computed tomography.

    PubMed

    Paruccini, Nicoletta; Villa, Raffaele; Pasquali, Claudia; Spadavecchia, Chiara; Baglivi, Antonia; Crespi, Andrea

    2017-06-02

    Iterative reconstruction algorithms have been introduced into clinical practice to obtain dose reductions without compromising diagnostic performance. The aim was to investigate the commercial Model Based IMR algorithm in terms of patient dose and image quality, with standard Fourier and alternative metrics. A Catphan phantom, a commercial density phantom, and a cylindrical water-filled phantom were scanned, varying both CTDIvol and reconstruction thickness. Images were then reconstructed with Filtered Back Projection (FBP) and with both statistical (iDose) and Model Based (IMR) iterative reconstruction algorithms. Spatial resolution was evaluated with the Modulation Transfer Function and the Target Transfer Function. Noise reduction was investigated with the Standard Deviation, and its behaviour was analysed with the 3D and 2D Noise Power Spectrum. Blur and Low Contrast Detectability were also investigated, and patient dose indexes were collected and analysed. All image quality results were compared to standard FBP reconstructions. Model Based IMR significantly improves the Modulation Transfer Function, with an increase between 12% and 64%. Target Transfer Function curves confirm this trend for high-density objects, while Blur shows a sharpness reduction for low-density details. Model Based IMR yields a noise reduction between 44% and 66% and a change in noise power spectrum behaviour. Low Contrast Detectability curves show an average improvement of 35-45%; these results are compatible with an achievable reduction of 50% in CTDIvol. A dose reduction between 25% and 35% is confirmed by median values of CTDIvol. IMR produces an improvement in image quality and a dose reduction.

  14. Optimizing quantum gas production by an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Lausch, T.; Hohmann, M.; Kindermann, F.; Mayer, D.; Schmidt, F.; Widera, A.

    2016-05-01

    We report on the application of an evolutionary algorithm (EA) to enhance performance of an ultra-cold quantum gas experiment. The production of a 87Rb Bose-Einstein condensate (BEC) can be divided into fundamental cooling steps, specifically magneto-optical trapping of cold atoms, loading of atoms to a far-detuned crossed dipole trap, and finally the process of evaporative cooling. The EA is applied separately for each of these steps with a particular definition for the feedback, the so-called fitness. We discuss the principles of an EA and implement an enhancement called differential evolution. Analyzing the reasons for the EA to improve, e.g., the atomic loading rates and increase the BEC phase-space density, yields an optimal parameter set for the BEC production and enables us to reduce the BEC production time significantly. Furthermore, we focus on how additional information about the experiment and optimization possibilities can be extracted and how the correlations revealed allow for further improvement. Our results illustrate that EAs are powerful optimization tools for complex experiments and exemplify that the application yields useful information on the dependence of these experiments on the optimized parameters.
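
    For readers unfamiliar with the enhancement mentioned, a minimal DE/rand/1/bin loop looks as follows. The objective here is a cheap stand-in; in the experiment each evaluation would be one run of the cooling sequence with the fitness measured on the apparatus. SciPy also ships a ready-made scipy.optimize.differential_evolution for offline use.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin minimiser over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)       # differential mutation
            cross = rng.random(dim) < CR                    # binomial crossover
            cross[rng.integers(dim)] = True                 # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], float(fit.min())

# Toy stand-in objective; a real run would, e.g., maximise BEC atom number.
best_x, best_f = differential_evolution(lambda x: float(np.sum((x - 0.5) ** 2)),
                                        bounds=[(0, 1)] * 4)
print(best_x, best_f)
```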

  15. Courses of action for effects based operations using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Haider, Sajjad; Levis, Alexander H.

    2006-05-01

    This paper presents an Evolutionary Algorithm (EA) based approach to identifying effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic, uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specify the cause-and-effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial-and-error-based techniques, which are not only labor intensive but also produce sub-optimal results and are not capable of modeling constraints among actionable events. The EA-based approach presented in this paper aims to overcome these limitations. The approach generates multiple COAs that are close in terms of achieving the desired effect; the purpose of generating multiple COAs is to give several alternatives to a decision maker. Moreover, the alternative COAs can be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.

  16. A Novel Diversity-Based Replacement Strategy for Evolutionary Algorithms.

    PubMed

    Segura, Carlos; Coello Coello, Carlos A; Segredo, Eduardo; Aguirre, Arturo Hernandez

    2016-12-01

    Premature convergence is one of the best-known drawbacks that affects the performance of evolutionary algorithms. An alternative for dealing with this problem is to explicitly try to maintain proper diversity. In this paper, a new replacement strategy that preserves useful diversity is presented. The novelty of our method is that it combines the idea of transforming a single-objective problem into a multiobjective one, by considering diversity as an explicit objective, with the idea of adapting the balance induced between exploration and exploitation to the various optimization stages. Specifically, in the initial phases, larger amounts of diversity are accepted. The diversity measure considered in this paper is based on calculating distances to the closest surviving individual. Analyses with a multimodal function better justify the design decisions and provide greater insight into the working operation of the proposal. Computational results with a packing problem that was proposed in a popular contest illustrate the usefulness of the proposal. The new method significantly improves on the best results known to date for this problem and compares favorably against a large number of state-of-the-art schemes.
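
    The replacement idea can be sketched in a few lines: always keep the best individual, then greedily admit the fittest candidates whose distance to the closest survivor exceeds a threshold. In the paper the threshold is adapted over the run (large early, small late); here it is a fixed assumed constant.

```python
import numpy as np

def diversity_replacement(candidates, fitness, k, d_min):
    """Pick k survivors from parents+offspring, preserving diversity via a
    distance-to-closest-survivor test (threshold d_min is assumed fixed here)."""
    order = np.argsort(fitness)[::-1]              # best first (maximisation)
    survivors = [candidates[order[0]]]
    for i in order[1:]:
        if len(survivors) == k:
            break
        d = min(np.linalg.norm(candidates[i] - s) for s in survivors)
        if d > d_min:                              # admit only sufficiently novel ones
            survivors.append(candidates[i])
    for i in order:                                # fill any remaining slots by fitness
        if len(survivors) == k:
            break
        if not any(np.array_equal(candidates[i], s) for s in survivors):
            survivors.append(candidates[i])
    return survivors

rng = np.random.default_rng(7)
cand = rng.random((10, 2))
fit = -np.sum((cand - 0.5) ** 2, axis=1)
for s in diversity_replacement(cand, fit, k=4, d_min=0.2):
    print(s)
```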

  17. Multi-objective tag SNPs selection using evolutionary algorithms.

    PubMed

    Ting, Chuan-Kang; Lin, Wei-Ting; Huang, Yao-Ting

    2010-06-01

    Integrated analysis of single nucleotide polymorphisms (SNPs) and structure variations showed that the extent of linkage disequilibrium is common across different types of genetic variants. A subset of SNPs (called tag SNPs) is sufficient for capturing alleles of bi-allelic and even multi-allelic variants. However, accuracy and power of tag SNPs are affected by several factors, including genotyping failure, errors and tagging bias of certain alleles. In addition, different sets of tag SNPs should be selected for fulfilling requirements of various genotyping platforms and projects. This study formulates the problem of selecting tag SNPs into a four-objective optimization problem that minimizes the total amount of tag SNPs, maximizes tolerance for missing data, enlarges and balances detection power of each allele class. To resolve this problem, we propose evolutionary algorithms incorporated with greedy initialization to find non-dominated solutions considering all objectives simultaneously. This method provides users with great flexibility to extract different sets of tag SNPs for different platforms and scenarios (e.g. up to 100 tags and 10% missing rate). Compared to conventional methods, our method explores larger search space and requires shorter convergence time. Experimental results revealed strong and weak conflicts among these objectives. In particular, a small number of additional tag SNPs can provide sufficient tolerance and balanced power given the low missing and error rates of today's genotyping platforms. The software is freely available at Bioinformatics online and http://cilab.cs.ccu.edu.tw/service_dl.html.

  18. EVO—Evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Bahmann, Silvia; Kortus, Jens

    2013-06-01

    We present EVO—an evolution strategy designed for crystal structure search and prediction. The concept and main features of biological evolution, such as the creation of diversity and survival of the fittest, have been transferred to crystal structure prediction. EVO successfully demonstrates its applicability by finding the crystal structures of the elements of the third main group with their different space groups. For this we used the number of atoms in the conventional cell and multiples of it. Running EVO with different numbers of carbon atoms per unit cell yields graphite as the lowest-energy structure as well as a diamond-like structure, both in one run. Our implementation also supports the search for 2D structures and was able to find a boron sheet with structural features so far not considered in the literature.
    Program summary:
    Program title: EVO
    Catalogue identifier: AEOZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 23488
    No. of bytes in distributed program, including test data, etc.: 1830122
    Distribution format: tar.gz
    Programming language: Python
    Computer: No limitations known
    Operating system: Linux
    RAM: Negligible compared to the requirements of the electronic structure programs used
    Classification: 7.8
    External routines: Quantum ESPRESSO (http://www.quantum-espresso.org/), GULP (https://projects.ivec.org/gulp/)
    Nature of problem: Crystal structure search is a global optimisation problem in 3N+3 dimensions, where N is the number of atoms in the unit cell. The high-dimensional search space is accompanied by an unknown energy landscape.
    Solution method: Evolutionary algorithms transfer the main features of biological evolution to use them in global searches. The combination of the "survival of the fittest" (deterministic) and the

  19. Fuzzy evolutionary algorithm to solve chromosomes conflict and its application to lecture schedule problems

    NASA Astrophysics Data System (ADS)

    Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari

    2016-02-01

    A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of any two genes in a chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected to form chromosomes, so that a conflicted schedule appears as a chromosome conflict. Results from a Delphi implementation show that the conflicted lecture schedule problem is solvable by this algorithm.

  20. Model-based evolutionary analysis: the natural history of phage-shock stress response.

    PubMed

    Huvet, Maxime; Toni, Tina; Tan, Hui; Jovanovic, Goran; Engl, Christoph; Buck, Martin; Stumpf, Michael P H

    2009-08-01

    The evolution of proteins is inseparably linked to their function. Because most biological processes involve a number of different proteins, it may become impossible to study the evolutionary properties of proteins in isolation. In the present article, we show how simple mechanistic models of biological processes can complement conventional comparative analyses of biological traits. We use the specific example of the phage-shock stress response, which has been well characterized in Escherichia coli, to elucidate patterns of gene sharing and sequence conservation across bacterial species.

  1. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    NASA Astrophysics Data System (ADS)

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-09-01

    The field of complex network clustering has been gaining considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures, the evolutionary algorithm is carried out within each membrane structure, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising.

  2. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    PubMed Central

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-01-01

    The field of complex network clustering has been gaining considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures, the evolutionary algorithm is carried out within each membrane structure, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising. PMID:27670156

  3. Evolutionary food web model based on body masses gives realistic networks with permanent species turnover

    NASA Astrophysics Data System (ADS)

    Allhoff, K. T.; Ritterskamp, D.; Rall, B. C.; Drossel, B.; Guill, C.

    2015-06-01

    The networks of predator-prey interactions in ecological systems are remarkably complex, but nevertheless surprisingly stable in terms of long term persistence of the system as a whole. In order to understand the mechanism driving the complexity and stability of such food webs, we developed an eco-evolutionary model in which new species emerge as modifications of existing ones and dynamic ecological interactions determine which species are viable. The food-web structure thereby emerges from the dynamical interplay between speciation and trophic interactions. The proposed model is less abstract than earlier evolutionary food web models in the sense that all three evolving traits have a clear biological meaning, namely the average body mass of the individuals, the preferred prey body mass, and the width of their potential prey body mass spectrum. We observed networks with a wide range of sizes and structures and high similarity to natural food webs. The model networks exhibit a continuous species turnover, but massive extinction waves that affect more than 50% of the network are not observed.

  4. Evolutionary food web model based on body masses gives realistic networks with permanent species turnover

    PubMed Central

    Allhoff, K. T.; Ritterskamp, D.; Rall, B. C.; Drossel, B.; Guill, C.

    2015-01-01

    The networks of predator-prey interactions in ecological systems are remarkably complex, but nevertheless surprisingly stable in terms of long term persistence of the system as a whole. In order to understand the mechanism driving the complexity and stability of such food webs, we developed an eco-evolutionary model in which new species emerge as modifications of existing ones and dynamic ecological interactions determine which species are viable. The food-web structure thereby emerges from the dynamical interplay between speciation and trophic interactions. The proposed model is less abstract than earlier evolutionary food web models in the sense that all three evolving traits have a clear biological meaning, namely the average body mass of the individuals, the preferred prey body mass, and the width of their potential prey body mass spectrum. We observed networks with a wide range of sizes and structures and high similarity to natural food webs. The model networks exhibit a continuous species turnover, but massive extinction waves that affect more than 50% of the network are not observed. PMID:26042870

  5. Evolutionary food web model based on body masses gives realistic networks with permanent species turnover.

    PubMed

    Allhoff, K T; Ritterskamp, D; Rall, B C; Drossel, B; Guill, C

    2015-06-04

    The networks of predator-prey interactions in ecological systems are remarkably complex, but nevertheless surprisingly stable in terms of long term persistence of the system as a whole. In order to understand the mechanism driving the complexity and stability of such food webs, we developed an eco-evolutionary model in which new species emerge as modifications of existing ones and dynamic ecological interactions determine which species are viable. The food-web structure thereby emerges from the dynamical interplay between speciation and trophic interactions. The proposed model is less abstract than earlier evolutionary food web models in the sense that all three evolving traits have a clear biological meaning, namely the average body mass of the individuals, the preferred prey body mass, and the width of their potential prey body mass spectrum. We observed networks with a wide range of sizes and structures and high similarity to natural food webs. The model networks exhibit a continuous species turnover, but massive extinction waves that affect more than 50% of the network are not observed.

  6. Can't See the Forest: Using an Evolutionary Algorithm to Produce an Animated Artwork

    NASA Astrophysics Data System (ADS)

    Trist, Karen; Ciesielski, Vic; Barile, Perry

    We describe an artist's journey of working with an evolutionary algorithm to create an artwork suitable for exhibition in a gallery. Software based on the evolutionary algorithm produces animations which engage the viewer with a target image slowly emerging from a random collection of greyscale lines. The artwork consists of a grid of movies of eucalyptus tree targets. Each movie resolves with different aesthetic qualities, tempo and energy. The artist exercises creative control by choice of target and values for evolutionary and drawing parameters.

  7. Optimization of a Quantum Cascade Laser Operating in the Terahertz Frequency Range Using a Multiobjective Evolutionary Algorithm

    DTIC Science & Technology

    2004-06-01

  8. Why don’t you use Evolutionary Algorithms in Big Data?

    NASA Astrophysics Data System (ADS)

    Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga

    2017-02-01

    In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, their ability to solve global optimization problems and optimize several criteria at the same time for feature selection, instance selection and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples prove that Evolutionary Machine Learning is becoming more and more important in data analysis and we expect to see the further development of this field especially in respect to Big Data.
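
    As one concrete data-reduction example, a wrapper-style GA for feature selection can be written in a few lines; the sketch below uses scikit-learn's breast cancer dataset and a k-NN classifier purely for illustration, with deliberately tiny population and generation counts.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5           # bit masks over features
for _ in range(15):
    fits = np.array([fitness(m) for m in pop])
    # Binary tournament selection followed by per-bit mutation.
    winners = [max(rng.integers(len(pop), size=2), key=lambda i: fits[i])
               for _ in range(len(pop))]
    children = pop[winners] ^ (rng.random((len(pop), X.shape[1])) < 1.0 / X.shape[1])
    pop = np.vstack([pop[np.argmax(fits)][None], children[1:]])  # keep the elite

print(fitness(pop[0]), int(pop[0].sum()), "features kept")
```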

  9. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the short segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements a parametric spectral estimation method in real time, using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and the parallel characteristics of GAs. This allows the implementation of higher-order filters, increasing the spectral resolution, and opens greater scope for using more complex methods.
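
    A minimal serial version of the idea can be sketched as follows: a real-coded GA searches for autoregressive (AR) filter coefficients that minimise the forward-prediction error, and the fitted model yields the power spectrum. The synthetic signal, model order, and GA settings are all assumptions; the paper's contribution is running this kind of search in parallel and in real time.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic narrowband signal standing in for one Doppler segment (1 kHz assumed).
t = np.arange(512) / 1000.0
sig = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)

P = 6                                              # AR model order
Y = sig[P:]                                        # prediction targets
L = np.column_stack([sig[P - k:-k] for k in range(1, P + 1)])  # lags 1..P

def prediction_error(a):
    """Mean squared forward-prediction error: the GA's error function."""
    return float(np.mean((Y - L @ a) ** 2))

pop = rng.uniform(-1, 1, size=(40, P))             # real-coded population
for _ in range(200):
    fit = np.array([prediction_error(a) for a in pop])
    elite = pop[np.argsort(fit)[:10]]              # truncation selection
    children = elite[rng.integers(10, size=30)] + rng.normal(0, 0.05, (30, P))
    pop = np.vstack([elite, children])
best = min(pop, key=prediction_error)

# AR power spectrum: S(f) is proportional to sigma^2 / |1 - sum_k a_k e^{-2 pi i f k}|^2
f = np.linspace(0.001, 0.5, 256)                   # cycles per sample
A = 1 - sum(best[k] * np.exp(-2j * np.pi * f * (k + 1)) for k in range(P))
peak_hz = f[np.argmax(1.0 / np.abs(A) ** 2)] * 1000.0
print("dominant frequency ~", round(peak_hz), "Hz")  # should fall near the 120 Hz tone
```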

  10. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job shop scheduling is a state-space search problem in the NP-hard category, owing to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper, the evolutionary methods Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music-Based Harmony Search are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate and compare the performance of these algorithms, and the capabilities of each of these algorithms in solving job shop scheduling problems are outlined.

  11. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    DTIC Science & Technology

    2009-03-10

  12. Model-based neural network algorithm for coffee ripeness prediction using Helios UAV aerial images

    NASA Astrophysics Data System (ADS)

    Furfaro, R.; Ganapol, B. D.; Johnson, L. F.; Herwitz, S.

    2005-10-01

    Over the past few years, NASA has had great interest in exploring the feasibility of using Unmanned Aerial Vehicles (UAVs), equipped with multi-spectral imaging systems, as long-duration platforms for crop monitoring. To address the problem of predicting the ripeness level of the Kauai coffee plantation field using UAV aerial images, we proposed a neural network algorithm based on a nested Leaf-Canopy radiative transport Model (LCM2). A model-based, multi-layer neural network using backpropagation was designed and trained to learn the functional relationship between the airborne reflectance and the percentages of ripe, over-ripe, and under-ripe cherries present in the field. LCM2 was used to generate samples of the desired map. Post-processing analysis and tests on synthetic coffee field data showed that the network has accurately learned the map. A new Domain Projection Technique (DPT) was developed to deal with situations where the measured reflectance falls outside the training set; DPT projects the reflectance into the domain, forcing the network to provide a physical solution. Tests were conducted to estimate the error bound. The synergistic combination of neural network algorithms and DPT lies at the core of a more complex algorithm designed to process UAV images. The application of the algorithm to real airborne images shows predictions consistent with post-harvest data and highlights the potential of the overall methodology.

  13. A novel evolutionary approach for optimizing content-based image indexing algorithms.

    PubMed

    Saadatmand-Tarzjan, Mahdi; Moghaddam, Hamid Abrishami

    2007-02-01

    Optimization of content-based image indexing and retrieval (CBIR) algorithms is a complicated and time-consuming task, since each time a parameter of the indexing algorithm is changed, all images in the database must be indexed again. In this paper, a novel evolutionary method called the evolutionary group algorithm (EGA) is proposed for complicated, time-consuming optimization problems such as finding the optimal parameters of content-based image indexing algorithms. In the new evolutionary algorithm, the image database is partitioned into several smaller subsets, and each subset is used by an updating process as training patterns for each chromosome during evolution. This is in contrast to genetic algorithms, which use the whole database as training patterns for evolution. Additionally, for each chromosome, a parameter called age is defined that reflects the progress of the updating process. Similarly, the genes of the proposed chromosomes are divided into two categories: evolutionary genes that participate in evolution, and history genes that save previous states of the updating process. Furthermore, a new fitness function is defined which evaluates the fitness of chromosomes of different ages in each generation. We used EGA to optimize the quantization thresholds of the wavelet-correlogram algorithm for CBIR. The optimal quantization thresholds computed by EGA significantly improved all the evaluation measures, including average precision, average weighted precision, average recall, and average rank, for the wavelet-correlogram method.

  14. Modeling the performance of evolutionary algorithms on the root identification problem: a case study with PBIL and CHC algorithms.

    PubMed

    Yeguas, Enrique; Joan-Arinyo, Robert; Luzón, María Victoria

    2011-01-01

    The availability of a model to measure the performance of evolutionary algorithms is very important, especially when these algorithms are applied to solve problems with high computational requirements. That model would compute an index of the quality of the solution reached by the algorithm as a function of run-time. Conversely, if we fix an index of quality for the solution, the model would give the number of iterations to be expected. In this work, we develop a statistical model to describe the performance of PBIL and CHC evolutionary algorithms applied to solve the root identification problem. This problem is basic in constraint-based, geometric parametric modeling, as an instance of general constraint-satisfaction problems. The performance model is empirically validated over a benchmark with very large search spaces.

  15. Estimating the ratios of the stationary distribution values for Markov chains modeling evolutionary algorithms.

    PubMed

    Mitavskiy, Boris; Cannings, Chris

    2009-01-01

    The stochastic process underlying an evolutionary algorithm is well known to be Markovian, and such Markov chains have been under investigation in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling an evolutionary algorithm is irreducible and, therefore, has a unique stationary distribution, yet rather little is known about that stationary distribution. In fact, the only quantitative facts established so far tell us that the stationary distributions of Markov chains modeling evolutionary algorithms concentrate on uniform populations (i.e., those populations consisting of repeated copies of the same individual). At the same time, knowing the stationary distribution may provide some information about the expected time it takes for the algorithm to reach a certain solution, allow assessment of the biases due to recombination and selection, and is of importance in population genetics for assessing what is called the "genetic load" (see the introduction for more details). In recent joint works of the first author, some bounds have been established on the rates at which the stationary distribution concentrates on the uniform populations. The primary tool used in these papers is the "quotient construction" method. It turns out that the quotient construction method can be exploited to derive much more informative bounds on ratios of the stationary distribution values of various subsets of the state space. In fact, some of the bounds obtained in the current work are expressed in terms of the parameters involved in all three main stages of an evolutionary algorithm: namely, selection, recombination, and mutation.
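
    As a worked illustration of the object being bounded, the stationary distribution of a small irreducible chain can be computed directly as the left eigenvector of the transition matrix for eigenvalue 1. The three-state chain below is a hypothetical caricature of a tiny EA (two uniform populations and one mixed state); with a small mutation rate, almost all stationary mass sits on the uniform states, in line with the concentration results cited above.

```python
import numpy as np

def stationary(P):
    """Probability vector pi with pi P = pi, via the dominant left eigenvector."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

mut = 0.01  # hypothetical per-step chance of mutating out of a uniform population
P = np.array([[1 - mut, mut,  0.0    ],   # uniform population of individual A
              [0.45,    0.10, 0.45   ],   # mixed population
              [0.0,     mut,  1 - mut]])  # uniform population of individual B
print(stationary(P))  # ~[0.495, 0.011, 0.495]: mass concentrates on uniform states
```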

  16. A Parameterised Complexity Analysis of Bi-level Optimisation with Evolutionary Algorithms.

    PubMed

    Corus, Dogan; Lehre, Per Kristian; Neumann, Frank; Pourhassan, Mojgan

    2016-01-01

    Bi-level optimisation problems have gained increasing interest in the field of combinatorial optimisation in recent years. In this paper, we analyse the runtime of some evolutionary algorithms for bi-level optimisation problems. We examine two NP-hard problems, the generalised minimum spanning tree problem and the generalised travelling salesperson problem, in the context of parameterised complexity. For the generalised minimum spanning tree problem, we analyse the two approaches presented by Hu and Raidl (2012) with respect to the number of clusters; the two approaches are distinguished by their chosen representation of possible solutions. Our results show that a (1+1) evolutionary algorithm working with the spanning nodes representation is not a fixed-parameter evolutionary algorithm for the problem, whereas the problem can be solved in fixed-parameter time with the global structure representation. We present hard instances for each approach and show that the two approaches are highly complementary by proving that they solve each other's hard instances very efficiently. For the generalised travelling salesperson problem, we analyse the problem with respect to the number of clusters in the problem instance. Our results show that a (1+1) evolutionary algorithm working with the global structure representation is a fixed-parameter evolutionary algorithm for the problem.

  17. Parameterized runtime analyses of evolutionary algorithms for the planar euclidean traveling salesperson problem.

    PubMed

    Sutton, Andrew M; Neumann, Frank; Nallaperuma, Samadhi

    2014-01-01

    Parameterized runtime analysis seeks to understand the influence of problem structure on algorithmic runtime. In this paper, we contribute to the theoretical understanding of evolutionary algorithms and carry out a parameterized analysis of evolutionary algorithms for the Euclidean traveling salesperson problem (Euclidean TSP). We investigate the structural properties in TSP instances that influence the optimization process of evolutionary algorithms and use this information to bound their runtime. We analyze the runtime as a function of the number of inner points k. In the first part of the paper, we study a [Formula: see text] EA in a strictly black-box setting and show that it can solve the Euclidean TSP in expected time [Formula: see text], where A is a function of the minimum angle [Formula: see text] between any three points. Based on insights provided by the analysis, we improve this upper bound by introducing a mixed mutation strategy that incorporates both 2-opt moves and permutation jumps. This strategy improves the upper bound to [Formula: see text]. In the second part of the paper, we use the information gained in the analysis to incorporate domain knowledge and design two fixed-parameter tractable (FPT) evolutionary algorithms for the planar Euclidean TSP. We first develop a [Formula: see text] EA based on an analysis by M. Theile (2009, "Exact solutions to the traveling salesperson problem by a population-based evolutionary algorithm," Lecture Notes in Computer Science, Vol. 5482, pp. 145-155) that solves the TSP with k inner points in [Formula: see text] generations with probability [Formula: see text]. We then design a [Formula: see text] EA that incorporates a dynamic programming step into the fitness evaluation. We prove that a variant of this evolutionary algorithm using 2-opt mutation solves the problem after [Formula: see text] steps in expectation with a cost of [Formula: see text] for each fitness evaluation.
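
    The mixed mutation strategy mentioned above combines 2-opt moves with permutation jumps; a minimal Python sketch of such an operator is given below. The mixing probability p_jump is an assumed parameter, not a value from the paper.

    ```python
    # Sketch of a mixed mutation operator for TSP tours: with probability p_jump
    # move one city to a new position (jump), otherwise reverse a segment (2-opt).
    import random

    def two_opt(tour):
        """Reverse a randomly chosen segment of the tour."""
        i, j = sorted(random.sample(range(len(tour)), 2))
        return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

    def jump(tour):
        """Remove one city and reinsert it at another position."""
        i, j = random.sample(range(len(tour)), 2)
        tour = list(tour)
        city = tour.pop(i)
        tour.insert(j, city)
        return tour

    def mixed_mutation(tour, p_jump=0.5):
        return jump(tour) if random.random() < p_jump else two_opt(tour)
    ```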

  18. EvoOligo: oligonucleotide probe design with multiobjective evolutionary algorithms.

    PubMed

    Shin, Soo-Yong; Lee, In-Hee; Cho, Young-Min; Yang, Kyung-Ae; Zhang, Byoung-Tak

    2009-12-01

    Probe design is one of the most important tasks in successful deoxyribonucleic acid microarray experiments. We propose a multiobjective evolutionary optimization method for oligonucleotide probe design, based on the multiobjective nature of the probe design problem. The proposed multiobjective evolutionary approach has several distinguishing features compared with previous methods. First, the evolutionary approach can find better probe sets than existing simple filtering methods with fixed threshold values. Second, the multiobjective approach can easily incorporate the user's custom criteria or change the existing criteria. Third, our approach optimizes the combination of probes for the given set of genes, in contrast to other tools that search each gene independently for qualifying probes. Lastly, the multiobjective optimization method provides various sets of probe combinations, among which the user can choose depending on the target application. The proposed method is implemented as a platform called EvoOligo and is available for service on the web. We test the performance of EvoOligo by designing probe sets for 19 types of Human Papillomavirus and 52 genes in the Arabidopsis Calmodulin multigene family. The design results from EvoOligo are shown to be superior to those from well-known existing probe design tools, such as OligoArray and OligoWiz.

  19. Evolutionary Algorithms Approach to the Solution of Damage Detection Problems

    NASA Astrophysics Data System (ADS)

    Salazar Pinto, Pedro Yoajim; Begambre, Oscar

    2010-09-01

    In this work, a new self-configured hybrid algorithm is proposed, combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the guide particle: this particle (the global best of each PSO generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In tests carried out on benchmark functions reported in the international literature, better stability and accuracy were observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.

  20. DOPGA: a new fitness assignment scheme for multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ufuk Ergul, Engin; Eminoglu, Ilyas

    2014-03-01

    In this article, a new fitness assignment scheme to evaluate the Pareto-optimal solutions for multi-objective evolutionary algorithms is proposed. The proposed DOmination Power of an individual Genetic Algorithm (DOPGA) method orders the individuals so that each individual (the so-called solution) has a unique rank. With this new method, a multi-objective problem can be treated as if it were a single-objective problem without drastically deviating from the Pareto definition. In DOPGA, the relative position of a solution is embedded into the fitness assignment procedure. We compare the performance of the algorithm with two benchmark evolutionary algorithms (the Strength Pareto Evolutionary Algorithm (SPEA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2)) on 12 unconstrained bi-objective and one tri-objective test problems. DOPGA significantly outperforms SPEA on all test problems and performs better than SPEA2 in terms of the convergence metric on all test problems. In addition, the Pareto-optimal solutions found by DOPGA are better spread than those of SPEA2 on eight of the 13 test problems.
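
    The exact DOPGA fitness formula is not given in the abstract; the Python sketch below only illustrates the general idea of scoring each solution by a dominance-based quantity so that the whole population can be totally ordered. The scoring rule shown is an assumption for illustration, not the paper's formula.

    ```python
    # Dominance-based scalar scoring so a multi-objective population can be
    # ordered like a single-objective one (assumed scoring, minimization).
    def dominates(a, b):
        """Pareto dominance: a is no worse in every objective, better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def domination_order(population):
        scores = []
        for i, p in enumerate(population):
            dominated = sum(dominates(p, q) for j, q in enumerate(population) if j != i)
            dominated_by = sum(dominates(q, p) for j, q in enumerate(population) if j != i)
            scores.append(dominated - dominated_by)
        # Indices from strongest to weakest; remaining ties broken by position.
        return sorted(range(len(population)), key=lambda i: -scores[i])
    ```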

  1. A multiobjective evolutionary algorithm to find community structures based on affinity propagation

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng

    2016-07-01

    Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA takes the method of affinity propagation (AP) to initially divide the network. To accelerate its convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto-optimal front by constantly selecting nondominated solutions from the population after crossover and mutation in each iteration, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the search process of the multiobjective evolutionary algorithm. According to this strategy, the preliminary partitioning results obtained by AP are archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and faster convergence than seven other state-of-the-art algorithms.

  2. An Evolutionary Algorithm to Generate Ellipsoid Detectors for Negative Selection

    DTIC Science & Technology

    2005-03-21

    Von Zuben [21] have both implemented an AIS using the network immune model. Timmis and Neal [91] applied the model to unsupervised machine learning...and de Castro and Von Zuben [21] applied the model to the problem of clustering and filtering unlabelled numerical data sets. Danger theory is young in...algorithm, clonal selection, is described in the next section. 2.6.1 Clonal Selection. De Castro and Von Zuben produced a clonal selection algorithm

  3. A quantum-behaved evolutionary algorithm based on the Bloch spherical search

    NASA Astrophysics Data System (ADS)

    Li, Panchi

    2014-04-01

    In order to enhance the optimization ability of quantum evolutionary algorithms, a new quantum-behaved evolutionary algorithm is proposed. In this algorithm, the search mechanism is established on the Bloch sphere. First, individuals are expressed by qubits described on the Bloch sphere; then the rotation axis is established using Pauli matrices, and the evolutionary search is realized by rotating qubits on the Bloch sphere about this axis. To avoid premature convergence, mutation of individuals is achieved by Hadamard gates. Such a rotation makes the current qubit approach the target qubit along the great circle on the Bloch sphere, which accelerates the optimization process. Taking function extreme-value optimization as an example, experimental results show that the proposed algorithm is clearly superior to other similar algorithms.
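
    A short Python sketch of the underlying quantum mechanics may help: rotating a qubit state about a unit axis n by angle theta via the standard operator R = cos(theta/2) I - i sin(theta/2) (n·sigma), with a Hadamard gate as the mutation operator. This is textbook machinery, not the paper's full update rule; the axis and step size here are arbitrary.

    ```python
    # Standard qubit rotation on the Bloch sphere via Pauli matrices.
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    SX = np.array([[0, 1], [1, 0]], dtype=complex)
    SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
    SZ = np.array([[1, 0], [0, -1]], dtype=complex)
    HADAMARD = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # mutation gate

    def rotation(axis, theta):
        """Rotation operator about unit axis n: R = cos(t/2) I - i sin(t/2) (n.sigma)."""
        nx, ny, nz = axis / np.linalg.norm(axis)
        n_dot_sigma = nx * SX + ny * SY + nz * SZ
        return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * n_dot_sigma

    qubit = np.array([1, 0], dtype=complex)                        # |0>, north pole
    qubit = rotation(np.array([0.0, 1.0, 0.0]), np.pi / 8) @ qubit # one search step
    qubit = HADAMARD @ qubit                                       # mutation
    ```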

  4. A kinetic model-based algorithm to classify NGS short reads by their allele origin.

    PubMed

    Marinoni, Andrea; Rizzo, Ettore; Limongelli, Ivan; Gamba, Paolo; Bellazzi, Riccardo

    2015-02-01

    Genotyping Next Generation Sequencing (NGS) data of a diploid genome aims to assign the zygosity of identified variants through comparison with a reference genome. Current methods typically employ probabilistic models that rely on the pileup of bases at each locus and on a priori knowledge. We present a new algorithm, called Kimimila (KInetic Modeling based on InforMation theory to Infer Labels of Alleles), which is able to assign reads to alleles by using a distance geometry approach and to infer variant genotypes accurately, without any kind of assumption. The performance of the model has been assessed on simulated and real data from the 1000 Genomes Project, and the results have been compared with several commonly used genotyping methods, i.e., GATK, Samtools, VarScan, FreeBayes and Atlas2. Although our algorithm does not make use of a priori knowledge, the percentage of correctly genotyped variants is comparable to that of these algorithms. Furthermore, our method allows the user to split the read pool depending on the inferred allele origin.

  5. The new image segmentation algorithm using adaptive evolutionary programming and fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    2011-06-01

    Image segmentation remains one of the major challenges in image analysis and computer vision. Fuzzy clustering, as a soft segmentation method, has been widely studied and successfully applied to image clustering and segmentation, and the fuzzy c-means (FCM) algorithm is the most popular method used in image segmentation. However, most clustering algorithms, such as the k-means and FCM algorithms, search for the final cluster values based on predetermined initial centers; the FCM algorithm also ignores the spatial information of pixels and is sensitive to noise. This paper presents a new FCM algorithm with adaptive evolutionary programming for image clustering. The features of this algorithm are as follows. First, it does not require predetermined initial centers: evolutionary programming helps FCM search for better centers and escape bad centers at local minima. Second, both the spatial distance and the Euclidean distance are considered in the FCM clustering, so the algorithm is more robust to noise. Third, an adaptive evolutionary programming scheme is proposed, in which the mutation rule is adaptively changed by learning useful knowledge during the evolving process. Experimental results show that the new image segmentation algorithm is effective and robust to noisy images.
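
    For concreteness, the standard fuzzy c-means update that the proposed algorithm wraps is sketched below in Python. Only the classical FCM step is shown; the paper's spatial-distance term and the adaptive evolutionary programming around it are omitted.

    ```python
    # One alternating update of the classical fuzzy c-means algorithm:
    # memberships from distances, then centers from weighted memberships.
    import numpy as np

    def fcm_step(X, centers, m=2.0, eps=1e-12):
        """X: (n, d) data, centers: (c, d). Returns updated (U, centers)."""
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, c)
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # fuzzy memberships u_ik
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centers
        return U, centers
    ```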

  6. Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.

  7. Bioinspired evolutionary algorithm based for improving network coverage in wireless sensor networks.

    PubMed

    Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie; Chizari, Hassan

    2014-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes, each of which is able to monitor a physical area and send the collected information to the base station for further analysis. A key issue in WSNs is detection and coverage of the target area, which is typically provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes several scenarios for applying sensor node movement to improve network coverage based on a bioinspired evolutionary algorithm, and explains the concerns and objectives of controlling sensor node coverage. We discuss an area coverage and target detection model based on the evolutionary algorithm.

  8. Design of synthetic biological logic circuits based on evolutionary algorithm.

    PubMed

    Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei

    2013-08-01

    The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real-coded genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits through topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico, computer-based modelling approach is verified, demonstrating its advantages for this purpose.

  9. Model based on a quantum algorithm to study the evolution of an epidemics.

    PubMed

    León, A; Pozo, J

    2007-03-01

    A model based on a quantum algorithm is used to study the spread of the HIV virus and to predict infection rates among individuals who are not aware of their particular condition. The model makes an analogy between quantum systems and individuals who are infected by the disease. Individuals are represented by two-level quantum systems (quantum "bits"), and the interactions among individuals that cause infection are represented by unitary transformations. The population is divided into categories according to behaviour, and the interactions among individuals in the same category and in different categories are simulated. The objective is to obtain statistical data on the number of infected individuals as a function of time for every category and for the entire population. In addition, we analyse the impact of the evolution of the disease on individuals who have no knowledge of their specific health condition.

  10. Evolutionary Processes in the Development of Errors in Subtraction Algorithms

    ERIC Educational Resources Information Center

    Fernandez, Ricardo Lopez; Garcia, Ana B. Sanchez

    2008-01-01

    The study of errors made in subtraction is a research subject approached from different theoretical premises that affect different components of the algorithmic process as triggers of their generation. In the following research an attempt has been made to investigate the typology and nature of errors which occur in subtractions and their evolution…

  12. First principles prediction of amorphous phases using evolutionary algorithms.

    PubMed

    Nahas, Suhas; Gaur, Anshu; Bhowmick, Somnath

    2016-07-07

    We discuss the efficacy of the evolutionary method for the structural analysis of amorphous solids. At present, the ab initio molecular dynamics (MD) based melt-quench technique is used, and this deterministic approach has proven successful for studying amorphous materials. We show that a stochastic approach motivated by Darwinian evolution can also be used to simulate amorphous structures. Applying this method, in conjunction with density functional theory based electronic, ionic, and cell relaxation, we re-investigate two well-known amorphous semiconductors, namely silicon and indium gallium zinc oxide. We find that characteristic structural parameters like average bond length and bond angle are within ∼2% of those reported by ab initio MD calculations and experimental studies.

  13. First principles prediction of amorphous phases using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Nahas, Suhas; Gaur, Anshu; Bhowmick, Somnath

    2016-07-01

    We discuss the efficacy of the evolutionary method for the structural analysis of amorphous solids. At present, the ab initio molecular dynamics (MD) based melt-quench technique is used, and this deterministic approach has proven successful for studying amorphous materials. We show that a stochastic approach motivated by Darwinian evolution can also be used to simulate amorphous structures. Applying this method, in conjunction with density functional theory based electronic, ionic, and cell relaxation, we re-investigate two well-known amorphous semiconductors, namely silicon and indium gallium zinc oxide. We find that characteristic structural parameters like average bond length and bond angle are within ~2% of those reported by ab initio MD calculations and experimental studies.

  14. Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling

    NASA Technical Reports Server (NTRS)

    Brown, Matthew; Johnston, Mark D.

    2013-01-01

    Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
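
    The multi-core speedup discussed above typically comes from evaluating the population in parallel; a generic Python sketch of that pattern is shown below. It is not the GDE scheduler itself, and schedule_cost is a hypothetical stand-in objective.

    ```python
    # Generic multi-core fitness evaluation with a process pool.
    from multiprocessing import Pool

    def schedule_cost(schedule):
        # placeholder for an expensive multi-objective schedule evaluation
        return sum(schedule)

    def evaluate_population(population, n_workers=4):
        """Evaluate all individuals in parallel across n_workers processes."""
        with Pool(processes=n_workers) as pool:
            return pool.map(schedule_cost, population)

    if __name__ == "__main__":
        pop = [[1, 2, 3], [3, 1, 2], [2, 2, 2]]
        print(evaluate_population(pop))
    ```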

  16. Model-based Layer Estimation using a Hybrid Genetic/Gradient Search Optimization Algorithm

    SciTech Connect

    Chambers, D; Lehman, S; Dowla, F

    2007-05-17

    A particle swarm optimization (PSO) algorithm is combined with a gradient search method in a model-based approach for extracting interface positions in a one-dimensional multilayer structure from acoustic or radar reflections. The basic approach is to predict the reflection measurement using a simulation of one-dimensional wave propagation in a multilayer structure, evaluate the error between prediction and measurement, and then update the simulation parameters to minimize the error. Gradient search methods alone fail due to the number of local minima in the error surface close to the desired global minimum. The PSO approach avoids this problem by randomly sampling the region of the error surface around the global minimum, but at the cost of a large number of evaluations of the simulator. The hybrid approach uses the PSO at the beginning to locate the general area around the global minimum, then switches to the gradient search method to zero in on it. Examples of the algorithm applied to the detection of interior walls of a building from reflected ultra-wideband radar signals are shown. Other possible applications are optical inspection of coatings and ultrasonic measurement of multilayer structures.
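
    A minimal Python sketch of the two-stage hybrid idea, assuming a toy misfit function in place of the wave-propagation simulator: a bare-bones PSO locates the general area of the global minimum, then a gradient-based local optimizer refines it.

    ```python
    # Global stage (simple PSO) followed by local refinement (BFGS).
    import numpy as np
    from scipy.optimize import minimize

    def misfit(x):
        # placeholder for ||predicted reflection - measured reflection||^2
        return np.sum((x - 1.7) ** 2)

    def pso(f, dim, n_particles=30, iters=100, lo=-5.0, hi=5.0):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        for _ in range(iters):
            g = pbest[pbest_f.argmin()]                       # global best particle
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
        return pbest[pbest_f.argmin()]

    x0 = pso(misfit, dim=3)                       # coarse global search
    result = minimize(misfit, x0, method="BFGS")  # zero in on the minimum
    ```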

  17. Predicting patchy particle crystals: Variable box shape simulations and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  18. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background: To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of the MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results: This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  19. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high

  20. UAV Swarm Mission Planning Development Using Evolutionary Algorithms - Part I

    DTIC Science & Technology

    2008-05-01

    All of this must happen while moving through a populated terrain space. The correct control of the vehicles in this space results from exacting...inflicted. Complete destruction of an agent results in a score of 100. Generate Population Initialization uses a bit wise generation of each individual...algorithm uses Elitist for generational selection. With the space as diverse as it is and the population and reproduction operators facilitating high levels

  1. Comparison of Evolutionary Multiobjective Algorithms For Calibrating An Integrated Semi-distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Reed, P.; Wagner, T.

    2005-12-01

    This study provides the first comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating integrated hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study assesses the performances of these three evolutionary multiobjective algorithms using a formal metrics-based methodology, in two phases of testing. In the first phase, a suite of standard computer science test problems is used to validate the algorithms' abilities to perform global search effectively, efficiently, and reliably. The second phase compares the algorithms' performances on a computationally intensive multiobjective calibration application of an integrated hydrologic model for the Shale Hills watershed, located within the Valley and Ridge province of the Susquehanna River Basin in north central Pennsylvania. The Shale Hills test case demonstrates the computational challenges posed by the paradigmatic shift in environmental and water resources simulation tools towards highly nonlinear physical models that seek to holistically simulate the water cycle. Specifically, the Shale Hills test case is an excellent test for the three EMO algorithms due to the large number of continuous decision variables, the increased computational demands posed by simulating fully coupled hydrologic processes, and the highly multimodal nature of the search space. A challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques.

  2. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    PubMed

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization over a patient data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA), and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard data set from a machine learning repository. The results are compared using the hypervolume multi-objective metric. Besides, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning).

  3. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells, or organisms. A key aspect of all biological problems, including environmental microbiology, the evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems that achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes, and it investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted, and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  4. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    SciTech Connect

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. A randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
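
    The randomized task ordering at the heart of RS is essentially a topological sort that picks uniformly at random among the currently ready tasks; a Python sketch follows (the heuristic mapping from task order to machine schedule is omitted).

    ```python
    # Randomized topological sort: any precedence-feasible order can be produced.
    # `dag` maps each task to its list of successors; all tasks appear as keys.
    import random

    def random_topological_order(dag):
        indegree = {t: 0 for t in dag}
        for succs in dag.values():
            for s in succs:
                indegree[s] += 1
        ready = [t for t, d in indegree.items() if d == 0]
        order = []
        while ready:
            t = ready.pop(random.randrange(len(ready)))  # random pick among ready tasks
            order.append(t)
            for s in dag[t]:
                indegree[s] -= 1
                if indegree[s] == 0:
                    ready.append(s)
        return order

    dag = {"a": ["c"], "b": ["c"], "c": ["d"], "d": []}
    print(random_topological_order(dag))
    ```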

  5. On Polymorphic Circuits and Their Design Using Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD, respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality; thus, polytronics may find uses in intelligence/security applications.

  6. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.

  7. Reliable confidence measures for medical diagnosis with evolutionary algorithms.

    PubMed

    Lambrou, Antonis; Papadopoulos, Harris; Gammerman, Alex

    2011-01-01

    Conformal Predictors (CPs) are machine learning algorithms that can provide predictions complemented with valid confidence measures. In medical diagnosis, such measures are highly desirable, as medical experts can gain additional information for each machine diagnosis. A risk assessment in each prediction can play an important role for medical decision making, in which the outcome can be critical for the patients. Several classical machine learning methods can be incorporated into the CP framework. In this paper, we propose a CP that makes use of evolved rule sets generated by a genetic algorithm (GA). The rule-based approach has the advantage of producing human-readable rules. We apply our method to two real-world datasets for medical diagnosis: one dataset for breast cancer diagnosis, containing data gathered from fine needle aspirates of breast masses, and one dataset for ovarian cancer diagnosis, containing proteomic patterns identified in serum. Our results on both datasets show that the proposed method is as accurate as the classical techniques, while providing reliable and useful confidence measures.
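
    Independent of the GA-evolved rules, the conformal prediction step itself is standard; the Python sketch below shows how per-label p-values are computed from nonconformity scores and turned into a prediction set at significance level epsilon. The score functions themselves are whatever the underlying classifier provides.

    ```python
    # Standard conformal prediction machinery: p-values from nonconformity scores.
    def conformal_p_value(cal_scores, test_score):
        """Fraction of calibration examples at least as nonconforming as the test one."""
        n = len(cal_scores)
        return (sum(s >= test_score for s in cal_scores) + 1) / (n + 1)

    def prediction_set(cal_scores_by_label, test_scores_by_label, epsilon=0.05):
        """Labels whose p-value exceeds epsilon form the (valid) prediction set."""
        return [label for label, cal in cal_scores_by_label.items()
                if conformal_p_value(cal, test_scores_by_label[label]) > epsilon]
    ```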

  8. On source models for (192)Ir HDR brachytherapy dosimetry using model based algorithms.

    PubMed

    Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis

    2016-06-07

    A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic (192)Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over  ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the (192)Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model shows agreement better than 2% except for points lying close to the source longitudinal axis.

  9. On source models for 192Ir HDR brachytherapy dosimetry using model based algorithms

    NASA Astrophysics Data System (ADS)

    Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis

    2016-06-01

    A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic 192Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over  ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the 192Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model shows agreement better than 2% except for points lying close to the source longitudinal axis.

  10. Hybridization of evolutionary algorithms and local search by means of a clustering method.

    PubMed

    Martínez-Estudillo, Alfonso C; Hervás-Martínez, César; Martínez-Estudillo, Francisco J; García-Pedrajas, Nicolás

    2006-06-01

    This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear-regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient at fine-tuning solutions. This drawback is usually avoided by means of local optimization algorithms applied to the individuals of the population. Algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that a clustering process enables the creation of groups (clusters) of mutually close points that hopefully correspond to relevant regions of attraction. Local-search procedures can then be started once in every such region. This paper proposes the combination of an EA, a clustering process, and a local-search procedure for the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization. Moreover, the local optimization algorithm is applied only at specific stages of the evolutionary process. Our results show a favorable performance when the regression method proposed is compared to other standard methods.
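
    A minimal Python sketch of the hybridisation pattern described above, assuming k-means as the clustering step and Nelder-Mead as the local search (both are stand-ins, not the paper's exact choices): only the best individual of each cluster is refined, so local search is started once per region of attraction.

    ```python
    # Cluster the population, then refine one representative per cluster.
    import numpy as np
    from scipy.cluster.vq import kmeans2
    from scipy.optimize import minimize

    def hybrid_step(population, fitness, n_clusters=5):
        X = np.asarray(population, dtype=float)
        _, labels = kmeans2(X, n_clusters, minit="points")
        refined = []
        for c in range(n_clusters):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue  # k-means may leave a cluster empty
            best = min(members, key=lambda i: fitness(X[i]))  # cluster representative
            refined.append(minimize(fitness, X[best], method="Nelder-Mead").x)
        return refined
    ```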

  11. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    PubMed Central

    Guardado, J. L.; Rivas-Davalos, F.; Torres, J.; Maximov, S.; Melgoza, E.

    2014-01-01

    Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize. PMID:25401144

  12. An encoding technique for multiobjective evolutionary algorithms applied to power distribution system reconfiguration.

    PubMed

    Guardado, J L; Rivas-Davalos, F; Torres, J; Maximov, S; Melgoza, E

    2014-01-01

    Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  13. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Reed, P.; Wagener, T.

    2005-11-01

    This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated model application in the Shale Hills watershed in Pennsylvania. A challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 is an excellent benchmark algorithm for multiobjective hydrologic model calibration. SPEA2 attained competitive to superior results for most of the problems tested in this study. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration.

  14. Evolutionary algorithms with segment-based search for multiobjective optimization problems.

    PubMed

    Li, Miqing; Yang, Shengxiang; Li, Ke; Liu, Xiaohui

    2014-08-01

    This paper proposes a variation operator, called segment-based search (SBS), to improve the performance of evolutionary algorithms on continuous multiobjective optimization problems. SBS divides the search space into many small segments according to evolutionary information fed back from the set of current optimal solutions. Two operations, micro-jumping and macro-jumping, are implemented upon these segments in order to guide an efficient information exchange among "good" individuals. Moreover, the running of SBS is adapted to the current evolutionary status: SBS is activated only when the population, evolving under the general genetic operators (e.g., mutation and crossover), progresses slowly. A comprehensive set of 36 test problems is employed for experimental verification. The influence of two algorithm settings (i.e., the dimensionality and the boundary relaxation strategy) and two probability parameters in SBS (i.e., the SBS rate and the micro-jumping proportion) is investigated in detail. Moreover, an empirical comparative study with three representative variation operators is carried out. Experimental results show that the incorporation of SBS into the optimization process can improve the performance of evolutionary algorithms for multiobjective optimization problems.

  15. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    SciTech Connect

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A; Polsky, Yarom; Bouman, Charlie; Santos-Villalobos, Hector J

    2017-01-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
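
    In MBIR terms, the cost function combines a data-fidelity term from the forward model with a prior term on the object; the Python sketch below uses generic quadratic stand-ins (a linear forward operator and a first-difference smoothness prior), not the ultrasonic forward model of the paper.

    ```python
    # Generic MBIR-style reconstruction: minimize data term + prior term.
    import numpy as np

    def mbir_reconstruct(A, y, sigma2=1.0, beta=0.1, steps=500, lr=1e-3):
        """Minimize ||y - Ax||^2 / (2*sigma2) + beta * ||Dx||^2 by gradient descent."""
        n = A.shape[1]
        D = np.diff(np.eye(n), axis=0)   # first-difference operator (smoothness prior)
        x = np.zeros(n)
        for _ in range(steps):
            grad_data = -(A.T @ (y - A @ x)) / sigma2      # from the forward model
            grad_prior = 2.0 * beta * (D.T @ (D @ x))      # from the prior model
            x -= lr * (grad_data + grad_prior)
        return x
    ```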

  16. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of the aging and degradation of concrete structures are fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for the direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
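
    The MBIR recipe described in these two records reduces, in its simplest form, to minimizing a two-term cost combining the forward model and the prior model. The sketch below is a minimal illustration, assuming a linear forward model with Gaussian noise and a quadratic smoothness prior, optimized by plain gradient descent; the operators and weights are toy stand-ins, not the authors' ultrasound model.

```python
import numpy as np

# Minimal MBIR sketch: MAP estimate under a linear forward model y = A x + noise
# (the "forward model") and a quadratic smoothness prior (the "prior model").
# The cost is  c(x) = ||y - A x||^2 / (2 sigma^2) + beta ||D x||^2,
# minimized here by plain gradient descent. A, D, sigma and beta are toy
# stand-ins, not the authors' ultrasound model.

def mbir_reconstruct(A, y, D, sigma=1.0, beta=0.1, steps=500, lr=1e-3):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y) / sigma**2 + 2 * beta * (D.T @ (D @ x))
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))               # toy measurement operator
x_true = np.sin(np.linspace(0, 3, 20))      # toy "object"
y = A @ x_true + 0.01 * rng.normal(size=40)
D = np.eye(20, k=1)[:-1] - np.eye(20)[:-1]  # first-difference operator
print(np.round(mbir_reconstruct(A, y, D), 2))
```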

  17. Application of an evolutionary algorithm in the optimal design of micro-sensor.

    PubMed

    Lu, Qibing; Wang, Pan; Guo, Sihai; Sheng, Buyun; Liu, Xingxing; Fan, Zhun

    2015-01-01

    This paper introduces an automatic bond graph design method based on genetic programming for the evolutionary design of a micro-resonator. First, the system-level behavioral model, which is based on genetic programming and bond graphs, is discussed. Then, the geometry parameters of components are automatically optimized using a genetic algorithm with constraints. To illustrate the approach, a typical biomedical device, a micro-resonator, is designed as an example. This paper provides a new idea for the automated optimal design of biomedical sensors by evolutionary computation.

  18. Runtime analysis of an evolutionary algorithm for stochastic multi-objective combinatorial optimization.

    PubMed

    Gutjahr, Walter J

    2012-01-01

    For stochastic multi-objective combinatorial optimization (SMOCO) problems, the adaptive Pareto sampling (APS) framework has been proposed, which is based on sampling and on the solution of deterministic multi-objective subproblems. We show that when plugging in the well-known simple evolutionary multi-objective optimizer (SEMO) as a subprocedure into APS, ε-dominance has to be used to achieve fast convergence to the Pareto front. Two general theorems are presented indicating how runtime complexity results for APS can be derived from corresponding results for SEMO. This may be a starting point for the runtime analysis of evolutionary SMOCO algorithms.

  19. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in the GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide that attracts offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that the GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421

  20. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in the GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide that attracts offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that the GEA outperformed the other three typical global optimization algorithms with which it was compared.
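
    The guiding idea is compact enough to sketch. The toy Python version below, with assumed rates and a sphere objective standing in for a real problem, crosses every individual with the current global best, applies a decaying mutation probability, and keeps the best survivors greedily; it illustrates the mechanism, not the paper's exact settings.

```python
import random

# Toy sketch of the "guiding" mechanism: every individual is crossed with the
# current global best instead of a random mate, mutation decays over time, and
# survival is greedy. Rates, schedules and the sphere objective are assumptions,
# not the paper's settings.

def sphere(x):
    return sum(v * v for v in x)

def gea_sketch(dim=10, pop_size=30, gens=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=sphere)
    for g in range(gens):
        p_mut = 0.5 * (1 - g / gens)           # dynamic mutation probability
        children = []
        for ind in pop:
            a = random.random()                # arithmetic crossover with best
            child = [a * b + (1 - a) * v for b, v in zip(best, ind)]
            if random.random() < p_mut:
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = sorted(pop + children, key=sphere)[:pop_size]  # greedy survival
        best = pop[0]
    return best, sphere(best)

best, fit = gea_sketch()
print(round(fit, 6))
```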

  1. A real negative selection algorithm with evolutionary preference for anomaly detection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Wen; Li, Tao

    2017-04-01

    Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensions, the data samples can reside in a low-dimensional subspace, so that the traditional detectors cannot effectively distinguish these samples. Furthermore, in high-dimensional feature space, c0 cannot exactly reflect the detector set's coverage of the nonself space, which can cause the algorithm to terminate prematurely when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature space. Based upon the "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", "low-dimensional target subspace" and "known nonself feature" as the evolutionary preference to guide the generation of detectors, thus ensuring that the detectors cover the nonself space more effectively. In addition, RNSAP uses redundancy instead of c0 as the termination threshold, so that it can generate an adequate set of detectors at a proper convergence rate. Theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP achieves a higher detection rate with fewer detectors and less computing cost.
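
    For readers unfamiliar with the baseline, a generic real-valued negative selection sketch follows; it illustrates the detector-generation loop that RNSAP improves on, not RNSAP itself, and the self cloud, radii, and redundancy-style stopping rule are all toy assumptions.

```python
import numpy as np

# Generic real-valued negative selection sketch (the baseline RNSAP improves
# on, not RNSAP itself): candidate detectors are random points kept only if
# they lie outside the self region; generation stops once fresh candidates are
# mostly redundant, loosely mirroring a redundancy-based stopping rule. The
# self cloud, radii and thresholds are toy assumptions.

rng = np.random.default_rng(1)
self_samples = rng.uniform(0.4, 0.6, size=(50, 2))  # toy "self" cloud
self_radius = 0.05

def generate_detectors(max_redundant=200, det_radius=0.08):
    detectors, redundant = [], 0
    while redundant < max_redundant:
        c = rng.uniform(0, 1, size=2)
        if np.min(np.linalg.norm(self_samples - c, axis=1)) <= self_radius:
            continue                                # overlaps self: reject
        if detectors and np.min(
                np.linalg.norm(np.array(detectors) - c, axis=1)) < det_radius:
            redundant += 1                          # already covered
            continue
        detectors.append(c)
        redundant = 0
    return np.array(detectors)

dets = generate_detectors()
probe = np.array([0.9, 0.9])                        # a non-self probe
print(len(dets), bool(np.min(np.linalg.norm(dets - probe, axis=1)) < 0.08))
```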

  2. Convergence of a discretized self-adaptive evolutionary algorithm on multi-dimensional problems.

    SciTech Connect

    Hart, William Eugene; DeLaurentis, John Morse

    2003-08-01

    We consider the convergence properties of a non-elitist self-adaptive evolutionary strategy (ES) on multi-dimensional problems. In particular, we apply our recent convergence theory for a discretized (1,λ)-ES to design a related (1,λ)-ES that converges on a class of separable, unimodal multi-dimensional problems. The distinguishing feature of self-adaptive evolutionary algorithms (EAs) is that the control parameters (like mutation step lengths) are evolved by the evolutionary algorithm. Thus the control parameters are adapted in an implicit manner that relies on the evolutionary dynamics to ensure that more effective control parameters are propagated during the search. Self-adaptation is a central feature of EAs like evolutionary strategies (ES) and evolutionary programming (EP), which are applied to continuous design spaces. Rudolph summarizes theoretical results concerning self-adaptive EAs and notes that the theoretical underpinnings for these methods are essentially unexplored. In particular, convergence theories that ensure convergence to a limit point on continuous spaces have only been developed by Rudolph, Hart, DeLaurentis and Ferguson, and Auger et al. In this paper, we illustrate how our analysis of a (1,λ)-ES for one-dimensional unimodal functions can be used to ensure convergence of a related ES on multidimensional functions. This (1,λ)-ES randomly selects a search dimension in each iteration, along which new points are generated. For a general class of separable functions, our analysis shows that the ES searches along each dimension independently, and thus this ES converges to the (global) minimum.
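
    The class of algorithms analyzed here is easy to exhibit. Below is a textbook self-adaptive (1,λ)-ES sketch with a log-normally perturbed step size and non-elitist comma selection on a sphere function; it is illustrative of the general scheme, not the discretized variant the paper studies.

```python
import math
import random

# Textbook self-adaptive (1,lambda)-ES sketch: the mutation step size sigma is
# itself evolved via a log-normal perturbation, and selection is non-elitist
# (the parent is always replaced by the best offspring). This illustrates the
# general class analyzed above, not the paper's discretized variant.

def sphere(x):
    return sum(v * v for v in x)

def one_comma_lambda_es(dim=5, lam=10, gens=300):
    x = [random.uniform(-3, 3) for _ in range(dim)]
    sigma = 1.0
    tau = 1.0 / math.sqrt(dim)                  # customary learning rate
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            s = sigma * math.exp(tau * random.gauss(0, 1))  # self-adapted step
            y = [v + s * random.gauss(0, 1) for v in x]
            offspring.append((sphere(y), y, s))
        _, x, sigma = min(offspring, key=lambda t: t[0])    # comma selection
    return x, sigma

x, sigma = one_comma_lambda_es()
print(round(sphere(x), 8), round(sigma, 6))
```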

  3. Learning deterministic finite automata with a smart state labeling evolutionary algorithm.

    PubMed

    Lucas, Simon M; Reynolds, T Jeff

    2005-07-01

    Learning a Deterministic Finite Automaton (DFA) from a training set of labeled strings is a hard task that has been much studied within the machine learning community. It is equivalent to learning a regular language by example and has applications in language modeling. In this paper, we describe a novel evolutionary method for learning DFA that evolves only the transition matrix and uses a simple deterministic procedure to optimally assign state labels. We compare its performance with the Evidence Driven State Merging (EDSM) algorithm, one of the most powerful known DFA learning algorithms. We present results on random DFA induction problems of varying target size and training set density. We also study the effects of noisy training data on the evolutionary approach and on EDSM. On noise-free data, we find that our evolutionary method outperforms EDSM on small sparse data sets. In the case of noisy training data, we find that our evolutionary method consistently outperforms EDSM, as well as other significant methods submitted to two recent competitions.

  4. THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL

    SciTech Connect

    Werth, D.; O'Steen, L.

    2008-02-11

    We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, making operational implementation possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be important factors with respect to finding the global optimum.

  5. Update-based evolution control: A new fitness approximation method for evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ma, Haiping; Fei, Minrui; Simon, Dan; Mo, Hongwei

    2015-09-01

    Evolutionary algorithms are robust optimization methods that have been used in many engineering applications. However, real-world fitness evaluations can be computationally expensive, so it may be necessary to estimate the fitness with an approximate model. This article reviews design and analysis of computer experiments (DACE) as an approximation method that combines a global polynomial with a local Gaussian model to estimate continuous fitness functions. The article incorporates DACE in various evolutionary algorithms, to test unconstrained and constrained benchmarks, both with and without fitness function evaluation noise. The article also introduces a new evolution control strategy called update-based control that estimates the fitness of certain individuals of each generation based on the exact fitness values of other individuals during that same generation. The results show that update-based evolution control outperforms other strategies on noise-free, noisy, constrained and unconstrained benchmarks. The results also show that update-based evolution control can compensate for fitness evaluation noise.
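
    The update-based idea can be sketched directly: within one generation, only a fraction of individuals receive exact (expensive) evaluations, and the rest are estimated from those exact values. In the toy version below, inverse-distance weighting stands in for the DACE surrogate, and the exact fraction and fitness function are assumptions.

```python
import numpy as np

# Sketch of update-based evolution control: within one generation only a
# fraction of individuals receive exact (expensive) fitness evaluations; the
# rest are estimated from the exact values of that same generation. Inverse-
# distance weighting stands in for the DACE surrogate; the exact fraction and
# the toy fitness are assumptions.

def expensive_fitness(x):
    return float(np.sum(x ** 2))        # stand-in for a costly simulation

def evaluate_generation(pop, exact_fraction=0.3):
    n = len(pop)
    n_exact = max(2, int(exact_fraction * n))
    exact_idx = np.random.choice(n, n_exact, replace=False)
    fitness = np.empty(n)
    for i in exact_idx:
        fitness[i] = expensive_fitness(pop[i])
    for i in range(n):
        if i in exact_idx:
            continue
        d = np.linalg.norm(pop[exact_idx] - pop[i], axis=1) + 1e-12
        w = 1.0 / d
        fitness[i] = np.dot(w, fitness[exact_idx]) / np.sum(w)  # estimated
    return fitness

pop = np.random.uniform(-2, 2, size=(20, 4))
print(np.round(evaluate_generation(pop), 2))
```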

  6. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.

    PubMed

    Zhang, Yushan; Hu, Guiwu

    2015-01-01

    Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP can be no more than a polynomial of n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation of an exponential and the given polynomial of n.

  7. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms

    PubMed Central

    Zhang, Yushan; Hu, Guiwu

    2015-01-01

    Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP can be no more than a polynomial of n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation of an exponential and the given polynomial of n. PMID:26366166

  8. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    PubMed

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems, symbolic regression on rational functions and Boolean function induction, used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform in all cases our previous performance models.

  9. XTALOPT Version r10: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Avery, Patrick; Falls, Zackary; Zurek, Eva

    2017-08-01

    A new version of XTALOPT, an evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. XTALOPT is published under the GNU Public License (GPL), which is an open source license that is recognized by the Open Source Initiative. The new version incorporates many bug-fixes and new features, as detailed below.

  10. XTALOPT version r9: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Falls, Zackary; Lonie, David C.; Avery, Patrick; Shamp, Andrew; Zurek, Eva

    2016-02-01

    A new version of XTALOPT, an evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. XTALOPT is published under the GNU Public License (GPL), which is an open source license that is recognized by the Open Source Initiative. The new version incorporates many bug-fixes and new features, as detailed below.

  11. Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Craig, Sam; While, Lyndon; Barone, Luigi

    We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.

  12. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    PubMed Central

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
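
    Feasibility-preserving operators of the kind described can be sketched simply: build tours by always drawing from cities whose required predecessors are already visited, and accept a swap mutation only if every precedence pair still holds. The precedence pairs and tour length below are toy assumptions, not the paper's operators.

```python
import random

# Sketch of feasibility-preserving operators for the TSP with precedence
# constraints: tours are built by always drawing from cities whose required
# predecessors are already visited, and a swap mutation is accepted only if
# every precedence pair still holds. The instance below is a toy assumption.

PREC = [(0, 3), (1, 4), (2, 4)]   # (a, b): city a must precede city b
N = 6

def random_feasible_tour():
    remaining, tour = set(range(N)), []
    while remaining:
        ready = [c for c in remaining
                 if all(a not in remaining for a, b in PREC if b == c)]
        city = random.choice(ready)
        tour.append(city)
        remaining.remove(city)
    return tour

def is_feasible(tour):
    pos = {c: i for i, c in enumerate(tour)}
    return all(pos[a] < pos[b] for a, b in PREC)

def swap_mutation(tour):
    child = tour[:]
    i, j = random.sample(range(N), 2)
    child[i], child[j] = child[j], child[i]
    return child if is_feasible(child) else tour  # keep parent if infeasible

t = random_feasible_tour()
print(t, is_feasible(t), swap_mutation(t))
```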

  13. Convergence of evolutionary algorithms on the n-dimensional continuous space.

    PubMed

    Agapie, Alexandru; Agapie, Mircea; Rudolph, Gunter; Zbaganu, Gheorghita

    2013-10-01

    Evolutionary algorithms (EAs) are random optimization methods inspired by genetics and natural selection, resembling simulated annealing. We develop a method that can be used to find a meaningful tradeoff between the difficulty of the analysis and the algorithm's efficiency. Since the case of a discrete search space has been studied extensively, we develop a new stochastic model for the continuous n-dimensional case. Our model uses renewal processes to find global convergence conditions. A second goal of the paper is the analytical estimation of the computation time of an EA with uniform mutation inside a (hyper)sphere of volume 1, minimizing a quadratic function.

  14. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling

    PubMed Central

    Deng, Qianwang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm is run for T iterations to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm is run again for GEN iterations to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration count. Numerical simulations are carried out based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing its experimental results with those of some well-known existing algorithms. PMID:28458687

  15. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling.

    PubMed

    Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm is run for T iterations to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm is run again for GEN iterations to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration count. Numerical simulations are carried out based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing its experimental results with those of some well-known existing algorithms.

  16. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Reed, P.; Wagener, T.

    2006-05-01

    This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ɛ-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small parameter sets
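
    The ε-dominance mechanism that distinguishes ɛ-NSGAII's archiving is worth making concrete. A minimal sketch follows, assuming minimization and a user-chosen ε matched to each objective's scale: objective space is gridded into ε-boxes, and one solution ε-dominates another when its box weakly dominates the other's and the boxes differ.

```python
import math

# Minimal sketch of the epsilon-dominance test behind epsilon-NSGAII's archive
# (minimization): objective space is gridded into boxes of side epsilon, and a
# solution epsilon-dominates another when its box weakly dominates the other's
# and the boxes differ. The epsilon value is a user choice per objective scale.

def box(objectives, eps):
    return tuple(math.floor(f / eps) for f in objectives)

def eps_dominates(f1, f2, eps=0.05):
    b1, b2 = box(f1, eps), box(f2, eps)
    return all(a <= b for a, b in zip(b1, b2)) and b1 != b2

print(eps_dominates((0.10, 0.20), (0.17, 0.26)))  # True: strictly better boxes
print(eps_dominates((0.10, 0.20), (0.12, 0.22)))  # False: same box
```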

  17. Optimal Wavelengths Selection Using Hierarchical Evolutionary Algorithm for Prediction of Firmness and Soluble Solids Content in Apples

    USDA-ARS's Scientific Manuscript database

    Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...

  18. A diagnostic assessment of evolutionary algorithms for multi-objective surface water reservoir control

    NASA Astrophysics Data System (ADS)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Herman, Jonathan D.; Giuliani, Matteo; Castelletti, Andrea

    2016-06-01

    Globally, the pressures of expanding populations, climate change, and increased energy demands are motivating significant investments in re-operationalizing existing reservoirs or designing operating policies for new ones. These challenges require an understanding of the tradeoffs that emerge across the complex suite of multi-sector demands in river basin systems. This study benchmarks our current capabilities to use Evolutionary Multi-Objective Direct Policy Search (EMODPS), a decision analytic framework in which reservoirs' candidate operating policies are represented using parameterized global approximators (e.g., radial basis functions), and those parameterized functions are then optimized using multi-objective evolutionary algorithms to discover the Pareto-approximate operating policies. We contribute a comprehensive diagnostic assessment of modern MOEAs' abilities to support EMODPS using the Conowingo reservoir in the Lower Susquehanna River Basin, Pennsylvania, USA. Our diagnostic results highlight that EMODPS can be very challenging for some modern MOEAs and that epsilon dominance, time-continuation, and auto-adaptive search are helpful for attaining high levels of performance. The ɛ-MOEA, the auto-adaptive Borg MOEA, and ɛ-NSGAII all yielded superior results for the six-objective Lower Susquehanna benchmarking test case. The top algorithms show low sensitivity to different MOEA parameterization choices and high algorithmic reliability in attaining consistent results for different random MOEA trials. Overall, EMODPS is a promising method for discovering key reservoir management tradeoffs; however, algorithmic choice remains a key concern for problems of increasing complexity.
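
    As a rough illustration of the policy parameterization EMODPS uses (not this study's configuration), the sketch below decodes a decision vector into a radial-basis-function release rule; the input normalization, number of RBFs, and clipping bounds are all assumptions.

```python
import numpy as np

# Sketch of the EMODPS policy parameterization: a release decision is a sum of
# radial basis functions of the normalized system state, and the centers,
# radii and weights below form the decision vector an MOEA would search. The
# inputs, sizes and clipping bounds are illustrative assumptions.

def rbf_policy(state, centers, radii, weights, u_max=100.0):
    phi = np.exp(-np.sum(((state - centers) / radii) ** 2, axis=1))
    return float(np.clip(u_max * np.dot(weights, phi), 0.0, u_max))

n_rbf, n_in = 4, 2
theta = np.random.uniform(0, 1, size=n_rbf * (2 * n_in + 1))   # MOEA genome
centers = theta[:n_rbf * n_in].reshape(n_rbf, n_in)
radii = 0.1 + theta[n_rbf * n_in:2 * n_rbf * n_in].reshape(n_rbf, n_in)
weights = theta[2 * n_rbf * n_in:]

state = np.array([0.7, 0.3])   # e.g., storage fraction, normalized day of year
print(round(rbf_policy(state, centers, radii, weights), 2))
```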

  19. ETEA: a Euclidean minimum spanning tree-based evolutionary algorithm for multi-objective optimization.

    PubMed

    Li, Miqing; Yang, Shengxiang; Zheng, Jinhua; Liu, Xiaohui

    2014-01-01

    The Euclidean minimum spanning tree (EMST), widely used in a variety of domains, is a minimum spanning tree of a set of points in space where the edge weight between each pair of points is their Euclidean distance. Since the generation of an EMST is entirely determined by the Euclidean distance between solutions (points), the properties of EMSTs have a close relation with the distribution and position information of solutions. This paper explores the properties of EMSTs and proposes an EMST-based evolutionary algorithm (ETEA) to solve multi-objective optimization problems (MOPs). Unlike most EMO algorithms that focus on the Pareto dominance relation, the proposed algorithm mainly considers distance-based measures to evaluate and compare individuals during the evolutionary search. Specifically, in ETEA, four strategies are introduced: (1) An EMST-based crowding distance (ETCD) is presented to estimate the density of individuals in the population; (2) A distance comparison approach incorporating ETCD is used to assign the fitness value for individuals; (3) A fitness adjustment technique is designed to avoid the partial overcrowding in environmental selection; (4) Three diversity indicators-the minimum edge, degree, and ETCD-with regard to EMSTs are applied to determine the survival of individuals in archive truncation. From a series of extensive experiments on 32 test instances with different characteristics, ETEA is found to be competitive against five state-of-the-art algorithms and its predecessor in providing a good balance among convergence, uniformity, and spread.
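
    To make the EMST machinery concrete, here is a small Python sketch in the spirit of ETEA: it builds the EMST of a set of objective vectors with SciPy and uses the mean length of each point's incident edges as a crowding measure. This mirrors the idea behind ETCD but is not the paper's exact formula.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

# Sketch in the spirit of ETEA: build the Euclidean minimum spanning tree of a
# set of objective vectors and use the mean length of each point's incident
# edges as a crowding/density measure (larger = more isolated). This mirrors
# the idea behind ETCD but is not the paper's exact formula.

points = np.random.rand(8, 2)                 # toy objective vectors
dist = cdist(points, points)                  # pairwise Euclidean distances
mst = minimum_spanning_tree(dist).toarray()   # upper-triangular edge weights
edges = mst + mst.T                           # symmetrize for easy lookup

incident = [edges[i][edges[i] > 0] for i in range(len(points))]
crowding = np.array([e.mean() for e in incident])
print(np.round(crowding, 3))
```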

  20. A Novel Approach to Multiple Sequence Alignment Using Multiobjective Evolutionary Algorithm Based on Decomposition.

    PubMed

    Zhu, Huazheng; He, Zhongshi; Jia, Yuanyuan

    2016-03-01

    Multiple sequence alignment (MSA) is a fundamental and key step in implementing other tasks in bioinformatics, such as phylogenetic analyses, identification of conserved motifs and domains, structure prediction, etc. Despite the fact that there are many methods for MSA, no biologically perfect alignment approach has been found to date. This paper proposes a novel idea for performing MSA, in which MSA is treated as a multiobjective optimization problem. A well-known multiobjective evolutionary algorithm framework based on decomposition is applied to solve MSA; the resulting method is named MOMSA. In the MOMSA algorithm, we develop a new population initialization method and a novel mutation operator. We compare the performance of MOMSA with several alignment methods based on evolutionary algorithms, including VDGA, GAPAM, and IMSA, and also with state-of-the-art progressive alignment approaches, such as MSAprobs, Probalign, MAFFT, Procons, Clustal omega, T-Coffee, Kalign2, MUSCLE, FSA, Dialign, PRANK, and CLUSTALW. These alignment algorithms are tested on the benchmark datasets BAliBASE 2.0 and BAliBASE 3.0. Experimental results show that MOMSA obtains significantly better alignments than VDGA and GAPAM on most test cases according to statistical analyses, produces better alignments than IMSA in terms of TC scores, and is comparable with the leading progressive alignment approaches in terms of alignment quality.

  1. Evolutionary algorithm based offline/online path planner for UAV navigation.

    PubMed

    Nikolos, I K; Valavanis, K P; Tsourveloudis, N C; Kostaras, A N

    2003-01-01

    An evolutionary algorithm based framework, a combination of modified breeder genetic algorithms incorporating characteristics of classic genetic algorithms, is utilized to design an offline/online path planner for autonomous unmanned aerial vehicle (UAV) navigation. The path planner calculates a curved path line with desired characteristics in a three-dimensional (3-D) rough terrain environment, represented using B-spline curves, with the coordinates of its control points being the genes of the evolutionary algorithm's artificial chromosome. Given a 3-D rough environment and assuming flight envelope restrictions, two problems are solved: i) UAV navigation using an offline planner in a known environment, and ii) UAV navigation using an online planner in a completely unknown environment. The offline planner produces a single B-spline curve that connects the starting and target points with a predefined initial direction. The online planner, based on the offline one, is fed on-board radar readings and gradually produces a smooth 3-D trajectory aimed at reaching a predetermined target in an unknown environment; the produced trajectory consists of smaller B-spline curves smoothly connected with each other. Both planners have been tested under different scenarios, and they have proven effective in guiding a UAV to its final destination, providing near-optimal curved paths quickly and efficiently.
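
    The path representation is easy to sketch: the chromosome holds the 3-D coordinates of B-spline control points, and decoding a candidate means evaluating the spline. The snippet below is a minimal sketch assuming a cubic clamped B-spline via SciPy; the control-point values are placeholders for evolved genes.

```python
import numpy as np
from scipy.interpolate import BSpline

# Sketch of the path representation: the chromosome holds the 3-D coordinates
# of B-spline control points, and decoding a candidate means evaluating the
# spline. A cubic clamped spline and these control values are assumptions.

degree = 3
ctrl = np.array([[0, 0, 0],     # start point
                 [2, 1, 1],     # interior control points: the evolved "genes"
                 [4, 3, 2],
                 [6, 2, 2],
                 [8, 4, 1],
                 [10, 5, 0]])   # target point
n = len(ctrl)
# clamped knot vector so the curve hits the first and last control points
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0, 1, n - degree + 1),
                        np.ones(degree)])
path = BSpline(knots, ctrl, degree)

t = np.linspace(0, 1, 5)
print(np.round(path(t), 2))     # sampled waypoints along the candidate path
```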

  2. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    NASA Astrophysics Data System (ADS)

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.

  3. Cubic time algorithms of amalgamating gene trees and building evolutionary scenarios

    PubMed Central

    2012-01-01

    Background: A long recognized problem is the inference of the supertree S that amalgamates a given set {Gj} of trees Gj, with leaves in each Gj being assigned homologous elements. We build on an approach to find the tree S by minimizing the total cost of mappings αj of individual gene trees Gj into S. Traditionally, this cost is defined basically as a sum of duplications and gaps in each αj. The classical problem is to minimize the total cost, where S runs over the set of all trees that contain an exhaustive non-redundant set of species from all input Gj.

    Results: We suggest a reformulation of the classical NP-hard problem of building a supertree in terms of the global minimization of the same cost functional but only over species trees S that consist of clades belonging to a fixed set P (e.g., an exhaustive set of clades in all Gj). We developed a deterministic solving algorithm with a low degree polynomial (typically cubic) time complexity with respect to the size of input data. We define an extensive set of elementary evolutionary events and suggest an original definition of mapping β of tree G into tree S. We introduce the cost functional c(G, S, f) and define the mapping β as the global minimum of this functional with respect to the variable f, in which sense it is a generalization of the classical mapping α. We suggest a reformulation of the classical NP-hard mapping (reconciliation) problem by introducing time slices into the species tree S and present a cubic time solving algorithm to compute the mapping β. We introduce two novel definitions of the evolutionary scenario based on mapping β or a random process of gene evolution along a species tree.

    Conclusions: Developed algorithms are mathematically proved, which justifies the following statements. The supertree building algorithm finds exactly the global minimum of the total cost if only gene duplications and losses are allowed and the given set of gene trees satisfies a certain condition. The mapping

  4. Cubic time algorithms of amalgamating gene trees and building evolutionary scenarios.

    PubMed

    Lyubetsky, Vassily A; Rubanov, Lev I; Rusin, Leonid Y; Gorbunov, Konstantin Yu

    2012-12-22

    A long recognized problem is the inference of the supertree S that amalgamates a given set {G(j)} of trees G(j), with leaves in each G(j) being assigned homologous elements. We build on an approach to find the tree S by minimizing the total cost of mappings α(j) of individual gene trees G(j) into S. Traditionally, this cost is defined basically as a sum of duplications and gaps in each α(j). The classical problem is to minimize the total cost, where S runs over the set of all trees that contain an exhaustive non-redundant set of species from all input G(j). We suggest a reformulation of the classical NP-hard problem of building a supertree in terms of the global minimization of the same cost functional but only over species trees S that consist of clades belonging to a fixed set P (e.g., an exhaustive set of clades in all G(j)). We developed a deterministic solving algorithm with a low degree polynomial (typically cubic) time complexity with respect to the size of input data. We define an extensive set of elementary evolutionary events and suggest an original definition of mapping β of tree G into tree S. We introduce the cost functional c(G, S, f) and define the mapping β as the global minimum of this functional with respect to the variable f, in which sense it is a generalization of the classical mapping α. We suggest a reformulation of the classical NP-hard mapping (reconciliation) problem by introducing time slices into the species tree S and present a cubic time solving algorithm to compute the mapping β. We introduce two novel definitions of the evolutionary scenario based on mapping β or a random process of gene evolution along a species tree. Developed algorithms are mathematically proved, which justifies the following statements. The supertree building algorithm finds exactly the global minimum of the total cost if only gene duplications and losses are allowed and the given set of gene trees satisfies a certain condition. The mapping algorithm finds

  5. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.

  6. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.

  7. Combining Evolutionary Algorithms with Clustering toward Rational Global Structure Optimization at the Atomic Scale.

    PubMed

    Jørgensen, Mathias S; Groves, Michael N; Hammer, Bjørk

    2017-03-14

    Predicting structures at the atomic scale is of great importance for understanding the properties of materials. Such predictions are infeasible without efficient global optimization techniques. Many current techniques produce a large amount of idle intermediate data before converging to the global minimum. If this information could be analyzed during optimization, many new possibilities would emerge for more rational search algorithms. We combine an evolutionary algorithm (EA) and clustering, a machine-learning technique, to produce a rational algorithm for global structure optimization. Clustering the configuration space of intermediate structures into regions of geometrically similar structures enables the EA to suppress certain regions and favor others. For two test systems, an organic molecule and an oxide surface, the global minimum search proves significantly faster when favoring stable structures in unexplored regions. This clustering-enhanced EA is a step toward adaptive global optimization techniques that can act upon information in accumulated data.
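
    A minimal sketch of the clustering-enhanced selection idea follows, assuming k-means on placeholder structure descriptors and a toy energy in place of real structural fingerprints and DFT energies: candidates in sparsely populated clusters, weighted by stability, are favored as parents.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the clustering-enhanced selection idea: cluster every candidate
# seen so far and bias parent selection toward low-energy candidates in
# sparsely populated (unexplored) clusters. The descriptors and the energy
# below are placeholders for real structural fingerprints and DFT energies.

rng = np.random.default_rng(0)
features = rng.normal(size=(60, 4))            # stand-in structure descriptors
energies = (features ** 2).sum(axis=1)         # stand-in stability measure

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
cluster_sizes = np.bincount(labels, minlength=5)

# selection weight: prefer stable (low-energy) structures in small clusters
weights = np.exp(-energies) / cluster_sizes[labels]
weights /= weights.sum()
parent = rng.choice(len(features), p=weights)
print(parent, labels[parent], round(float(energies[parent]), 2))
```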

  8. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    PubMed

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data are important issues in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a feature subset selection problem, akin to a combinatorial optimization problem. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to optimization problems in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the proposed hybrid evolutionary algorithm is superior to the basic PSO, EPSO, and ABC algorithms in classification accuracy, using subsets with a reduced number of features.
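
    The fitness evaluation at the heart of such wrapper methods is easy to sketch. Below, a candidate solution is a binary feature mask scored by cross-validated SVM accuracy on a standard UCI-style dataset; the EPSO/ABC update rules themselves are omitted, and random masks stand in for the swarm.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Sketch of the fitness evaluation at the heart of such wrapper methods: a
# candidate solution is a binary feature mask scored by cross-validated SVM
# accuracy. The EPSO/ABC update rules are omitted; random masks stand in for
# the swarm, and this UCI dataset is only an example.

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

rng = np.random.default_rng(0)
masks = rng.random((5, X.shape[1])) < 0.3   # five random candidate subsets
for m in masks:
    print(int(m.sum()), "features -> accuracy", round(fitness(m), 3))
```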

  9. Introducing Elitist Black-Box Models: When Does Elitist Behavior Weaken the Performance of Evolutionary Algorithms?

    PubMed

    Doerr, Carola; Lengler, Johannes

    2016-10-04

    Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and other search heuristics and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on (the relative performance of) a fixed number of the best search points sampled so far. Our elitist model thus combines features of the ranking-based and the memory-restricted black-box models with an enforced usage of truncation selection. We provide several examples for which the elitist black-box complexity is exponentially larger than the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of p-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most p. Even for small p, the p-Monte Carlo black-box complexity of a function class F can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the expected time it takes to optimize F).

  10. Exploiting genomic knowledge in optimising molecular breeding programmes: algorithms from evolutionary computing.

    PubMed

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any 'prior knowledge' of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information).

  11. Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing

    PubMed Central

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279

  12. Incorporation of Hypervolume Approximation with Scalarizing Functions into Indicator-Based Evolutionary Multiobjective Optimization Algorithms

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Noritaka; Sakane, Yuji; Nojima, Yusuke; Ishibuchi, Hisao

    The handling of many-objective problems is a hot issue in the evolutionary multiobjective optimization (EMO) community. Whereas Pareto-based EMO algorithms usually work very well on two-objective problems, they do not work well on many-objective problems. A promising approach to the search for the non-dominated solutions of many-objective problems is a class of indicator-based EMO algorithms. The goal of indicator-based EMO algorithms is to maximize an indicator function which evaluates the quality of a set of solutions. The hypervolume has been frequently used as an indicator function. The main difficulty of the use of hypervolume is that the computation load for its calculation increases exponentially with the number of objectives. Thus the application of indicator-based EMO algorithms to many-objective problems is time-consuming. In our former study, we proposed an idea of approximating the hypervolume using a number of achievement functions with uniformly distributed weight vectors. In this paper, we incorporate our hypervolume approximation into indicator-based EMO algorithms. Experimental results show that the computation time of indicator-based EMO algorithms for many-objective problems is drastically decreased by the use of our hypervolume approximation method with no severe deterioration in its search ability.
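
    The approximation idea described above can be written down compactly. Below is a rough Python illustration of one way to realize it, assuming weight vectors distributed uniformly on the positive unit sphere and the standard polar-integration constant; it is not the authors' exact scheme, and the three-point front is a toy example for comparison against the exact two-objective hypervolume.

```python
import math
import numpy as np

# Rough sketch of hypervolume approximation with achievement scalarizing
# functions (minimization, reference point ref): for each weight vector w on
# the positive unit sphere, take the best achievement value over the solution
# set, raise it to the power m, and average; a polar-integration constant
# converts the average into a volume. Weight sampling and the constant are
# assumptions, not the paper's exact scheme.

def hv_approx(points, ref, n_weights=10000, seed=0):
    m = points.shape[1]
    rng = np.random.default_rng(seed)
    w = np.abs(rng.normal(size=(n_weights, m)))
    w /= np.linalg.norm(w, axis=1, keepdims=True)   # uniform positive directions
    g = np.max(np.min((ref - points)[None, :, :] / w[:, None, :], axis=2), axis=1)
    c = math.pi ** (m / 2) / (2 ** m * math.gamma(m / 2 + 1))
    return c * np.mean(g ** m)

pts = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
ref = np.array([1.0, 1.0])
xs = np.append(pts[:, 0], ref[0])                   # pts already sorted by f1
exact = np.sum((xs[1:] - xs[:-1]) * (ref[1] - pts[:, 1]))  # exact 2-D hypervolume
print(round(float(exact), 3), round(float(hv_approx(pts, ref)), 3))  # ~equal
```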

  13. Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl

    2006-01-01

    We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.

  14. Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl

    2006-01-01

    We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.

  15. Genetic Algorithms for Finite Mixture Model Based Voxel Classification in Neuroimaging

    PubMed Central

    Krestyannikov, Evgeny; Dinov, Ivo D.; Graham, Allan MacKenzie; Shattuck, David W.; Ruotsalainen, Ulla; Toga, Arthur W.

    2011-01-01

    Finite mixture models (FMMs) are an indispensable tool for unsupervised classification in brain imaging. Fitting an FMM to the data leads to a complex optimization problem. This optimization problem is difficult to solve by standard local optimization methods, such as the expectation-maximization (EM) algorithm, if a principled initialization is not available. In this paper, we propose a new global optimization algorithm for the FMM parameter estimation problem, which is based on real coded genetic algorithms. Our specific contributions are two-fold: 1) we propose to use blended crossover in order to reduce the premature convergence problem to its minimum and 2) we introduce a completely new permutation operator specifically meant for the FMM parameter estimation. In addition to improving the optimization results, the permutation operator allows for imposing biologically meaningful constraints to the FMM parameter values. We also introduce a hybrid of the genetic algorithm and the EM algorithm for efficient solution of multidimensional FMM fitting problems. We compare our algorithm with the self-annealing EM algorithm and a standard real-coded genetic algorithm on voxel classification tasks in brain imaging. The algorithms are tested on synthetic data as well as real three-dimensional image data from human magnetic resonance imaging, positron emission tomography, and mouse brain MRI. The tissue classification results of our method are shown to be consistently more reliable and accurate than those of the competing parameter estimation methods. PMID:17518064
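
    Blended crossover, the operator adopted above against premature convergence, can be sketched in a few lines; alpha = 0.5 is a customary default and an assumption here, as is the toy example of crossing two candidate mixture-mean vectors.

```python
import random

# Sketch of blended crossover (BLX-alpha): each child gene is drawn uniformly
# from an interval extending alpha beyond the parents' gene range, which keeps
# exploration alive and counters premature convergence. alpha = 0.5 is the
# customary default and an assumption here.

def blx_alpha(parent1, parent2, alpha=0.5):
    child = []
    for a, b in zip(parent1, parent2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

# e.g., crossing two candidate mean vectors of a two-component mixture model
print([round(g, 3) for g in blx_alpha([0.2, 1.5], [0.4, 1.1])])
```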

  16. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry.

    PubMed

    Makin, Alexis D J; Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M

    2016-03-01

    Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation-symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference.

  17. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    PubMed Central

    Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M.

    2016-01-01

    Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation–symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference. PMID:27433324

  18. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
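
    The integer-encoding idea can be illustrated as follows: each gene is an integer count of fabrication-grid units, so every decoded W/L value is a legal multiple by construction and no rounding-off post-processing is needed. The grid constant and the mutation scheme below are hypothetical.

    ```python
    import numpy as np

    GRID = 0.18e-6  # hypothetical fabrication grid (m); real value is PDK-specific

    def decode(genome):
        """Map integer genes to W/L sizes that are exact grid multiples,
        so no rounding-off post-processing step is needed."""
        return np.asarray(genome, dtype=int) * GRID

    def mutate(genome, lo=1, hi=500, rng=np.random):
        """Integer mutation: bump one gene by one grid unit, clipped to range."""
        g = np.array(genome, dtype=int)
        i = rng.randint(len(g))
        g[i] = np.clip(g[i] + rng.choice([-1, 1]), lo, hi)
        return g
    ```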

  19. A multi-objective evolutionary algorithm for protein structure prediction with immune operators.

    PubMed

    Judy, M V; Ravichandran, K S; Murugesan, K

    2009-08-01

    Genetic algorithms (GA) are often well suited for optimisation problems involving several conflicting objectives. It is natural to model the protein structure prediction problem as a multi-objective optimisation problem: the potential energy functions used in the literature to evaluate the conformation of a protein, such as Chemistry at Harvard Macromolecular Mechanics (CHARMM), are based on the calculation of two different interaction energies, local (bond atoms) and non-local (non-bond atoms), and experiments have shown that these two types of interaction are in conflict. In this paper, we have modified the immune-inspired Pareto archived evolutionary strategy (I-PAES) algorithm and denoted it as MI-PAES. It can effectively exploit some prior knowledge about hydrophobic interactions, one of the most important driving forces in protein folding, to construct vaccines. The proposed MI-PAES is comparable with other evolutionary algorithms proposed in the literature, both in terms of the best solution found and the computational time, and often exhibits much better search ability than the canonical GA.

  20. [In Silico Drug Design Using an Evolutionary Algorithm and Compound Database].

    PubMed

    Kawai, Kentaro; Takahashi, Yoshimasa

    2016-01-01

      Computational drug design plays an important role in the discovery of new drugs. Recently, we proposed an algorithm for designing new drug-like molecules utilizing the structure of a known active molecule. To design molecules, three types of fragments (ring, linker, and side-chain fragments) were defined as building blocks, and a fragment library was prepared from molecules listed in G protein-coupled receptor (GPCR)-SARfari database. An evolutionary algorithm which executes evolutionary operations, such as crossover, mutation, and selection, was implemented to evolve the molecules. As a case study, some GPCRs were selected for computational experiments in which we tried to design ligands from simple seed fragments using the Tanimoto coefficient as a fitness function. The results showed that the algorithm could be used successfully to design new molecules with structural similarity, scaffold variety, and chemical validity. In addition, a docking study revealed that these designed molecules also exhibited shape complementarity with the binding site of the target protein. Therefore, this is expected to become a powerful tool for designing new drug-like molecules in drug discovery projects.
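
    The Tanimoto coefficient used as the fitness function has a one-line definition on binary fingerprints; a sketch with placeholder fingerprints (sets of on-bit indices, not real molecules) follows.

    ```python
    def tanimoto(fp_a, fp_b):
        """Tanimoto coefficient between two binary fingerprints,
        represented as sets of on-bit indices."""
        a, b = set(fp_a), set(fp_b)
        inter = len(a & b)
        union = len(a) + len(b) - inter
        return inter / union if union else 0.0

    # Fitness of a candidate: similarity to the known active molecule.
    # The fingerprints below are placeholders, not real molecules.
    seed_fp = {1, 5, 17, 42, 99}
    candidate_fp = {1, 5, 17, 40, 98, 99}
    print(tanimoto(seed_fp, candidate_fp))  # 4 / 7 ~= 0.571
    ```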

  1. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of the neurosciences, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior to the gradient following methods in this particular application. This is likely to be the case in many further complex systems, as are often found in neuroscience.
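
    A minimal population-based fit of the kind an EA performs, requiring only parameter bounds rather than good initial estimates, might look like the sketch below; the truncation selection and Gaussian mutation are generic choices, not the authors' exact algorithm.

    ```python
    import numpy as np

    def ea_fit(loss, bounds, pop=50, gens=200, sigma=0.1, rng=None):
        """Minimal truncation-selection EA for model fitting (sketch).

        loss   : maps a parameter vector to a scalar fitting error
        bounds : (D, 2) array of per-parameter [low, high] limits
        Unlike gradient following, no initial estimate is required.
        """
        rng = rng or np.random.default_rng()
        lo, hi = np.asarray(bounds).T
        X = rng.uniform(lo, hi, (pop, len(lo)))
        for _ in range(gens):
            f = np.array([loss(x) for x in X])
            elite = X[np.argsort(f)[:pop // 2]]               # keep best half
            kids = elite[rng.integers(0, len(elite), pop)]    # clone parents
            kids += rng.normal(0.0, sigma * (hi - lo), kids.shape)  # mutate
            X = np.clip(kids, lo, hi)
        f = np.array([loss(x) for x in X])
        return X[np.argmin(f)]
    ```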

  2. A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization

    NASA Astrophysics Data System (ADS)

    Cao, Ruifen; Li, Guoli; Wu, Yican

    Evolutionary algorithms have gained worldwide popularity in multi-objective optimization. This paper proposes a self-adaptive evolutionary algorithm (called SEA) for multi-objective optimization. In the SEA, the probabilities of crossover and mutation, Pc and Pm, are varied depending on the fitness values of the solutions. Fitness assignment in SEA realizes the twin goals of maintaining diversity in the population and guiding the population toward the true Pareto front; the fitness value of an individual depends not only on an improved density estimation but also on its non-dominated rank. The density estimation can maintain diversity in all instances, including when the scales of the objectives differ greatly from each other. SEA is compared against the Non-dominated Sorting Genetic Algorithm (NSGA-II) on a set of test problems introduced by the MOEA community. Simulation results show that SEA is as effective as NSGA-II on most of the test functions, but when the scales of the objectives differ greatly from each other, SEA achieves a better distribution of non-dominated solutions.
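
    A fitness-dependent schedule for Pc and Pm in the spirit of SEA (and of the classic Srinivas-Patnaik adaptive GA) can be sketched as follows; the constants and the linear interpolation are illustrative assumptions.

    ```python
    def adaptive_rates(f, f_avg, f_max,
                       pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
        """Fitness-dependent crossover/mutation probabilities (sketch).

        Below-average individuals get the high rates (explore them);
        above-average individuals get rates shrinking toward the low
        values as they approach the best fitness (protect them).
        Assumes maximisation; all constants are illustrative.
        """
        if f < f_avg or f_max == f_avg:
            return pc_hi, pm_hi
        frac = (f_max - f) / (f_max - f_avg)
        return pc_lo + (pc_hi - pc_lo) * frac, pm_lo + (pm_hi - pm_lo) * frac
    ```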

  3. A multiobjective evolutionary algorithm based on similarity for community detection from signed social networks.

    PubMed

    Liu, Chenlong; Liu, Jing; Jiang, Zhongzhou

    2014-12-01

    Various types of social relationships, such as friends and foes, can be represented as signed social networks (SNs) that contain both positive and negative links. Although many community detection (CD) algorithms have been proposed, most of them were designed primarily for networks containing only positive links. Thus, it is important to design CD algorithms which can handle large-scale SNs. To this end, we first extend the original similarity to a signed similarity based on social balance theory. Then, based on the signed similarity and the natural contradiction between positive and negative links, two objective functions are designed to model the problem of detecting communities in SNs as a multiobjective problem. Afterward, we propose a multiobjective evolutionary algorithm, called MEAs-SN. In MEAs-SN, to overcome the defects of direct and indirect representations for communities, a combined direct and indirect representation is designed. Thanks to this representation, MEAs-SN can switch between different representations during the evolutionary process and can therefore benefit from both. Moreover, owing to this representation, MEAs-SN can also detect overlapping communities directly. In the experiments, both benchmark problems and large-scale synthetic networks generated by various parameter settings are used to validate the performance of MEAs-SN. The experimental results show the effectiveness and efficacy of MEAs-SN on networks with 1000, 5000, and 10,000 nodes and also in various noisy situations. A thorough comparison is also made between MEAs-SN and three existing algorithms, and the results show that MEAs-SN outperforms the other algorithms.

  4. Evaluating and Improving Automatic Sleep Spindle Detection by Using Multi-Objective Evolutionary Algorithms.

    PubMed

    Liu, Min-Yin; Huang, Adam; Huang, Norden E

    2017-01-01

    Sleep spindles are brief bursts of brain activity in the sigma frequency range (11-16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context and hypothesize that using the resultant Pareto fronts for deriving evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms. They include three Fourier, one continuous wavelet transform (CWT), and two Hilbert-Huang transform (HHT) based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726-0.737.

  5. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    NASA Astrophysics Data System (ADS)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology

  6. Generation of multi-million element meshes for solid model-based geometries: The Dicer algorithm

    SciTech Connect

    Melander, D.J.; Benzley, S.E.; Tautges, T.J.

    1997-06-01

    The Dicer algorithm generates a fine mesh by refining each element in a coarse all-hexahedral mesh generated by any existing all-hexahedral mesh generation algorithm. The fine mesh is geometry-conforming. Using existing all-hexahedral meshing algorithms to define the initial coarse mesh simplifies the overall meshing process and allows dicing to take advantage of improvements in other meshing algorithms immediately. The Dicer algorithm will be used to generate large meshes in support of the ASCI program. The authors also plan to use dicing as the basis for parallel mesh generation. Dicing strikes a careful balance between the interactive mesh generation and multi-million element mesh generation processes for complex 3D geometries, providing an efficient means for producing meshes of varying refinement once the coarse mesh is obtained.

  7. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Terzic, Balsa; Kramer, Matthew; Jarvis, Colin

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  8. Creating ensembles of oblique decision trees with evolutionary algorithms and sampling

    DOEpatents

    Cantu-Paz, Erick; Kamath, Chandrika

    2006-06-13

    A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.

  9. On Proportions of Fit Individuals in Population of Mutation-Based Evolutionary Algorithm with Tournament Selection.

    PubMed

    Eremeev, Anton V

    2017-05-10

    In this paper, we consider a fitness-level model of a non-elitist mutation-only evolutionary algorithm (EA) with tournament selection. The model provides upper and lower bounds for the expected proportion of the individuals with fitness above given thresholds. In the case of so-called monotone mutation, the obtained bounds imply that increasing the tournament size improves the EA performance. As corollaries, we obtain an exponentially vanishing tail bound for the Randomized Local Search on unimodal functions and polynomial upper bounds on the runtime of EAs on the 2-SAT problem and on a family of Set Cover problems proposed by E. Balas.
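
    The tournament selection analysed here is the standard operator sketched below; larger tournament size k means stronger selection pressure, which the paper's bounds link to better performance under monotone mutation.

    ```python
    import random

    def tournament_select(population, fitness, k):
        """Tournament selection: draw k indices uniformly with replacement
        and return the fittest contestant. Larger k means stronger
        selection pressure."""
        contestants = random.choices(range(len(population)), k=k)
        best = max(contestants, key=lambda i: fitness[i])
        return population[best]
    ```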

  10. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
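
    The snapshot-quality objective, modularity, can be computed directly from the adjacency matrix; a dense-matrix numpy sketch follows (the NMI-based temporal-cost objective is omitted).

    ```python
    import numpy as np

    def modularity(adj, labels):
        """Newman modularity Q of a partition (the snapshot-quality
        objective). adj is a symmetric 0/1 adjacency matrix and
        labels[i] is the community of node i."""
        adj = np.asarray(adj, dtype=float)
        m = adj.sum() / 2.0                    # number of edges
        k = adj.sum(axis=1)                    # node degrees
        same = np.equal.outer(labels, labels)  # same-community mask
        q = (adj - np.outer(k, k) / (2.0 * m)) * same
        return q.sum() / (2.0 * m)
    ```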

  11. Decomposition-based multiobjective evolutionary algorithm for community detection in dynamic social networks.

    PubMed

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.

  12. Application of hybrid evolutionary algorithms to low exhaust emission diesel engine design

    NASA Astrophysics Data System (ADS)

    Jeong, S.; Obayashi, S.; Minemura, Y.

    2008-01-01

    A hybrid evolutionary algorithm, consisting of a genetic algorithm (GA) and particle swarm optimization (PSO), is proposed. Generally, GAs maintain diverse solutions of good quality in multi-objective problems, while PSO shows fast convergence to the optimum solution. By coupling these algorithms, the GA compensates for the low diversity of PSO, while PSO compensates for the high computational costs of the GA. The hybrid algorithm was validated using standard test functions. The results showed that the hybrid algorithm has better performance than either a pure GA or pure PSO. The method was then applied to an engineering design problem: the geometry of a diesel engine combustion chamber was optimized to reduce exhaust emissions such as NOx, soot and CO. The results demonstrated the usefulness of the present method for this engineering design problem. To identify the relation between exhaust emissions and combustion chamber geometry, data mining was performed with a self-organising map (SOM). The results indicate that the volume near the lower central part of the combustion chamber has a large effect on exhaust emissions and that the optimum chamber geometry will vary depending on the fuel injection angle.
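
    One iteration of such a coupling might interleave a PSO velocity update (fast convergence) with GA-style mutation of a few particles (diversity), as in the sketch below; the constants and the specific interleaving are illustrative, not the authors' scheme.

    ```python
    import numpy as np

    def hybrid_step(X, V, pbest, gbest, rng,
                    w=0.7, c1=1.5, c2=1.5, pm=0.1, sigma=0.05):
        """One iteration of a GA/PSO hybrid (sketch): a standard PSO
        velocity update for fast convergence, then GA-style Gaussian
        mutation of a random subset of particles to restore diversity."""
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        mask = rng.random(len(X)) < pm            # GA step: mutate a few
        X[mask] += rng.normal(0.0, sigma, X[mask].shape)
        return X, V
    ```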

  13. Environment Sensitivity-based Cooperative Co-evolutionary Algorithms for Dynamic Multi-objective Optimization.

    PubMed

    Xu, Biao; Zhang, Yong; Gong, Dunwei; Guo, Yinan; Rong, Miao

    2017-01-16

    Dynamic multi-objective optimization problems (DMOPs) not only involve multiple conflicting objectives, but these objectives may also vary with time, raising a challenge for researchers to solve them. This paper presents a cooperative co-evolutionary strategy based on environment sensitivities for solving DMOPs. In this strategy, a new method that groups decision variables is first proposed, in which all the decision variables are partitioned into two subcomponents according to their interrelation with the environment. Adopting two populations to cooperatively optimize the two subcomponents, two prediction methods, i.e., differential prediction and Cauchy mutation, are then employed respectively to speed up their responses to environmental changes. Furthermore, two improved dynamic multi-objective optimization algorithms, i.e., DNSGAII-CO and DMOPSO-CO, are proposed by incorporating the above strategy into NSGA-II and multi-objective particle swarm optimization, respectively. The proposed algorithms are compared with three state-of-the-art algorithms by applying them to seven benchmark DMOPs. Experimental results reveal that the proposed algorithms significantly outperform the compared algorithms in terms of convergence and distribution on most DMOPs.

  14. A Data-Driven Evolutionary Algorithm for Mapping Multibasin Protein Energy Landscapes.

    PubMed

    Clausen, Rudy; Shehu, Amarda

    2015-09-01

    Evidence is emerging that many proteins involved in proteinopathies are dynamic molecules switching between stable and semistable structures to modulate their function. A detailed understanding of the relationship between structure and function in such molecules demands a comprehensive characterization of their conformation space. Currently, only stochastic optimization methods are capable of exploring conformation spaces to obtain sample-based representations of associated energy surfaces. These methods have to address the fundamental but challenging issue of balancing computational resources between exploration (obtaining a broad view of the space) and exploitation (going deep in the energy surface). We propose a novel algorithm that strikes an effective balance by employing concepts from evolutionary computation. The algorithm leverages deposited crystal structures of wildtype and variant sequences of a protein to define a reduced, low-dimensional search space from where to rapidly draw samples. A multiscale technique maps samples to local minima of the all-atom energy surface of a protein under investigation. Several novel algorithmic strategies are employed to avoid premature convergence to particular minima and obtain a broad view of a possibly multibasin energy surface. Analysis of applications on different proteins demonstrates the broad utility of the algorithm to map multibasin energy landscapes and advance modeling of multibasin proteins. In particular, applications on wildtype and variant sequences of proteins involved in proteinopathies demonstrate that the algorithm makes an important first step toward understanding the impact of sequence mutations on misfunction by providing the energy landscape as the intermediate explanatory link between protein sequence and function.

  15. Interlog protein network: an evolutionary benchmark of protein interaction networks for the evaluation of clustering algorithms.

    PubMed

    Jafari, Mohieddin; Mirzaie, Mehdi; Sadeghi, Mehdi

    2015-10-05

    In the field of network science, exploring principal and crucial modules or communities is critical in the deduction of relationships and organization of complex networks. This approach expands an arena, and thus allows further study of biological functions in the field of network biology. As the clustering algorithms currently employed in finding modules have innate uncertainties, external and internal validations are necessary. Sequence and network structure alignment has been used to define the Interlog Protein Network (IPN), an evolutionarily conserved network with communal nodes and fewer false-positive links. In the current study, the IPN is employed as an evolution-based benchmark for the validation of module finding methods. The clustering results of five algorithms, namely Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Cartographic Representation (CR), Laplacian Dynamics (LD) and a Genetic Algorithm to find communities in Protein-Protein Interaction networks (GAPPI), are assessed by the IPN in four distinct Protein-Protein Interaction Networks (PPINs). MCL proves to be the more accurate algorithm under this evolutionary benchmarking approach. Also, the biological relevance of proteins in the IPN modules generated by MCL is compatible with standard biological databases such as Gene Ontology, KEGG and Reactome. In this study, the IPN shows its potential for validation of clustering algorithms due to its biological logic and straightforward implementation.

  16. A Model-Based Autofocus Algorithm for Ultrasonic Imaging Using a Flexible Array

    NASA Astrophysics Data System (ADS)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2010-02-01

    Autofocus is a methodology for estimating and correcting errors in the assumed parameters of an imaging algorithm. It provides improved image quality and, therefore, better defect detection and characterization capabilities. In this paper, we present a new autofocus algorithm developed specifically for ultrasonic non-destructive testing and evaluation (NDE). We consider the estimation and correction of errors in the assumed element positions for a flexible ultrasonic array coupled to a specimen with an unknown surface profile. The algorithm performs a weighted least-squares minimization of the time-of-arrival errors in the echo data using assumed models for known features in the specimen. The algorithm is described for point and planar specimen features and demonstrated using experimental data from a flexible array prototype.
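
    A sketch of the weighted least-squares step for a point feature, written with scipy, is given below; the wave speed, the pulse-echo time-of-arrival model, and all variable names are assumptions made for illustration rather than the paper's notation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 5900.0  # assumed longitudinal wave speed in the specimen (m/s)

    def toa_residuals(dxy, elem_xy, point_xy, toa_meas, weights):
        """Weighted time-of-arrival residuals for a point feature.

        dxy flattens the unknown per-element position errors; the
        autofocus estimate is the correction that minimises the
        weighted residuals (pulse-echo model assumed)."""
        pos = elem_xy + dxy.reshape(elem_xy.shape)
        toa_model = 2.0 * np.linalg.norm(pos - point_xy, axis=1) / C
        return weights * (toa_model - toa_meas)

    # usage sketch:
    # res = least_squares(toa_residuals, x0=np.zeros(2 * n_elems),
    #                     args=(elem_xy, point_xy, toa_meas, weights))
    ```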

  17. Control of modulated vibration using an enhanced adaptive filtering algorithm based on model-based approach

    NASA Astrophysics Data System (ADS)

    Kim, Byeongil; Washington, Gregory N.; Singh, Rajendra

    2012-08-01

    Conventional adaptive filtering algorithms, typically limited to the control of single or multiple sinusoids, are not appropriate to control modulated vibrations, especially in the presence of rich side band structures. To overcome this deficiency, a new control algorithm is proposed that introduces a feedback loop with the model predictive sliding mode control (MPSMC) in the adaptive filtering system. Several amplitude and frequency modulation cases are first computationally studied, and conventional and proposed methods are comparatively evaluated in terms of estimation error, performance in time and frequency domains, stability, and uncertainty in the reference signal. To experimentally validate the proposed algorithm, an active strut (with longitudinal vibrations) is constructed. Overall, the proposed adaptive algorithm yields superior reductions at the main frequencies and at side bands; also, good attenuation is found on a broadband basis.

  18. A novel evolutionary algorithm applied to algebraic modifications of the RANS stress-strain relationship

    NASA Astrophysics Data System (ADS)

    Weatheritt, Jack; Sandberg, Richard

    2016-11-01

    This paper presents a novel and promising approach to turbulence model formulation, rather than putting forward a particular new model. Evolutionary computation has brought symbolic regression of scalar fields into the domain of algorithms and this paper describes a novel expansion of Gene Expression Programming for the purpose of tensor modeling. By utilizing high-fidelity data and uncertainty measures, mathematical models for tensors are created. The philosophy behind the framework is to give freedom to the algorithm to produce a constraint-free model; its own functional form that was not previously imposed. Turbulence modeling is the target application, specifically the improvement of separated flow prediction. Models are created by considering the anisotropy of the turbulent stress tensor and formulating non-linear constitutive stress-strain relationships. A previously unseen flow field is computed and compared to the baseline linear model and an established non-linear model of comparable complexity. The results are highly encouraging.

  19. Comparison of Multiobjective Evolutionary Algorithms for Operations Scheduling under Machine Availability Constraints

    PubMed Central

    Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world Job-Shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be a better choice of tool since it yields more solutions in the approximate Pareto frontier. PMID:24489502
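
    Of the two performance indexes, Hypervolume has a simple closed form in two objectives: sort the non-dominated front and sweep, as in this minimisation sketch with an explicit reference point.

    ```python
    def hypervolume_2d(front, ref):
        """Hypervolume of a 2-objective minimisation front: the area
        dominated by the front and bounded by the reference point.
        front is a list of mutually non-dominated (f1, f2) pairs."""
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in sorted(front):        # ascending f1 => descending f2
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
        return hv

    print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))  # 6.0
    ```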

  20. Multi-objective flexible job shop scheduling problem using variable neighborhood evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Chun; Ji, Zhicheng; Wang, Yan

    2017-07-01

    In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) was studied with the objectives of minimizing makespan, total workload and critical workload. A variable neighborhood evolutionary algorithm (VNEA) was proposed to obtain a set of Pareto optimal solutions. First, two novel crowding operators in terms of the decision space and objective space were proposed, and they were used in mating selection and environmental selection, respectively. Then, two well-designed neighborhood structures were used in local search, which consider the problem characteristics and achieve fast convergence. Finally, an extensive comparison was carried out with state-of-the-art methods specially presented for solving MOFJSP on well-known benchmark instances. The results show that the proposed VNEA is more effective than other algorithms in solving MOFJSP.

  1. Improved evolutionary algorithm for the global optimization of clusters with competing attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Cruz, S. M. A.; Marques, J. M. C.; Pereira, F. B.

    2016-10-01

    We propose improvements to our evolutionary algorithm (EA) [J. M. C. Marques and F. B. Pereira, J. Mol. Liq. 210, 51 (2015)] in order to avoid dissociative solutions in the global optimization of clusters with competing attractive and repulsive interactions. The improved EA outperforms the original version of the method for charged colloidal clusters in the size range 3 ≤ N ≤ 25, which is a very stringent test for global optimization algorithms. While the Bernal spiral is the global minimum for clusters in the interval 13 ≤ N ≤ 18, the lowest-energy structure is a peculiar, so-called beaded-necklace, motif for 19 ≤ N ≤ 25. We have also applied the method for larger sizes and unusual quasi-linear and branched clusters arise as low-energy structures.

  2. Comparison of multiobjective evolutionary algorithms for operations scheduling under machine availability constraints.

    PubMed

    Frutos, M; Méndez, M; Tohmé, F; Broz, D

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world Job-Shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be a better choice of tool since it yields more solutions in the approximate Pareto frontier.

  3. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
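
    The wrapper idea, SVM accuracy serving as the GA's fitness for a 0/1 variable-selection mask, can be sketched with scikit-learn as below; the RBF kernel, 5-fold cross-validation, and accuracy scoring are illustrative choices, not necessarily the authors' settings.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y):
        """GA fitness: cross-validated SVM accuracy on the variables
        selected by the 0/1 gene mask."""
        mask = np.asarray(mask, dtype=bool)
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()
    ```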

  4. Hybrid model based on Genetic Algorithms and SVM applied to variable selection within fruit juice classification.

    PubMed

    Fernandez-Lozano, C; Canto, C; Gestal, M; Andrade-Garda, J M; Rabuñal, J R; Dorado, J; Pazos, A

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.

  5. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    PubMed Central

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
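
    A log-linear behavior selector reduces to a softmax over weighted behavior features; the sketch below samples a behavior with probability proportional to exp(w·φ(b)), where the feature construction is hypothetical.

    ```python
    import numpy as np

    def select_behavior(features, weights, rng=np.random):
        """Log-linear (softmax) behavior selection (sketch).

        features[b] is a feature vector describing the fish's state when
        behavior b (prey, swarm, follow, ...) is the candidate; behavior
        b is sampled with probability proportional to exp(w . phi(b))."""
        scores = features @ weights
        p = np.exp(scores - scores.max())   # numerically stable softmax
        p /= p.sum()
        return rng.choice(len(p), p=p)
    ```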

  6. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  7. An improved algorithm for model-based analysis of evoked skin conductance responses

    PubMed Central

    Bach, Dominik R.; Friston, Karl J.; Dolan, Raymond J.

    2013-01-01

    Model-based analysis of psychophysiological signals is more robust to noise – compared to standard approaches – and may furnish better predictors of psychological state, given a physiological signal. We have previously established the improved predictive validity of model-based analysis of evoked skin conductance responses to brief stimuli, relative to standard approaches. Here, we consider some technical aspects of the underlying generative model and demonstrate further improvements. Most importantly, harvesting between-subject variability in response shape can improve predictive validity, but only under constraints on plausible response forms. A further improvement is achieved by conditioning the physiological signal with high pass filtering. A general conclusion is that precise modelling of physiological time series does not markedly increase predictive validity; instead, it appears that a more constrained model and optimised data features provide better results, probably through a suppression of physiological fluctuation that is not caused by the experiment. PMID:24063955

  8. Spatial multiobjective optimization of agricultural conservation practices using a SWAT model and an evolutionary algorithm.

    PubMed

    Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine

    2012-12-09

    Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., [5,12,20]) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding set of research is attempting to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods [3,4,9,10,13-15,17-19,22,23,25]. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model [7] with a

  9. Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm

    PubMed Central

    Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine

    2012-01-01

    Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., [5,12,20]) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding set of research is attempting to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods [3,4,9,10,13-15,17-19,22,23,25]. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model [7] with a

  10. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    DTIC Science & Technology

    2012-09-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained.

  11. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    NASA Astrophysics Data System (ADS)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major producers of fishery products due to its location in an equatorial environment. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand and high price. However, the demand for grouper fish still cannot be met from the wild catch. Therefore, there is a need to farm grouper fish to cater to the market demand. In order to farm grouper fish, prior knowledge of the proper nutrients needed is required, because no exact data are available. Therefore, in this study, primary and secondary data are collected despite the limited number of related papers, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. This study would thus unlock frontiers for extensive research into grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: a feasible, low-fitness solution can be obtained quickly. These fitness values can further be used to minimize the cost of farming grouper fish.

  12. A General Framework of Dynamic Constrained Multiobjective Evolutionary Algorithms for Constrained Optimization.

    PubMed

    Zeng, Sanyou; Jiao, Ruwang; Li, Changhe; Li, Xi; Alkasassbeh, Jawdat S

    2017-09-01

    A novel multiobjective technique is proposed for solving constrained optimization problems (COPs) in this paper. The method highlights three different perspectives: 1) a COP is converted into an equivalent dynamic constrained multiobjective optimization problem (DCMOP) with three objectives: a) the original objective; b) a constraint-violation objective; and c) a niche-count objective; 2) a method of gradually reducing the constraint boundary aims to handle the constraint difficulty; and 3) a method of gradually reducing the niche size aims to handle the multimodal difficulty. A general framework for the design of dynamic constrained multiobjective evolutionary algorithms is proposed for solving DCMOPs. Three popular types of multiobjective evolutionary algorithms, i.e., Pareto ranking-based, decomposition-based, and hypervolume indicator-based, are employed to instantiate the framework. The three instantiations are tested on two benchmark suites. Experimental results show that they perform better than, or competitively with, a set of state-of-the-art constraint optimizers, especially on problems with a large number of dimensions.

  13. AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.

    PubMed

    Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou

    2017-01-01

    In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
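
    One of the two optimisers named, Differential Evolution, generates trial settings as in the DE/rand/1/bin sketch below; on a beamline the fitness would be a beam-quality readback, and the F and CR values here are generic defaults.

    ```python
    import numpy as np

    def de_candidate(pop, i, F=0.8, CR=0.9, rng=np.random):
        """DE/rand/1/bin trial vector for individual i (sketch)."""
        n, d = pop.shape
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.rand(d) < CR
        cross[rng.randint(d)] = True        # at least one gene from the mutant
        return np.where(cross, mutant, pop[i])
    ```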

  14. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions.

    PubMed

    Watson, Richard A; Mills, Rob; Buckley, C L; Kouvaris, Kostas; Jackson, Adam; Powers, Simon T; Cox, Chris; Tudge, Simon; Davies, Adam; Kounios, Loizos; Power, Daniel

    2016-01-01

    The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term "evolutionary connectionism" to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary

  15. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass fueled cookstove used in developing nations for household cooking. In this cookstove wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion in this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
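
    The defining feature of a GBEA, mating restricted to neighbours on the imposed graph, fits in a few lines; in the sketch below crossover, mutation, and evaluation are problem-specific callables, and replacement of the less fit parent is one of several reasonable conventions.

    ```python
    import random

    def gbea_mating_event(population, fitness, neighbors,
                          crossover, mutate, evaluate):
        """One mating event in a graph-based EA (sketch). A random vertex
        mates only with a neighbor on the imposed graph; the child
        replaces the less fit of the pair when it improves on it.
        Maximisation is assumed."""
        i = random.randrange(len(population))
        j = random.choice(neighbors[i])
        child = mutate(crossover(population[i], population[j]))
        loser = i if fitness[i] < fitness[j] else j
        f_child = evaluate(child)
        if f_child > fitness[loser]:
            population[loser], fitness[loser] = child, f_child
    ```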

  16. Microcellular propagation prediction model based on an improved ray tracing algorithm.

    PubMed

    Liu, Z-Y; Guo, L-X; Fan, T-Q

    2013-11-01

    Two-dimensional (2D)/two-and-one-half-dimensional ray tracing (RT) algorithms based on the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on the improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, the use of threshold control of each ray path and the handling of visible grid points for reflection and diffraction sources are adopted, resulting in an improved efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.

  17. XTALOPT: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Lonie, David C.; Zurek, Eva

    2011-02-01

    The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator, which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation-based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy-to-use, intuitive graphical interface. Program summary: Program title: XTALOPT. Catalogue identifier: AEGX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPL v2.1 or later [1]. No. of lines in distributed program, including test data, etc.: 36 849. No. of bytes in distributed program, including test data, etc.: 1 149 399. Distribution format: tar.gz. Programming language: C++. Computer: PCs, workstations, or clusters. Operating system: Linux. Classification: 7.7. External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]. Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics. Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely

  18. A parallel domain decomposition algorithm for coastal ocean circulation models based on integer linear programming

    NASA Astrophysics Data System (ADS)

    Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan

    2017-05-01

    This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with balanced workloads according to the number of processors and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (complete formulation) has no additional restrictions, although it is impractical for large global domains. The second one (feasible) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. The parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than the ones with the base decomposition, and the complete formulation is better than the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.

  19. [3-D endocardial surface modelling based on the convex hull algorithm].

    PubMed

    Lu, Ying; Xi, Ri-hui; Shen, Hai-dong; Ye, You-li; Zhang, Yong

    2006-11-01

    In this paper, a method based on the convex hull algorithm is presented for extracting modelling data from the locations of catheter electrodes within a cardiac chamber, so as to create a 3-D model of the heart chamber during diastole and to obtain a good 3-D reconstruction of the chamber based on VTK.
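
    A minimal sketch of the surface-construction step, using SciPy's ConvexHull (the paper's implementation is VTK-based; the electrode positions here are random placeholders):

```python
# Build a 3-D convex hull from hypothetical catheter-electrode positions,
# standing in for the chamber-surface step described above.
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(200, 3)                  # placeholder (x, y, z) locations
hull = ConvexHull(points)

print("hull vertices:", len(hull.vertices))      # points on the surface
print("triangular faces:", len(hull.simplices))  # mesh usable for 3-D rendering
print("enclosed volume:", hull.volume)           # chamber volume estimate
```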

  20. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    NASA Technical Reports Server (NTRS)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics is used to make critical decisions.
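
    An illustrative sketch, not the authors' model: a scalar Kalman filter tracks a degrading health indicator, and a remaining useful life (RUL) estimate is read off by extrapolating the filtered mean to a failure threshold. All parameters and data are invented:

```python
# Scalar Kalman filter over a linearly degrading health indicator, followed
# by a deterministic extrapolation of the filtered mean to a threshold.
import numpy as np

def kalman_rul(measurements, dt=1.0, drift=-0.01, q=1e-5, r=0.01, threshold=0.2):
    x, p = 1.0, 1.0                           # initial health estimate, variance
    for z in measurements:
        x, p = x + drift * dt, p + q          # predict: linear degradation model
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # update with the measurement
    rul = max(0.0, (x - threshold) / -drift)  # extrapolate mean to threshold
    return x, p, rul

zs = 1.0 - 0.01 * np.arange(50) + 0.02 * np.random.randn(50)
x, p, rul = kalman_rul(zs)
print(f"health={x:.3f}, var={p:.4f}, RUL ~ {rul:.1f} steps")
```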

  1. A Parallel EM Algorithm for Model-Based Clustering Applied to the Exploration of Large Spatio-Temporal Data

    SciTech Connect

    Chen, Wei-Chen; Ostrouchov, George; Pugmire, Dave; Prabhat; Wehner, Michael

    2013-01-01

    We develop a parallel EM algorithm for multivariate Gaussian mixture models and use it to perform model-based clustering of a large climate data set. Three variants of the EM algorithm are reformulated in parallel and a new variant that is faster is presented. All are implemented using the single program, multiple data (SPMD) programming model, which is able to take advantage of the combined collective memory of large distributed computer architectures to process larger data sets. Displays of the estimated mixture model rather than the data allow us to explore multivariate relationships in a way that scales to arbitrary size data. We study the performance of our methodology on simulated data and apply our methodology to a high resolution climate dataset produced by the community atmosphere model (CAM5). This article has supplementary material online.
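
    A single-machine sketch of the SPMD idea described above: each simulated rank computes local E-step sufficient statistics for a one-dimensional, two-component Gaussian mixture, and a reduction combines them for the M-step. Real SPMD code would use MPI; here data chunks stand in for ranks:

```python
import numpy as np

def local_estep(chunk, means, variances, weights):
    # responsibilities r[n, k] of each component for this rank's points
    dens = np.stack([
        weights[k] * np.exp(-0.5 * (chunk - means[k]) ** 2 / variances[k])
        / np.sqrt(2 * np.pi * variances[k]) for k in range(2)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)
    # local sufficient statistics: counts, sums, sums of squares
    return r.sum(axis=0), r.T @ chunk, r.T @ chunk ** 2

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 0.5, 5000)])
means, variances, weights = np.array([-1., 1.]), np.array([1., 1.]), np.array([.5, .5])

for _ in range(50):
    stats = [local_estep(c, means, variances, weights)
             for c in np.array_split(data, 4)]   # 4 simulated ranks
    n = sum(s[0] for s in stats)                 # reduction (allreduce in MPI)
    s1 = sum(s[1] for s in stats)
    s2 = sum(s[2] for s in stats)
    means = s1 / n                               # M-step from combined stats
    variances = s2 / n - means ** 2
    weights = n / n.sum()

print(means, variances, weights)
```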

  2. An improved algorithm for model-based analysis of evoked skin conductance responses.

    PubMed

    Bach, Dominik R; Friston, Karl J; Dolan, Raymond J

    2013-12-01

    Model-based analysis of psychophysiological signals is more robust to noise than standard approaches and may furnish better predictors of psychological state, given a physiological signal. We have previously established the improved predictive validity of model-based analysis of evoked skin conductance responses to brief stimuli, relative to standard approaches. Here, we consider some technical aspects of the underlying generative model and demonstrate further improvements. Most importantly, harvesting between-subject variability in response shape can improve predictive validity, but only under constraints on plausible response forms. A further improvement is achieved by conditioning the physiological signal with high-pass filtering. A general conclusion is that precise modelling of physiological time series does not markedly increase predictive validity; instead, it appears that a more constrained model and optimised data features provide better results, probably through a suppression of physiological fluctuation that is not caused by the experiment. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
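
    A minimal sketch of the signal-conditioning step mentioned above, assuming SciPy; the sampling rate, cut-off frequency, and synthetic signal are placeholders, not the paper's values:

```python
# Zero-phase high-pass filtering of a stand-in skin conductance time series.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0                                        # sampling rate (Hz, assumed)
b, a = butter(1, 0.05 / (fs / 2), btype="highpass")  # first-order, 0.05 Hz cut-off
signal = np.random.randn(1000).cumsum()          # placeholder SC time series
conditioned = filtfilt(b, a, signal)             # removes slow drift
```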

  3. Comparing state-of-the-art evolutionary multi-objective algorithms for long-term groundwater monitoring design

    NASA Astrophysics Data System (ADS)

    Kollat, J. B.; Reed, P. M.

    2006-06-01

    This study compares the performances of four state-of-the-art evolutionary multi-objective optimization (EMO) algorithms: the Non-Dominated Sorted Genetic Algorithm II (NSGAII), the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (ɛMOEA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), on a four-objective long-term groundwater monitoring (LTM) design test case. The LTM test case objectives include: (i) minimize sampling cost, (ii) minimize contaminant concentration estimation error, (iii) minimize contaminant concentration estimation uncertainty, and (iv) minimize contaminant mass estimation error. The 25-well LTM design problem was enumerated to provide the true Pareto-optimal solution set to facilitate rigorous testing of the EMO algorithms. The performances of the four algorithms are assessed and compared using three runtime performance metrics (convergence, diversity, and ɛ-performance), two unary metrics (the hypervolume indicator and unary ɛ-indicator) and the first-order empirical attainment function. Results of the analyses indicate that the ɛ-NSGAII greatly exceeds the performance of the NSGAII and the ɛMOEA. The ɛ-NSGAII also achieves superior performance relative to the SPEA2 in terms of search effectiveness and efficiency. In addition, the ɛ-NSGAII's simplified parameterization and its ability to adaptively size its population and automatically terminate result in an algorithm which is efficient, reliable, and easy to use for water resources applications.
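
    A small sketch of one metric used in such comparisons, the unary hypervolume indicator for a two-objective minimization front; the front and reference point below are made-up:

```python
# Hypervolume of a mutually non-dominated 2-D front (minimization), computed
# by sweeping horizontal slabs between successive f2 levels.
def hypervolume_2d(front, ref):
    pts = sorted(front)                      # ascending in f1 (f2 descends)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2) # slab dominated only from f1 on
        prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.5), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # -> 11.5
```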

  4. A Data Preprocessing Algorithm for Classification Model Based On Rough Sets

    NASA Astrophysics Data System (ADS)

    Xiang-wei, Li; Yian-fang, Qi

    To address the limitation that abundant data pose for constructing classification models in data mining, this paper proposes an effective preprocessing algorithm based on rough sets. First, the relation information system is constructed from the original data sets. Second, the attribute reduction theory of rough sets is used to produce the core of the information system; the core is the most important and necessary information, which cannot be removed from the original information system, so it supports the same data analysis as the original data sets and can be used to construct the classification model. Third, an indiscernibility matrix is constructed from the reduced information system, and finally the classification of the original data sets is obtained. Compared to existing techniques, the developed algorithm has the following advantages: (1) it avoids carrying abundant data into follow-up processing, (2) it avoids a large amount of computation across the whole data mining process, and (3) the results are more effective because of the attribute reduction theory of rough sets.
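
    A toy sketch of the rough-set core idea used above, under a simplified definition: an attribute belongs to the core if dropping it changes the indiscernibility partition. The dataset is invented; attribute "c" duplicates "a", so only "b" turns out to be indispensable:

```python
def partition(rows, attrs):
    # group record indices by their value tuple on the given attributes
    groups = {}
    for i, row in enumerate(rows):
        groups.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return set(frozenset(g) for g in groups.values())

def core(rows, attrs):
    full = partition(rows, attrs)
    # an attribute is indispensable if removing it coarsens the partition
    return [a for a in attrs
            if partition(rows, [b for b in attrs if b != a]) != full]

rows = [{"a": 0, "b": 0, "c": 0}, {"a": 0, "b": 1, "c": 0},
        {"a": 1, "b": 0, "c": 1}, {"a": 1, "b": 1, "c": 1}]
print(core(rows, ["a", "b", "c"]))       # -> ['b']
```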

  5. A Model-Based Spike Sorting Algorithm for Removing Correlation Artifacts in Multi-Neuron Recordings

    PubMed Central

    Chichilnisky, E. J.; Simoncelli, Eero P.

    2013-01-01

    We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call “binary pursuit”. The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth. PMID:23671583

  6. A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.

    PubMed

    Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P

    2013-01-01

    We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
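
    A greedy sketch in the spirit of "binary pursuit" (simplified to one electrode and one known template, with white noise; not the authors' full model): spikes are added wherever subtracting the template reduces the squared error by more than a fixed prior penalty:

```python
import numpy as np

def binary_pursuit(trace, template, penalty, max_spikes=100):
    residual = trace.copy()
    spikes = []
    for _ in range(max_spikes):
        corr = np.correlate(residual, template, mode="valid")
        gain = corr - 0.5 * np.dot(template, template)  # half the SSE reduction
        t = int(np.argmax(gain))
        if gain[t] <= penalty:       # no placement beats the sparsity cost
            break
        residual[t:t + len(template)] -= template       # explain away the spike
        spikes.append(t)
    return sorted(spikes), residual

rng = np.random.default_rng(1)
template = np.exp(-np.arange(20) / 4.0)
trace = 0.1 * rng.standard_normal(1000)
for t0 in (100, 400, 404):           # includes two overlapping spikes
    trace[t0:t0 + 20] += template
print(binary_pursuit(trace, template, penalty=0.5)[0])
```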

  7. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Ray tracing is one of the research hotspots in photorealistic graphics. It is an important light-and-shadow technology in many industries working with three-dimensional (3D) structure, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render vertices of the 3D model directly. Rendering results are related to the degree of subdivision of the 3D model. A good light and shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve rendering efficiency. Besides, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is adequate, effects equal to those of pixel shading will be obtained. In practice, a compromise can be struck between efficiency and effectiveness.
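
    A sketch of the adaptive-subdivision idea: a triangle is split at its edge midpoints (four children, as in a quad-tree) whenever its vertex brightnesses disagree by more than a tolerance, so shading detail concentrates where lighting varies. The shade function below is a stand-in for a per-vertex ray-traced brightness:

```python
import numpy as np

def subdivide(tri, shade, tol=0.1, depth=0, max_depth=5):
    b = [shade(p) for p in tri]
    if max(b) - min(b) < tol or depth == max_depth:
        return [tri]                      # vertices agree: keep the triangle
    m01, m12, m20 = (tri[0]+tri[1])/2, (tri[1]+tri[2])/2, (tri[2]+tri[0])/2
    tris = []
    for child in ([tri[0], m01, m20], [tri[1], m12, m01],
                  [tri[2], m20, m12], [m01, m12, m20]):
        tris += subdivide(child, shade, tol, depth + 1, max_depth)
    return tris

# toy brightness field: bright near (0.3, 0.3), falling off with distance
shade = lambda p: float(np.clip(1.0 - np.linalg.norm(p - np.array([0.3, 0.3])), 0.0, 1.0))
tri = [np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])]
print("triangles after subdivision:", len(subdivide(tri, shade)))
```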

  8. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent of a rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values used to evaluate an association rule then require no further database scans, as the data are acquired solely by means of the attribute indices. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and time consumption.
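
    A minimal sketch of the attribute-index idea: one scan of the database builds, for each item, the set of record ids containing it; the support and confidence of any rule then come from set intersections, with no rescans. The transactions are toy data:

```python
records = [{"milk", "bread"}, {"milk", "eggs"}, {"bread", "eggs"},
           {"milk", "bread", "eggs"}]              # toy transactions

index = {}
for rid, items in enumerate(records):              # the single database scan
    for item in items:
        index.setdefault(item, set()).add(rid)

def support(itemset):
    # records containing every item, via intersection of index entries
    ids = set.intersection(*(index[i] for i in itemset))
    return len(ids) / len(records)

antecedent, consequent = {"milk"}, {"bread"}
conf = support(antecedent | consequent) / support(antecedent)
print(support(antecedent | consequent), conf)      # -> 0.5 0.666...
```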

  9. Evolutionary Algorithm for RNA Secondary Structure Prediction Based on Simulated SHAPE Data.

    PubMed

    Montaseri, Soheila; Ganjtabesh, Mohammad; Zare-Mirakabad, Fatemeh

    2016-01-01

    Non-coding RNAs perform a wide range of functions inside the living cells that are related to their structures. Several algorithms have been proposed to predict RNA secondary structure based on minimum free energy. Low prediction accuracy of these algorithms indicates that free energy alone is not sufficient to predict the functional secondary structure. Recently, the obtained information from the SHAPE experiment greatly improves the accuracy of RNA secondary structure prediction by adding this information to the thermodynamic free energy as pseudo-free energy. In this paper, a new method is proposed to predict RNA secondary structure based on both free energy and SHAPE pseudo-free energy. For each RNA sequence, a population of secondary structures is constructed and their SHAPE data are simulated. Then, an evolutionary algorithm is used to improve each structure based on both free and pseudo-free energies. Finally, a structure with minimum summation of free and pseudo-free energies is considered as the predicted RNA secondary structure. Computationally simulating the SHAPE data for a given RNA sequence requires its secondary structure. Here, we overcome this limitation by employing a population of secondary structures. This helps us to simulate the SHAPE data for any RNA sequence and consequently improves the accuracy of RNA secondary structure prediction as it is confirmed by our experiments. The source code and web server of our proposed method are freely available at http://mostafa.ut.ac.ir/ESD-Fold/.

  10. An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.

    NASA Astrophysics Data System (ADS)

    López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.

    2016-04-01

    In the field of medical image segmentation, muscle segmentation is a problem that has not been fully resolved yet. This is due to the fact that the basic assumption of image segmentation, which asserts that a visual distinction should exist between the different structures to be identified, is infringed. As the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from another if they are near each other. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segments muscles and bones from an MRI image. The achieved results show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).
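
    A toy sketch of the idea of evolving a sequence of morphological operators, assuming SciPy; random search stands in for the full evolutionary loop, and the masks are synthetic rather than MRI data:

```python
import numpy as np
from scipy import ndimage

OPS = {"erode": ndimage.binary_erosion, "dilate": ndimage.binary_dilation,
       "open": ndimage.binary_opening, "close": ndimage.binary_closing}

def apply_pipeline(mask, pipeline):
    for name in pipeline:            # an individual is a sequence of operators
        mask = OPS[name](mask)
    return mask

def dice(a, b):                      # overlap score used as fitness
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
noisy = ndimage.binary_dilation(rng.random((64, 64)) > 0.8)   # fake input mask
target = ndimage.binary_closing(noisy)                        # fake reference

# random search over 3-operator pipelines, keeping the best-scoring one
best = max((rng.choice(list(OPS), size=3).tolist() for _ in range(50)),
           key=lambda p: dice(apply_pipeline(noisy, p), target))
print(best)
```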

  11. Geomagnetic Navigation of Autonomous Underwater Vehicle Based on Multi-objective Evolutionary Algorithm.

    PubMed

    Li, Hong; Liu, Mingyong; Zhang, Feihu

    2017-01-01

    This paper presents a multi-objective evolutionary algorithm of bio-inspired geomagnetic navigation for Autonomous Underwater Vehicles (AUVs). Inspired by biological navigation behavior, the solution is proposed without using a priori information, simply by magnetotaxis searching. However, geomagnetic anomalies have a significant influence on the geomagnetic navigation system, as they often disrupt the distribution of the geomagnetic field. An extreme-value region may easily appear in abnormal regions, which can cause the AUV to get lost during the navigation phase. This paper proposes an improved bio-inspired algorithm with behavior constraints, so that the AUV can escape from the abnormal region. First, the navigation problem is considered as an optimization problem. Second, an environmental monitoring operator is introduced to determine whether the algorithm has fallen into a geomagnetic anomaly region. Then, the behavior constraint operator is employed to get out of the abnormal region. Finally, the termination condition is triggered. Compared to the state-of-the-art, the proposed approach effectively overcomes the disturbance of geomagnetic anomalies. The simulation results demonstrate the reliability and feasibility of the proposed approach in complex environments.

  12. Combined evolutionary algorithm and minimum classification error training for DHMM-based land mine detection

    NASA Astrophysics Data System (ADS)

    Zhao, Yunxin; Chen, Ping; Gader, Paul D.; Zhang, Yue

    2002-08-01

    Minimum classification error (MCE) training is proposed to improve performance of a discrete hidden Markov model (DHMM) based landmine detection system. The system (baseline) was proposed previously for detection of both metal and nonmetal mines from ground penetrating radar signatures collected by moving vehicles. An initial DHMM model is trained by conventional methods of vector quantization and Baum-Welch algorithm. A sequential generalized probabilistic descent (GPD) algorithm that minimizes an empirical loss function is then used to estimate the landmine/background DHMM parameters, and an evolutionary algorithm based on fitness score of classification accuracy is used to generate and select codebooks. The landmine data of one geographical site was used for model training, and those of two different sites were used for evaluation of system performance. Three scenarios were studied: apply MCE/GPD alone to DHMM estimation, apply EA alone to codebook generation, first apply EA to codebook generation and then apply MCE/GPD to DHMM estimation. Overall, the combined EA and MCE/GPD training led to the best performance. At the same level of detection rate as the baseline DHMM system, the proposed training reduced false alarm rate by a factor of two, indicating significant performance improvement.

  13. Non-Parametric Evolutionary Algorithm for Estimating Root Zone Soil Moisture

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Shin, Y.; Ines, A. M.

    2013-12-01

    Prediction of root zone soil moisture is critical for water resources management. In this study, we explored a non-parametric evolutionary algorithm for estimating root zone soil moisture from a time series of spatially-distributed rainfall across multiple weather locations under two different hydro-climatic regions. A new genetic algorithm-based hidden Markov model (HMMGA) was developed to estimate long-term root zone soil moisture dynamics at different soil depths. Also, we analyzed rainfall occurrence probabilities and dry/wet spell lengths reproduced by this approach. The HMMGA was used to estimate the optimal state sequences (weather states) based on the precipitation history. Historical root zone soil moisture statistics were then determined based on the weather state conditions. To test the new approach, we selected two different soil moisture fields, Oklahoma (130 km x 130 km) and Illinois (300 km x 500 km), during 1995 to 2009 and 1994 to 2010, respectively. We found that the newly developed framework performed well in predicting root zone soil moisture dynamics at both spatial scales. Also, the reproduced rainfall occurrence probabilities and dry/wet spell lengths matched well with the observations at the spatio-temporal scales. Since the proposed algorithm requires only precipitation and historical soil moisture data from existing, established weather stations, it can serve as an attractive alternative for predicting root zone soil moisture in the future using climate change scenarios and root zone soil moisture history.

  14. XtalOpt version r9: An open-source evolutionary algorithm for crystal structure prediction

    SciTech Connect

    Falls, Zackary; Lonie, David C.; Avery, Patrick; Shamp, Andrew; Zurek, Eva

    2015-10-23

    This is a new version of XtalOpt, an evolutionary algorithm for crystal structure prediction, available for download from the CPC library or the XtalOpt website, http://xtalopt.github.io. XtalOpt is published under the GNU Public License (GPL), an open source license recognized by the Open Source Initiative. The new version incorporates many bug-fixes and new features, which are detailed here. XtalOpt predicts the crystal structure of a system from its stoichiometry alone, using evolutionary algorithms.

  15. Human creativity, evolutionary algorithms, and predictive representations: The mechanics of thought trials.

    PubMed

    Dietrich, Arne; Haider, Hilde

    2015-08-01

    Creative thinking is arguably the pinnacle of cerebral functionality. Like no other mental faculty, it has been omnipotent in transforming human civilizations. Probing the neural basis of this most extraordinary capacity, however, has been doggedly frustrated. Despite a flurry of activity in cognitive neuroscience, recent reviews have shown that there is no coherent picture emerging from the neuroimaging work. Based on this, we take a different route and apply two well established paradigms to the problem. First is the evolutionary framework that, despite being part and parcel of creativity research, has not informed experimental work in cognitive neuroscience. Second is the emerging prediction framework that recognizes predictive representations as an integrating principle of all cognition. We show here how the prediction imperative revealingly synthesizes a host of new insights into the way brains process variation-selection thought trials and present a new neural mechanism for the partial sightedness in human creativity. Our ability to run offline simulations of expected future environments and action outcomes can account for some of the characteristic properties of cultural evolutionary algorithms running in brains, such as degrees of sightedness, the formation of scaffolds to jump over unviable intermediate forms, or how fitness criteria are set for a selection process that is necessarily hypothetical. Prospective processing in the brain also sheds light on how human creating and designing, as opposed to biological creativity, can be accompanied by intentions and foresight. This paper raises questions about the nature of creative thought that, as far as we know, have never been asked before.

  16. Graphical image classification combining an evolutionary algorithm and binary particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Beibei; Wang, Renzhong; Antani, Sameer; Stanley, R. Joe; Thoma, George R.

    2012-01-01

    Biomedical journal articles contain a variety of image types that can be broadly classified into two categories: regular images, and graphical images. Graphical images can be further classified into four classes: diagrams, statistical figures, flow charts, and tables. Automatic figure type identification is an important step toward improved multimodal (text + image) information retrieval and clinical decision support applications. This paper describes a feature-based learning approach to automatically identify these four graphical figure types. We apply Evolutionary Algorithm (EA), Binary Particle Swarm Optimization (BPSO) and a hybrid of EA and BPSO (EABPSO) methods to select an optimal subset of extracted image features that are then classified using a Support Vector Machine (SVM) classifier. Evaluation performed on 1038 figure images extracted from ten BioMedCentral® journals with the features selected by EABPSO yielded classification accuracy as high as 87.5%.
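
    A minimal sketch of the BPSO component described above: particles are bit-masks over features, moved through the standard sigmoid velocity rule. The fitness below is a toy stand-in for the SVM classification accuracy used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
relevant = np.zeros(d, dtype=bool)
relevant[:5] = True                  # pretend the first 5 features matter

def fitness(mask):                   # toy: reward relevant picks, penalize extras
    return (mask & relevant).sum() - 0.2 * (mask & ~relevant).sum()

n = 30
x = (rng.random((n, d)) < 0.5).astype(float)        # particle bit-masks
v = np.zeros((n, d))
pbest = x.copy()
pbest_f = np.array([fitness(p.astype(bool)) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random((n, d)) < 1.0 / (1.0 + np.exp(-v))).astype(float)  # sigmoid rule
    f = np.array([fitness(p.astype(bool)) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("selected features:", np.flatnonzero(gbest).tolist())
```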

  17. New phases of osmium carbide from evolutionary algorithm and ab initio computations

    NASA Astrophysics Data System (ADS)

    Fadda, Alessandro; Fadda, Giuseppe

    2017-09-01

    New crystal phases of osmium carbide are presented in this work. These results were found with the CA code, an evolutionary algorithm (EA) presented in a previous paper which takes full advantage of crystal symmetry by using an ad hoc search space and genetic operators. The new OsC2 and Os2C structures have a lower enthalpy than any known so far. Moreover, the layered pattern of OsC2 serves as a blueprint for building new crystals by adding or removing layers of carbon and/or osmium and generating many other Os + C structures like Os2C, OsC, OsC2 and OsC4. These again have a lower enthalpy than all the investigated structures, including those of the present work. The mechanical, vibrational and electronic properties are discussed as well.

  18. Identifying irregularly shaped crime hot-spots using a multiobjective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolan; Grubesic, Tony H.

    2010-12-01

    Spatial cluster detection techniques are widely used in criminology, geography, epidemiology, and other fields. In particular, spatial scan statistics are popular and efficient techniques for detecting areas of elevated crime or disease events. The majority of spatial scan approaches attempt to delineate geographic zones by evaluating the significance of clusters using likelihood ratio statistics tested with the Poisson distribution. While this can be effective, many scan statistics give preference to circular clusters, diminishing their ability to identify elongated and/or irregular shaped clusters. Although adjusting the shape of the scan window can mitigate some of these problems, both the significance of irregular clusters and their spatial structure must be accounted for in a meaningful way. This paper utilizes a multiobjective evolutionary algorithm to find clusters with maximum significance while quantitatively tracking their geographic structure. Crime data for the city of Cincinnati are utilized to demonstrate the advantages of the new approach and highlight its benefits versus more traditional scan statistics.
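
    A sketch of the Poisson likelihood ratio statistic such scan methods build on (Kulldorff's form, assuming expected counts are normalized to sum to the total case count); the numbers below are invented:

```python
# Log-likelihood ratio for a candidate zone with c observed and e expected
# cases, out of C total cases; only elevated-risk zones are scored.
import numpy as np

def scan_llr(c, e, C):
    if c <= e:
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

print(scan_llr(c=30, e=12.0, C=200))
```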

  19. Evolutionary algorithm based uniform received power and illumination rendering for indoor visible light communication.

    PubMed

    Ding, Jupeng; Huang, Zhitong; Ji, Yuefeng

    2012-06-01

    In this paper, an evolutionary algorithm based optimization scheme is proposed to realize uniform received power and illumination distribution on the communication floor for fully diffuse indoor visible light communication. Simulation results show that in three distributed lighting configurations, by dynamically modifying the relative optical intensity of the transmitters, the dynamic range of the received power, referenced against the peak received power, can be reduced to about 40.0%, while the uniformity illuminance ratio can be improved up to about 0.70, with negligible impact on the root mean square delay spread and bandwidth. Furthermore, the relationship between the field of view of the receivers and the optimization performance is presented as well.

  20. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.

  1. EXAFS study of hydrogen intercalation into ReO3 using the evolutionary algorithm.

    PubMed

    Timoshenko, J; Kuzmin, A; Purans, J

    2014-02-05

    In this study we have investigated the influence of hydrogen intercalation on the local atomic structure of rhenium trioxide using a new approach to EXAFS data analysis, based on the evolutionary algorithm (EA). The proposed EA-EXAFS method is an extension of the conventional reverse Monte Carlo approach but is computationally more efficient. It allows one to perform accurate analysis of EXAFS data from distant coordination shells, taking into account both multiple-scattering and disorder (thermal and static) effects. The power of the EA-EXAFS method is first demonstrated on an example of the model system, pure ReO3, and then it is applied to an in situ study of hydrogen bronze HxReO3 upon hydrogen intercalation. The obtained results allow us to detect changes in the lattice dynamics and correlation of atomic motion, and to follow the structural development at different stages of the reaction.

  2. Signal design using nonlinear oscillators and evolutionary algorithms: application to phase-locked loop disruption.

    PubMed

    Olson, C C; Nichols, J M; Michalowicz, J V; Bucholtz, F

    2011-06-01

    This work describes an approach for efficiently shaping the response characteristics of a fixed dynamical system by forcing with a designed input. We obtain improved inputs by using an evolutionary algorithm to search a space of possible waveforms generated by a set of nonlinear, ordinary differential equations (ODEs). Good solutions are those that result in a desired system response subject to some input efficiency constraint, such as signal power. In particular, we seek to find inputs that best disrupt a phase-locked loop (PLL). Three sets of nonlinear ODEs are investigated and found to have different disruption capabilities against a model PLL. These differences are explored and implications for their use as input signal models are discussed. The PLL was chosen here as an archetypal example but the approach has broad applicability to any input/output system for which a desired input cannot be obtained analytically.

  3. Motif difficulty (MD): a predictive measure of problem difficulty for evolutionary algorithms using network motifs.

    PubMed

    Liu, Jing; Abbass, Hussein A; Green, David G; Zhong, Weicai

    2012-01-01

    One of the major challenges in the field of evolutionary algorithms (EAs) is to characterise which kinds of problems are easy and which are not. Researchers have been attracted to predict the behaviour of EAs in different domains. We introduce fitness landscape networks (FLNs) that are formed using operators satisfying specific conditions and define a new predictive measure that we call motif difficulty (MD) for comparison-based EAs. Because it is impractical to exhaustively search the whole network, we propose a sampling technique for calculating an approximate MD measure. Extensive experiments on binary search spaces are conducted to show both the advantages and limitations of MD. Multidimensional knapsack problems (MKPs) are also used to validate the performance of approximate MD on FLNs with different topologies. The effect of two representations, namely binary and permutation, on the difficulty of MKPs is analysed.

  4. Characterization and classification of adherent cells in monolayer culture using automated tracking and evolutionary algorithms.

    PubMed

    Zhang, Zhen; Bedder, Matthew; Smith, Stephen L; Walker, Dawn; Shabir, Saqib; Southgate, Jennifer

    2016-08-01

    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A system of cell tracking employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24 h period. Subsequent analysis following feature extraction demonstrated the ability of the technique to successfully separate the modulated classes of cell using evolutionary algorithms. Specifically, a Cartesian Genetic Program (CGP) network was evolved that identified average migration speed, in-contact angular velocity, cohesivity and average cell clump size as the principal features contributing to the separation. Our approach not only provides non-biased and parsimonious insight into modulated class behaviours, but can be extracted as mathematical formulae for the parameterization of computational models. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  5. A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems.

    PubMed

    Duan, Hai-Bin; Xu, Chun-Fang; Xing, Zhi-Hui

    2010-02-01

    In this paper, a novel hybrid Artificial Bee Colony (ABC) and Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can jump out of the premature convergence and find the optimal value. To show the performance of our proposed hybrid QEA with ABC, a number of experiments are carried out on a set of well-known Benchmark continuous optimization problems and the related results are compared with two other QEAs: the QEA with classical crossover operation, and the QEA with 2-crossover strategy. The experimental comparison results demonstrate that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems.

  6. Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection

    PubMed Central

    Offman, Marc N; Tournier, Alexander L; Bates, Paul A

    2008-01-01

    Background Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement and protein model selection. Results In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets, compared to 25% for a normal GA. PMID:18673557

  7. Correcting encoder interpolation error on the Green Bank Telescope using an iterative model based identification algorithm

    NASA Astrophysics Data System (ADS)

    Franke, Timothy; Weadon, Tim; Ford, John; Garcia-Sanz, Mario

    2015-10-01

    Various forms of measurement error limit telescope tracking performance in practice. A new method for identifying the correcting coefficients for encoder interpolation error is developed. The algorithm corrects the encoder measurement by identifying a harmonic model of the system and using that model to compute the necessary correction parameters. The approach improves upon others by explicitly modeling the unknown dynamics of the structure and controller and by not requiring a separate system identification to be performed. Experience gained from pin-pointing the source of encoder error on the Green Bank Radio Telescope (GBT) is presented. Several tell-tale indicators of encoder error are discussed. Experimental data from the telescope, tested with two different encoders, are presented. Demonstration of the identification methodology on the GBT, as well as details of its implementation, is discussed. A root mean square tracking error reduction from 0.68 arc sec to 0.21 arc sec was achieved by changing encoders, and this was further reduced to 0.10 arc sec with the calibration algorithm. In particular, the ubiquity of this error source is shown, along with how careful correction makes it possible to go beyond the advertised accuracy of an encoder.
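
    A sketch of the harmonic-correction idea: interpolation error is periodic in the raw encoder reading, so sine and cosine terms at the first few harmonics of that period are fit by least squares and subtracted. The period, amplitudes, and data below are synthetic placeholders, not GBT values:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0.0, 360.0, 500))    # raw encoder angle (deg)
period = 1.0                                     # assumed error period (deg)
w = 2.0 * np.pi * theta / period
err = 0.3 * np.sin(w) + 0.1 * np.cos(2 * w) \
      + 0.02 * rng.standard_normal(theta.size)   # simulated error (arc sec)

# least-squares fit of the first two harmonics of the interpolation period
A = np.column_stack([np.sin(w), np.cos(w), np.sin(2 * w), np.cos(2 * w)])
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
corrected = err - A @ coef
print(f"rms before {err.std():.3f}, after {corrected.std():.3f} arc sec")
```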

  8. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or

  9. Efficient algorithms for protein sequence design and the analysis of certain evolutionary fitness landscapes.

    PubMed

    Kleinberg, J M

    1999-01-01

    Protein sequence design is a natural inverse problem to protein structure prediction: given a target structure in three dimensions, we wish to design an amino acid sequence that is likely to fold to it. A model of Sun, Brem, Chan, and Dill casts this problem as an optimization on a space of sequences of hydrophobic (H) and polar (P) monomers; the goal is to find a sequence that achieves a dense hydrophobic core with few solvent-exposed hydrophobic residues. Sun et al. developed a heuristic method to search the space of sequences, without a guarantee of optimality or near-optimality; Hart subsequently raised the computational tractability of constructing an optimal sequence in this model as an open question. Here we resolve this question by providing an efficient algorithm to construct optimal sequences; our algorithm has a polynomial running time, and performs very efficiently in practice. We illustrate the implementation of our method on structures drawn from the Protein Data Bank. We also consider extensions of the model to larger amino acid alphabets, as a way to overcome the limitations of the binary H/P alphabet. We show that for a natural class of arbitrarily large alphabets, it remains possible to design optimal sequences efficiently. Finally, we analyze some of the consequences of this sequence design model for the study of evolutionary fitness landscapes. A given target structure may have many sequences that are optimal in the model of Sun et al.; following a notion raised by the work of J. Maynard Smith, we can ask whether these optimal sequences are "connected" by successive point mutations. We provide a polynomial-time algorithm to decide this connectedness property, relative to a given target structure. We develop the algorithm by first solving an analogous problem expressed in terms of submodular functions, a fundamental object of study in combinatorial optimization.

  10. Geometric model-based fitting algorithm for orientation-selective PELDOR data

    NASA Astrophysics Data System (ADS)

    Abdullin, Dinar; Hagelueken, Gregor; Hunter, Robert I.; Smith, Graham M.; Schiemann, Olav

    2015-03-01

    Pulsed electron-electron double resonance (PELDOR or DEER) spectroscopy is frequently used to determine distances between spin centres in biomacromolecular systems. Experiments where mutual orientations of the spin pair are selectively excited provide the so-called orientation-selective PELDOR data. This data is characterised by the orientation dependence of the modulation depth parameter and of the dipolar frequencies. This dependence has to be taken into account in the data analysis in order to extract distance distributions accurately from the experimental time traces. In this work, a fitting algorithm for such data analysis is discussed. The approach is tested on PELDOR data-sets from the literature and is compared with the previous results.

  11. Nonlinear systems modeling based on self-organizing fuzzy-neural-network with adaptive computation algorithm.

    PubMed

    Han, Honggui; Wu, Xiao-Long; Qiao, Jun-Fei

    2014-04-01

    In this paper, a self-organizing fuzzy-neural-network with adaptive computation algorithm (SOFNN-ACA) is proposed for modeling a class of nonlinear systems. This SOFNN-ACA is constructed online via simultaneous structure and parameter learning processes. In structure learning, a set of fuzzy rules can be self-designed using an information-theoretic methodology. The fuzzy rules with high spiking intensities (SI) are divided into new ones. And the fuzzy rules with a small relative mutual information (RMI) value will be pruned in order to simplify the FNN structure. In parameter learning, the consequent part parameters are learned through the use of an ACA that incorporates an adaptive learning rate strategy into the learning process to accelerate the convergence speed. Then, the convergence of SOFNN-ACA is analyzed. Finally, the proposed SOFNN-ACA is used to model nonlinear systems. The modeling results demonstrate that this proposed SOFNN-ACA can model nonlinear systems effectively.

  12. [Research on optimization of lower limb parameters of cardiopulmonary resuscitation simulation model based on genetic algorithm].

    PubMed

    Xu, Lin

    2014-10-01

    Sudden cardiac arrest is one of the critical clinical syndromes in emergency situations. Cardiopulmonary resuscitation (CPR) is a necessary treatment for patients with sudden cardiac arrest. In order to effectively simulate the hemodynamic effects in humans under AEI-CPR (active compression-decompression CPR coupled with enhanced external counter-pulsation and an inspiratory impedance threshold valve), and to study the physiological parameters of each part of the lower limbs in more detail, a CPR simulation model established by Babbs was refined. The lower limb part was divided into iliac, thigh and calf segments, comprising 15 physiological parameters. These 15 physiological parameters were then optimized based on a genetic algorithm, and ideal simulation results were finally obtained.

  13. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    PubMed

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse design. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse design. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample size and highly variable drug.

  14. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.
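
    A toy sketch of the Fault Signature Matrix concept: each fault maps to a binary pattern of symptoms, and isolation matches the observed pattern against it. The symptoms and fault names are invented, not those of the referenced SOFC system:

```python
# symptom order: (high stack temperature, low voltage, low air flow)
FSM = {
    "air_blower_fault":  (1, 1, 1),
    "fuel_leak":         (0, 1, 0),
    "reformer_degraded": (1, 1, 0),
}

def isolate(observed):
    # return every fault whose signature matches the observed symptoms
    return [fault for fault, sig in FSM.items() if sig == observed]

print(isolate((1, 1, 0)))    # -> ['reformer_degraded']
```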

  15. Comparing State-of-the-Art Evolutionary Multi-Objective Algorithms for Long-Term Groundwater Monitoring Design

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Kollat, J. B.

    2005-12-01

    This study demonstrates the effectiveness of a modified version of Deb's Non-Dominated Sorted Genetic Algorithm II (NSGAII), which the authors have named the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (Epsilon-NSGAII), at solving a four objective long-term groundwater monitoring (LTM) design test case. The Epsilon-NSGAII incorporates prior theoretical competent evolutionary algorithm (EA) design concepts and epsilon-dominance archiving to improve the original NSGAII's efficiency, reliability, and ease-of-use. This algorithm eliminates much of the traditional trial-and-error parameterization associated with evolutionary multi-objective optimization (EMO) through epsilon-dominance archiving, dynamic population sizing, and automatic termination. The effectiveness and reliability of the new algorithm is compared to the original NSGAII as well as two other benchmark multi-objective evolutionary algorithms (MOEAs), the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (Epsilon-MOEA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). These MOEAs have been selected because they have been demonstrated to be highly effective at solving numerous multi-objective problems. The results presented in this study indicate superior performance of the Epsilon-NSGAII in terms of the hypervolume indicator, unary Epsilon-indicator, and first-order empirical attainment function metrics. In addition, the runtime metric results indicate that the diversity and convergence dynamics of the Epsilon-NSGAII are competitive with, or superior to, those of the SPEA2, with both algorithms greatly outperforming the NSGAII and Epsilon-MOEA in terms of these metrics. The improvements in performance of the Epsilon-NSGAII over its parent algorithm the NSGAII demonstrate that the application of epsilon-dominance archiving, dynamic population sizing with archive injection, and automatic termination greatly improve algorithm efficiency and reliability. In addition, the usability of
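
    A small sketch of epsilon-dominance archiving, the key mechanism named above (simplified: minimization, at most one representative per epsilon-box, lexicographic tie-break within a box):

```python
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_box(point, eps):
    # grid coordinates of the epsilon-box containing this objective vector
    return tuple(int(f // e) for f, e in zip(point, eps))

def update_archive(archive, point, eps=(0.1, 0.1)):
    for box, p in list(archive.items()):
        if dominates(p, point):
            return archive                 # candidate is dominated: reject
        if dominates(point, p):
            del archive[box]               # candidate displaces old members
    box = eps_box(point, eps)
    incumbent = archive.get(box)
    if incumbent is None or point < incumbent:   # one solution per box
        archive[box] = point
    return archive

archive = {}
for p in [(0.91, 0.13), (0.90, 0.12), (0.30, 0.60), (0.31, 0.61)]:
    update_archive(archive, p)
print(sorted(archive.values()))            # -> [(0.30, 0.60), (0.90, 0.12)]
```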

  16. Constraint satisfaction using a hybrid evolutionary hill-climbing algorithm that performs opportunistic arc and path revision

    SciTech Connect

    Bowen, J.; Dozier, G.

    1996-12-31

    This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
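
    A minimal sketch of the arc-revision step such a hybrid interleaves with search (essentially AC-3's revise): values with no support in a neighbouring variable's domain are pruned, and an emptied domain would prove the constraint network inconsistent. The CSP below is a toy:

```python
def revise(domains, xi, xj, constraint):
    # remove values of xi that have no supporting value in xj's domain
    removed = False
    for v in list(domains[xi]):
        if not any(constraint(v, w) for w in domains[xj]):
            domains[xi].discard(v)
            removed = True
    return removed

domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
lt = lambda a, b: a < b            # constraint: x < y
revise(domains, "x", "y", lt)      # prunes x = 3 (no y supports it)
print(domains)                     # -> {'x': {1, 2}, 'y': {1, 2, 3}}
```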

  17. Multi Objective Optimization of Yarn Quality and Fibre Quality Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Anindya; Das, Subhasis; Banerjee, Debamalya

    2013-03-01

    The quality and cost of the resulting yarn play a significant role in determining its end application. The challenging task of any spinner lies in producing a good quality yarn with added cost benefit. The present work performs a multi-objective optimization on two objectives, viz. maximization of cotton yarn strength and minimization of raw material quality. The first objective function is formulated based on the artificial neural network input-output relation between cotton fibre properties and yarn strength. The second objective function is formulated with the well-known regression equation of the spinning consistency index. These two objectives are conflicting in nature, i.e. no single combination of cotton fibre parameters exists which produces maximum yarn strength and minimum cotton fibre quality simultaneously. Therefore, there are several optimal solutions, from which a trade-off is needed depending upon the requirements of the user. In this work, the optimal solutions are obtained with an elitist multi-objective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II). These optimum solutions may lead to the efficient exploitation of raw materials to produce better quality yarns at low cost.

  18. Application of a multi-objective evolutionary algorithm to the spacecraft stationkeeping problem

    NASA Astrophysics Data System (ADS)

    Myers, Philip L.; Spencer, David B.

    2016-10-01

    Satellite operations are becoming an increasingly private industry, requiring increased profitability. Efficient and safe operation of satellites in orbit will ensure longer lasting and more profitable satellite services. This paper focuses on the use of a multi-objective evolutionary algorithm to schedule the maneuvers of a hypothetical satellite operating at geosynchronous altitude, seeking to minimize both the propellant consumed through the execution of stationkeeping maneuvers and the time the satellite is displaced from its desired orbital plane. Minimizing the time out of place increases operational availability, and minimizing propellant usage allows the spacecraft to operate longer. North-South stationkeeping was studied in this paper, through the use of a set of orbit inclination change maneuvers each year. Two cases for the maximum number of maneuvers to be executed were considered, with four and five maneuvers per year. The maneuver schedules delivered by the algorithm require 40-100 m/s of total Δv for two years of operation, with the satellite maintaining its orbital plane to within 0.1° between 84% and 96% of the two years being modeled.

  19. Multi-objective control optimization for greenhouse environment using evolutionary algorithms.

    PubMed

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static-dynamic performance specifications and a smooth control process. A model of nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses, such as small overshoot, fast settling time, and reduced rise time and steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization algorithms are a quite effective and promising tuning method for complex greenhouse production.

  20. Projector Augmented Wave (PAW) Datasets for Multi-Mbar Simulations: An Evolutionary Algorithm Based Recipe

    NASA Astrophysics Data System (ADS)

    Sarkar, K.; Topsakal, M.; Wentzcovitch, R. M.

    2015-12-01

    We attempt to achieve the accuracy of the full-potential linearized augmented-plane-wave (FLAPW) method, as implemented in the WIEN2k code, at the favorable computational efficiency of the projector augmented wave (PAW) method for ab initio calculations of solids. For decades, PAW datasets have been generated by manually choosing their parameters and by visually inspecting the logarithmic derivatives, partial waves, and projector basis set. In addition to being tedious and error-prone, this procedure is inadequate because it is impractical to manually explore the full parameter space: an infinite number of PAW parameter sets for a given augmentation radius can be generated while maintaining all the constraints on logarithmic derivatives and basis sets, and performance verification of all plausible solutions against FLAPW is also impractical. Here we report the development of a hybrid algorithm to construct optimized PAW basis sets that can closely reproduce FLAPW results from zero to ultra-high pressures. The approach applies evolutionary computing (EC) to generate optimum PAW parameter sets using the ATOMPAW code. We use the Quantum ESPRESSO distribution to generate equations of state (EOS), which are compared against target EOSs computed with WIEN2k. Softer PAW potentials that reproduce FLAPW EOSs even more closely can be found with this method. We demonstrate its working principles and workability by optimizing PAW basis functions for carbon, magnesium, aluminum, silicon, calcium, and iron atoms. The algorithm requires minimal user intervention, in the sense that there is no requirement for visual inspection of logarithmic derivatives or of projector functions.

  1. MrsRF: an efficient MapReduce algorithm for analyzing large collections of evolutionary trees

    PubMed Central

    2010-01-01

    Background MapReduce is a parallel framework that has been used effectively to design large-scale parallel applications for large computing clusters. In this paper, we evaluate the viability of the MapReduce framework for designing phylogenetic applications. The problem of interest is generating the all-to-all Robinson-Foulds distance matrix, which has many applications for visualizing and clustering large collections of evolutionary trees. We introduce MrsRF (MapReduce Speeds up RF), a multi-core algorithm to generate a t × t Robinson-Foulds distance matrix between t trees using the MapReduce paradigm. Results We studied the performance of our MrsRF algorithm on two large biological tree sets consisting of 20,000 trees of 150 taxa each and 33,306 trees of 567 taxa each. Our experiments show that MrsRF is a scalable approach reaching a speedup of over 18 on 32 total cores. Our results also show that achieving top speedup on a multi-core cluster requires different cluster configurations. Finally, we show how to use an RF matrix to summarize collections of phylogenetic trees visually. Conclusion Our results show that MapReduce is a promising paradigm for developing multi-core phylogenetic applications. The results also demonstrate that different multi-core configurations must be tested in order to obtain optimum performance. We conclude that RF matrices play a critical role in developing techniques to summarize large collections of trees. PMID:20122186
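
    For readers unfamiliar with the metric, the sketch below computes the Robinson-Foulds distance when each tree is represented by its set of nontrivial bipartitions (conventions differ by a factor of two); MrsRF's contribution is parallelizing the all-to-all matrix, not the per-pair computation.

    ```python
    # Sketch: RF distance as the size of the symmetric difference between
    # bipartition sets; trees are given directly as sets of frozensets.

    def rf_distance(splits_a, splits_b):
        return len(splits_a ^ splits_b)

    tree1 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
    tree2 = {frozenset({"A", "B"}), frozenset({"A", "B", "D"})}

    # The t x t matrix that MrsRF computes in parallel:
    trees = [tree1, tree2]
    matrix = [[rf_distance(x, y) for y in trees] for x in trees]
    print(matrix)  # [[0, 2], [2, 0]]
    ```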

  2. Design Optimization of an Axial Fan Blade Through Multi-Objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Choi, Jae-Ho; Husain, Afzal; Kim, Kwang-Yong

    2010-06-01

    This paper presents design optimization of an axial fan blade with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. The numerical results were validated against experimental data for the axial and tangential velocities. Six design variables related to the blade lean angle and blade profile are selected, and Latin hypercube sampling of design of experiments is used to generate design points within the selected design space. Two objective functions, namely total efficiency and torque, are employed, and the multi-objective optimization is carried out to enhance total efficiency and to reduce torque. The flow analyses are performed numerically at the design points to obtain values of the objective functions. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) with an ɛ-constraint strategy for local search, coupled with a surrogate model, is used for multi-objective optimization. The Pareto-optimal solutions are presented and trade-off analysis is performed between the two competing objectives in view of the design and flow constraints. It is observed that total efficiency is enhanced and torque is decreased as compared to the reference design by the process of multi-objective optimization. The Pareto-optimal solutions are analyzed to understand the mechanism of the improvement in total efficiency and the reduction in torque.

  3. Combining evolutionary algorithms with oblique decision trees to detect bent double galaxies

    SciTech Connect

    Cantu-Paz, E; Kamath, C

    2000-06-22

    Decision trees have long been popular in classification as they use simple and easy-to-understand tests at each node. Most variants of decision trees test a single attribute at a node, leading to axis-parallel trees, where the test results in a hyperplane which is parallel to one of the dimensions in the attribute space. These trees can be rather large and inaccurate in cases where the concept to be learnt is best approximated by oblique hyperplanes. In such cases, it may be more appropriate to use an oblique decision tree, where the decision at each node is a linear combination of the attributes. Oblique decision trees have not gained wide popularity in part due to the complexity of constructing good oblique splits and the tendency of existing splitting algorithms to get stuck in local minima. Several alternatives have been proposed to handle these problems including randomization in conjunction with deterministic hill climbing and the use of simulated annealing. In this paper, they use evolutionary algorithms (EAs) to determine the split. EAs are well suited for this problem because of their global search properties, their tolerance to noisy fitness evaluations, and their scalability to large dimensional search spaces. They demonstrate the technique on a practical problem from astronomy, namely, the classification of galaxies with a bent-double morphology, and describe their experiences with several split evaluation criteria.
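
    A minimal sketch of the quantity such an EA would optimize: an individual encodes an oblique hyperplane (w, b), and fitness is the weighted Gini impurity of the resulting split. The simple mutation loop stands in for the EA machinery; all details are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    # Sketch: fitness of an oblique split (lower impurity is better).

    def gini(labels):
        if labels.size == 0:
            return 0.0
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p**2)

    def split_impurity(w, b, X, y):
        left = X @ w + b > 0      # oblique test: linear combination of attributes
        n = len(y)
        return (left.sum() * gini(y[left]) + (~left).sum() * gini(y[~left])) / n

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a concept best split obliquely

    # Crude (1+1)-style search standing in for the EA:
    best_w, best_b = rng.normal(size=2), 0.0
    best_f = split_impurity(best_w, best_b, X, y)
    for _ in range(500):
        w = best_w + rng.normal(scale=0.2, size=2)
        b = best_b + rng.normal(scale=0.2)
        f = split_impurity(w, b, X, y)
        if f < best_f:
            best_w, best_b, best_f = w, b, f
    print(round(best_f, 4))
    ```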

  4. A graph-based evolutionary algorithm: Genetic Network Programming (GNP) and its extension using reinforcement learning.

    PubMed

    Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu

    2007-01-01

    This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes which execute simple judgment/processing, and these nodes are connected by directed links to each other. 2) The graph structure enables GNP to re-use nodes, thus the structure can be very compact. 3) The node transition of GNP is executed according to its node connections without any terminal nodes, thus the past history of the node transition affects the current node to be used and this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, "GNP with Reinforcement Learning (GNPRL)" which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior to evaluate its effectiveness. Tileworld was used as the simulation environment. The results show some advantages for GNP over conventional methods.

  5. Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms

    PubMed Central

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates tuning the Proportional-Integral-Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static-dynamic performance specifications and a smooth control process. A model of the nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment (rate) in a simulation experiment. The results show that by tuning the gain parameters the controllers achieve good step-response performance: small overshoot, fast settling, and reduced rise time and steady-state error. Moreover, the scheme can be applied to tune systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization is an effective and promising tuning approach for complex greenhouse production. PMID:22163927

  6. Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid

    NASA Astrophysics Data System (ADS)

    Padée, Adam; Kurek, Krzysztof; Zaremba, Krzysztof

    2013-08-01

    Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be manually optimized. This paper proposes an automatic approach to this task by utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to run in a grid environment. It is a two-level hierarchical hybrid utilizing an asynchronous master-slave EA at the cluster level and an island-model EA at the grid level. The technical aspects of using a production grid infrastructure are covered, including communication protocols at both levels. The paper also addresses the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to the larger ones. Finally, a real-life example is presented: the optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4% compared to the manually optimized parameters.

  7. Assessment of Rainfall-Runoff Simulation Model Based on Satellite Algorithm

    NASA Astrophysics Data System (ADS)

    Nemati, A. R.; Zakeri Niri, M.; Moazami, S.

    2015-12-01

    Simulation of the rainfall-runoff process is one of the most important research fields in hydrology and water resources. Generally, the models used in this field are divided into two categories: conceptual and data-driven. In this study, a conceptual model and two data-driven models have been used to simulate the rainfall-runoff process in the Tamer sub-catchment located in the Gorganroud watershed in Iran. The conceptual model used is HEC-HMS, and the data-driven models are a multi-layer perceptron (MLP) neural network and support vector regression (SVR). In addition to simulating the rainfall-runoff process using recorded land precipitation, the performance of four satellite precipitation algorithms, namely CMORPH, PERSIANN, TRMM 3B42, and TRMM 3B42RT, was studied. Calibration and accuracy assessment of the models were performed using the satellite data. The results, based on three criteria, the correlation coefficient (R), root mean square error (RMSE), and mean absolute error (MAE), showed that the SVR and MLP models simulated runoff relatively well, but model errors increased when simulating peak flows.
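
    The three evaluation criteria, in their standard definitions, can be computed as follows (a sketch; variable names are ours):

    ```python
    import numpy as np

    # Standard definitions of the three criteria used in the study.

    def r_rmse_mae(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]            # correlation coefficient
        rmse = np.sqrt(np.mean((obs - sim)**2))    # root mean square error
        mae = np.mean(np.abs(obs - sim))           # mean absolute error
        return r, rmse, mae

    print(r_rmse_mae([1.0, 2.0, 4.0, 8.0], [1.2, 1.8, 4.5, 7.0]))
    ```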

  8. A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    NASA Astrophysics Data System (ADS)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in coastline extraction are solved through small-scale shrinkage, low-pass filtering, and area-based sorting of regions; 2) the initial values of the SDF (signed distance function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, and lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set and shortens the time needed for coastline extraction, while removing non-coastline bodies and improving the identification precision of the main coastline, thereby automating the process of coastline segmentation.

  9. Energy efficient model based algorithm for control of building HVAC systems.

    PubMed

    Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N

    2015-11-01

    Energy-efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real-time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on the state-space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and the external disturbance (ambient temperature) is provided from the real-time data during closed-loop control. The control strategies are ported to a PIC32mx-series microcontroller platform. The building model is implemented in MATLAB, and hardware-in-the-loop (HIL) testing of the strategies is executed over a USB port. Results indicate that compared to traditional proportional-integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly.
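
    A minimal sketch of a lumped RC thermal model of the kind identified here; the single-node structure and all parameter values are illustrative assumptions, not the estimated BuildingLAB model.

    ```python
    # Sketch: one Euler step of a lumped RC building model,
    #   C * dT/dt = (T_supply - T)/R_in + (T_ambient - T)/R_out
    # R and C values are placeholders.

    def rc_step(T_room, T_supply, T_ambient, dt=60.0,
                R_in=0.01, R_out=0.05, C=1e6):
        dT = ((T_supply - T_room) / R_in + (T_ambient - T_room) / R_out) / C
        return T_room + dt * dT

    T = 18.0
    for _ in range(60):          # one hour of 1-minute steps
        T = rc_step(T, T_supply=45.0, T_ambient=5.0)
    print(round(T, 2))
    ```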

  10. A simple model based magnet sorting algorithm for planar hybrid undulators

    SciTech Connect

    Rakowsky, G.

    2010-05-23

    Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and specifying mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.
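
    A sketch of the workflow under stated assumptions: random per-magnet signatures stand in for the scaled, model-derived ones, and a plain swap-descent stands in for "one's favorite optimization algorithm".

    ```python
    import numpy as np

    # Sketch: cumulative sums of per-magnet signatures give trajectory and
    # phase errors for an ordering; a weighted rms cost is minimized by swaps.

    rng = np.random.default_rng(1)
    n = 40
    traj_sig = rng.normal(scale=1e-3, size=n)   # trajectory displacement signatures
    phase_sig = rng.normal(scale=0.1, size=n)   # phase-error signatures

    def rms(x):
        return np.sqrt(np.mean(x**2))

    def cost(order, w_traj=1.0, w_phase=1.0):
        return (w_traj * rms(np.cumsum(traj_sig[order]))
                + w_phase * rms(np.cumsum(phase_sig[order])))

    order = np.arange(n)
    best = cost(order)
    for _ in range(20000):                      # simple swap-descent optimizer
        i, j = rng.integers(n, size=2)
        order[i], order[j] = order[j], order[i]
        c = cost(order)
        if c < best:
            best = c
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    print(best)
    ```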

  11. Solving Large-scale Spatial Optimization Problems in Water Resources Management through Spatial Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Wang, J.; Cai, X.

    2007-12-01

    A water resources system can be defined as a large-scale spatial system, within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors, and hence the solutions to be developed, have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data are usually geo-referenced and in the form of a map. One of the important functions of Geographic Information Systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EAs) have been demonstrated to be successful in solving complex optimization models for water resources management due to their flexibility to incorporate complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEAs). An SEA assimilates spatial information into the EA, and may even change the representation and operators of the EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns by explicit constraints included in the EA. GIS can help identify the spatial linkages and correlations based on the spatial knowledge of the problem. These linkages are incorporated in the fitness function for the preference of the compatible vegetation distribution. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators

  12. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Chae, Han Gil

    Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. The Pareto dominance that is used in SPEA, however, is not enough to account for the

  13. An Evolutionary Search Algorithm for Covariate Models in Population Pharmacokinetic Analysis.

    PubMed

    Yamashita, Fumiyoshi; Fujita, Atsuto; Sasa, Yukako; Higuchi, Yuriko; Tsuda, Masahiro; Hashida, Mitsuru

    2017-09-01

    Building a covariate model is a crucial task in population pharmacokinetics. This study develops a novel method for automated covariate modeling based on gene expression programming (GEP), which enables not only covariate selection but also the construction of nonpolynomial relationships between pharmacokinetic parameters and covariates. To apply GEP to the extended nonlinear least-squares analysis, parameter consolidation and initial-parameter-value estimation algorithms were further developed and implemented. The entire program was coded in Java. The performance of the developed covariate model was evaluated on population pharmacokinetic data for tobramycin. In comparison with the established covariate model, goodness-of-fit to the measured data was greatly improved using only two additional adjustable parameters. Ten test runs yielded the same solution. In conclusion, the systematic exploration method is a potentially powerful tool for prescreening covariate models in population pharmacokinetic analysis.

  14. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms

    PubMed Central

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002

  15. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.

    PubMed

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
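
    A sketch of the stagnation-triggered switch at the heart of Auto-Switch-HybrID, with the two encodings abstracted as callables that each run one generation and return the best fitness; the patience threshold is an assumed parameter.

    ```python
    import random

    # Sketch: evolve with the indirect encoding until best fitness has not
    # improved for `patience` generations, then switch to the direct encoding.

    def auto_switch_run(evolve_indirect, evolve_direct, generations=1000,
                        patience=50, eps=1e-9):
        best, stalled, mode = float("-inf"), 0, "indirect"
        for _ in range(generations):
            step = evolve_indirect if mode == "indirect" else evolve_direct
            fitness = step()                    # one generation, returns best fitness
            if fitness > best + eps:
                best, stalled = fitness, 0
            else:
                stalled += 1
            if mode == "indirect" and stalled >= patience:
                mode, stalled = "direct", 0     # fitness stagnated: switch encoding
        return best, mode

    demo = lambda: random.random()              # dummy stand-ins for both encodings
    print(auto_switch_run(demo, demo, generations=200))
    ```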

  16. A review and experimental study on application of classifiers and evolutionary algorithms in EEG based brain-machine interface systems.

    PubMed

    Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam

    2017-07-18

    Considering the importance and the near-future development of noninvasive Brain-Machine Interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey of the classification and evolutionary methods for EEG-based BMI systems. The paper is divided into two main parts. In the first part, a wide range of base and combinatorial classifiers, including boosting and bagging classifiers, as well as evolutionary algorithms, are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared on two relatively widely used types of BMI system, namely Sensory Motor Rhythm-BMI (SMR-BMI) and Event Related Potentials-BMI (ERPs-BMI). Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. In this study two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, among the base classifiers, LDA (Linear Discriminant Analysis) and SVM (Support Vector Machines) demonstrated the best performances with respect to the CVA evaluation metric, and NB (Naive Bayes) with respect to SDV. Among the combinatorial classifiers, Bagg-DT (Bagging Decision Tree), LogitBoost, and GentleBoost performed best with respect to CVA, and Bagging-LR (Bagging Logistic Regression) and AdaBoost (Adaptive Boosting) with respect to SDV. Finally, regarding the evolutionary algorithms, the single-objective IWO (Invasive Weed Optimization) and bi-objective NSIWO (Nondominated Sorting IWO) algorithms demonstrated the best performances. We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event related potentials) as well as their optimization

  17. A Hybrid Evolutionary Algorithm to Quadratic Three-Dimensional Assignment Problem with Local Search for Many-Core Graphics Processors

    NASA Astrophysics Data System (ADS)

    Lipinski, Piotr

    This paper concerns the quadratic three-dimensional assignment problem (Q3AP), an extension of the quadratic assignment problem (QAP), and proposes an efficient hybrid evolutionary algorithm combining stochastic optimization and local search with a number of crossover operators, a number of mutation operators and an auto-adaptation mechanism. Auto-adaptation manages the pool of evolutionary operators applying different operators in different computation phases to better explore the search space and to avoid premature convergence. Local search additionally optimizes populations of candidate solutions and accelerates evolutionary search. It uses a many-core graphics processor to optimize a number of solutions in parallel, which enables its incorporation into the evolutionary algorithm without excessive increases in the computation time. Experiments performed on benchmark Q3AP instances derived from the classic QAP instances proposed by Nugent et al. confirmed that the proposed algorithm is able to find optimal solutions to Q3AP in a reasonable time and outperforms best known results found in the literature.

  18. Approximation and Parameterized Runtime Analysis of Evolutionary Algorithms for the Maximum Cut Problem.

    PubMed

    Zhou, Yuren; Lai, Xinsheng; Li, Kangshun

    2015-08-01

    The maximum cut (MAX-CUT) problem is to find a bipartition of the vertices of a given graph such that the number of edges with endpoints in different sets is maximized. Though several experimental investigations have shown that evolutionary algorithms (EAs) are efficient for this NP-complete problem, there is little theoretical work about EAs on the problem. In this paper, we theoretically investigate the performance of EAs on the MAX-CUT problem. We find that the (1+1) EA and the (1+1) EA*, two simple EAs, efficiently achieve approximate solutions of (m/2) + (1/4)s(G) and (m/2) + (1/2)(√(8m+1) - 1), where m and s(G) are, respectively, the number of edges and the number of odd-degree vertices in the input graph. We also reveal that for a given integer k the (1+1) EA* finds a cut of size at least k in expected runtime O(nm + (1/δ)^(4k)) and a cut of size at least (m/2) + k in expected runtime O(n^2m + (1/δ)^((64/3)k^2)), where δ is a constant mutation probability and n is the number of vertices in the input graph. Finally, we show that the (1+1) EA and the (1+1) EA* are better than some local search algorithms in one instance, and we also show that these two simple EAs may not be efficient in another instance.
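
    The (1+1) EA analyzed here is simple enough to state in a few lines. A sketch for MAX-CUT follows; per the usual convention, the offspring replaces the parent when its cut is at least as large, while the starred variant is commonly taken to require strict improvement.

    ```python
    import random

    # Sketch of the (1+1) EA for MAX-CUT: a bitstring assigns each vertex to
    # one side; each bit flips independently with probability 1/n.

    def cut_size(bits, edges):
        return sum(1 for u, v in edges if bits[u] != bits[v])

    def one_plus_one_ea(n, edges, iterations=5000):
        x = [random.randint(0, 1) for _ in range(n)]
        fx = cut_size(x, edges)
        for _ in range(iterations):
            y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
            fy = cut_size(y, edges)
            if fy >= fx:        # the (1+1) EA*; would instead require fy > fx
                x, fx = y, fy
        return x, fx

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
    print(one_plus_one_ea(4, edges))
    ```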

  19. Multi-objective evolutionary algorithm for investigating the trade-off between pleiotropy and redundancy

    NASA Astrophysics Data System (ADS)

    Ong, Zhiyang; Lo, Andy Hao-Wei; Berryman, Matthew; Abbott, Derek

    2005-12-01

    The trade-off between pleiotropy and redundancy in telecommunications networks is analyzed in this paper, and networks are optimized to reduce installation costs and propagation delays. The pleiotropy of a server in a telecommunications network is defined as the number of clients and servers that it can service, whilst redundancy is described as the number of servers servicing a client. Telecommunications networks containing many servers with large pleiotropy are cost-effective but vulnerable to network failures and attacks. Conversely, networks containing many servers with high redundancy are reliable but costly. Several key issues regarding the choice of cost functions and techniques in evolutionary computation (such as the modeling of Darwinian evolution, mutualism and commensalism) are discussed, and a future research agenda is outlined. Experimental results indicate that the pleiotropy of servers in the optimum network does improve, whilst the redundancy of clients does not vary significantly, as expected, with evolving networks. This is due to the controlled evolution of networks modeled by the steady-state genetic algorithm; changes in telecommunications networks that occur drastically over a very short period of time are rare.

  20. Combining Interactive Infrastructure Modeling and Evolutionary Algorithm Optimization for Sustainable Water Resources Design

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2013-12-01

    Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUIs) are often used to evaluate infrastructure portfolios; these GUI-based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Optimization based on multiobjective evolutionary algorithms (MOEAs) can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA-based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using an MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.

  1. Constructing Robust Cooperative Networks using a Multi-Objective Evolutionary Algorithm

    PubMed Central

    Wang, Shuai; Liu, Jing

    2017-01-01

    The design and construction of network structures oriented towards different applications has attracted much attention recently. The existing studies indicated that structural heterogeneity plays different roles in promoting cooperation and robustness. Compared with rewiring a predefined network, it is more flexible and practical to construct new networks that satisfy the desired properties. Therefore, in this paper, we study a method for constructing robust cooperative networks where the only constraint is that the number of nodes and links is predefined. We model this network construction problem as a multi-objective optimization problem and propose a multi-objective evolutionary algorithm, named MOEA-Netrc, to generate the desired networks from arbitrary initializations. The performance of MOEA-Netrc is validated on several synthetic and real-world networks. The results show that MOEA-Netrc can construct balanced candidates and is insensitive to the initializations. MOEA-Netrc can find the Pareto fronts for networks with different levels of cooperation and robustness. In addition, further investigation of the robustness of the constructed networks revealed the impact on other aspects of robustness during the construction process. PMID:28134314

  2. New metastable phases in an oxyborate compound obtained by an evolutionary algorithm and Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Vallejo, E.; Avignon, M.

    2017-08-01

    New metastable phases of the Fe homometallic ludwigite compound are obtained and studied using an evolutionary algorithm and Density Functional Theory. Our lowest-energy monoclinic structure is identified as P21/m, space group number 11. The structure evolves towards this monoclinic form as the result of spin-orbit coupling and a particular zigzag magnetic structure. A zigzag distortion of the three-leg ladders follows, similar to the experimental one observed below the transition temperature Tc = 283 K. In this distortion, long and short bonds inside the rungs alternate in a zigzag pattern along the ladder legs. Furthermore, a new type of zigzag structural ordering is observed in the other two low-energy phases analyzed. In this case, the magnetic ordering behaves qualitatively similarly to the experimental structure at 82 K, with antiferromagnetically coupled ferromagnetic rungs. Our calculations show that this magnetic symmetry is not favorable for zigzag structural ordering. Finally, structural and magnetic properties are discussed in comparison with the experimentally known phases.

  3. Constructing Robust Cooperative Networks using a Multi-Objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Liu, Jing

    2017-01-01

    The design and construction of network structures oriented towards different applications has attracted much attention recently. The existing studies indicated that structural heterogeneity plays different roles in promoting cooperation and robustness. Compared with rewiring a predefined network, it is more flexible and practical to construct new networks that satisfy the desired properties. Therefore, in this paper, we study a method for constructing robust cooperative networks where the only constraint is that the number of nodes and links is predefined. We model this network construction problem as a multi-objective optimization problem and propose a multi-objective evolutionary algorithm, named MOEA-Netrc, to generate the desired networks from arbitrary initializations. The performance of MOEA-Netrc is validated on several synthetic and real-world networks. The results show that MOEA-Netrc can construct balanced candidates and is insensitive to the initializations. MOEA-Netrc can find the Pareto fronts for networks with different levels of cooperation and robustness. In addition, further investigation of the robustness of the constructed networks revealed the impact on other aspects of robustness during the construction process.

  4. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification

    PubMed Central

    Wang, Yubo; Veluvolu, Kalyana C.

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition and feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multi-channel EEG is challenging: as more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multi-channel EEG adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance. PMID:28203141
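
    A sketch of the joint encoding idea, assuming illustrative dimensions and a toy spectral feature extractor: one real-valued genome carries both the spatial filter weights and per-feature selection scores. A CMA-ES implementation (e.g., the third-party cma package) would then minimize cross-validated classification error over such genomes.

    ```python
    import numpy as np

    # Sketch: decode a genome into a spatial filter plus a feature mask.
    # Dimensions and the feature extractor below are illustrative assumptions.

    N_CH, N_FEAT = 8, 20

    def decode(genome):
        w = genome[:N_CH]              # spatial filter weights (one per channel)
        mask = genome[N_CH:] > 0.0     # feature selection by thresholding scores
        return w, mask

    def apply(genome, epochs, feature_fn):
        """epochs: (trials, channels, samples) -> selected feature matrix."""
        w, mask = decode(genome)
        virtual = np.tensordot(epochs, w, axes=([1], [0]))  # spatially filtered
        feats = feature_fn(virtual)    # e.g., spectral features, (trials, N_FEAT)
        return feats[:, mask]

    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(10, N_CH, 128))
    feature_fn = lambda v: np.abs(np.fft.rfft(v, axis=1))[:, 1:N_FEAT + 1]
    genome = rng.normal(size=N_CH + N_FEAT)
    print(apply(genome, epochs, feature_fn).shape)
    ```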

  5. Complexity reduction in the use of evolutionary algorithms to function optimization: a variable reduction strategy.

    PubMed

    Wu, Guohua; Pedrycz, Witold; Li, Haifeng; Qiu, Dishan; Ma, Manhao; Liu, Jin

    2013-01-01

    Discovering and utilizing problem domain knowledge is a promising direction towards improving the efficiency of evolutionary algorithms (EAs) when solving optimization problems. We propose a knowledge-based variable reduction strategy (VRS) that can be integrated into EAs to solve unconstrained, first-order differentiable optimization functions more efficiently. VRS originates from the knowledge that, in such a function, the optimal solution is located at a local extreme point at which the partial derivative with respect to each variable equals zero. From this collection of partial-derivative equations, quantitative relations among different variables can be obtained; these relations must be satisfied in the optimal solution. Using such relations, VRS reduces the number of variables and shrinks the solution space when using EAs on the optimization function, thus improving optimization speed and quality. When applying VRS to optimization problems, we only need to modify the calculation of the objective function, so practically it can be integrated with any EA. In this study, VRS is combined with particle swarm optimization variants and tested on several benchmark optimization functions and a real-world optimization problem. Computational results and a comparative study demonstrate the effectiveness of VRS.
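
    A worked miniature of the reduction, using sympy: setting the partial derivative over one variable to zero yields a relation that substitutes that variable out, so the EA searches a smaller space. The function is chosen for illustration.

    ```python
    import sympy as sp

    # Sketch of variable reduction: solve df/dy = 0 for y, substitute it out.

    x, y = sp.symbols("x y")
    f = (x - 3)**2 + (y - x)**2

    relation = sp.solve(sp.Eq(sp.diff(f, y), 0), y)[0]   # df/dy = 0  ->  y = x
    f_reduced = sp.simplify(f.subs(y, relation))          # EA now optimizes x only
    print(relation, f_reduced)                            # x, (x - 3)**2
    ```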

  6. Multi-criteria optimal pole assignment robust controller design for uncertainty systems using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko

    2016-09-01

    This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm, differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise these quasi-convex functions with fixed closed-loop characteristic polynomials, the properties of which are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time-performance indexes, controller structures, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
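
    A sketch of the optimization step, assuming SciPy's differential_evolution and two toy criteria in place of the paper's closed-loop functions; the max-scalarization mirrors the quasi-convex objectives described.

    ```python
    from scipy.optimize import differential_evolution

    # Sketch: minimize a scalarized pair of controller criteria over
    # fixed-order controller gains. Both criteria are toy stand-ins.

    def criteria(k):
        kp, kd = k
        j_perf = (kp - 4.0)**2 + 0.5 * (kd - 1.0)**2   # performance surrogate
        j_robust = 0.1 * (kp**2 + kd**2)               # robustness surrogate
        return j_perf, j_robust

    def scalarized(k, w=(1.0, 1.0)):
        j1, j2 = criteria(k)
        return max(w[0] * j1, w[1] * j2)   # max of quasi-convex terms stays quasi-convex

    result = differential_evolution(scalarized, bounds=[(0, 10), (0, 10)], seed=0)
    print(result.x, result.fun)
    ```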

  7. Modeling an aquatic ecosystem: application of an evolutionary algorithm with genetic doping to reduce prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Buscema, Massimo

    2016-04-01

    Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse, spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change in the Upper Illinois River Basin and the Central Colorado Assessment Project Area, USA, and the Southland region, NZ.

  8. Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Tein, Lim Huai; Ramli, Razamin

    2014-12-01

    Nurse scheduling is a long-standing problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave, and current undesirable work schedules are partly responsible for that working condition. Basically, there is a lack of complementary requirements between the head nurse's responsibilities and the nurses' needs. In particular, given strong nurse preferences, a key challenge in nurse scheduling is the failure to foster tolerance between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is thus hard to achieve while satisfying diverse nurse requests and upholding mandatory ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of a standard EA are discussed, and enhancements to the EA operators are suggested so that the EA acquires the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling, fitness computation, and flexibility of the search, corresponding to the employment of exploration and exploitation principles.

  9. A new method for modeling the behavior of finite population evolutionary algorithms.

    PubMed

    Motoki, Tatsuya

    2010-01-01

    As practitioners we are interested in the likelihood of the population containing a copy of the optimum. The dynamic systems approach, however, does not help us to calculate that quantity. Markov chain analysis can be used in principle to calculate the quantity. However, since the associated transition matrices are enormous even for modest problems, it follows that in practice these calculations are usually computationally infeasible. Therefore, some improvements on this situation are desirable. In this paper, we present a method for modeling the behavior of finite population evolutionary algorithms (EAs), and show that if the population size is greater than 1 and much less than the cardinality of the search space, the resulting exact model requires considerably less memory space for theoretically running the stochastic search process of the original EA than the Nix and Vose-style Markov chain model. We also present some approximate models that use still less memory space than the exact model. Furthermore, based on our models, we examine the selection pressure by fitness-proportionate selection, and observe that on average over all population trajectories, there is no such strong bias toward selecting the higher fitness individuals as the fitness landscape suggests.

  10. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC, and particle swarm optimisation (PSO), to extract the parameters of a metal-oxide-semiconductor field-effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the PSP surface-potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
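
    A minimal global-best PSO for parameter extraction, fitting a toy square-law model (Id = k(Vgs - Vth)^2) to synthetic "measured" data; the PSP model in the paper is far more detailed, and all constants here are assumptions.

    ```python
    import numpy as np

    # Sketch: PSO minimizing the squared error between measured drain
    # currents and a toy square-law MOSFET model with parameters (k, Vth).

    rng = np.random.default_rng(0)
    vgs = np.linspace(1.0, 3.0, 20)
    measured = 2e-4 * (vgs - 0.6)**2 + rng.normal(scale=1e-6, size=vgs.size)

    def error(p):
        k, vth = p
        return np.sum((measured - k * np.clip(vgs - vth, 0, None)**2)**2)

    n, dims = 30, 2
    pos = rng.uniform([1e-5, 0.0], [1e-3, 1.5], size=(n, dims))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([error(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(200):
        r1, r2 = rng.random((2, n, dims))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([error(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    print(gbest)   # should approach k ~ 2e-4, Vth ~ 0.6
    ```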

  11. Enhancements to commissioning techniques and quality assurance of brachytherapy treatment planning systems that use model-based dose calculation algorithms.

    PubMed

    Rivard, Mark J; Beaulieu, Luc; Mourtada, Firas

    2010-06-01

    The current standard for brachytherapy dose calculations is based on the AAPM TG-43 formalism. Simplifications used in the TG-43 formalism have been challenged by many publications over the past decade. With the continuous increase in computing power, approaches based on fundamental physics processes or physics models such as the linear-Boltzmann transport equation are now applicable in a clinical setting. Thus, model-based dose calculation algorithms (MBDCAs) have been introduced to address TG-43 limitations for brachytherapy. The MBDCA approach results in a paradigm shift, which will require a concerted effort to integrate them properly into the radiation therapy community. MBDCA will improve treatment planning relative to the implementation of the traditional TG-43 formalism by accounting for individualized, patient-specific radiation scatter conditions, and the radiological effect of material heterogeneities differing from water. A snapshot of the current status of MBDCA and AAPM Task Group reports related to the subject of QA recommendations for brachytherapy treatment planning is presented. Some simplified Monte Carlo simulation results are also presented to delineate the effects MBDCA are called to account for and facilitate the discussion on suggestions for (i) new QA standards to augment current societal recommendations, (ii) consideration of dose specification such as dose to medium in medium, collisional kerma to medium in medium, or collisional kerma to water in medium, and (iii) infrastructure needed to uniformly introduce these new algorithms. Suggestions in this Vision 20/20 article may serve as a basis for developing future standards to be recommended by professional societies such as the AAPM, ESTRO, and ABS toward providing consistent clinical implementation throughout the brachytherapy community and rigorous quality management of MBDCA-based treatment planning systems.

  12. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the

  13. A two-level hybrid evolutionary algorithm for modeling one-dimensional dynamic systems by higher-order ODE models.

    PubMed

    Cao, H Q; Kang, L S; Guo, T; Chen, Y P; de Garis, H

    2000-01-01

    This paper presents a new algorithm for modeling one-dimensional (1-D) dynamic systems by higher-order ordinary differential equation (HODE) models instead of the ARMA models used in traditional time series analysis. A two-level hybrid evolutionary modeling algorithm (THEMA) is used to approach the problem of modeling HODEs for dynamic systems. The main idea of this modeling algorithm is to embed a genetic algorithm (GA) into genetic programming (GP): GP is employed to optimize the structure of a model (the upper level), while the GA is employed to optimize the parameters of the model (the lower level). In the GA, we use a novel crossover operator based on a nonconvex linear combination of multiple parents, which works efficiently and quickly in parameter optimization tasks. Two practical time series examples are used to demonstrate THEMA's effectiveness and advantages.
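
    The crossover operator is concrete enough to sketch: a child is an affine combination of several parents whose coefficients sum to one but may fall outside [0, 1], so the child can leave the parents' convex hull.

    ```python
    import numpy as np

    # Sketch of a multi-parent nonconvex linear-combination crossover.
    # The coefficient range is an assumed choice, not the paper's setting.

    def nonconvex_combination(parents, rng, low=-0.5, high=1.5):
        parents = np.asarray(parents, dtype=float)
        while True:
            a = rng.uniform(low, high, size=len(parents))
            if abs(a.sum()) > 1e-3:     # avoid near-singular normalization
                break
        a /= a.sum()                    # coefficients sum to 1 (affine, not convex)
        return a @ parents

    rng = np.random.default_rng(2)
    parents = [[1.0, 2.0], [3.0, 0.5], [2.0, 4.0]]
    print(nonconvex_combination(parents, rng))
    ```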

  14. Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front.

    PubMed

    Saborido, Rubén; Ruiz, Ana B; Luque, Mariano

    2016-02-08

    In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the class of aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (the utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximates the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (several versions) and NSGA-II on two-, three-, and five-objective problems. The computational results permit us to conclude that Global WASF-GA achieves better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially on three- and five-objective problems.
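
    A sketch of the scalarization, in a simplified form without any augmentation term (the paper's exact normalization may differ): the ASF is a weighted Tchebychev distance to a reference point, evaluated with both the utopian and nadir vectors.

    ```python
    import numpy as np

    # Sketch: weighted Tchebychev achievement scalarizing function, assuming
    # minimization; the weight vector stands in for one member of the set.

    def asf(f, ref, w):
        """max_j w_j * (f_j - ref_j) for objective vector f."""
        return np.max(w * (np.asarray(f) - np.asarray(ref)))

    utopian = np.array([0.0, 0.0])
    nadir = np.array([1.0, 1.0])
    w = np.array([2.0, 1.0])      # one vector from the well-distributed set

    f = np.array([0.4, 0.7])
    print(asf(f, utopian, w), asf(f, nadir, w))
    # Individuals with the lowest ASF values across the weight-vector set,
    # using both reference points, would form the first front.
    ```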

  15. Handling time-expensive global optimization problems through the surrogate-enhanced evolutionary annealing-simplex algorithm

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos

    2015-04-01

    In water resources optimization problems, calculating the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than the problem's complexity requires. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm, which couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms in both theoretical and practical applications (i.e., test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems involving time-consuming simulations.

  16. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms

    PubMed Central

    Holmes, Tim; Zanker, Johannes M.

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for a clear color/shape preference consistent with Kandinsky's claims, apart from a weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  17. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    PubMed

    Holmes, Tim; Zanker, Johannes M

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for a clear color/shape preference consistent with Kandinsky's claims, apart from a weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  18. Evolutionary algorithms for the optimal management of coastal groundwater: A comparative study toward future challenges

    NASA Astrophysics Data System (ADS)

    Ketabchi, Hamed; Ataie-Ashtiani, Behzad

    2015-01-01

    This paper surveys the literature on the application of evolutionary algorithms (EAs) to coastal groundwater management problems (CGMPs). The review demonstrates that previous studies mostly relied on a limited set of particular EAs, mainly the genetic algorithm (GA) and its variants, applied to a number of specific problems. The exclusive investigation of these problems is often not representative of the variety of processes that may occur in coastal aquifers. In this study, eight EAs are evaluated for CGMPs: GA, continuous ant colony optimization (CACO), particle swarm optimization (PSO), differential evolution (DE), artificial bee colony optimization (ABC), harmony search (HS), shuffled complex evolution (SCE), and simplex simulated annealing (SIMPSA). The first application of PSO, ABC, HS, and SCE to CGMPs is reported here. Moreover, four benchmark problems with different degrees of difficulty and variety are considered to address the important issues of groundwater resources in coastal regions. Hence, a wide range of popular objective functions and constraints is included, with the number of decision variables ranging from 4 to 15. These benchmark problems are solved with a combined simulation-optimization model to examine the optimization scenarios. Some preliminary experiments are performed to select the most efficient parameter values for each EA so as to set up a fair comparison. The specific capabilities of each EA for CGMPs are compared in terms of solution quality and required computational time. The evaluation highlights the applicability of EAs to CGMPs, along with their respective strengths and weaknesses. The comparisons show that SCE, CACO, and PSO yield superior solutions among the EAs in terms of solution quality, whereas ABC shows the poorest performance. CACO provides better solutions (by up to 17%) than the worst EA (ABC) for the problem with the highest decision

  19. Model-based x-ray energy spectrum estimation algorithm from CT scanning data with spectrum filter

    NASA Astrophysics Data System (ADS)

    Li, Lei; Wang, Lin-Yuan; Yan, Bin

    2016-10-01

    With the development of technology, traditional X-ray CT cannot meet modern medical and industrial needs for material discrimination and identification. This is due to the inconsistency between the X-ray imaging system and the reconstruction algorithm. In current CT systems, the X-ray spectrum produced by the source is continuous over an energy range determined by the tube voltage and energy filter, and the attenuation coefficient of an object varies with X-ray energy. The distribution of the X-ray energy spectrum therefore plays an important role in beam-hardening correction, dual-energy CT image reconstruction and dose calculation. However, owing to the ill-conditioned and ill-posed nature of the system equations of transmission measurement data, statistical fluctuations of X-ray quanta, and noise pollution, it is very hard to obtain a stable and accurate spectrum estimate using existing methods. In this paper, a model-based X-ray energy spectrum estimation method from CT scanning data with an energy spectrum filter is proposed. First, transmission measurement data were accurately acquired by CT scanning and measurement of phantoms with different energy spectrum filters. Second, a physically meaningful X-ray tube spectrum model was established using weighted Gaussian functions together with prior information, such as the continuity of bremsstrahlung, the specificity of characteristic emission, and estimates of the average attenuation coefficient. The model parameters were optimized to obtain the best estimate of the filtered spectrum. Finally, the original energy spectrum was reconstructed from the filtered spectrum estimate using prior information about the filter. Experimental results demonstrate that the stability and accuracy of X-ray energy spectrum estimation using the proposed method are improved significantly.
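
    A highly simplified sketch of the weighted-Gaussian idea, fitting nonnegative basis weights to detected intensities behind different phantom thicknesses; the energy grid, basis centers/widths and the linear nonnegative least-squares fit are assumptions for illustration, and the paper's model additionally includes characteristic-emission lines and priors.

      import numpy as np
      from scipy.optimize import nnls

      # Energy grid and a fixed weighted-Gaussian basis (illustrative values).
      E = np.linspace(10.0, 120.0, 111)                  # keV
      centers = np.linspace(20.0, 110.0, 10)
      basis = np.exp(-(E[:, None] - centers) ** 2 / (2.0 * 8.0 ** 2))

      def fit_spectrum(mu_of_E, thicknesses, intensities):
          """Recover nonnegative Gaussian weights w so that the modelled
          detected intensity I(t) = sum_E [basis @ w](E) * exp(-mu(E) * t)
          matches the measured intensity for every phantom thickness t."""
          A = np.stack([basis.T @ np.exp(-mu_of_E * t) for t in thicknesses])
          w, _residual = nnls(A, np.asarray(intensities))
          return basis @ w          # estimated spectrum on the energy grid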

  20. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy.

    PubMed

    Tedgren, Åsa Carlsson; Carlsson, Gudrun Alm

    2013-04-21

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have heretofore been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for the definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate the photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying the use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
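
    As a worked sketch of the conversion logic described above; the regime selection and the two ratios are inputs the user must supply from spectra at the point of interest (e.g. Monte Carlo calculated), and this is not the paper's code.

      def dose_water_in_medium(d_med, regime, sp_ratio_w_med, muen_ratio_w_med):
          """Convert reported dose-to-medium into dose-to-water-in-medium.
          Small cavity (electron fluence unperturbed, e.g. nanometre scale):
          multiply by the water/medium mass-collision-stopping-power ratio.
          Large cavity (photon fluence governs): multiply by the water/medium
          ratio of mass energy absorption coefficients. Intermediate sizes
          need a Burlin-type weighting of the two ratios."""
          if regime == "small":
              return d_med * sp_ratio_w_med
          if regime == "large":
              return d_med * muen_ratio_w_med
          raise ValueError("intermediate cavity: weight the two ratios (Burlin)")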

  1. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have heretofore been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for the definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate the photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying the use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.

  2. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    PubMed Central

    Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Background Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximal intensity projection (MIP) and minimal intensity projection (mIP), remains unknown. Purpose To evaluate the influence of MBIR on the IQ of native, mIP, and MIP axial and coronal reformats of reduced dose computed tomography (RD-CT) chest acquisitions. Material and Methods Raw data of 50 patients, who underwent a standard dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. Results The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly represented by distortion and stair-step artifacts. Conclusion The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique. PMID:27635253

  3. Project scheduling: A multi-objective evolutionary algorithm that optimizes the effectiveness of human resources and the project makespan

    NASA Astrophysics Data System (ADS)

    Yannibelli, Virginia; Amandi, Analía

    2013-01-01

    In this article, the project scheduling problem is addressed in order to assist project managers at the early stage of scheduling. Thus, as part of the problem, two priority optimization objectives for managers at that stage are considered. One of these objectives is to assign the most effective set of human resources to each project activity. The effectiveness of a human resource is considered to depend on its work context. The other objective is to minimize the project makespan. To solve the problem, a multi-objective evolutionary algorithm is proposed. This algorithm designs feasible schedules for a given project and evaluates the designed schedules in relation to each objective. The algorithm generates an approximation to the Pareto set as a solution to the problem. The computational experiments carried out on nine different instance sets are reported.

  4. Towards an Extended Evolutionary Game Theory with Survival Analysis and Agreement Algorithms for Modeling Uncertainty, Vulnerability, and Deception

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam)

    Competition, cooperation and communication are the three fundamental relationships upon which natural selection acts in the evolution of life. Evolutionary game theory (EGT) is a 'marriage' between game theory and Darwin's evolution theory; it gains additional modeling power and flexibility by adopting population dynamics theory. In EGT, natural selection acts as an optimization agent and produces inherent strategies, which eliminates some essential assumptions of traditional game theory, such as rationality, and allows more realistic modeling of many problems. The Prisoner's Dilemma (PD) and Sir Philip Sidney (SPS) games are two well-known examples in EGT, formulated to study cooperation and communication, respectively. Despite its huge success, EGT exhibits a certain weakness in dealing with time-, space- and covariate-dependent (i.e., dynamic) uncertainty, vulnerability and deception. In this paper, I propose to extend EGT in two ways to overcome this weakness. First, I introduce survival analysis modeling to describe the lifetime or fitness of game players. This extension allows more flexible and powerful modeling of dynamic uncertainty and vulnerability (collectively equivalent to the dynamic frailty in survival analysis). Second, I introduce agreement algorithms, which can be the Agreement algorithms of distributed computing (e.g., the Byzantine Generals Problem [6][8], Dynamic Hybrid Fault Models [12]) or any algorithms that set and enforce the rules by which players determine their consensus. The second extension is particularly useful for modeling dynamic deception (e.g., asymmetric faults in fault tolerance and deception in animal communication). From a computational perspective, extended evolutionary game theory (EEGT) modeling, when implemented in simulation, is equivalent to an optimization methodology similar to evolutionary computing approaches such as genetic algorithms with dynamic populations [15][17].

  5. Empirical analysis of locality, heritability and heuristic bias in evolutionary algorithms: a case study for the multidimensional knapsack problem.

    PubMed

    Raidl, Günther R; Gottlieb, Jens

    2005-01-01

    Our main aim is to provide guidelines and practical help for the design of appropriate representations and operators for evolutionary algorithms (EAs). For this purpose, we propose techniques to obtain a better understanding of various effects in the interplay of the representation and the operators. We study six different representations and associated variation operators in the context of a steady-state evolutionary algorithm for the multidimensional knapsack problem. Four of them are indirect decoder-based techniques, and two are direct encodings combined with different initialization, repair, and local improvement strategies. The complex decoders and the local improvement and repair strategies make it practically impossible to completely analyze such EAs in a fully theoretical way. After comparing the general performance of the chosen EA variants for the multidimensional knapsack problem on two benchmark suites, we present a hands-on approach for empirically analyzing important aspects of initialization, mutation, and crossover in an isolated fashion. Static, inexpensive measurements based on randomly created solutions are performed in order to quantify and visualize specific properties with respect to heuristic bias, locality, and heritability. These tests shed light onto the complex behavior of such EAs and point out reasons for good or bad performance. In addition, the proposed measures are also examined during actual EA runs, which gives further insight into dynamic aspects of evolutionary search and verifies the validity of the isolated static measurements. All measurements are described in a general way, allowing for an easy adaptation to other representations and problems.
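
    As an illustration of one direct-encoding ingredient studied in such EAs, a greedy repair operator for the multidimensional knapsack problem might look as follows; the efficiency measure and the repair order are common textbook choices, not necessarily those of the paper.

      import numpy as np

      def repair(bits, profits, weights, capacities):
          """Greedy repair for a direct 0/1 encoding of the multidimensional
          knapsack problem: drop items with the worst profit/weight ratio
          until all m capacity constraints hold, then greedily re-add
          feasible items. weights has shape (m, n), bits shape (n,)."""
          bits = bits.copy()
          load = weights @ bits                      # m-dimensional load vector
          ratio = profits / weights.sum(axis=0)      # crude per-item efficiency
          for i in np.argsort(ratio):                # worst items first
              if np.all(load <= capacities):
                  break
              if bits[i]:
                  bits[i] = 0
                  load -= weights[:, i]
          for i in np.argsort(-ratio):               # best items first
              if not bits[i] and np.all(load + weights[:, i] <= capacities):
                  bits[i] = 1
                  load += weights[:, i]
          return bits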

  6. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper, a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications, including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published results produced manually by an expert.

  7. Applications of Evolutionary Algorithms to Electromagnetic Materials Characterization and Design Problems

    NASA Astrophysics Data System (ADS)

    Frasch, Jonathan Lemoine

    Determining the electrical permittivity and magnetic permeability of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970s. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such a circumstance, it is common to resort to a method which uses two reflection-type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations, for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge. This dissertation covers three different topics. Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EA) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community. The first characterization technique uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them. If the two reflection measurements are truly independent, it should be possible to recover the values of two complex

  8. On the experimental validation of model-based dose calculation algorithms for 192Ir HDR brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Pappas, Eleftherios P.; Zoros, Emmanouil; Moutsatsos, Argyris; Peppa, Vasiliki; Zourari, Kyveli; Karaiskos, Pantelis; Papagiannis, Panagiotis

    2017-05-01

    There is an acknowledged need for the design and implementation of physical phantoms appropriate for the experimental validation of model-based dose calculation algorithms (MBDCA) introduced recently in 192Ir brachytherapy treatment planning systems (TPS), and this work investigates whether it can be met. A PMMA phantom was prepared to accommodate material inhomogeneities (air and Teflon), four plastic brachytherapy catheters, as well as 84 LiF TLD dosimeters (MTS-100M 1 × 1 × 1 mm3 microcubes), two radiochromic films (Gafchromic EBT3) and a plastic 3D dosimeter (PRESAGE). An irradiation plan consisting of 53 source dwell positions was prepared on phantom CT images using a commercially available TPS and taking into account the calibration dose range of each detector. Irradiation was performed using a 192Ir high dose rate (HDR) source. Dose to medium in medium, Dmm, was calculated using the MBDCA option of the same TPS as well as Monte Carlo (MC) simulation with the MCNP code and a benchmarked methodology. Measured and calculated dose distributions were spatially registered and compared. The total standard (k = 1) spatial uncertainties for TLD, film and PRESAGE were: 0.71, 1.58 and 2.55 mm. Corresponding percentage total dosimetric uncertainties were: 5.4-6.4, 2.5-6.4 and 4.85, owing mainly to the absorbed dose sensitivity correction and the relative energy dependence correction (position dependent) for TLD, the film sensitivity calibration (dose dependent) and the dependencies of PRESAGE sensitivity. Results imply a LiF over-response due to a relative intrinsic energy dependence between 192Ir and megavoltage calibration energies, and a dose rate dependence of PRESAGE sensitivity at low dose rates (<1 Gy min-1). Calculations were experimentally validated within uncertainties, except for MBDCA results for points in the phantom periphery and dose levels <20%. Experimental MBDCA validation is laborious, yet feasible. Further

  9. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. Results: The performance of the authors' method is evaluated on the two publicly available DIR-lab and POPI-model lung datasets. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation of the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset, with 300 landmark points in each case, the mean and standard deviation of target registration errors on the

  10. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling is presented as a technique for displaying high-dimensional data with standard visualization methods; the particular technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space taken by the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs with regard to the variables of the individuals and the multi-criteria objective values (the path through the solution space).
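
    A compact gradient-descent version of Sammon mapping follows; the learning rate, iteration count and initialization are illustrative assumptions, and production implementations typically use Newton-style step sizes.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      def sammon(X, n_iter=500, lr=0.1, dim=2, seed=0):
          """Find low-dimensional points Y whose pairwise distances match
          those of the high-dimensional points X, weighting errors on small
          distances more heavily (the Sammon stress)."""
          rng = np.random.default_rng(seed)
          n = len(X)
          D = squareform(pdist(X)) + np.eye(n)     # eye avoids division by zero
          Y = rng.normal(scale=1e-2, size=(n, dim))
          for _ in range(n_iter):
              d = squareform(pdist(Y)) + np.eye(n)
              # gradient of sum_{i<j} (d_ij - D_ij)^2 / D_ij (constant absorbed in lr)
              coeff = (d - D) / (D * d)            # zero on the diagonal
              grad = (coeff[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
              Y -= lr * grad
          return Y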

  11. Capability of the Maximax&Maximin selection operator in the evolutionary algorithm for a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Ramli, Razamin; Tein, Lim Huai

    2016-08-01

    A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels when managing nurse personnel. Hence, constructing the best possible nurse work schedule is a worthwhile effort. To this end, an improved selection operator within an Evolutionary Algorithm (EA) strategy for the nurse scheduling problem (NSP) is proposed. Smart and efficient scheduling procedures were considered. The performance of each potential solution, or schedule, was computed through fitness evaluation. The best-so-far solution was obtained via a special Maximax&Maximin (MM) parent selection operator embedded in the EA, which fulfilled all constraints considered in the NSP.
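
    The abstract does not define the Maximax&Maximin operator, so the following is only a hedged guess at its spirit: pair an optimistic pick (the individual strongest on its best criterion) with a cautious pick (the individual strongest on its worst criterion); the scoring layout is an assumption for illustration.

      import numpy as np

      def mm_select_parents(scores):
          """scores: (n_individuals, n_criteria) array, e.g. per-constraint
          satisfaction levels of candidate nurse rosters. The maximax parent
          maximizes its best criterion; the maximin parent maximizes its
          worst criterion. Pairing them mixes exploration with
          constraint-safe exploitation."""
          scores = np.asarray(scores)
          maximax_parent = int(np.argmax(scores.max(axis=1)))
          maximin_parent = int(np.argmax(scores.min(axis=1)))
          return maximax_parent, maximin_parent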

  12. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study focuses on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal; for this reason, the proposed evolution-based optimization technique, which is oriented toward dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.

  13. ENPDA: an evolutionary structure-based de novo peptide design algorithm

    NASA Astrophysics Data System (ADS)

    Belda, Ignasi; Madurga, Sergio; Llorà, Xavier; Martinell, Marc; Tarragó, Teresa; Piqueras, Mireia G.; Nicolás, Ernesto; Giralt, Ernest

    2005-08-01

    One of the goals of computational chemists is to automate the de novo design of bioactive molecules. Despite significant advances in computational approaches to ligand design and binding energy evaluation, novel procedures for ligand design are required. Evolutionary computation provides a new approach to this design endeavor. We propose an evolutionary tool for de novo peptide design, based on the evaluation of energies for peptide binding to a user-defined protein surface patch. Special emphasis has been placed on the evaluation of the proposed peptides, leading to two different evaluation heuristics. The software developed was successfully tested on the design of ligands for the proteins prolyl oligopeptidase, p53, and DNA gyrase.

  14. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    NASA Astrophysics Data System (ADS)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass

  15. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    PubMed

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in short periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
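
    For context, a minimal batch Kernel-Adatron in its classic iterative form (bias term omitted); the paper replaces such update rules with evolutionary search over the dual coefficients, so this sketch only fixes the objective that would be optimized.

      import numpy as np

      def rbf_kernel(X, gamma=0.5):
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      def kernel_adatron(K, y, eta=0.1, epochs=100):
          """Perceptron-like margin updates on the dual coefficients alpha,
          clipped at zero; for a positive-definite kernel this drives the
          solution toward a maximal-margin classifier."""
          alpha = np.zeros(len(y))
          for _ in range(epochs):
              margins = y * (K @ (alpha * y))
              alpha = np.maximum(0.0, alpha + eta * (1.0 - margins))
          return alpha   # predict with sign(K_test @ (alpha * y))

      # In the evolutionary variant, a population member would be an alpha
      # vector and its fitness, e.g., the margin achieved on the training set.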

  16. Multi-objective entropy evolutionary algorithm for marine oil spill detection using cosmo-skymed satellite data

    NASA Astrophysics Data System (ADS)

    Marghany, M.

    2015-06-01

    Oil spill pollution plays a substantial role in damaging marine ecosystems. Oil that floats on the water surface, besides reducing fauna populations, affects the food chain of the ecosystem. In fact, an oil spill reduces the sunlight penetrating the water, limiting the photosynthesis of marine plants and phytoplankton. Moreover, marine mammals exposed to oil spills have their insulating capacities reduced, making them more vulnerable to temperature variations and much less buoyant in the seawater. This study demonstrates a design tool for oil spill detection in SAR satellite data using an entropy-based Multi-Objective Evolutionary Algorithm (E-MMGA), which is based on Pareto optimal solutions. The study shows that the optimized entropy-based Multi-Objective Evolutionary Algorithm provides an accurate pattern of the oil slick in SAR data, as indicated by rates of 85% for oil spill, 10% for look-alikes and 5% for sea roughness using the receiver operating characteristic (ROC) curve. The E-MMGA also shows excellent performance in SAR data. In conclusion, E-MMGA can be used as an entropy optimizer to perform automatic detection of oil spills in SAR satellite data.

  17. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures

    PubMed Central

    Arana-Daniel, Nancy; Gallegos, Alberto A.; López-Franco, Carlos; Alanís, Alma Y.; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in short periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels. PMID:27980384

  18. An evolutionary algorithm for global optimization based on self-organizing maps

    NASA Astrophysics Data System (ADS)

    Barmada, Sami; Raugi, Marco; Tucci, Mauro

    2016-10-01

    In this article, a new population-based algorithm for real-parameter global optimization is presented, denoted self-organizing centroids optimization (SOC-opt). The proposed method uses a stochastic approach based on the sequential learning paradigm for self-organizing maps (SOMs). A modified version of the SOM is proposed in which each cell contains an individual that performs a search for a locally optimal solution while being affected by the search for a global optimum. The movement of the individuals in the search space is based on a discrete-time dynamic filter, and various choices of this filter are possible to obtain different dynamics of the centroids. In this way, a general framework is defined in which well-known algorithms represent particular cases. The proposed algorithm is validated on a set of problems, including non-separable problems, and compared with state-of-the-art algorithms for global optimization.

  19. Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.

    PubMed

    Bywater, Robert P

    2016-01-01

    Proteins have many functions and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and they are compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account such that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such but rather they collude in a striking and useful way that has never been considered before.

  20. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    PubMed Central

    Zhu, Bohui; Ding, Yongsheng; Hao, Kuangrong

    2013-01-01

    This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for the automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance evaluation indicators, namely sensitivity, specificity, and accuracy, are used to assess the effect of the IEMMC method for ECG arrhythmias. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in global search ability and convergence, which proves its effectiveness for the detection of ECG arrhythmias. PMID:23690875

  1. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu

    2016-01-08

    Fields smaller than 4 × 4 cm2 are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media is prone to larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms, X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, against measurement. Measurements were made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measured curve, was calculated. It was found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show maximum deviation at field size 1 × 1 cm2, with the deviation gradually decreasing as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from Dmax. All algorithms also show maximum deviation in lower-density materials compared with high-density materials.

  2. A master-slave parallel hybrid multi-objective evolutionary algorithm for groundwater remediation design under general hydrogeological conditions

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yang, Y.; Luo, Q.; Wu, J.

    2012-12-01

    This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversity of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for the multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for the cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to distribute objective function evaluations across processors, which greatly improves the efficiency of the NPTSGA in finding Pareto-optimal solutions to the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
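
    A minimal mpi4py sketch of the master-slave evaluation pattern; the function names and the round-robin split are illustrative assumptions, not the study's implementation, and the placeholder objective stands in for a MODFLOW/MT3DMS simulation run.

      from mpi4py import MPI   # run under an MPI launcher, e.g. mpiexec -n 8 python ...

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      def expensive_objectives(design):
          # placeholder for one groundwater simulation of a remediation design
          return sum(design), max(design)

      def evaluate_population(population):
          """Master (rank 0) scatters equal chunks of the population, every
          rank evaluates its chunk, and the master gathers objective values
          back in population order."""
          chunks = None
          if rank == 0:
              chunks = [population[i::size] for i in range(size)]
          my_chunk = comm.scatter(chunks, root=0)
          my_results = [expensive_objectives(d) for d in my_chunk]
          results = comm.gather(my_results, root=0)
          if rank == 0:
              flat = [None] * len(population)      # undo the round-robin split
              for i, part in enumerate(results):
                  flat[i::size] = part
              return flat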

  3. A model-based parallel origin and orientation refinement algorithm for cryoTEM and its application to the study of virus structures

    PubMed Central

    Ji, Yongchang; Marinescu, Dan C.; Zhang, Wei; Zhang, Xing; Yan, Xiaodong; Baker, Timothy S.

    2014-01-01

    We present a model-based parallel algorithm for origin and orientation refinement for 3D reconstruction in cryoTEM. The algorithm is based upon the Projection Theorem of the Fourier Transform. Rather than projecting the current 3D model and searching for the best match between an experimental view and the calculated projections, the algorithm computes the Discrete Fourier Transform (DFT) of each projection and searches for the central section (“cut”) of the 3D DFT that best matches the DFT of the projection. Factors that affect the efficiency of a parallel program are first reviewed and then the performance and limitations of the proposed algorithm are discussed. The parallel program that implements this algorithm, called PO2R, has been used for the refinement of several virus structures, including those of the 500 Å diameter dengue virus (to 9.5 Å resolution), the 850 Å mammalian reovirus (to better than 7 Å), and the 1800 Å paramecium bursaria chlorella virus (to 15 Å). PMID:16459100

  4. Artificial Neural Networks, and Evolutionary Algorithms as a systems biology approach to a data-base on fetal growth restriction.

    PubMed

    Street, Maria E; Buscema, Massimo; Smerieri, Arianna; Montanini, Luisa; Grossi, Enzo

    2013-12-01

    One of the specific aims of systems biology is to model and discover properties of the functioning of cells, tissues and organisms. A systems biology approach was undertaken to investigate, as far as the available data allowed, the entire system of intra-uterine growth: to assess the variables of interest, discriminate those that were effectively related to appropriate or restricted intrauterine growth, and achieve an understanding of the system in these two conditions. Artificial Adaptive Systems, which include Artificial Neural Networks and Evolutionary Algorithms, led to the first analyses. These analyses identified the importance of the biochemical variables IL-6, IGF-II and IGFBP-2 protein concentrations in placental lysates, offered a new insight into placental markers of fetal growth within the IGF and cytokine systems, confirmed their relationships, and offered a critical assessment of studies previously performed.

  5. A two-dimensional coupled flow-mass transport model based on an improved unstructured finite volume algorithm.

    PubMed

    Zhou, Jianzhong; Song, Lixiang; Kursan, Suncana; Liu, Yi

    2015-05-01

    A two-dimensional coupled water quality model is developed for modeling flow-mass transport in shallow water. To simulate shallow flows on complex topography with wetting and drying, an unstructured-grid, well-balanced finite volume algorithm is proposed for the numerical resolution of a modified formulation of the two-dimensional shallow water equations. The slope-limited linear reconstruction method is used to achieve second-order accuracy in space. The algorithm adopts an HLLC-based integrated solver to compute the flow and mass transport fluxes simultaneously, and uses Hancock's predictor-corrector scheme for efficient time stepping with second-order temporal accuracy. The continuity and momentum equations are updated in both wet and dry cells. A new hybrid method, which preserves the well-balanced property of the algorithm in simulations involving flooding and recession, is proposed for the approximation of bed slope terms. The effectiveness and robustness of the proposed algorithm are validated by the reasonably good agreement between numerical and reference results for several benchmark test cases. Results show that the proposed coupled flow-mass transport model can simulate complex flows and mass transport in shallow water.

  6. Classifier Model Based on Machine Learning Algorithms: Application to Differential Diagnosis of Suspicious Thyroid Nodules via Sonography.

    PubMed

    Wu, Hongxun; Deng, Zhaohong; Zhang, Bingjie; Liu, Qianyun; Chen, Junyong

    2016-06-24

    The purpose of this article is to construct classifier models using machine learning algorithms and to evaluate their diagnostic performances for differentiating malignant from benign thyroid nodules. This study included 970 histopathologically proven thyroid nodules in 970 patients. Two radiologists retrospectively reviewed ultrasound images, and nodules were graded according to a five-tier sonographic scoring system. Statistically significant variables based on an experienced radiologist's observations were obtained with attribute optimization using fivefold cross-validation and applied as the input nodes to build models for predicting malignancy of nodules. The performances of the machine learning algorithms and radiologists were compared using ROC curve analysis. Diagnosis by the experienced radiologist achieved the highest predictive accuracy of 88.66% with a specificity of 85.33%, whereas the radial basis function (RBF)-neural network (NN) achieved the highest sensitivity of 92.31%. The AUC value for diagnosis by the experienced radiologist (AUC = 0.9135) was greater than those for diagnosis by the less experienced radiologist, the naïve Bayes classifier, the support vector machine, and the RBF-NN (AUC = 0.8492, 0.8811, 0.9033, and 0.9103, respectively; p < 0.05). The machine learning algorithms underperformed with respect to the experienced radiologist's readings used to construct them, and the RBF-NN outperformed the other machine learning algorithm models.

  7. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    PubMed

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g., Sequest, OMSSA, X!Tandem, and Mascot, they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing identification ability. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
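
    The core binomial scoring idea can be sketched in a few lines: the tail probability is the chance that the observed number of peak matches arises randomly. The per-peak match probability here is an input derived from the mass tolerance and spectrum peak density, and ProVerB's intensity weighting is deliberately left out of this sketch.

      import math
      from scipy.stats import binom

      def peptide_score(n_theoretical, n_matched, p_random):
          """Probability of matching at least n_matched of n_theoretical
          theoretical fragment peaks purely by chance; returned as -log10
          so that larger scores mean more confident identifications."""
          p_tail = binom.sf(n_matched - 1, n_theoretical, p_random)  # P(X >= n_matched)
          return -math.log10(max(p_tail, 1e-300))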

  8. On the Potential Use of Evolutionary Algorithms for Electro-Optic System Design

    DTIC Science & Technology

    2011-03-25

    optima. This probability is decreased according to a "cooling schedule" over the course of the optimization, which allows the algorithm to eventually... Using Snell's Law, the angle of the ray relative to the surface normal and ultimately the y-axis can be determined. Using basic trigonometry, we extract

  9. Multi-objective optimization using evolutionary algorithms for qualitative and quantitative control of urban runoff

    NASA Astrophysics Data System (ADS)

    Oraei Zare, S.; Saghafian, B.; Shamsai, A.; Nazif, S.

    2012-01-01

    Urban development affects the quantity and quality of urban floods. Generally, flood management includes the planning and management activities that reduce the harmful effects of floods on people, the environment, and the economy of a region. In recent years, a concept called Best Management Practices (BMPs) has been widely used for urban flood control from both quality and quantity aspects. In this paper, three objective functions relating to the quality of runoff (including the BOD5 and TSS parameters), the quantity of runoff (the runoff volume produced at each sub-basin) and expenses (the construction and maintenance costs of BMPs) were employed in an optimization algorithm aimed at finding optimal solutions. The MOPSO and NSGA-II optimization methods were coupled with the SWMM urban runoff simulation model. In the proposed structure for the NSGA-II algorithm, a continuous structure and intermediate crossover were used because they perform better in improving the optimization model's efficiency. To compare the performance of the two optimization algorithms, a number of statistical indicators were computed for the last generation of solutions. Comparing the Pareto solutions resulting from each of the optimization algorithms indicated that the NSGA-II solutions were superior. Moreover, the standard deviation of solutions in the last generation showed no significant differences in comparison with MOPSO.

  10. Comparison of Nine Statistical Model Based Warfarin Pharmacogenetic Dosing Algorithms Using the Racially Diverse International Warfarin Pharmacogenetic Consortium Cohort Database.

    PubMed

    Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao

    2015-01-01

    Multiple linear regression (MLR) and machine learning techniques have been reported for pharmacogenetic algorithm-based warfarin dosing. However, the performances of these algorithms in racially diverse groups have never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with that of MLR in a large, racially diverse cohort. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop the algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across dose ranges of therapeutic warfarin, were compared. Robust results were obtained after 100 rounds of resampling. BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84-8.96 mg/week, mean percentage within 20%: 45.88%-46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and a lower MAE than MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed best in the Black population. When patients were grouped in terms of warfarin dose range, all machine learning techniques except ANN and LAR showed a significantly higher mean percentage within 20
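
    The study's two comparison metrics are simple to state in code; this sketch assumes dose vectors given in mg/week.

      import numpy as np

      def dosing_metrics(predicted, actual):
          """Mean absolute error (MAE) of the predicted weekly dose, and the
          percentage of patients whose predicted dose falls within 20% of
          the actual stable dose."""
          predicted, actual = np.asarray(predicted), np.asarray(actual)
          err = np.abs(predicted - actual)
          mae = err.mean()
          pct_within_20 = 100.0 * np.mean(err <= 0.2 * actual)
          return mae, pct_within_20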

  11. iACP-GAEnsC: Evolutionary genetic algorithm based ensemble classification of anticancer peptides by utilizing hybrid feature space.

    PubMed

    Akbar, Shahid; Hayat, Maqsood; Iqbal, Muhammad; Jan, Mian Ahmad

    2017-06-01

    Cancer is a fatal disease, responsible for one-quarter of all deaths in developed countries. Traditional anticancer therapies such as chemotherapy and radiation are expensive, error-prone and often ineffective, and these conventional techniques induce severe side-effects on human cells. Given the perilous impact of cancer, the development of an accurate and highly efficient intelligent computational model is desirable for the identification of anticancer peptides. In this paper, an evolutionary genetic algorithm-based ensemble model, 'iACP-GAEnsC', is proposed for the identification of anticancer peptides. In this model, the peptide sequences are formulated using three different discrete feature representation methods: amphiphilic pseudo amino acid composition, g-gap dipeptide composition, and reduced amino acid alphabet composition. The performance of each extracted feature space is investigated separately, and the spaces are then merged to demonstrate the significance of hybridization. In addition, the predictions of the individual classifiers are combined using an optimized genetic algorithm and a simple majority technique in order to enhance the true classification rate. It is observed that genetic algorithm-based ensemble classification outperforms both the individual classifiers and the simple majority voting ensemble. The genetic algorithm-based ensemble performs best on the hybrid feature space, with an accuracy of 96.45%. In comparison to existing techniques, the 'iACP-GAEnsC' model achieves remarkable improvements across various performance metrics. Based on the simulation results, the 'iACP-GAEnsC' model might become a leading tool for researchers in the field of drug design and proteomics. Copyright © 2017 Elsevier B.V. All rights reserved.
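
    The core idea of the abstract, a GA searching for classifier weights that maximize ensemble accuracy, can be sketched as follows; the voting scheme and fitness function here are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

def weighted_vote(preds, weights):
    """Combine binary base-classifier outputs (rows: classifiers,
    columns: peptides) by weighted majority voting."""
    preds = np.asarray(preds)
    weights = np.asarray(weights, float)
    class1_mass = weights @ preds  # weight mass voting "anticancer"
    return (class1_mass >= 0.5 * weights.sum()).astype(int)

def ga_fitness(weights, preds, labels):
    """Fitness the GA would maximize: training accuracy of the ensemble."""
    return np.mean(weighted_vote(preds, weights) == labels)

# Toy check: three classifiers, four peptides
preds = np.array([[1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 1]])
print(ga_fitness([0.5, 0.3, 0.2], preds, np.array([1, 0, 1, 1])))
```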

  12. C16S - a Hidden Markov Model based algorithm for taxonomic classification of 16S rRNA gene sequences.

    PubMed

    Ghosh, Tarini Shankar; Gajjalla, Purnachander; Mohammed, Monzoorul Haque; Mande, Sharmila S

    2012-04-01

    Recent advances in high throughput sequencing technologies and concurrent refinements in 16S rDNA isolation techniques have facilitated the rapid extraction and sequencing of the 16S rDNA content of microbial communities. The taxonomic affiliation of these 16S rDNA fragments is subsequently obtained using either BLAST-based or word-frequency-based approaches. However, the classification accuracy of such methods is observed to be limited in typical metagenomic scenarios, wherein a majority of organisms are hitherto unknown. In this study, we present a 16S rDNA classification algorithm, called C16S, that uses genus-specific Hidden Markov Models for taxonomic classification of 16S rDNA sequences. Results obtained using C16S have been compared with those of the widely used RDP classifier. The performance of the C16S algorithm was observed to be consistently higher than that of the RDP classifier. In some scenarios, this increase in accuracy is as high as 34%. A web-server for the C16S algorithm is available at http://metagenomics.atc.tcs.com/C16S/.
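
    To make the classification principle concrete, here is a heavily simplified sketch in which a first-order Markov chain stands in for each genus-specific Hidden Markov Model; a query fragment is assigned to the genus whose model gives it the highest log-likelihood. This illustrates the idea only and is not the C16S implementation.

```python
import numpy as np

BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

def train_model(seqs, pseudo=1.0):
    """Estimate a first-order transition matrix from one genus's
    training 16S sequences (a stand-in for a full HMM)."""
    counts = np.full((4, 4), pseudo)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[BASE[a], BASE[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, trans):
    return sum(np.log(trans[BASE[a], BASE[b]]) for a, b in zip(seq, seq[1:]))

models = {"GenusA": train_model(["ACGTGCA", "ACGTTCA"]),
          "GenusB": train_model(["GGGCGCG", "GGCCGGG"])}
query = "ACGTGGA"
print(max(models, key=lambda g: log_likelihood(query, models[g])))
```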

  13. Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-Based Algorithms.

    PubMed

    Landesberger, Tatiana von; Basgier, Dennis; Becker, Meike

    2016-12-01

    The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). The experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not supported well by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data is analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset for determining regions with characteristic segmentation quality across instances. Using our system, the experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to cases where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.

  14. A comparison of model-based and direct optimization-based filtering algorithms for shear wave velocity reconstruction for electrode vibration elastography.

    PubMed

    Ingle, Atul; Varghese, Tomy

    2013-04-01

    Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in healthy tissue can be used as an indicator of possible pathological changes. Electrode vibration elastography involves tracking a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Under appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity, enabling visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities, estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and interface locations. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young's modulus estimates obtained under an incompressibility assumption are within 15 kPa of the true stiffness values obtained from mechanical testing. Based on visual inspection of the outputs of the two filtering algorithms, the particle filtering method produces smoother velocity maps.
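
    A bootstrap particle filter of the kind described can be sketched in a few lines: the hidden state is the local shear wave velocity, modelled as a random walk, and the observations are arrival-time increments between tracking positions. All noise parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pf_velocity(arrival_times, dx, n=1000, v0=3.0,
                sigma_v=0.2, sigma_t=0.05, seed=0):
    """Bootstrap particle filter for piecewise local velocity:
    dt_k = dx / v_k + noise, with v_k following a random walk."""
    rng = np.random.default_rng(seed)
    v = v0 + rng.normal(0.0, sigma_v, n)
    estimates = []
    for dt in np.diff(arrival_times):
        v = np.clip(v + rng.normal(0.0, sigma_v, n), 0.1, None)  # propagate
        w = np.exp(-0.5 * ((dt - dx / v) / sigma_t) ** 2)        # likelihood
        w /= w.sum()
        v = v[rng.choice(n, n, p=w)]                             # resample
        estimates.append(v.mean())
    return np.array(estimates)
```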

  15. Constructing large-scale genetic maps using an evolutionary strategy algorithm.

    PubMed Central

    Mester, D; Ronin, Y; Minkov, D; Nevo, E; Korol, A

    2003-01-01

    This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with approximately 50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology. PMID:14704202
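
    The TSP analogy in this abstract translates directly into code: treat markers as cities, pairwise recombination fractions as distances, and search for orders that minimize total map length. The sketch below uses a simple (1+1) evolution strategy with segment-reversal moves; it illustrates the optimization principle only, not the authors' full algorithm.

```python
import random

def map_length(order, rec):
    """Total linkage-group length: sum of recombination fractions
    between adjacent markers in the candidate order."""
    return sum(rec[order[i]][order[i + 1]] for i in range(len(order) - 1))

def es_order(rec, iters=20000, seed=0):
    """(1+1) evolution strategy: propose a segment reversal and
    accept it whenever the map gets shorter."""
    rng = random.Random(seed)
    n = len(rec)
    best = list(range(n))
    best_len = map_length(best, rec)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        cand_len = map_length(cand, rec)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best, best_len
```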

  16. Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.

    PubMed

    Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana

    2017-11-01

    RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Model-based surface soil moisture (SSM) retrieval algorithm using multi-temporal RISAT-1 C-band SAR data

    NASA Astrophysics Data System (ADS)

    Pandey, Dharmendra K.; Maity, Saroj; Bhattacharya, Bimal; Misra, Arundhati

    2016-05-01

    Accurate measurement of the surface soil moisture of bare and vegetation-covered soil over agricultural fields, and monitoring of its changes, is vital for managing and mitigating risk to agricultural crops; this requires information to assess risk potential, implement risk-reduction strategies and deliver essential responses. The empirical and semi-empirical model-based soil moisture inversion approaches developed in the past are either sensor- or region-specific, vegetation-type-specific or of limited validity range, and offer limited scope for explaining the physical scattering processes. Hence, there is a need for more robust, physical, polarimetric radar backscatter model-based retrieval methods that are sensor- and location-independent and have a wide range of validity over soil properties. In the present study, the Integral Equation Model (IEM) and the Vector Radiative Transfer (VRT) model were used to simulate averaged backscatter coefficients for various soil moisture (dry, moist and wet soil), soil roughness (smooth to very rough) and crop conditions (low to high vegetation water content) over selected regions of Gujarat state, India, and the results were compared with multi-temporal Radar Imaging Satellite-1 (RISAT-1) C-band Synthetic Aperture Radar (SAR) data in σ°HH and σ°HV polarizations, in sync with field-measured soil and crop conditions. High correlations were observed between RISAT-1 σ°HH and σ°HV and the model-simulated values based on field-measured soil conditions: for bare soil, the coefficient of determination R² varied from 0.84 to 0.77 and the RMSE from 0.94 dB to 2.1 dB, whereas for the winter wheat crop R² varied from 0.84 to 0.79 and the RMSE from 0.87 dB to 1.34 dB, for vegetation water content values up to 3.4 kg/m². Artificial Neural Network (ANN) methods were adopted for model-based soil moisture inversion. The training datasets for the NNs were

  18. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms.

    PubMed

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I-Chih; Rasmussen, John C; Sevick-Muraca, Eva M

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  19. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  20. Classification of heavy metal ions present in multi-frequency multi-electrode potable water data using evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Karkra, Rashmi; Kumar, Prashant; Bansod, Baban K. S.; Bagchi, Sudeshna; Sharma, Pooja; Krishna, C. Rama

    2016-12-01

    Access to potable water is one of the most challenging problems facing the common people in the present era. Contamination of drinking water has become a serious problem due to various anthropogenic and geogenic events. This paper demonstrates the application of evolutionary algorithms, viz. particle swarm optimization and a genetic algorithm, to 24 water samples containing eight different heavy metal ions (Cd, Cu, Co, Pb, Zn, As, Cr and Ni) for the optimal estimation of the electrode and frequency with which to classify the heavy metal ions. The work was carried out on multi-variate data, viz. single-electrode multi-frequency, single-frequency multi-electrode and multi-frequency multi-electrode water samples. The electrodes used are platinum, gold, silver-nanoparticle and glassy carbon electrodes. The various hazardous metal ions present in the water samples were optimally classified and validated using the Davies-Bouldin index. Such studies are useful for segregating the hazardous heavy metal ions found in water resources, thereby quantifying the degree of water quality.
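
    The Davies-Bouldin index used for validation is available off the shelf; a minimal sketch of scoring one candidate electrode/frequency feature subset follows, with random numbers standing in for the impedance measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(1)
X = rng.normal(size=(24, 6))  # 24 samples x 6 hypothetical sensor features

labels = KMeans(n_clusters=8, n_init=10, random_state=1).fit_predict(X)
# Lower Davies-Bouldin scores mean tighter, better-separated ion clusters;
# the evolutionary search would compare scores across feature subsets.
print(davies_bouldin_score(X, labels))
```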

  1. Target coverage optimisation of wireless sensor networks using a multi-objective immune co-evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Yong-Sheng; Lu, Xing-Jia; Hao, Kuang-Rong; Li, Long-Fei; Hu, Yi-Fan

    2011-09-01

    Target coverage is an important topic in wireless sensor networks. The target cover can be modelled as a minimal multi-objective vertex cover with a network-connectivity constraint. In order to search for the optimal target cover set, we propose a multi-objective immune co-evolutionary algorithm (MOICEA) for target coverage. The MOICEA is inspired by biological mechanisms of immune systems, including clonal proliferation, hypermutation, co-evolution, immune elimination and memory. The affinity between antibody and antigen is used to measure the optimality of a target cover, and the affinity between antibodies is used to evaluate population diversity and to guide the evolution process. To examine the effectiveness of the MOICEA, we compare its performance with that of integer linear programming and a genetic algorithm in terms of four objectives while maintaining network connectivity. The experimental results show that the MOICEA achieves promising performance in efficiently finding the optimal vertex set compared with the other approaches.

  2. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    PubMed Central

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test. PMID:25165731

  3. Crossover versus mutation: a comparative analysis of the evolutionary strategy of genetic algorithms applied to combinatorial optimization problems.

    PubMed

    Osaba, E; Carballedo, R; Diaz, F; Onieva, E; de la Iglesia, I; Perallos, A

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test.

  4. 5D respiratory motion model based image reconstruction algorithm for 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao

    2015-11-01

    4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves binning the projection data into temporal phases, and therefore suffers from deteriorated image quality due to inaccurate or uneven phase binning, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion: given measurements of the breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after projection binning, the new method reconstructs the time-independent reference image and vector fields with no binning required. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields; the problem is solved by the proximal alternating minimization algorithm, in which the split Bregman method is used to reconstruct the reference image and Chambolle's duality-based algorithm is used to reconstruct the vector fields. A convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method significantly improves image reconstruction accuracy owing to the absence of binning and the reduced number of unknowns afforded by the 5D model.

  5. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a
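
    For readers unfamiliar with the benchmark, the lake's phosphorus dynamics are commonly written with a sigmoidal recycling term that creates the irreversible threshold; a sketch under that assumption follows, with all parameter values illustrative rather than taken from this study.

```python
import numpy as np

def lake_trajectory(a, b=0.42, q=2.0, x0=0.0, sigma=0.1, seed=0):
    """One stochastic phosphorus trajectory under a commonly used form
    of the lake-problem dynamics:
        X[t+1] = X[t] + a[t] + X[t]**q / (1 + X[t]**q) - b*X[t] + inflow,
    where a[t] is the town's loading decision and inflow is a lognormal
    non-point-source term. All values here are illustrative only."""
    rng = np.random.default_rng(seed)
    x = np.empty(len(a) + 1)
    x[0] = x0
    for t, a_t in enumerate(a):
        inflow = rng.lognormal(mean=np.log(0.02), sigma=sigma)
        x[t + 1] = x[t] + a_t + x[t]**q / (1 + x[t]**q) - b * x[t] + inflow
    return x

print(lake_trajectory(np.full(100, 0.05))[-1])
```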

  6. Registration of the Cone Beam CT and Blue-Ray Scanned Dental Model Based on the Improved ICP Algorithm

    PubMed Central

    Li, Zhenhua; Xu, Songsong; Guo, Xiaoyan

    2014-01-01

    Multimodality image registration and fusion has complementary significance for guiding dental implant surgery. To meet the needs of registering images of different resolutions, we develop an improved Iterative Closest Point (ICP) algorithm focused on the registration of Cone Beam Computed Tomography (CT) images with high-resolution blue-light scanner images. The proposed algorithm includes two major phases, coarse and precise registration. First, to reduce matching interference from human subjective factors, we extract feature points based on curvature characteristics and use an improved three-point translational transformation to realize coarse registration. Then the feature point set and the reference point set, obtained by the initial registration transformation, are processed in the precise registration step. Even with unsatisfactory initial values, this two-step registration method guarantees global convergence and convergence precision. Experimental results demonstrate that the method successfully registers the Cone Beam CT dental model and the blue-light scanner model with high accuracy. The method thus provides a research foundation for software development concerned with the registration of multi-modality medical data. PMID:24511309

  7. An ARMA model based motion artifact reduction algorithm in fNIRS data through a Kalman filtering approach

    NASA Astrophysics Data System (ADS)

    Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.

    2014-09-01

    Functional near-infrared spectroscopy (fNIRS) is a new noninvasive way to measure oxy-hemoglobin and deoxy-hemoglobin concentration changes in the human brain. Safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and pilot brain monitoring. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which corrupt the signal. Motion artifacts in fNIRS data may lead to erroneous conclusions or diagnoses. In this work we reduce these artifacts with a novel Kalman filtering algorithm based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method requires no additional hardware or sensors, nor does it need the whole data record at once, which were unavoidable requirements of older algorithms such as adaptive filtering and Wiener filtering. Results show that our approach successfully cleans contaminated fNIRS data.
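
    The filtering idea can be illustrated with a scalar Kalman filter whose state model is a low-order autoregression; the AR coefficients and noise variances below are illustrative placeholders, not the paper's fitted ARMA model.

```python
import numpy as np

def ar2_kalman(y, phi=(1.6, -0.64), q=1e-4, r=1e-2):
    """Kalman filter with an AR(2) signal model in state-space form,
    sketching ARMA-model-based artifact reduction for an fNIRS trace."""
    F = np.array([[phi[0], phi[1]], [1.0, 0.0]])  # AR(2) transition
    H = np.array([[1.0, 0.0]])                    # observe current sample
    Q = np.array([[q, 0.0], [0.0, 0.0]])          # process noise
    x, P = np.zeros((2, 1)), np.eye(2)
    out = np.empty(len(y))
    for k, yk in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q             # predict
        S = (H @ P @ H.T)[0, 0] + r
        K = P @ H.T / S                           # Kalman gain
        x = x + K * (yk - (H @ x)[0, 0])          # update with measurement
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0, 0]
    return out
```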

  8. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands that carry most of the object information. However, owing to their high dimensionality and volume, it is hard to obtain satisfactory classification performance. To reduce HSI dimensionality in preparation for high classification accuracy, we propose combining a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide band selection by an improved version of AIS. The AIS comprises clonal selection and elite antibody mutation, with an evaluation process based on the optional index factor (OIF). Experiments on an HRS dataset over San Diego Naval Base acquired by AVIRIS show that the method efficiently removes band redundancy while outperforming the traditional SVM classifier.

  9. Model-based analyses of whole-genome data reveal a complex evolutionary history involving archaic introgression in Central African Pygmies.

    PubMed

    Hsieh, PingHsun; Woerner, August E; Wall, Jeffrey D; Lachance, Joseph; Tishkoff, Sarah A; Gutenkunst, Ryan N; Hammer, Michael F

    2016-03-01

    Comparisons of whole-genome sequences from ancient and contemporary samples have pointed to several instances of archaic admixture through interbreeding between the ancestors of modern non-Africans and now extinct hominids such as Neanderthals and Denisovans. One implication of these findings is that some adaptive features in contemporary humans may have entered the population via gene flow with archaic forms in Eurasia. Within Africa, fossil evidence suggests that anatomically modern humans (AMH) and various archaic forms coexisted for much of the last 200,000 yr; however, the absence of ancient DNA in Africa has limited our ability to make a direct comparison between archaic and modern human genomes. Here, we use statistical inference based on high coverage whole-genome data (greater than 60×) from contemporary African Pygmy hunter-gatherers as an alternative means to study the evolutionary history of the genus Homo. Using whole-genome simulations that consider demographic histories that include both isolation and gene flow with neighboring farming populations, our inference method rejects the hypothesis that the ancestors of AMH were genetically isolated in Africa, thus providing the first whole genome-level evidence of African archaic admixture. Our inferences also suggest a complex human evolutionary history in Africa, which involves at least a single admixture event from an unknown archaic population into the ancestors of AMH, likely within the last 30,000 yr.

  10. Model-based analyses of whole-genome data reveal a complex evolutionary history involving archaic introgression in Central African Pygmies

    PubMed Central

    Hsieh, PingHsun; Woerner, August E.; Wall, Jeffrey D.; Lachance, Joseph; Tishkoff, Sarah A.; Gutenkunst, Ryan N.; Hammer, Michael F.

    2016-01-01

    Comparisons of whole-genome sequences from ancient and contemporary samples have pointed to several instances of archaic admixture through interbreeding between the ancestors of modern non-Africans and now extinct hominids such as Neanderthals and Denisovans. One implication of these findings is that some adaptive features in contemporary humans may have entered the population via gene flow with archaic forms in Eurasia. Within Africa, fossil evidence suggests that anatomically modern humans (AMH) and various archaic forms coexisted for much of the last 200,000 yr; however, the absence of ancient DNA in Africa has limited our ability to make a direct comparison between archaic and modern human genomes. Here, we use statistical inference based on high coverage whole-genome data (greater than 60×) from contemporary African Pygmy hunter-gatherers as an alternative means to study the evolutionary history of the genus Homo. Using whole-genome simulations that consider demographic histories that include both isolation and gene flow with neighboring farming populations, our inference method rejects the hypothesis that the ancestors of AMH were genetically isolated in Africa, thus providing the first whole genome-level evidence of African archaic admixture. Our inferences also suggest a complex human evolutionary history in Africa, which involves at least a single admixture event from an unknown archaic population into the ancestors of AMH, likely within the last 30,000 yr. PMID:26888264

  11. Development of a Physical Model-Based Algorithm for the Detection of Single-Nucleotide Substitutions by Using Tiling Microarrays

    PubMed Central

    Ono, Naoaki; Suzuki, Shingo; Furusawa, Chikara; Shimizu, Hiroshi; Yomo, Tetsuya

    2013-01-01

    High-density DNA microarrays are useful tools for analyzing sequence changes in DNA samples. Although microarray analysis provides informative signals from a large number of probes, the analysis and interpretation of these signals have certain inherent limitations, namely, the complex dependency of signals on the probe sequences and the existence of false signals arising from non-specific binding between probe and target. In this study, we have developed a novel algorithm to detect single-base substitutions from microarray data, based on a thermodynamic model of hybridization. We modified the thermodynamic model by introducing a penalty for mismatches that represents the effects of substitutions on hybridization affinity. This penalty results in significantly higher detection accuracy than other methods, indicating that the incorporation of hybridization free energy can improve the analysis of sequence variants using microarray data. PMID:23382915

  12. P-RnaPredict--a parallel evolutionary algorithm for RNA folding: effects of pseudorandom number quality.

    PubMed

    Wiese, Kay C; Hendriks, Andrew; Deschênes, Alain; Ben Youssef, Belgacem

    2005-09-01

    This paper presents a fully parallel version of RnaPredict, a genetic algorithm (GA) for RNA secondary structure prediction. The research presented here builds on previous work and examines the impact of three different pseudorandom number generators (PRNGs) on the GA's performance. The three generators tested are the C standard library PRNG RAND, a parallelized multiplicative congruential generator (MCG), and a parallelized Mersenne Twister (MT). A fully parallel version of RnaPredict using the Message Passing Interface (MPI) was implemented on a 128-node Beowulf cluster. The PRNG comparison tests were performed with known structures whose sequences are 118, 122, 468, 543, and 556 nucleotides in length. The effects of the PRNGs are investigated and the predicted structures are compared to known structures. Results indicate that P-RnaPredict demonstrated good prediction accuracy, particularly so for shorter sequences.

  13. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

    With the advantages of high speed, large transport capacity, low energy consumption, good economic benefits and so on, high-speed railway is becoming more and more popular all over the world. Speeds can reach 350 kilometers per hour, which demands high safety performance. Research on the prediction of high-speed railway settlement, one of the important factors affecting the safety of high-speed railway, therefore becomes particularly important. This paper combines the global search capability of genetic algorithms with the strong learning ability and high accuracy of wavelet neural networks to build a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network and the genetic wavelet neural network show that the absolute residual errors of the genetic-algorithm-based prediction are the smallest, which demonstrates that the genetic wavelet neural network is better than the other two methods. The correlation coefficient between predicted and observed values is 99.9%. Furthermore, the maximum absolute residual error, minimum absolute residual error, mean relative error and root mean squared error (RMSE) of the genetic wavelet neural network predictions are all smaller than those of the other two methods. The genetic wavelet neural network is thus both more stable and more accurate in predicting high-speed railway settlement.

  14. An effective evolutionary algorithm for protein folding on 3D FCC HP model by lattice rotation and generalized move sets

    PubMed Central

    2013-01-01

    Background Proteins are essential biological molecules which play vital roles in nearly all biological processes. It is the tertiary structure of a protein that determines its functions. Therefore the prediction of a protein's tertiary structure based on its primary amino acid sequence has long been the most important and challenging subject in biochemistry, molecular biology and biophysics. In the past, the HP lattice model was one of the ab initio methods that many researchers used to forecast the protein structure. Although these kinds of simplified methods could not achieve high resolution, they provided a macrocosm-optimized protein structure. The model has been employed to investigate general principles of protein folding, and plays an important role in the prediction of protein structures. Methods In this paper, we present an improved evolutionary algorithm for the protein folding problem. We study the problem on the 3D FCC lattice HP model which has been widely used in previous research. Our focus is to develop evolutionary algorithms (EA) which are robust, easy to implement and can handle various energy functions. We propose to combine three different local search methods, including lattice rotation for crossover, K-site move for mutation, and generalized pull move; these form our key components to improve previous EA-based approaches. Results We have carried out experiments over several data sets which were used in previous research. The results of the experiments show that our approach is able to find optimal conformations which were not found by previous EA-based approaches. Conclusions We have investigated the geometric properties of the 3D FCC lattice and developed several local search techniques to improve traditional EA-based approaches to the protein folding problem. It is known that EA-based approaches are robust and can handle arbitrary energy functions. Our results further show that by extensive development of local searches, EA can also be very

  15. Model-Based Hookload Monitoring and Prediction at Drilling Rigs using Neural Networks and Forward-Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Arnaout, A.; Fruhwirth, R.; Winter, M.; Esmael, B.; Thonhauser, G.

    2012-04-01

    The use of neural networks and advanced machine learning techniques in the oil & gas industry is a growing trend in the market. Especially in drilling oil & gas wells, predicting and monitoring different drilling parameters is essential to prevent serious problems like "Kick", "Lost Circulation" or "Stuck Pipe", among others. The hookload represents the weight load of the drill string at the crane hook and is one of the most important parameters. During drilling, the parameter "Weight on Bit" is controlled by the driller, and the hookload is the only measure of how much weight on bit is applied to the bit to generate the hole. Any change in weight on bit is directly reflected in the hookload. Furthermore, any unwanted contact between the drill string and the wellbore - potentially leading to a stuck pipe problem - appears directly in the hookload measurements. Therefore, comparing the measured to the predicted hookload not only gives a clear idea of what is happening down-hole, it also enables the prediction of a number of important events that may cause problems in the borehole and result, in some - fortunately rare - cases, in catastrophes like blow-outs. Heuristic models using highly sophisticated neural networks were designed for hookload prediction; the training data sets were prepared in cooperation with drilling experts. Sensor measurements as well as a set of derived feature channels were used as input to the models. The contents of the final data set can be separated into (1) features based on rig operation states, (2) real-time sensor features and (3) features based on physics. A combination of a novel neural network architecture - the Completely Connected Perceptron - and parallel learning techniques that avoid trapping in local error minima was used for building the models. In addition, automatic network-growing algorithms and highly sophisticated stopping criteria offer robust and efficient estimation of the

  16. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    SciTech Connect

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.
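
    In practice, a CMA-ES calibration loop of the kind described can be driven by the pycma package; the objective below is a placeholder where each call would launch one landform-model run and return the model-observation misfit.

```python
import cma  # pycma package: pip install cma

def misfit(params):
    """Placeholder objective: run the gully model with `params` and
    return the mismatch against observed topography."""
    return sum((p - 0.3) ** 2 for p in params)  # toy stand-in

# Four hypothetical model parameters, initial guess 0.5, initial step size 0.2
es = cma.CMAEvolutionStrategy(4 * [0.5], 0.2)
while not es.stop():
    candidates = es.ask()                                 # sample candidates
    es.tell(candidates, [misfit(c) for c in candidates])  # rank and adapt
print(es.result.xbest)
```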

  17. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry.

    PubMed

    Beyer, Hans-Georg

    2014-01-01

    The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy for optimization dynamics are investigated by considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered.

  18. Evolutionary Estimation of Diffusion Models in Multi-Generation NAND Flash Memory Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Chiang, Su-Yun; Liou, Kuen-Ying

    2007-11-01

    In this paper, using realistic market data for NAND flash memory over the past decade, we provide theoretical arguments and empirical evidence for how genetic algorithms (GA) can be used for efficient estimation of macro-level diffusion models. The comparison of the two methods proceeds in two parts: the estimation of the growth of a single generation of NAND flash memory, and the estimation of the growth of multi-generation NAND flash memory. In the first part, we find that the estimation ability of GA is as good as that of nonlinear least squares (NLS), but GA can overcome initial-guess problems that NLS cannot. In the second part, we find that the NLS estimation fails to converge, so GA is superior to NLS for estimating the multi-generation case. Based on these preliminary results, we conclude that GA is better suited than NLS to the estimation of diffusion models, in particular for multi-generation NAND flash memory.
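
    The NLS baseline referred to here is typically a fit of the Bass diffusion model; a sketch using SciPy follows, with toy cumulative-shipment data, which also shows the initial-guess sensitivity (p0) that the GA sidesteps.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Bass diffusion model: m = market potential, p = innovation
    coefficient, q = imitation coefficient."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

t = np.arange(1, 11, dtype=float)
sales = np.array([2, 5, 11, 20, 31, 40, 46, 49, 51, 52], dtype=float)  # toy data

# NLS needs a sensible starting point p0; a GA searches the space globally instead
(m, p, q), _ = curve_fit(bass_cumulative, t, sales, p0=(60.0, 0.03, 0.4))
print(m, p, q)
```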

  19. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using the Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is run in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while the initial samples are selected according to an orthogonal array. Then global Pareto-optimal solutions are obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridional plane, have diminished or vanished in the optimized pump.

  20. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but how it evolved towards its current form has not been clearly determined. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location that is not a localized deep minimum of the huge fitness landscape.
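
    Fitness sharing itself reduces to a short formula: raw fitness divided by a niche count built from pairwise distances. A sketch under standard textbook conventions follows; the distance matrix between hypothetical codes is assumed given.

```python
import numpy as np

def shared_fitness(fitness, dists, sigma_share=5.0, alpha=1.0):
    """Divide each individual's raw fitness by its niche count so that
    crowded regions of the code landscape are penalized.
    dists[i, j] is a distance between hypothetical genetic codes;
    sigma_share and alpha are illustrative sharing parameters."""
    sh = np.where(dists < sigma_share,
                  1.0 - (dists / sigma_share) ** alpha,
                  0.0)
    niche = sh.sum(axis=1)  # includes self: distance 0 contributes 1
    return np.asarray(fitness, float) / niche
```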

  1. A Two-Phase Multiobjective Evolutionary Algorithm for Enhancing the Robustness of Scale-Free Networks Against Multiple Malicious Attacks.

    PubMed

    Zhou, Mingxing; Liu, Jing

    2017-02-01

    Designing robust networks has attracted increasing attention in recent years. Most existing work focuses on improving the robustness of networks against a specific type of attacks. However, networks which are robust against one type of attacks may not be robust against another type of attacks. In the real-world situations, different types of attacks may happen simultaneously. Therefore, we use the Pearson's correlation coefficient to analyze the correlation between different types of attacks, model the robustness measures against different types of attacks which are negatively correlated as objectives, and model the problem of optimizing the robustness of networks against multiple malicious attacks as a multiobjective optimization problem. Furthermore, to effectively solve this problem, we propose a two-phase multiobjective evolutionary algorithm, labeled as MOEA-RSFMMA. In MOEA-RSFMMA, a single-objective sampling phase is first used to generate a good initial population for the later two-objective optimization phase. Such a two-phase optimizing pattern well balances the computational cost of the two objectives and improves the search efficiency. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MOEA-RSFMMA. Moreover, both local and global characteristics of networks in different parts of the obtained Pareto fronts are studied. The results show that the networks in different parts of Pareto fronts reflect different properties, and provide various choices for decision makers.

  2. A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems.

    PubMed

    Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin

    2009-10-01

    In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general, easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and thus propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n²/N) and O(n ln ln n + n ln n/N) generations, respectively, to find the global optimum. Beyond these new results, our approach can also be applied directly to obtain results for some population-based EAs on other unimodal problems. Moreover, we discuss when the general approach is valid for providing tight bounds on the mean first hitting times and when it should be combined with problem-specific knowledge to obtain tight bounds. This is the first time a general idea for analyzing population-based EAs on unimodal problems has been discussed theoretically.

  3. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties of experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulating photon transport in the crystal by the Monte Carlo method, which requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to provide an alternative to this costly characterization process by establishing a method for optimizing the parameters that characterize the detector, through a computational procedure that can be reproduced in a standard research lab. The method determines the detector's geometric parameters by Monte Carlo simulation run in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven effective and simple to implement. It provides a set of characterization parameters that has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Annual Energy Production (AEP) optimization for tidal power plants based on Evolutionary Algorithms - Swansea Bay Tidal Power Plant AEP optimization

    NASA Astrophysics Data System (ADS)

    Kontoleontos, E.; Weissenberger, S.

    2016-11-01

    In order to be able to predict the maximum Annual Energy Production (AEP) for tidal power plants, an advanced AEP optimization procedure is required for solving the optimization problem which consists of a high number of design variables and constraints. This efficient AEP optimization procedure requires an advanced optimization tool (EASY software) and an AEP calculation tool that can simulate all different operating modes of the units (bidirectional turbine, pump and sluicing mode). The EASY optimization software is a metamodel-assisted Evolutionary Algorithm (MAEA) that can be used in both single- and multi-objective optimization problems. The AEP calculation tool, developed by ANDRITZ HYDRO, in combination with EASY is used to maximize the tidal annual energy produced by optimizing the plant operation throughout the year. For the Swansea Bay Tidal Power Plant project, the AEP optimization along with the hydraulic design optimization and the model testing was used to evaluate all different hydraulic and operating concepts and define the optimal concept that led to a significant increase of the AEP value. This new concept of a triple regulated “bi-directional bulb pump turbine” for Swansea Bay Tidal Power Plant (16 units, nominal power above 320 MW) along with its AEP optimization scheme will be presented in detail in the paper. Furthermore, the use of an online AEP optimization during operation of the power plant, that will provide the optimal operating points to the control system, will be also presented.

  5. EASY-GOING deconvolution: combining accurate simulation and evolutionary algorithms for fast deconvolution of solid-state quadrupolar NMR spectra.

    PubMed

    Grimminck, Dennis L A G; Polman, Ben J W; Kentgens, Arno P M; Meerts, W Leo

    2011-08-01

    A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, using second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting and offer efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporating (extended) Czjzek and order-parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows a straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. A model based iterative reconstruction algorithm for high angle annular dark field-scanning transmission electron microscope (HAADF-STEM) tomography.

    PubMed

    Venkatakrishnan, S V; Drummy, Lawrence F; Jackson, Michael A; De Graef, Marc; Simmons, Jeff; Bouman, Charles A

    2013-11-01

    High angle annular dark field (HAADF)-scanning transmission electron microscope (STEM) data are increasingly being used in the physical sciences to study materials in 3D because they reduce the effects of Bragg diffraction seen in bright field TEM data. Typically, tomographic reconstructions are performed by directly applying either filtered back projection (FBP) or the simultaneous iterative reconstruction technique (SIRT) to the data. Since HAADF-STEM tomography is a limited-angle tomography modality with a low signal-to-noise ratio, these methods can result in significant artifacts in the reconstructed volume. In this paper, we develop a model-based iterative reconstruction algorithm for HAADF-STEM tomography. We combine a model for image formation in HAADF-STEM tomography with a prior model to formulate the tomographic reconstruction as a maximum a posteriori probability (MAP) estimation problem. Our formulation also accounts for certain missing measurements by treating them as nuisance parameters in the MAP estimation framework. We adapt the iterative coordinate descent algorithm to develop an efficient method to minimize the corresponding MAP cost function. Reconstructions of simulated as well as experimental data sets show results that are superior to FBP and SIRT reconstructions, significantly suppressing artifacts and enhancing contrast.
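    In general form, the MAP estimation described above can be written as below, with y the measured tilt-series data and x the unknown volume; the paper's specific likelihood and prior terms are not reproduced here, so both are left abstract:

        \hat{x}_{\mathrm{MAP}} = \arg\max_{x} \, p(x \mid y)
                               = \arg\min_{x} \, \bigl\{ -\log p(y \mid x) - \log p(x) \bigr\}

    The first term is the data-fidelity (forward-model) term and the second the prior; iterative coordinate descent minimizes this cost by updating one voxel at a time while holding the others fixed.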

  7. Detection of pulmonary embolism on computed tomography: improvement using a model-based iterative reconstruction algorithm compared with filtered back projection and iterative reconstruction algorithms.

    PubMed

    Kligerman, Seth; Lahiji, Kian; Weihe, Elizabeth; Lin, Cheng Tin; Terpenning, Silanath; Jeudy, Jean; Frazier, Annie; Pugatch, Robert; Galvin, Jeffrey R; Mittal, Deepika; Kothari, Kunal; White, Charles S

    2015-01-01

    The purpose of the study was to determine whether a model-based iterative reconstruction (MBIR) technique improves diagnostic confidence and detection of pulmonary embolism (PE) compared with hybrid iterative reconstruction (HIR) and filtered back projection (FBP) reconstructions in patients undergoing computed tomography pulmonary angiography. The study was approved by our institutional review board. Fifty patients underwent computed tomography pulmonary angiography at 100 kV using standard departmental protocols. Twenty-two of the 50 patients had studies positive for PE. All 50 studies were reconstructed using FBP, HIR, and MBIR. After image randomization, 5 thoracic radiologists and 2 thoracic radiology fellows graded each study on a scale of 1 (very poor) to 5 (ideal) in 4 subjective categories: diagnostic confidence, noise, pulmonary artery enhancement, and plastic appearance. Readers assessed each study for the presence of PE. Parametric and nonparametric data were analyzed with repeated measures and Friedman analysis of variance, respectively. For the 154 positive studies (7 readers × 22 positive studies), pooled sensitivity for detection of PE was 76% (117/154), 78.6% (121/154), and 82.5% (127/154) using FBP, HIR, and MBIR, respectively. PE detection was significantly higher using MBIR compared with FBP (P = 0.016) and HIR (P = 0.046). Because of a nonsignificant increase in false-positive studies using HIR and MBIR, accuracy with MBIR (88.6%), HIR (87.1%), and FBP (87.7%) was similar. Compared with FBP, MBIR led to a significant subjective increase in diagnostic confidence, noise, and enhancement scores for 6/7, 6/7, and 7/7 readers, respectively. Compared with HIR, MBIR led to a significant subjective increase in diagnostic confidence, noise, and enhancement scores for 5/7, 5/7, and 7/7 readers, respectively. MBIR led to a subjective increase in plastic appearance for all 7 readers compared with both FBP and HIR. MBIR led to a significant increase in PE detection compared with FBP and HIR.

  8. Analyzing the evolutionary mechanisms of the Air Transportation System-of-Systems using network theory and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Kotegawa, Tatsuya

    Complexity in the Air Transportation System (ATS) arises from the intermingling of many independent physical resources, operational paradigms, and stakeholder interests, as well as the dynamic variation of these interactions over time. Currently, trade-offs and cost-benefit analyses of new ATS concepts are carried out with system-wide evaluation simulations driven by air traffic forecasts that assume fixed airline routes. However, this does not reflect reality well, as airlines regularly add and remove routes. An airline service route network evolution model that projects route addition and removal was created and combined with state-of-the-art air traffic forecast methods to better reflect the dynamic properties of the ATS in system-wide simulations. Guided by a system-of-systems framework, network theory metrics and machine learning algorithms were applied to develop the route network evolution models based on patterns extracted from historical data. Constructing the route-addition section of the model posed the greatest challenge because of the large pool of new link candidates compared with the number of routes historically added to the network. Of the models explored, algorithms based on logistic regression, random forests, and support vector machines showed the best route addition and removal forecast accuracies, approximately 20% and 40% respectively, when validated against historical data. The combination of network evolution models and a system-wide evaluation tool quantified the impact of airline route network evolution on air traffic delay. The expected delay minutes when considering network evolution increased approximately 5% for a forecasted schedule on 3/19/2020. Performance trade-off studies between several airline route network topologies, from the perspectives of passenger travel efficiency, fuel burn, and robustness, were also conducted to provide bounds that could serve as targets for ATS transformation efforts. The series of analyses revealed that high
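    As a sketch of the route-addition side, a link-candidate classifier of the kind described (logistic regression over network-theoretic features) can be set up as follows; the feature choices and synthetic labels are illustrative assumptions, not the dissertation's data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Toy feature matrix for candidate city-pair links: each row could hold,
        # e.g., [degree(origin) * degree(destination), great-circle distance,
        # number of shared neighbors]; features and labels here are synthetic.
        rng = np.random.default_rng(0)
        X = rng.random((200, 3))
        y = X[:, 0] + 0.5 * X[:, 2] - X[:, 1] + rng.normal(0, 0.2, 200) > 0.3

        model = LogisticRegression().fit(X, y)           # route-addition classifier
        candidates = rng.random((5, 3))
        print(model.predict_proba(candidates)[:, 1])     # P(route added) per candidate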

  9. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth-dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.

  10. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions

    NASA Astrophysics Data System (ADS)

    Schumann, A.; Priegnitz, M.; Schoene, S.; Enghardt, W.; Rohling, H.; Fiedler, F.

    2016-10-01

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth-dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
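    A minimal numerical sketch of the forward (filtering) step — convolving a depth-dose profile with a Gaussian-powerlaw kernel to obtain a prompt γ-ray depth profile — is given below; the kernel parameters and toy dose profile are illustrative assumptions, not the paper's fitted values. The evolutionary algorithm would then search for the dose profile whose filtered version best matches a measured γ-ray profile.

        import numpy as np

        def gaussian_powerlaw_kernel(z, sigma=2.0, alpha=1.5, zmax=10.0):
            # Kernel built by numerically convolving a Gaussian with a truncated
            # power law; sigma, alpha and zmax are illustrative assumptions.
            g = np.exp(-z**2 / (2 * sigma**2))
            p = ((z >= 0) & (z <= zmax)) * (np.abs(z) + 1e-3) ** -alpha
            k = np.convolve(g, p, mode="same")
            return k / k.sum()

        z = np.linspace(-20.0, 20.0, 401)                # depth grid (mm)
        dose = np.exp(-(z - 5.0) ** 2 / 8.0)             # toy depth-dose profile
        prompt_gamma = np.convolve(dose, gaussian_powerlaw_kernel(z), mode="same")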

  11. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly degrade the performance of turbines further downstream in a wind farm, seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is therefore an efficient resource for addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared with the baseline gridded layout, results show a wind power output increase of between 5.5% and 7.7%. In addition, the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
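    For reference, the Gaussian-profile velocity deficit of the cited wake model [1] can be evaluated roughly as follows; the wake-growth rate kstar and offset eps are site- and turbulence-dependent, and the values below are illustrative assumptions rather than those used in the study:

        import numpy as np

        def gaussian_wake_deficit(x, r, d0=80.0, ct=0.8, kstar=0.035, eps=0.25):
            # Normalized velocity deficit Delta U / U_inf at downstream distance x
            # and radial offset r, following the Gaussian-profile wake model of
            # Bastankhah & Porte-Agel (2014); kstar and eps are assumptions here.
            sigma_over_d0 = kstar * x / d0 + eps       # linear wake expansion
            c = 1.0 - np.sqrt(1.0 - ct / (8.0 * sigma_over_d0**2))
            sigma = sigma_over_d0 * d0
            return c * np.exp(-r**2 / (2.0 * sigma**2))

        # Deficit 400 m downstream of the rotor, on the wake centerline:
        print(gaussian_wake_deficit(x=400.0, r=0.0))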

  12. Radiation dose reduction in abdominal computed tomography during the late hepatic arterial phase using a model-based iterative reconstruction algorithm: how low can we go?

    PubMed

    Husarik, Daniela B; Marin, Daniele; Samei, Ehsan; Richard, Samuel; Chen, Baiyu; Jaffe, Tracy A; Bashir, Mustafa R; Nelson, Rendon C

    2012-08-01

    The aim of this study was to compare the image quality of abdominal computed tomography scans of an anthropomorphic phantom acquired at different radiation dose levels, with each raw data set reconstructed using both a standard convolution filtered back projection (FBP) and a full model-based iterative reconstruction (MBIR) algorithm. An anthropomorphic phantom in 3 sizes was used with a custom-built liver insert simulating late hepatic arterial enhancement and containing hypervascular liver lesions of various sizes. Imaging was performed on a 64-section multidetector-row computed tomography scanner (Discovery CT750 HD; GE Healthcare, Waukesha, WI) at 3 different tube voltages for each patient size and 5 incrementally decreasing tube current-time products for each tube voltage. Quantitative analysis consisted of contrast-to-noise ratio calculations and image noise assessment. Qualitative image analysis was performed by 3 independent radiologists rating subjective image quality and lesion conspicuity. Contrast-to-noise ratio was significantly higher and mean image noise was significantly lower on MBIR images than on FBP images for all patient sizes, at all tube voltage settings, and at all radiation dose levels (P < 0.05). Overall image quality and lesion conspicuity were rated higher for MBIR images than for FBP images at all radiation dose levels. Image quality and lesion conspicuity on 25% to 50% dose MBIR images were rated equal to those of full-dose FBP images. This phantom study suggests that, depending on patient size, clinically acceptable image quality of the liver in the late hepatic arterial phase can be achieved with MBIR at approximately 50% lower radiation dose compared with FBP.

  13. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored, using either problem-specific heuristic strategies or metaheuristic global-optimization methods such as simulated annealing and genetic algorithms. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is then solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs, and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that IEA-PTS outperforms the other EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when compared individually with the other heuristic algorithms. It has also set a new benchmark on one graph instance. On another benchmark suite, called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among the algorithms that have been tested on this suite.
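    The core EA-on-cliques idea (without the paper's impatience and probabilistic tabu-search refinements) can be sketched as follows; the representation, operators, and parameters are generic illustrative choices, not the IEA-PTS design:

        import random

        def repair_to_clique(vertices, adj):
            # Greedily keep only vertices that stay mutually adjacent.
            s = list(vertices)
            random.shuffle(s)
            clique = []
            for v in s:
                if all(v in adj[u] for u in clique):
                    clique.append(v)
            return set(clique)

        def ea_max_clique(adj, pop_size=30, generations=200):
            nodes = list(adj)
            pop = [repair_to_clique(random.sample(nodes, len(nodes) // 2), adj)
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=len, reverse=True)
                elite = pop[:pop_size // 2]
                children = []
                for a, b in zip(elite, reversed(elite)):
                    child = set(a) | set(b)                  # union crossover
                    if random.random() < 0.3:                # mutation: add a vertex
                        child.add(random.choice(nodes))
                    children.append(repair_to_clique(child, adj))
                pop = elite + children
            return max(pop, key=len)

        # Toy graph as adjacency sets: a triangle {0, 1, 2} plus a pendant vertex.
        adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
        print(ea_max_clique(adj))   # -> {0, 1, 2}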

  14. Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm

    PubMed Central

    Huang, Lei; Liao, Li; Wu, Cathy H.

    2016-01-01

    Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, the problem of applying these models to real network data, and especially of determining which model better describes the evolutionary process underlying an observed network, remains a challenge. The traditional way is to use a model with presumed parameters to generate a network and then evaluate the fit by summary statistics, which, however, cannot capture complete network structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested our method's power to differentiate models and estimate parameters on simulated data and found significant improvement in benchmark performance compared with a previous method. We further applied our method to real protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for human PPI networks and the Scale-Free model as the predominant mechanism for yeast PPI networks. PMID:26357273

  15. Larger water clusters with edges and corners on their way to ice: structural trends elucidated with an improved parallel evolutionary algorithm.

    PubMed

    Bandow, Bernhard; Hartke, Bernd

    2006-05-04

    For the difficult task of finding global minimum energy structures for molecular clusters of nontrivial size, we present a highly efficient parallel implementation of an evolutionary algorithm. By completely abandoning the traditional concept of generations and replacing it with a less rigid pool concept, we have eliminated serial bottlenecks completely and can operate the algorithm efficiently on an arbitrary number of parallel processes. Nevertheless, our new algorithm still realizes all of the main features of our old, successful implementation. First tests of the new algorithm are shown for the highly demanding problem of water clusters modeled by a potential with flexible, polarizable monomers (TTM2-F). For this problem, our new algorithm not only reproduces all of the global minima proposed previously in considerably less CPU time but also leads to improved proposals in several cases. These, in turn, qualitatively change our earlier predictions concerning the transitions from all-surface structures to cages with a single interior molecule, and from one to two interior molecules. Furthermore, we compare preliminary results up to n = 105 with locally optimized cuts from several ice modifications. This comparison indicates that relaxed ice structures may already become competitive at cluster sizes above n = 90.
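    The generation-free pool concept can be illustrated with a minimal serial loop; in the actual implementation each iteration would run on any free parallel worker, and the objective below is a toy function, not the TTM2-F water potential:

        import random

        def pool_ea(objective, bounds, pool_size=24, evaluations=2000):
            # Generation-free pool EA: each step (which could run on any free
            # parallel worker) draws parents from a shared pool, creates one
            # child, and replaces the worst pool member if the child is better.
            pool = [[random.uniform(lo, hi) for lo, hi in bounds]
                    for _ in range(pool_size)]
            scores = [objective(p) for p in pool]
            for _ in range(evaluations):
                a, b = random.sample(range(pool_size), 2)
                child = [random.choice(pair) for pair in zip(pool[a], pool[b])]
                child = [min(max(g + random.gauss(0, 0.1 * (hi - lo)), lo), hi)
                         for g, (lo, hi) in zip(child, bounds)]
                s = objective(child)
                worst = max(range(pool_size), key=scores.__getitem__)
                if s < scores[worst]:                    # steady-state replacement
                    pool[worst], scores[worst] = child, s
            best = min(range(pool_size), key=scores.__getitem__)
            return pool[best], scores[best]

        # Usage on a toy 6-dimensional sphere function:
        print(pool_ea(lambda p: sum(g * g for g in p), [(-5, 5)] * 6))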

  16. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) relative to an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared with the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small-feature and the large-feature tasks, respectively. Compared to FBP and ASIR, MBIR
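    For orientation, one common (prewhitening ideal observer) form of the detectability index combines the three measured quantities as below; whether this study uses exactly this observer model or a non-prewhitening variant is not stated in the excerpt:

        {d'}^2 = \iint \frac{|W(u,v)|^2 \,\mathrm{MTF}^2(u,v)}{\mathrm{NPS}(u,v)} \, du \, dv

    Here W(u,v) is the task function describing the feature to be detected (here 1.5 mm or 25 mm), MTF the task-based modulation transfer function, and NPS the noise-power spectrum.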

  17. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution of a lower level optimization problem. These problems commonly appear in many practical problem-solving tasks, including optimal control, process optimization, game-playing strategy development, and transportation problems. However, they are commonly converted into a single-level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, few studies consider multiple conflicting objectives at each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to the 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche for evolutionary algorithms in solving such difficult problems of practical importance, compared with their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming, and we hope it will motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.

  18. MOEPGA: A novel method to detect protein complexes in yeast protein-protein interaction networks based on MultiObjective Evolutionary Programming Genetic Algorithm.

    PubMed

    Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan

    2015-10-01

    The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed using a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA) that integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem of identifying protein complexes from PPI networks and then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset; finally, we describe our algorithm, which consists of three main steps: population initialization, subgraph mutation, and subgraph selection. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of f-score. Moreover, our approach covers a considerable number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Evolutionary Computing

    SciTech Connect

    Patton, Robert M; Cui, Xiaohui; Jiao, Yu; Potok, Thomas E

    2008-01-01

    The rate at which information overwhelms humans is significantly greater than the rate at which humans have learned to process, analyze, and leverage it. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing these new methods. Consequently, evolutionary computing has emerged as a new paradigm for computing and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field of work has now become quite broad and encompasses areas ranging from artificial life to neural networks. This chapter focuses specifically on two sub-areas of nature-inspired computing: Evolutionary Algorithms and Swarm Intelligence.

  20. Algorithms for computing parsimonious evolutionary scenarios for genome evolution, the last universal common ancestor and dominance of horizontal gene transfer in the evolution of prokaryotes.

    PubMed

    Mirkin, Boris G; Fenner, Trevor I; Galperin, Michael Y; Koonin, Eugene V

    2003-01-06

    Comparative analysis of sequenced genomes reveals numerous instances of apparent horizontal gene transfer (HGT), at least in prokaryotes, and indicates that lineage-specific gene loss might have been even more common in evolution. This complicates the notion of a species tree, which needs to be re-interpreted as a prevailing evolutionary trend rather than the full depiction of evolution, and makes reconstruction of ancestral genomes a non-trivial task. We addressed the problem of constructing parsimonious scenarios for individual sets of orthologous genes given a species tree. The orthologous sets were taken from the database of Clusters of Orthologous Groups of proteins (COGs). We show that the phyletic patterns (patterns of presence-absence in completely sequenced genomes) of almost 90% of the COGs are inconsistent with the hypothetical species tree. Algorithms were developed to reconcile the phyletic patterns with the species tree by postulating gene loss, COG emergence and HGT (the latter two classes of events were collectively treated as gene gains). We prove that each of these algorithms produces a parsimonious evolutionary scenario, which can be represented as a mapping of loss and gain events on the species tree. The distribution of the evolutionary events among the tree nodes substantially depends on the underlying assumptions of the reconciliation algorithm, e.g. whether or not independent gene gains (gain after loss after gain) are permitted. Biological considerations suggest that, on average, gene loss might be a more likely event than gene gain. Therefore different gain penalties were used and the resulting series of reconstructed gene sets for the last universal common ancestor (LUCA) of the extant life forms were analysed. The number of genes in the reconstructed LUCA gene sets grows as the gain penalty increases. However, qualitative examination of the LUCA versions reconstructed with different gain penalties indicates that, even with a gain penalty

  1. Algorithms for computing parsimonious evolutionary scenarios for genome evolution, the last universal common ancestor and dominance of horizontal gene transfer in the evolution of prokaryotes

    PubMed Central

    Mirkin, Boris G; Fenner, Trevor I; Galperin, Michael Y; Koonin, Eugene V

    2003-01-01

    Background: Comparative analysis of sequenced genomes reveals numerous instances of apparent horizontal gene transfer (HGT), at least in prokaryotes, and indicates that lineage-specific gene loss might have been even more common in evolution. This complicates the notion of a species tree, which needs to be re-interpreted as a prevailing evolutionary trend rather than the full depiction of evolution, and makes reconstruction of ancestral genomes a non-trivial task. Results: We addressed the problem of constructing parsimonious scenarios for individual sets of orthologous genes given a species tree. The orthologous sets were taken from the database of Clusters of Orthologous Groups of proteins (COGs). We show that the phyletic patterns (patterns of presence-absence in completely sequenced genomes) of almost 90% of the COGs are inconsistent with the hypothetical species tree. Algorithms were developed to reconcile the phyletic patterns with the species tree by postulating gene loss, COG emergence and HGT (the latter two classes of events were collectively treated as gene gains). We prove that each of these algorithms produces a parsimonious evolutionary scenario, which can be represented as a mapping of loss and gain events on the species tree. The distribution of the evolutionary events among the tree nodes substantially depends on the underlying assumptions of the reconciliation algorithm, e.g. whether or not independent gene gains (gain after loss after gain) are permitted. Biological considerations suggest that, on average, gene loss might be a more likely event than gene gain. Therefore different gain penalties were used and the resulting series of reconstructed gene sets for the last universal common ancestor (LUCA) of the extant life forms were analysed. The number of genes in the reconstructed LUCA gene sets grows as the gain penalty increases. However, qualitative examination of the LUCA versions reconstructed with different gain penalties indicates that, even
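    A textbook weighted small-parsimony sketch of the gain/loss reconciliation idea — not the authors' exact algorithms — for a single phyletic pattern on a rooted binary species tree:

        def min_cost(tree, leaf_state, gain_penalty=2.0, loss_penalty=1.0):
            # Sankoff-style dynamic program for one presence/absence pattern.
            # States: 0 = absent, 1 = present; a 0->1 edge costs gain_penalty
            # (COG emergence or HGT), a 1->0 edge costs loss_penalty.
            def cost(node):
                if isinstance(node, str):                 # leaf: species name
                    s = leaf_state[node]
                    return {0: 0.0 if s == 0 else float("inf"),
                            1: 0.0 if s == 1 else float("inf")}
                left, right = node                        # internal: (left, right)
                cl, cr = cost(left), cost(right)
                out = {}
                for parent in (0, 1):
                    total = 0.0
                    for child in (cl, cr):
                        total += min(child[c] +
                                     (gain_penalty if (parent, c) == (0, 1) else
                                      loss_penalty if (parent, c) == (1, 0) else 0.0)
                                     for c in (0, 1))
                    out[parent] = total
                return out
            return min(cost(tree).values())

        # Toy tree ((A,B),(C,D)); the gene is present only in A and C, a pattern
        # explained either by one ancestral gain plus losses or by two gains,
        # whichever is cheaper under the chosen penalties.
        tree = (("A", "B"), ("C", "D"))
        print(min_cost(tree, {"A": 1, "B": 0, "C": 1, "D": 0}))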

  2. Structure and stability of silicon nanoclusters passivated by hydrogen and oxygen: evolutionary algorithm and first-principles study

    NASA Astrophysics Data System (ADS)

    Baturin, V. S.; Lepeshkin, S. V.; Matsko, N. L.; Uspenskii, Yu A.

    2016-02-01

    We investigate the structural and thermodynamic properties of small silicon clusters. Using graph theory applied to previously obtained structures of Si10H2m clusters, we trace the connection between geometry and passivation degree. The existing data on these clusters, together with the structures of Si10O4n clusters obtained here by evolutionary calculations, allowed us to analyze the behavior of Si10H2m clusters in a hydrogen atmosphere and Si10O4n clusters in an oxygen atmosphere. We show the basic differences between the structures and thermodynamic properties of hydrogen-passivated silicon clusters and silicon oxide clusters.

  3. A generic TG-186 shielded applicator for commissioning model-based dose calculation algorithms for high-dose-rate 192Ir brachytherapy.

    PubMed

    Ma, Yunzhi; Vijande, Javier; Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Zourari, Kyveli; Papagiannis, Panagiotis; Rivard, Mark J; Siebert, Frank André; Sloboda, Ron S; Smith, Ryan; Chamberland, Marc J P; Thomson, Rowan M; Verhaegen, Frank; Beaulieu, Luc

    2017-07-19

    A joint working group was created by the American Association of Physicists in Medicine (AAPM), the European Society for Radiotherapy and Oncology (ESTRO), and the Australasian Brachytherapy Group (ABG) with the charge, among others, to develop a set of well-defined test case plans and to perform model-based dose calculation algorithm (MBDCA) dose calculations and comparisons. Its main goal is to facilitate a smooth transition from the AAPM Task Group No. 43 (TG-43) dose calculation formalism, widely used in clinical brachytherapy practice, to the one proposed by Task Group No. 186 (TG-186) for MBDCAs. To this end, a hypothetical, generic high-dose-rate (HDR) 192Ir shielded applicator has been designed and benchmarked in this work. The generic HDR 192Ir shielded applicator was designed based on three commercially available gynecological applicators, as well as a virtual cubic water phantom that can be imported into any DICOM-RT compatible treatment planning system (TPS). The absorbed dose distribution around the applicator, with the TG-186 192Ir source located at one dwell position at its center, was computed using two commercial TPSs incorporating MBDCAs (Oncentra Brachy with Advanced Collapsed-cone Engine, ACE, and BrachyVision ACUROS) and state-of-the-art Monte Carlo (MC) codes, including ALGEBRA, BrachyDose, egs_brachy, Geant4, MCNP6, and Penelope2008. TPS-based volumetric dose distributions for the previously reported "source centered in water" and "source displaced" test cases, and the new "source centered in applicator" test case, were analyzed here using the MCNP6 dose distribution as a reference. Volumetric dose comparisons of TPS results against results for the other MC codes were also performed. Distributions of local and global dose difference ratios are reported. The local dose differences among MC codes are comparable to the statistical uncertainties of the reference datasets for the "source centered in water" and "source

  4. Handling packet dropouts and random delays for unstable delayed processes in NCS by optimal tuning of PIλDμ controllers with evolutionary algorithms.

    PubMed

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-10-01

    The issues of stochastically varying network delays and packet dropouts in Networked Control System (NCS) applications have been simultaneously addressed by time-domain optimal tuning of fractional-order (FO) PID controllers. Different variants of evolutionary algorithms are used for the tuning process and their performances are compared. The effectiveness of the fractional-order PIλDμ controllers relative to their integer-order counterparts is also examined. Two standard test bench plants with time delay and unstable poles, as encountered in process control applications, are tuned with the proposed method to establish the validity of the tuning methodology. The proposed tuning methodology is independent of the specific choice of plant and is also applicable to less complicated systems, making it useful in a wide variety of scenarios. The paper also shows the superiority of FOPID controllers over their conventional PID counterparts for NCS applications.
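    The controller being tuned is the fractional-order PID, whose transfer function generalizes the classical PID by allowing non-integer integro-differential orders:

        C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d \, s^{\mu}, \qquad \lambda, \mu > 0

    The evolutionary algorithms thus search over the five parameters (K_p, K_i, K_d, λ, μ); setting λ = μ = 1 recovers the integer-order PID.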

  5. Computational characterization of HPGe detectors usable for a wide variety of source geometries by using Monte Carlo simulation and a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Guerra, J. G.; Rubiano, J. G.; Winter, G.; Guerra, A. G.; Alonso, H.; Arnedo, M. A.; Tejera, A.; Martel, P.; Bolivar, J. P.

    2017-06-01

    In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing a multi-objective evolutionary algorithm in parallel with a Monte Carlo simulation code. The evolutionary algorithm is used to search for the geometrical parameters of a detector model by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries: a beaker of small diameter laid over the detector window and a beaker of large capacity which wraps the detector. This methodology is a generalization of previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective optimization, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, such as Marinelli beakers, Petri dishes, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a detector model that has been successfully verified for different source-detector geometries and materials and experimentally validated using certified reference materials (CRMs).

  6. LEED I/V determination of the structure of a MoO3 monolayer on Au(111): Testing the performance of the CMA-ES evolutionary strategy algorithm, differential evolution, a genetic algorithm and tensor LEED based structural optimization

    NASA Astrophysics Data System (ADS)

    Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.

    2016-07-01

    The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolution strategy algorithm, which reached a Pendry R-factor of ∼0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm, and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
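    As an illustration of how a CMA-ES refinement loop of this kind is typically driven — here using Hansen's pycma package, which is an assumption, since the paper's own implementation is not specified — with a toy quadratic standing in for the Pendry R-factor between measured and computed I/V curves:

        import cma

        def pendry_r_factor(params):
            # Stand-in for the LEED I/V goodness-of-fit; in the paper this
            # would compare measured and computed I/V curves.
            return sum((p - 0.5) ** 2 for p in params)

        x0 = [0.0] * 8                      # initial structural parameters
        sigma0 = 0.3                        # initial global step size
        es = cma.CMAEvolutionStrategy(x0, sigma0)
        es.optimize(pendry_r_factor)        # sample, evaluate, adapt covariance
        print(es.result.xbest, es.result.fbest)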

  7. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages of using evolutionary algorithms compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  8. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation in material processing allows a trial-and-error strategy to improve virtual processes without incurring material costs or interrupting production, and can therefore save a great deal of money; however, it requires user time to analyze the results, adjust the operating conditions, and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which allows the objective function to be interpolated over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples
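    A minimal sketch of the metamodel-assisted pattern described — fit a Kriging surrogate to the simulations run so far, let a cheap search propose candidates, and spend the expensive simulation only on the most promising one — follows; the toy objective and the random candidate pool (in place of a true evolution strategy) are illustrative simplifications:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_simulation(x):
            # Stand-in for one forging simulation (minutes to hours in practice);
            # a cheap analytic function keeps the sketch runnable.
            return np.sin(3 * x[0]) + (x[1] - 0.5) ** 2

        rng = np.random.default_rng(1)
        X = rng.random((12, 2))                            # initial design of experiments
        y = np.array([expensive_simulation(x) for x in X])

        for _ in range(10):                                # metamodel-assisted loop
            gp = GaussianProcessRegressor().fit(X, y)      # Kriging surrogate
            cand = rng.random((500, 2))                    # cheap candidate pool
            mu, sd = gp.predict(cand, return_std=True)
            pick = cand[np.argmin(mu - sd)]                # lower-confidence-bound choice
            X = np.vstack([X, pick])                       # only the chosen candidate
            y = np.append(y, expensive_simulation(pick))   # pays the true simulation cost

        print(X[np.argmin(y)], y.min())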

  9. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to the prognostics of electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how this relates to uncertainty representation, uncertainty management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function and the true remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics inform critical decisions.

  10. A mission-oriented orbit design method of remote sensing satellite for region monitoring mission based on evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Zhang, Jing; Yao, Huang

    2015-12-01

    Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of nearly identical illumination conditions over a given place and of global coverage, most of these satellites operate in sun-synchronous orbits. However, this inevitably brings some problems; the most significant is that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of a specific region-monitoring mission. To overcome this disadvantage, two methods are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. The orbit design problem is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the mission demand is transformed into multiple objective functions, with the six orbital elements of the satellite taken as genes in the design space; a simulated evolution process is then performed, and an optimal solution is obtained after a specified number of generations of evolutionary operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: the orbit design of an optical satellite for regional disaster monitoring, whose mission demand comprises two objectives, including minimizing the average revisit time interval. The simulation results show that the solution obtained by our method meets the users' demand. We conclude that the method presented in this paper is efficient for remote sensing orbit design.

  11. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    This dissertation introduces a novel approach for optimally operating a day-ahead electricity market, not only by economically dispatching the generation resources but also by minimizing the influence of market manipulation attempts by individual generator-owning companies, while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with individual profit-maximization tactics such as market manipulation by generator-owning companies, a methodology capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets, and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many traditional solution algorithms either convert multiple objective functions into a single-objective function using weighting schemes, or optimize one function at a time. Hence, these approaches do not truly optimize multiple objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of alternative non-traditional solution algorithms for such problems has become popular and widespread. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA II), a multi-objective tabu search algorithm (MOTS), and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here

  12. Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search

    PubMed Central

    2014-01-01

    Background: The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology present a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results: We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected-component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of planaria that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion: The work presented here, including our algorithm for converting cell
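    The fitness computation described — comparing the graph extracted from a simulated organism against a target morphology via graph edit distance — can be sketched with networkx; the tiny graphs and the fitness transform below are illustrative assumptions, not PlanformDB data:

        import networkx as nx

        # Target morphology vs. a graph extracted from the simulated organism;
        # the node/edge structure here is invented for illustration.
        target = nx.Graph([("head", "trunk"), ("trunk", "tail")])
        simulated = nx.Graph([("head", "trunk")])        # regeneration incomplete

        # Graph edit distance: the minimum number of node/edge insertions,
        # deletions, and substitutions turning one graph into the other.
        distance = nx.graph_edit_distance(target, simulated)
        fitness = 1.0 / (1.0 + distance)                 # higher is better
        print(distance, fitness)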

  13. Assessing the performance of linear and non-linear soil carbon dynamics models using the Multi-Objective Evolutionary Algorithm Borg-MOEA

    NASA Astrophysics Data System (ADS)

    Ramcharan, A. M.; Kemanian, A.; Richard, T.

    2013-12-01

    The largest terrestrial carbon pool is soil, which stores more carbon than is present in aboveground biomass (Jobbagy and Jackson, 2000). In this context, soil organic carbon has gained attention as a managed sink for atmospheric CO2 emissions. The variety of models that describe soil carbon cycling reflects the relentless effort to characterize the complex nature of soil and the carbon within it. Previous works have laid out the range of mathematical approaches to soil carbon cycling, but few have compared the performance of model structures in diverse agricultural scenarios. As interest in increasing the temporal and spatial scale of models grows, assessing the performance of different model structures is essential to drawing reasonable conclusions from model outputs. This research addresses this challenge using the Evolutionary Algorithm Borg-MOEA to optimize the functionality of carbon models in a multi-objective approach to parameter estimation. Model structure performance is assessed through analysis of multi-objective trade-offs using experimental data from twenty long-term carbon experiments across the globe. Preliminary results show a successful test of this proof of concept using a non-linear soil carbon model structure. Soil carbon dynamics were based on the amount of carbon inputs to the soil and the degree of organic matter saturation of the soil, the latter correlated with the soil clay content. Six parameters of the non-linear soil organic carbon model were successfully optimized to steady-state conditions using Borg-MOEA and datasets from five agricultural locations in the United States. Given that more than 50% of models rely on linear soil carbon decomposition dynamics, a linear model structure was also optimized and compared with the non-linear case. The results indicate that linear dynamics had significantly lower optimization performance. Results show promise in using the Evolutionary Algorithm Borg-MOEA to assess

  14. Using a Novel Evolutionary Algorithm to More Effectively Apply Community-Driven EcoHealth Interventions in Big Data with Application to Chagas Disease

    NASA Astrophysics Data System (ADS)

    Rizzo, D. M.; Hanley, J.; Monroy, C.; Rodas, A.; Stevens, L.; Dorn, P.

    2016-12-01

    Chagas disease is a deadly, neglected tropical disease that is endemic to every country in Central and South America. The principal insect vector of Chagas disease in Central America is Triatoma dimidiata. EcoHealth interventions are an environmentally friendly alternative that uses local materials to lower household infestation, reduce the risk of infestation, and improve quality of life. Our collaborators from La Universidad de San Carlos de Guatemala, along with Ministry of Health officials, reach out to communities with high infestation and teach them EcoHealth interventions. The process of identifying which interventions have the potential to be most effective, as well as the houses that are most at risk, is both expensive and time consuming. To better identify the risk factors associated with household infestation by T. dimidiata, a number of studies have conducted socioeconomic and entomologic surveys that contain numerous potential risk factors consisting of both nominal and ordinal data. Univariate logistic regression is one of the more popular methods for determining which risk factors are most closely associated with infestation. However, this tool has limitations, especially with the large amount and variety of "Big Data" associated with our study sites (e.g., five villages comprising socioeconomic, demographic, and entomologic data). The infestation of a household by T. dimidiata is a complex problem that is most likely not univariate in nature and is likely to contain higher-order epistatic relationships that cannot be discovered using univariate logistic regression. Added to this are the problems raised by using p-values in traditional statistics. Moreover, our T. dimidiata infestation dataset is too large to search exhaustively. Therefore, we use a novel evolutionary algorithm to efficiently search for higher-order interactions in surveys associated with households infested by T. dimidiata. In this study, we use our novel evolutionary

  15. Optimizing the distribution of resources between enzymes of carbon metabolism can dramatically increase photosynthetic rate: a numerical simulation using an evolutionary algorithm.

    PubMed

    Zhu, Xin-Guang; de Sturler, Eric; Long, Stephen P

    2007-10-01

    The distribution of resources between enzymes of photosynthetic carbon metabolism might be assumed to have been optimized by natural selection. However, natural selection for survival and fecundity does not necessarily select for maximal photosynthetic productivity. Further, the concentration of a key substrate, atmospheric CO2, has changed more over the past 100 years than over the past 25 million years, making it likely that natural selection has had inadequate time to reoptimize resource partitioning for this change. Could photosynthetic rate be increased by altered partitioning of resources among the enzymes of carbon metabolism? This question is addressed using an "evolutionary" algorithm to progressively search for multiple alterations in partitioning that increase photosynthetic rate. To do this, we extended existing metabolic models of C3 photosynthesis by including the photorespiratory pathway (PCOP) and metabolism to starch and sucrose to develop a complete dynamic model of photosynthetic carbon metabolism. The model consists of linked differential equations, each representing the change in concentration of one metabolite. Initial concentrations of metabolites and maximal activities of enzymes were extracted from the literature. The dynamics of CO2 fixation and metabolite concentrations were realistically simulated by numerical integration, such that the model could mimic well-established physiological phenomena. For example, a realistic steady-state rate of CO2 uptake was attained and then reattained after perturbing the O2 concentration. Using an evolutionary algorithm, partitioning of a fixed total amount of protein-nitrogen between enzymes was allowed to vary. The individual with the higher light-saturated photosynthetic rate was selected and used to seed the next generation. After 1,500 generations, photosynthesis was increased substantially. This suggests that the "typical" partitioning in C3 leaves might be suboptimal for maximizing the light

  16. GUESS-ing Polygenic Associations with Multiple Phenotypes Using a GPU-Based Evolutionary Stochastic Search Algorithm

    PubMed Central

    Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia

    2013-01-01

    Genome-wide association studies (GWAS) have yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS, where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex linkage disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and to study the genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175) compared with the largest published meta-GWAS (n > 100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Among the new findings provided by GUESS, we revealed a strong association of SORT1 with the TG-APOB phenotypic group and of LIPC with the TG-HDL phenotypic group — associations that were overlooked in the larger meta-GWAS, were not revealed by competing approaches, and that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants. This

  17. A GIS-Based Model for Post-Earthquake Personalized Route Planning Using the Integration of Evolutionary Algorithm and OWA

    NASA Astrophysics Data System (ADS)

    Moradi, M.; Delavar, M. R.; Moradi, A.

    2015-12-01

    Being one of the most destructive natural disasters, an earthquake can seriously damage buildings and urban facilities and cause road blockage. Post-earthquake route planning is a problem that has been addressed by numerous studies. The main aim of this research is to present a route planning model for the period immediately after an earthquake, under the assumption that no damage data are available. The presented model tries to find the optimum route based on a number of contributing factors which mainly indicate the length, width and safety of the road. The safety of the road is represented by criteria such as distance to faults, the percentage of non-standard buildings and the percentage of high buildings around the route. The model integrates a genetic algorithm with an ordered weighted averaging (OWA) operator: the former searches the problem space among all alternatives, while the latter aggregates the scores of road segments to compute an overall score for each alternative. The OWA operator enables the users of the system to evaluate the alternative routes based on their decision strategy. Under the proposed model, an optimistic user tries to find the shortest path between the two points, whereas a pessimistic user pays more attention to safety parameters even if that enforces a longer route. The results show that the decision strategy can considerably alter the optimum route, and that post-earthquake route planning is a function of not only the length of the route but also the probability of road blockage.
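
    The OWA aggregation step is easy to make concrete. In the hedged sketch below (criterion values and weight vectors are invented for illustration), weights apply to the ranked scores of a road segment rather than to fixed criteria, which is exactly what lets one weight vector encode an optimistic strategy and another a pessimistic one.

    ```python
    import numpy as np

    def owa(scores, weights):
        # Ordered weighted averaging: weights apply to the *ranked* scores,
        # not to fixed criteria, so the weight vector encodes decision strategy.
        return float(np.dot(np.sort(scores)[::-1], weights))

    # Criterion scores for one road segment (e.g. length, width, distance
    # to faults, share of non-standard buildings), normalized to [0, 1].
    segment = np.array([0.9, 0.4, 0.7, 0.2])

    optimistic  = np.array([0.7, 0.2, 0.1, 0.0])  # emphasizes the best criteria
    pessimistic = np.array([0.0, 0.1, 0.2, 0.7])  # emphasizes the worst (safety-first)

    print(owa(segment, optimistic), owa(segment, pessimistic))
    ```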

  18. Evolutionary Metric-Learning-Based Recognition Algorithm for Online Isolated Persian/Arabic Characters, Reconstructed Using Inertial Pen Signals.

    PubMed

    Sepahvand, Majid; Abdali-Mohammadi, Fardin; Mardukhi, Farhad

    2016-12-13

    The development of sensors with microelectromechanical systems technology has expedited the emergence of new tools for human-computer interaction, such as inertial pens. These pens, which are used as writing tools, do not depend on specific embedded hardware and are thus inexpensive. Most of the available inertial pen character recognition approaches use the low-level features of inertial signals. This paper introduces a Persian/Arabic handwriting character recognition system for inertial-sensor-equipped pens. First, the motion trajectory of the inertial pen is reconstructed to estimate the position signals by using the theory of inertial navigation systems. The position signals are then used to extract high-level geometrical features. A new metric learning technique is then adopted to enhance the accuracy of character classification. To this end, a characteristic function is calculated for each character using a genetic programming algorithm. These functions form a metric kernel classifying all the characters. The experimental results show that the performance of the proposed method is superior to that of one of the state-of-the-art works in terms of recognizing Persian/Arabic handwriting characters.

  19. Ligand Docking to Intermediate and Close-To-Bound Conformers Generated by an Elastic Network Model Based Algorithm for Highly Flexible Proteins

    PubMed Central

    Kurkcuoglu, Zeynep; Doruker, Pemra

    2016-01-01

    Incorporating receptor flexibility in small-ligand-protein docking still poses a challenge for proteins undergoing large conformational changes. In the absence of bound structures, sampling conformers that are accessible from the apo state may facilitate docking and drug design studies. To this end, we developed an unbiased conformational search algorithm that integrates global modes from the elastic network model, clustering, and energy minimization with implicit solvation. Our dataset consists of five diverse proteins with apo-to-complex RMSDs of 4.7–15 Å. Applying this iterative algorithm to apo structures, conformers close to the bound state (RMSD 1.4–3.8 Å), as well as intermediate states, were generated. Dockings to a sequence of conformers consisting of a closed structure and its “parents” up to the apo were performed to compare binding poses on different states of the receptor. For two periplasmic binding proteins and biotin carboxylase, which exhibit hinge-type closure of two dynamic domains, the best pose was obtained for the conformer closest to the bound structure (ligand RMSDs 1.5–2 Å). In contrast, the best pose for adenylate kinase corresponded to an intermediate state with a partially closed LID domain and an open NMP domain, in line with recent studies (ligand RMSD 2.9 Å). The docking of a helical peptide to calmodulin was the most challenging case due to the complexity of its 15 Å transition, for which a two-stage procedure was necessary: the technique was first applied to the extended calmodulin to generate intermediate conformers; peptide docking and a second generation stage on the complex were then performed, which in turn yielded a final peptide RMSD of 2.9 Å. Our algorithm is effective in producing conformational states based on the apo state. This study underlines the importance of such intermediate states for ligand docking to proteins undergoing large transitions. PMID:27348230

  20. A novel JEAnS analysis of the Fornax dwarf using evolutionary algorithms: mass follows light with signs of an off-centre merger

    NASA Astrophysics Data System (ADS)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris

    2017-09-01

    Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion', with parametric mass models, where the radial velocity dispersion profile, σ_{rr}^{2}, is modelled as a B-spline, and the optimization is a three-step process that consists of (i) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_{rr}^{2}; (ii) an optimization of the smoothing parameters and (iii) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced into mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_{tot} = 1.613^{+0.050}_{-0.075} × 10^8 M_{⊙}, and a mass-to-light ratio of Υ_V = 8.93^{+0.32}_{-0.47} (M_{⊙}/L_{⊙}). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.

  1. Optimization of bioenergy crop selection and placement based on a stream health indicator using an evolutionary algorithm.

    PubMed

    Herman, Matthew R; Nejadhashemi, A Pouyan; Daneshvar, Fariborz; Abouali, Mohammad; Ross, Dennis M; Woznicki, Sean A; Zhang, Zhen

    2016-10-01

    The emission of greenhouse gases continues to amplify the impacts of global climate change. This has led to an increased focus on renewable energy sources, such as biofuels, due to their lower impact on the environment. However, the production of biofuels can still have negative impacts on water resources. This study introduces a new strategy to optimize bioenergy landscapes while improving stream health for the region. To accomplish this, several hydrological models, including the Soil and Water Assessment Tool, the Hydrologic Integrity Tool, and an Adaptive Neuro-Fuzzy Inference System, were linked to develop stream health predictor models. These models are capable of estimating stream health scores based on the Index of Biological Integrity. The coupled models were used to guide a genetic algorithm in designing watershed-scale bioenergy landscapes. Thirteen bioenergy managements were considered based on their high probability of adoption by farmers in the study area. Results from two thousand runs identified an optimal placement of bioenergy crops that maximized stream health for the Flint River Watershed in Michigan. The final overall stream health score was 50.93, improved from the current stream health score of 48.19, a significant improvement at the 1% significance level. For this final bioenergy landscape, the most frequently used management was miscanthus (27.07%), followed by corn-soybean-rye (19.00%), corn stover-soybean (18.09%), and corn-soybean (16.43%). The technique introduced in this study can be modified for use in different regions and can be used by stakeholders and decision makers to develop bioenergy landscapes that maximize stream health in the area of interest.

  2. Evolutionary tree reconstruction

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Kanefsky, Bob

    1990-01-01

    It is described how Minimum Description Length (MDL) can be applied to the problem of DNA and protein evolutionary tree reconstruction. If there is a set of mutations that transforms a common ancestor into the set of known sequences, and this description is shorter than the information needed to encode the known sequences directly, then strong evidence for an evolutionary relationship has been found. A heuristic algorithm is described that searches for the simplest tree (smallest MDL) and finds close-to-optimal trees on the test data. Various ways of extending the MDL theory to more complex evolutionary relationships are discussed.
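
    The core MDL comparison can be shown in a few lines. The sketch below uses a deliberately crude cost model (uniform bits per base, one position-plus-value cost per mutation); it illustrates the decision rule, not the paper's actual encoding.

    ```python
    import math

    def direct_cost(sequences, alphabet=4):
        # Bits to encode every sequence independently.
        return sum(len(s) * math.log2(alphabet) for s in sequences)

    def tree_cost(ancestor, mutations, alphabet=4):
        # Bits for one ancestor plus, per mutation, the cost of naming
        # its position and new value (a crude per-mutation cost model).
        per_mutation = math.log2(len(ancestor)) + math.log2(alphabet)
        return len(ancestor) * math.log2(alphabet) + mutations * per_mutation

    seqs = ["ACGTACGTAC"] * 5  # five near-identical sequences
    if tree_cost(seqs[0], mutations=3) < direct_cost(seqs):
        print("tree hypothesis wins: strong evidence of common ancestry")
    ```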

  3. Integrating remotely sensed leaf area index and leaf nitrogen accumulation with RiceGrow model based on particle swarm optimization algorithm for rice grain yield assessment

    NASA Astrophysics Data System (ADS)

    Wang, Hang; Zhu, Yan; Li, Wenlong; Cao, Weixing; Tian, Yongchao

    2014-01-01

    A regional rice (Oryza sativa) grain yield prediction technique was proposed by integrating ground-based and spaceborne remote sensing (RS) data with the rice growth model (RiceGrow) through a new particle swarm optimization (PSO) algorithm. Based on an initialization/parameterization strategy (calibration), two agronomic indicators, leaf area index (LAI) and leaf nitrogen accumulation (LNA), remotely sensed from field spectra and satellite images, were combined to serve as an external assimilation parameter and integrated with the RiceGrow model for inversion of three management parameters: sowing date, sowing rate, and nitrogen rate. Rice grain yield was then predicted by inputting these optimized parameters into the reinitialized model. PSO was used for the parameterization and regionalization of the integrated model and compared with the shuffled complex evolution-University of Arizona (SCE-UA) optimization algorithm. The test results showed that LAI together with LNA as the integrated parameter performed better than either alone for crop model parameter initialization. PSO also performed better than SCE-UA in terms of running efficiency and assimilation results, indicating that PSO is a reliable optimization method for assimilating RS information into the crop growth model. The integrated model also had improved precision for predicting rice grain yield.
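
    A minimal PSO loop over the three management parameters is sketched below. The cost function is a placeholder for the real assimilation objective (distance between RiceGrow-simulated and remotely sensed LAI/LNA); parameter ranges and coefficients are illustrative, not the study's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def assimilation_cost(params):
        # Placeholder objective: in the paper this would wrap a RiceGrow
        # run and compare simulated LAI/LNA with the remotely sensed values.
        return float(np.sum((params - np.array([120.0, 60.0, 180.0])) ** 2))

    n_particles, dims = 20, 3  # sowing date, sowing rate, nitrogen rate
    x = rng.uniform(0, 250, (n_particles, dims))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.apply_along_axis(assimilation_cost, 1, x)
    gbest = pbest[pbest_cost.argmin()]

    for _ in range(200):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x += v
        cost = np.apply_along_axis(assimilation_cost, 1, x)
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()]
    ```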

  4. Evaluation of water resources system vulnerability based on co-operative co-evolutionary genetic algorithm and projection pursuit model under the DPSIR framework

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Su, X. H.; Wang, M. H.; Li, Z. Y.; Li, E. K.; Xu, X.

    2017-08-01

    Managing the vulnerability of water resources systems is essential because it is tied to the benign co-evolution of socio-economic, environmental and water resources systems, and research on water resources system vulnerability supports the sustainable utilization of water resources. In this study, the DPSIR (driving forces-pressure-state-impact-response) framework was adopted to construct the evaluation index system of water resources system vulnerability. A co-operative co-evolutionary genetic algorithm and projection pursuit were then used to establish the evaluation model of water resources system vulnerability. Tengzhou City in Shandong Province was selected as the study area. The system vulnerability was analyzed in terms of driving forces, pressure, state, impact and response on the basis of the projection values calculated by the model. The results show that all five components belong to vulnerability Grade II; the vulnerability degrees of impact and state were higher than those of the other components due to the severe imbalance between water supply and demand and the unsatisfactory state of water resources utilization. This indicates that rapid socio-economic development and the overuse of pesticides have already disturbed the benign development of the water environment to some extent, while the response indexes showed lower vulnerability degrees than the other components. The results of the evaluation model agree with the status of the water resources system in the study area, which indicates that the model is feasible and effective.

  5. Evolving model-free scattering matrix via evolutionary algorithm: ¹⁶O-¹⁶O elastic scattering at 350 MeV

    SciTech Connect

    Korda, V.Yu.; Molev, A.S.; Korda, L.P.

    2005-07-01

    We present a new procedure that enables us to extract a scattering matrix S(l) as a complex function of angular momentum directly from the scattering data, without any a priori model assumptions. The key ingredient of the procedure is an evolutionary algorithm with diffused mutation that evolves the population of scattering matrices by means of smooth deformations, from primary arbitrary analytical S(l) shapes to final ones giving high-quality fits to the data. Because the scattering-matrix derivatives are monitored automatically, the final S(l) shapes are monotonic and free of distortions. For the ¹⁶O-¹⁶O elastic-scattering data at 350 MeV, we show that the final results are independent of the primary S(l) shapes. Contrary to other approaches, our procedure provides an excellent fit with S(l) shapes that support the 'rainbow' interpretation of the data under analysis.

  6. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    NASA Astrophysics Data System (ADS)

    Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.

    2014-03-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (the EASY software), a fast solver (block-coupled CFD), and a flexible geometry generation tool. The EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA(PCA)) that can be used in both single-objective (SOO) and multi-objective optimization (MOO) problems. In MAEAs, low-cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem-specific evaluation, here the solution of the Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach that solves the governing equations simultaneously; apart from being robust and fast, it also provides a large gain in computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on B-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail, and optimization results for hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
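
    The metamodel-assisted screening idea can be illustrated independently of EASY. In the sketch below, a deliberately simple inverse-distance surrogate (far simpler than EASY's metamodels) ranks a population so that only the most promising members reach the expensive evaluation, here a stand-in for the CFD solve.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def cfd_evaluation(x):
        # Stand-in for the expensive Navier-Stokes solve.
        return float(np.sum((x - 0.3) ** 2))

    def surrogate(x, archive_x, archive_y):
        # Inverse-distance-weighted prediction from past exact evaluations,
        # used only to screen out non-promising individuals.
        d = np.linalg.norm(archive_x - x, axis=1) + 1e-9
        w = 1.0 / d
        return float(np.sum(w * archive_y) / np.sum(w))

    archive_x = rng.random((10, 4))
    archive_y = np.array([cfd_evaluation(x) for x in archive_x])

    population = rng.random((40, 4))
    scores = np.array([surrogate(x, archive_x, archive_y) for x in population])
    promising = population[np.argsort(scores)[:8]]            # screened subset
    exact = np.array([cfd_evaluation(x) for x in promising])  # expensive calls only here
    ```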

  7. An effective and optimal quality control approach for green energy manufacturing using design of experiments framework and evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Saavedra, Juan Alejandro

    Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations during the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability, and rework benefit; the goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation calculation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several solutions in order to obtain feasible optimal solutions. The GA

  8. In Silico Calculation of Infinite Dilution Activity Coefficients of Molecular Solutes in Ionic Liquids: Critical Review of Current Methods and New Models Based on Three Machine Learning Algorithms.

    PubMed

    Paduszyński, Kamil

    2016-08-22

    The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs)-a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each of them based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be the method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, the temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem
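
    A hedged sketch of the model-comparison workflow, using scikit-learn on synthetic data: linear regression and a feed-forward network loosely mirror the SWMLR/FFANN branches, and standard SVR stands in for LSSVM (which scikit-learn does not provide). The descriptors and targets are random stand-ins, not the γ(∞) data bank.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)

    # Synthetic stand-in for the data bank: columns mimic IL group
    # contributions, Abraham solute descriptors, and temperature;
    # the target mimics ln(gamma_inf). None of this is the real data.
    X = rng.standard_normal((2000, 8))
    y = X @ rng.standard_normal(8) + 0.3 * np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(2000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LinearRegression(), MLPRegressor(max_iter=2000, random_state=0), SVR()):
        model.fit(X_tr, y_tr)
        rmse = float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
        print(type(model).__name__, round(rmse, 3))
    ```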

  9. High resolution study of the ν2 and ν5 rovibrational fundamental bands of thionyl chloride: Interplay of an evolutionary algorithm and a line-by-line analysis

    NASA Astrophysics Data System (ADS)

    Roucou, Anthony; Dhont, Guillaume; Cuisset, Arnaud; Martin-Drumel, Marie-Aline; Thorwirth, Sven; Fontanari, Daniele; Meerts, W. Leo

    2017-08-01

    The ν2 and ν5 fundamental bands of thionyl chloride (SOCl2) were measured in the 420–550 cm-1 region using the FT-far-IR spectrometer exploiting synchrotron radiation on the AILES beamline at SOLEIL. A straightforward line-by-line analysis is complicated by the high congestion of the spectrum, due both to the high density of SOCl2 rovibrational bands and to the presence of the ν2 fundamental band of sulfur dioxide produced by hydrolysis of SOCl2 with residual water. To overcome this difficulty, our assignment procedure for the main isotopologues 32S16O35Cl2 and 32S16O35Cl37Cl alternates between a direct fit of the spectrum, via a global optimization technique, and a traditional line-by-line analysis. The global optimization, based on an evolutionary algorithm, produces rotational constants and band centers that serve as useful starting values for the subsequent spectroscopic analysis. This work helped to identify the pure rotational submillimeter spectrum of 32S16O35Cl2 in the v2=1 and v5=1 vibrational states reported by Martin-Drumel et al. [J. Chem. Phys. 144, 084305 (2016)]. As a by-product, the rotational transitions of the far-IR-inactive v4=1 state were identified in the submillimeter spectrum. A global fit gathering all the microwave, submillimeter, and far-IR data of thionyl chloride has been performed, showing that no major perturbation of rovibrational energy levels occurs for the main isotopologue of the molecule.

  10. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated, and each section is processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
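
    The model-based estimation step (choose the wall parameters whose predicted signal minimizes the mean-square error against the data) can be miniaturized to one wall and one parameter. The echo model, propagation speed, and noise level below are all invented for illustration; the patent's forward model is far richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def predicted_signal(t, wall_pos, c=0.15):
        # Single-wall forward model: one echo delayed by the two-way
        # travel time (c in m/ns; pulse shape and width are invented).
        delay = 2 * wall_pos / c
        return np.exp(-((t - delay) ** 2) / 0.05)

    t = np.linspace(0.0, 100.0, 2000)  # time axis, ns
    data = predicted_signal(t, wall_pos=3.2) + 0.05 * rng.standard_normal(t.size)

    # Model-based estimate: the wall position whose predicted signal
    # minimizes the mean-square error against the (time-gated) data.
    candidates = np.linspace(0.5, 6.0, 500)
    errors = [np.mean((predicted_signal(t, w) - data) ** 2) for w in candidates]
    print(f"estimated wall position: {candidates[int(np.argmin(errors))]:.2f} m")
    ```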

  11. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  12. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  13. Model-based Hyperspectral Exploitation Algorithm Development

    DTIC Science & Technology

    2007-09-30

    near-blackbody pixels, and an iterative constrained optimization using generalized reduced gradients (GRG). Sample results are shown in Figure 5...derive the spectral in-water optical parameters from remote observations involved a non-linear optimization that required observations of several...and emissivity retrieval from long wave infrared airborne hyperspectral imagery. The optimized land surface temperature and emissivity retrieval

  14. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    SciTech Connect

    Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M.; Chytyk-Praznik, Krista; Greer, Peter B.; McCurdy, Boyd M. C.

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient

  15. Effect of Radiation Dose Reduction and Reconstruction Algorithm on Image Noise, Contrast, Resolution, and Detectability of Subtle Hypoattenuating Liver Lesions at Multidetector CT: Filtered Back Projection versus a Commercial Model-based Iterative Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Marin, Daniele; Roy Choudhury, Kingshuk; Patel, Bhavik; Samei, Ehsan

    2017-02-07

    Purpose: To determine the effect of radiation dose and iterative reconstruction (IR) on noise, contrast, resolution, and observer-based detectability of subtle hypoattenuating liver lesions and to estimate the dose reduction potential of the IR algorithm in question. Materials and Methods: This prospective, single-center, HIPAA-compliant study was approved by the institutional review board. A dual-source computed tomography (CT) system was used to reconstruct CT projection data from 21 patients into six radiation dose levels (12.5%, 25%, 37.5%, 50%, 75%, and 100%) on the basis of two CT acquisitions. A series of virtual liver lesions (five per patient, 105 total, lesion-to-liver prereconstruction contrast of -15 HU, 12-mm diameter) were inserted into the raw CT projection data and images were reconstructed with filtered back projection (FBP) (B31f kernel) and sinogram-affirmed IR (SAFIRE) (I31f-5 kernel). Image noise (pixel standard deviation), lesion contrast (after reconstruction), lesion boundary sharpness (average normalized gradient at lesion boundary), and contrast-to-noise ratio (CNR) were compared. Next, a two-alternative forced choice perception experiment was performed (16 readers: six radiologists, 10 medical physicists). A linear mixed-effects statistical model was used to compare detection accuracy between FBP and SAFIRE and to estimate the radiation dose reduction potential of SAFIRE. Results: Compared with FBP, SAFIRE reduced noise by a mean of 53% ± 5, lesion contrast by 12% ± 4, and lesion sharpness by 13% ± 10 but increased CNR by 89% ± 19. Detection accuracy was 2% higher on average with SAFIRE than with FBP (P = .03), which translated into an estimated radiation dose reduction potential (±95% confidence interval) of 16% ± 13. Conclusion: SAFIRE increases detectability at a given radiation dose (approximately 2% increase in detection accuracy) and allows for imaging at reduced radiation dose (16% ± 13), while maintaining low
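
    Those three numbers are mutually consistent, which a quick check makes explicit (a back-of-the-envelope sketch, not part of the study): since CNR is contrast divided by noise, a 12% contrast drop over a 53% noise drop implies roughly an 87% CNR gain, within the reported 89% ± 19.

    ```python
    # Back-of-the-envelope consistency check of the reported effects:
    # CNR = contrast / noise, so the quoted changes should reproduce
    # the quoted CNR gain to within its uncertainty.
    contrast_ratio = 1 - 0.12  # SAFIRE lesion contrast relative to FBP
    noise_ratio = 1 - 0.53     # SAFIRE noise relative to FBP
    print(f"implied CNR change: {contrast_ratio / noise_ratio - 1:+.0%}")  # about +87%
    ```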

  16. Incorporation of an evolutionary algorithm to estimate transfer-functions for a parameter regionalization scheme of a rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2016-04-01

    This contribution presents a framework that enables the use of an Evolutionary Algorithm (EA) for the calibration and regionalization of the hydrological model COSEROreg. COSEROreg uses an updated version of the HBV-type model COSERO (Kling et al. 2014) for the modelling of hydrological processes and is embedded in a parameter regionalization scheme based on Samaniego et al. (2010). The latter uses sub-scale information to estimate model parameters via a priori chosen transfer functions (often derived from pedotransfer functions). However, the transferability of the regionalization scheme to different model concepts and the integration of new forms of sub-scale information are not straightforward: (i) the usefulness of (new) individual sub-scale information layers is unknown beforehand, and (ii) establishing functional relationships between these (possibly uninformative) sub-scale information layers and the distributed model parameters remains a central challenge in the implementation of a regionalization procedure. The proposed method theoretically provides a framework to overcome this challenge. The implementation of the EA encompasses the following procedure. First, a formal grammar is specified (Ryan et al., 1998); the construction of the grammar defines the set of possible transfer functions and also allows hydrological domain knowledge to be incorporated into the search itself. The EA then iterates over the given space by combining parameterized basic functions (e.g. linear or exponential functions) and sub-scale information layers into transfer functions, which are used in COSEROreg; a pre-selection model is applied beforehand to sort out infeasible proposals by the EA and to reduce the necessary model runs. A second optimization routine optimizes the parameters of the transfer functions proposed by the EA. This concept of two nested optimization loops is inspired by the idea of Lamarckian Evolution and the Baldwin Effect
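
    A toy version of the grammar-guided search is sketched below. The grammar and layer names (sand, clay, slope) are invented for illustration; in the framework itself each generated expression would be compiled into COSEROreg's regionalization scheme and scored against observed runoff.

    ```python
    import random

    random.seed(5)

    # Illustrative sub-scale information layers (not the study's inputs).
    LAYERS = ["sand", "clay", "slope"]

    def random_transfer_function(depth=0):
        # Toy grammar:
        #   <expr> ::= <layer> | a * <expr> | (<expr> + <expr>) | exp(<expr>)
        choice = random.random()
        if depth >= 3 or choice < 0.4:
            return random.choice(LAYERS)
        if choice < 0.6:
            return f"{random.uniform(0, 2):.2f} * {random_transfer_function(depth + 1)}"
        if choice < 0.8:
            return f"({random_transfer_function(depth + 1)} + {random_transfer_function(depth + 1)})"
        return f"exp({random_transfer_function(depth + 1)})"

    # Each candidate would be evaluated inside the hydrological model;
    # here we only generate the population of candidate expressions.
    population = [random_transfer_function() for _ in range(5)]
    print(*population, sep="\n")
    ```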

  17. GEIGER: investigating evolutionary radiations.

    PubMed

    Harmon, Luke J; Weir, Jason T; Brock, Chad D; Glor, Richard E; Challenger, Wendell

    2008-01-01

    GEIGER is a new software package, written in the R language, to describe evolutionary radiations. GEIGER can carry out simulations, parameter estimation and statistical hypothesis testing. Additionally, GEIGER's simulation algorithms can be used to analyze the statistical power of comparative approaches. This open source software is written entirely in the R language and is freely available through the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org/.

  18. Improving pulmonary vessel image quality with a full model-based iterative reconstruction algorithm in 80kVp low-dose chest CT for pediatric patients aged 0-6 years.

    PubMed

    Sun, Jihang; Zhang, Qifeng; Hu, Di; Duan, Xiaomin; Peng, Yun

    2015-06-01

    A full model-based iterative reconstruction (MBIR) algorithm decreases image noise and improves spatial resolution significantly; combined with a low-voltage scan, it may improve image and vessel quality. The aim of this study was to evaluate the improvement in pulmonary vessel image quality achieved with a full MBIR in low-dose chest computed tomography (CT) for children. This study was institutional review board approved. Forty-one children (age range, 28 days-6 years; mean age, 2.0 years) who underwent 80 kVp low-dose CT scans were included. An age-dependent noise index (NI) for a 5-mm slice thickness was used for the acquisition: NI = 11 for 0-12 months old, NI = 13 for 1-2 years old, and NI = 15 for 3-6 years old. Images were retrospectively reconstructed at a thin slice thickness of 0.625 mm using MBIR and a conventional filtered back projection (FBP) algorithm. Two radiologists independently evaluated the images subjectively, focusing on the ability to display small arteries and on diagnostic confidence, using a 5-point scale with 3 being clinically acceptable. CT value and image noise in the descending aorta, muscle, and fat were measured and statistically compared between the two reconstruction groups. The ability to display small vessels was significantly improved with MBIR reconstruction. The subjective scores for displaying small vessels were 5.0 and 3.7 with MBIR and FBP, respectively, while the respective diagnostic confidence scores were 5.0 and 3.8. Quantitative image noise for the 0.625 mm slice thickness images in the descending aorta was 15.8 ± 3.8 HU in the MBIR group, 57.3% lower than the 37.0 ± 7.3 HU in the FBP group. The signal-to-noise ratio and contrast-to-noise ratio for the descending aorta were 28.3 ± 7.9 and 24.05 ± 7.5 in the MBIR group, and 12.1 ± 3.7 and 10.6 ± 3.5 in the FBP group, respectively; these values were improved by 133.9% and 132.1% with MBIR reconstruction compared to FBP reconstruction. Compared to the conventional FBP reconstruction, the image quality and

  19. Evolutionary thinking

    PubMed Central

    Hunt, Tam

    2014-01-01

    Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps’ book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions, and Phipps' book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps. PMID:26478766

  20. Toward an evolutionary-predictive foundation for creativity : Commentary on "Human creativity, evolutionary algorithms, and predictive representations: The mechanics of thought trials" by Arne Dietrich and Hilde Haider, 2014 (Accepted pending minor revisions for publication in Psychonomic Bulletin & Review).

    PubMed

    Gabora, Liane; Kauffman, Stuart

    2016-04-01

    Dietrich and Haider (Psychonomic Bulletin & Review, 21 (5), 897-915, 2014) justify their integrative framework for creativity founded on evolutionary theory and prediction research on the grounds that "theories and approaches guiding empirical research on creativity have not been supported by the neuroimaging evidence." Although this justification is controversial, the general direction holds promise. This commentary clarifies points of disagreement and unresolved issues, and addresses misapplications of evolutionary theory that lead the authors to adopt a Darwinian (versus Lamarckian) approach. To say that creativity is Darwinian is not to say that it consists of variation plus selection - in the everyday sense of the term - as the authors imply; it is to say that evolution is occurring because selection is affecting the distribution of randomly generated heritable variation across generations. In creative thought the distribution of variants is not key, i.e., one is not inclined toward idea A because 60% of one's candidate ideas are variants of A while only 40% are variants of B; one is inclined toward whichever seems best. The authors concede that creative variation is partly directed; however, the greater the extent to which variants are generated non-randomly, the greater the extent to which the distribution of variants can reflect not selection but the initial generation bias. Since each thought in a creative process can alter the selective criteria against which the next is evaluated, there is no demarcation into generations as assumed in a Darwinian model. We address the authors' claim that reduced variability and individuality are more characteristic of Lamarckism than Darwinian evolution, and note that a Lamarckian approach to creativity has addressed the challenge of modeling the emergent features associated with insight.

  1. Evolutionary rescue beyond the models

    PubMed Central

    Gomulkiewicz, Richard; Shaw, Ruth G.

    2013-01-01

    Laboratory model systems and mathematical models have shed considerable light on the fundamental properties and processes of evolutionary rescue. But it remains to determine the extent to which these model-based findings can help biologists predict when evolution will fail or succeed in rescuing natural populations that are facing novel conditions that threaten their persistence. In this article, we present a prospectus for transferring our basic understanding of evolutionary rescue to wild and other non-laboratory populations. Current experimental and theoretical results emphasize how the interplay between inheritance processes and absolute fitness in changed environments drive population dynamics and determine prospects of extinction. We discuss the challenge of inferring these elements of the evolutionary rescue process in field and natural settings. Addressing this challenge will contribute to a more comprehensive understanding of population persistence that combines processes of evolutionary rescue with developmental and ecological mechanisms. PMID:23209173

  2. Evolutionary awareness.

    PubMed

    Gorelik, Gregory; Shackelford, Todd K

    2014-08-27

    In this article, we advance the concept of "evolutionary awareness," a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities-which we refer to as "intergenerational extended phenotypes"-by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.

  3. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.

  4. Evolutionary institutionalism.

    PubMed

    Fürstenberg, Dr Kai

    Institutions are hard to define and hard to study. Long prominent in political science have been two theories: Rational Choice Institutionalism (RCI) and Historical Institutionalism (HI). Arising from the life sciences is now a third: Evolutionary Institutionalism (EI). Comparative strengths and weaknesses of these three theories warrant review, and the value-to-be-added by expanding the third beyond Darwinian evolutionary theory deserves consideration. Should evolutionary institutionalism expand to accommodate new understanding in ecology, such as might apply to the emergence of stability, and in genetics, such as might apply to political behavior? Core arguments are reviewed for each theory with more detailed exposition of the third, EI. Particular attention is paid to EI's gene-institution analogy; to variation, selection, and retention of institutional traits; to endogeneity and exogeneity; to agency and structure; and to ecosystem effects, institutional stability, and empirical limitations in behavioral genetics. RCI, HI, and EI are distinct but complementary. Institutional change, while amenable to rational-choice analysis and, retrospectively, to critical-juncture and path-dependency analysis, is also, and importantly, ecological. Stability, like change, is an emergent property of institutions, which tend to stabilize after change in a manner analogous to allopatric speciation. EI is more than metaphorically biological in that institutional behaviors are driven by human behaviors whose evolution long preceded the appearance of institutions themselves.

  5. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    DTIC Science & Technology

    2013-02-01

    Reconstruction Technique (SIRT) are applied to the data. Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic...the reconstruction when the typical algorithms such as Filtered Back Projection (FBP) and Simultaneous Iterative Reconstruction Technique (SIRT) are

  6. Model-Based Improvement

    DTIC Science & Technology

    2006-10-01

    Report documentation page fragment; performing organization: Carnegie Mellon University, Software Engineering Institute (SEI), Pittsburgh, PA 15213.

  7. Evolutionary software for autonomous path planning

    SciTech Connect

    Couture, S; Hage, M

    1999-02-10

    This research project demonstrated the effectiveness of using evolutionary software techniques in the development of path-planning algorithms and control programs for mobile vehicles in radioactive environments. The goal was to take maximum advantage of the programmer's intelligence by tasking the programmer with encoding the measures of success for a path-planning algorithm, rather than developing the path-planning algorithms themselves. Evolutionary software development techniques could then be used to develop algorithms most suitable to the particular environments of interest. The measures of path-planning success were encoded in the form of a fitness function for an evolutionary software development engine. The task for the evolutionary software development engine was to evaluate the performance of individual algorithms, select the best performers for the population based on the fitness function, and breed them to evolve the next generation of algorithms. The process continued for a set number of generations or until the algorithm converged to an optimal solution. The task environment was the navigation of a rover from an initial location to a goal, then to a processing point, in an environment containing physical and radioactive obstacles. Genetic algorithms were developed for a variety of environmental configurations. Algorithms were simple and non-robust strings of behaviors, but they could be evolved to be nearly optimal for a given environment. In addition, a genetic program was evolved in the form of a control algorithm that operates at every motion of the robot. Programs were more complex than algorithms and less optimal in a given environment. However, after training in a variety of different environments, they were more robust and could perform acceptably in environments they were not trained in. This paper describes the evolutionary software development engine and the performance of algorithms and programs evolved by it for the chosen task.
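
    The division of labor the project describes (the programmer encodes the measures of success; evolution supplies the algorithm) can be illustrated with a toy fitness function and a plain genetic loop. The grid, obstacles, move encoding, and penalty weights below are invented for illustration, not taken from the project.

    ```python
    import random

    random.seed(6)
    MOVES = "NSEW"

    def fitness(path, start=(0, 0), goal=(5, 5), obstacles={(2, 2), (3, 3)}):
        # Encode the measures of success, not the planner itself: progress
        # toward the goal, penalized for entering obstacle cells and for length.
        x, y = start
        penalty = 0
        for m in path:
            dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[m]
            x, y = x + dx, y + dy
            if (x, y) in obstacles:
                penalty += 10  # stand-in for a radioactive or physical obstacle
        return -(abs(goal[0] - x) + abs(goal[1] - y)) - penalty - 0.1 * len(path)

    population = ["".join(random.choices(MOVES, k=12)) for _ in range(50)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]             # select the best performers
        children = []
        for _ in range(40):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 12)
            child = list(a[:cut] + b[cut:])   # one-point crossover
            i = random.randrange(12)
            child[i] = random.choice(MOVES)   # point mutation
            children.append("".join(child))
        population = parents + children
    print(population[0], fitness(population[0]))
    ```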

  8. Parameterization of NDDO wavefunctions using genetic algorithms. An evolutionary approach to parameterizing potential energy surfaces and direct dynamics calculations for organic reactions

    NASA Astrophysics Data System (ADS)

    Rossi, Ivan; Truhlar, Donald G.

    1995-02-01

    We used a genetic algorithm to fit a set of energy differences obtained by neglect-of-diatomic-differential-overlap (NDDO) molecular orbital theory to reference ab initio data, yielding a set of specific reaction parameters (SRP) for the reaction Cl + CH4. Only a small number of ab initio points along a distinguished-coordinate path were used as input, but the surface is well fit both on and off the reaction path over a range of energies three times wider than the input range. The resulting NDDO-SRP potential energy surface is almost four orders of magnitude less expensive to evaluate than the reference ab initio surface and is well suited for direct dynamics calculations.

  9. Evolutionary Dynamics of Biological Games

    NASA Astrophysics Data System (ADS)

    Nowak, Martin A.; Sigmund, Karl

    2004-02-01

    Darwinian dynamics based on mutation and selection form the core of mathematical models for adaptation and coevolution of biological populations. The evolutionary outcome is often not a fitness-maximizing equilibrium but can include oscillations and chaos. For studying frequency-dependent selection, game-theoretic arguments are more appropriate than optimization algorithms. Replicator and adaptive dynamics describe short- and long-term evolution in phenotype space and have found applications ranging from animal behavior and ecology to speciation, macroevolution, and human language. Evolutionary game theory is an essential component of a mathematical and computational approach to biology.
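
    Replicator dynamics, the workhorse equation mentioned above, fits in a few lines: each strategy's share grows in proportion to how its payoff compares with the population average. The payoff matrix below is an illustrative anti-coordination (Hawk-Dove-like) game, not taken from the review.

    ```python
    import numpy as np

    # Replicator equation for a 2-strategy game: x_i' = x_i * (f_i - mean f),
    # integrated with a simple Euler step (which preserves sum(x) = 1 exactly).
    A = np.array([[0.0, 3.0],
                  [1.0, 2.0]])  # illustrative anti-coordination payoffs

    x = np.array([0.9, 0.1])    # initial strategy frequencies
    dt = 0.01
    for _ in range(5000):
        f = A @ x                      # fitness of each strategy
        x = x + dt * x * (f - x @ f)   # replicator dynamics step
    print(x)                           # converges to the mixed equilibrium (0.5, 0.5)
    ```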

  10. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm into an overall estimate of the identified fault type and magnitude. Identifying the fault type and magnitude enabled an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
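
    A hedged sketch of residual-based fault isolation in the spirit described above: the residual vector stands in for Extended Kalman Filter innovations, and the fault signatures, detection threshold, and magnitude estimate are invented for illustration rather than taken from the MBFTC task.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Illustrative fault signatures in residual space (not NASA's values).
    SIGNATURES = {
        "sensor fault":    np.array([1.0, 0.0, 0.0]),
        "actuator fault":  np.array([0.0, 1.0, 0.0]),
        "gas-path damage": np.array([0.4, 0.3, 0.9]),
    }

    def classify(residual):
        # Multiple-hypothesis-style test: declare a fault only if the
        # residual exceeds a threshold, then pick the best-aligned
        # signature and estimate magnitude by projection onto it.
        if np.linalg.norm(residual) < 0.5:
            return "nominal", 0.0
        scores = {name: float(residual @ s) / (np.linalg.norm(s) * np.linalg.norm(residual))
                  for name, s in SIGNATURES.items()}
        name = max(scores, key=scores.get)
        sig = SIGNATURES[name]
        magnitude = float(residual @ sig) / float(sig @ sig)
        return name, magnitude

    residual = 1.8 * SIGNATURES["actuator fault"] + 0.05 * rng.standard_normal(3)
    print(classify(residual))
    ```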

  11. An Evolutionary Game Theoretic Approach for Conjunctive Surface and Ground Water Allocation

    NASA Astrophysics Data System (ADS)

    Parsapour Moghaddam, P.; Abed Elmdoust, A.; Kerachian, R.

    2011-12-01

    In this paper, a non-cooperative game theoretic methodology is developed for determining evolutionarily stable policies for allocating surface water and groundwater to stakeholders with conflicting objectives. In the proposed methodology, information on water balance, the hydrogeologic characteristics of the aquifer, and other crucial data are used to model groundwater flow with the MODFLOW and MT3D groundwater quantity and quality simulation models. An optimization model based on a genetic algorithm is also developed that yields the evolutionarily stable water allocation strategies for the stakeholders, considering different non-cooperative common pool resources (CPR) management institutions over short- and long-term planning horizons. The methodology also considers some basic characteristics of the beneficiaries to determine how different planning variables are affected by the stakeholders' different rationales and exploitation strategies. To illustrate its practical utility, the proposed methodology is applied to the Rafsanjan Basin in Iran.

  12. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.
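
    A compact illustration of geometric hashing on 2D point sets (generic points here, rather than SAR scatterer locations): every ordered pair of model points defines a basis, the remaining points are stored in basis-relative coordinates, and at recognition time scene points vote through the same table, which is what tolerates partial obscuration. Quantization and the model itself are invented for the sketch.

    ```python
    from collections import defaultdict
    from itertools import permutations

    def hash_model(points, quant=0.25):
        # Build the hash table: for every ordered basis pair (p, q),
        # express each remaining point in that basis, so lookups are
        # invariant to translation, rotation, and scale.
        table = defaultdict(list)
        for (i, p), (j, q) in permutations(enumerate(points), 2):
            ex = (q[0] - p[0], q[1] - p[1])
            ey = (-ex[1], ex[0])               # perpendicular axis
            norm = ex[0] ** 2 + ex[1] ** 2
            for k, r in enumerate(points):
                if k in (i, j):
                    continue
                d = (r[0] - p[0], r[1] - p[1])
                u = (d[0] * ex[0] + d[1] * ex[1]) / norm
                v = (d[0] * ey[0] + d[1] * ey[1]) / norm
                key = (round(u / quant), round(v / quant))
                table[key].append((i, j))      # record which basis saw this entry
        return table

    model = [(0, 0), (1, 0), (1, 1), (0, 2)]   # toy dominant-scatterer constellation
    table = hash_model(model)
    # At recognition time, scene points vote through the same table; a basis
    # pair accumulating many votes indicates a match even with missing points.
    ```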

  13. Evolutionary novelties.

    PubMed

    Wagner, Günter P; Lynch, Vincent J

    2010-01-26

    How novel traits arise in organisms has long been a major problem in biology. Indeed, the sharpest critiques of Darwin's theory of evolution by natural selection often centered on explaining how novel body parts arose. In his response to The Origin of Species, St. George J. Mivart challenged Darwin to explain the origin of evolutionary novelties such as the mammary gland, asking if it was "conceivable that the young of any animal was ever saved from destruction by accidentally sucking a drop of scarcely nutritious fluid from an accidentally hypertrophied cutaneous gland of its mother?" It is only now that modern molecular and genomic tools are being brought to bear on this question that we are finally in a position to answer Mivart's challenge and explain one of the most fundamental questions of biology: how does novelty arise in evolution?

  14. (Box-filling-model)-based ONU schedule algorithm and bandwidth-requirement-based ONU transfer mechanism for multi-subsystem-based VPONs' management in metro-access optical network

    NASA Astrophysics Data System (ADS)

    Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Hua, Jian

    2017-07-01

    An ONU schedule algorithm and an ONU transfer mechanism for managing multi-subsystem-based VPONs are proposed in this paper. To avoid frequent wavelength switching and achieve high system stability, the ONU schedule algorithm performs wavelength allocation by introducing a box-filling model. At the same time, a judgement mechanism is designed to filter out wavelength-increase requests caused by slight bandwidth fluctuations within a VPON. To share remaining bandwidth among VPONs, an ONU transfer mechanism based on flexible wavelength routing is put forward. To manage the wavelength resources of the entire network and the wavelength requirements of the VPONs, an information-management matrix model is constructed. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
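    The abstract does not spell out the box-filling model, so the sketch below is only a loose analogy: ONUs are packed into wavelengths first-fit-decreasing, treating each wavelength as a fixed-capacity box, which keeps assignments stable and limits switching. The capacity and names are assumptions, and the paper's VPON grouping, request filtering, and transfer mechanism are not modeled.

```python
WAVELENGTH_CAPACITY = 10.0  # Gb/s per wavelength (assumed)

def assign_onus(onu_demands, n_wavelengths):
    """onu_demands: {onu_id: bandwidth}; returns {onu_id: wavelength index}."""
    load = [0.0] * n_wavelengths
    assignment = {}
    # Place large demands first (first-fit decreasing) for tighter packing.
    for onu, demand in sorted(onu_demands.items(), key=lambda kv: -kv[1]):
        for w in range(n_wavelengths):
            if load[w] + demand <= WAVELENGTH_CAPACITY:
                load[w] += demand
                assignment[onu] = w
                break
        else:
            raise ValueError(f"no wavelength has room for ONU {onu}")
    return assignment

print(assign_onus({"onu1": 6.0, "onu2": 4.5, "onu3": 3.0, "onu4": 2.0}, 2))
```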

  15. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work on the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult still as several systems are combined into higher-level systems, maintained over decades, and evolved technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further: they aggregate all of the data and interrelate it to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. Because the consistency and integrity of the model are assured, the consistency and integrity of the various specification documents are assured as well. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how these needs are being addressed by international standards writing teams.

  16. Evolutionary Design in Biology

    NASA Astrophysics Data System (ADS)

    Wiese, Kay C.

    Much progress has been achieved in recent years in molecular biology and genetics. The sheer volume of data in the form of biological sequences is enormous, and efficient methods for dealing with these huge amounts of data are needed. In addition, the data alone do not provide information on the workings of biological systems; hence much research effort has focused on designing mathematical and computational models to address problems from molecular biology. Often, the terms bioinformatics and computational biology are used to refer to the research fields concerned with designing solutions to molecular problems in biology. However, there is a slight distinction between bioinformatics and computational biology: the former is concerned with managing the enormous amounts of biological data and extracting information from them, while the latter is more concerned with the design and development of new algorithms to address problems such as protein or RNA folding. The boundary is blurry, however, and there is no consistent usage of the terms; we will use the term bioinformatics to encompass both fields. To cover all areas of research in bioinformatics is beyond the scope of this section, and we refer the interested reader to [2] for a general introduction. A large part of what bioinformatics is concerned with is the evolution and function of biological systems on a molecular level. Evolutionary computation and evolutionary design are concerned with developing computational systems that "mimic" certain aspects of natural evolution (mutation, crossover, selection, fitness). Much of the inner workings of natural evolutionary systems has been copied, sometimes in modified form, into evolutionary computation systems. Artificial neural networks mimic the functioning of simple brain cell clusters. Fuzzy systems are concerned with the "fuzziness" in decision making, similar to a human expert. These three computational paradigms fall into the category of

  17. Evolutionary engineering for industrial microbiology.

    PubMed

    Vanee, Niti; Fisher, Adam B; Fong, Stephen S

    2012-01-01

    Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche through random processes such as genetic mutation. In engineering, by contrast, the desired approach is rational, prospective design aimed at targeted problems. When the details of evolutionary and engineering processes are considered, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that constitute an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.
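    The diversify-then-select cycle described above maps naturally onto a (1+λ) evolution-strategy loop; a minimal sketch with a toy fitness function follows. In a real evolutionary engineering campaign, fitness would be a measured strain property, not a formula.

```python
import random

def mutate(design, rate=0.05):
    """Resample each 'gene' with small probability, diversifying the parent."""
    return [g if random.random() > rate else random.random() for g in design]

def evolve(fitness, genome_len=20, offspring=16, generations=100):
    parent = [random.random() for _ in range(genome_len)]
    for _ in range(generations):
        brood = [mutate(parent) for _ in range(offspring)]
        best = max(brood, key=fitness)
        if fitness(best) >= fitness(parent):   # plus-selection: parent can survive
            parent = best
    return parent

# Example: select genomes whose mean value approaches a target "titer" of 0.8.
best = evolve(lambda g: -abs(sum(g) / len(g) - 0.8))
```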

  18. Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello

    2004-01-01

    This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, however, the design problem is subject to several goals or operating constraints, so designing a single fitness function that captures all requirements becomes an issue. Such a problem is amenable to multi-objective optimization; however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint-handling technique. Several design problems targeting an FPTA show the potential of our approach.
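    The abstract does not detail the ISPAES constraint-handling mechanism, so the sketch below shows a common generic alternative, Deb's feasibility rules, to make the problem concrete: feasible solutions beat infeasible ones, and among infeasible ones the smaller total violation wins. The toy objective and constraint stand in for circuit goals and operating constraints; this is not the ISPAES method.

```python
import random

def violation(x, constraints):
    """Total constraint violation; g(x) <= 0 means the constraint is satisfied."""
    return sum(max(0.0, g(x)) for g in constraints)

def better(a, b, objective, constraints):
    """Deb's rules: feasibility first, then objective value (minimization)."""
    va, vb = violation(a, constraints), violation(b, constraints)
    if va == 0 and vb == 0:
        return a if objective(a) < objective(b) else b
    return a if va < vb else b

# Toy circuit-sizing flavour: minimize "power" subject to a "gain" constraint.
objective = lambda x: x[0] ** 2 + x[1] ** 2
constraints = [lambda x: 1.0 - (x[0] + x[1])]     # requires x0 + x1 >= 1

pop = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(40)]
for _ in range(200):
    a, b = random.sample(pop, 2)
    winner = better(a, b, objective, constraints)
    child = [w + random.gauss(0, 0.05) for w in winner]
    loser = b if winner is a else a
    pop[pop.index(loser)] = child                  # replace the tournament loser
```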

  20. Toward a unifying framework for evolutionary processes

    PubMed Central

    Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew M.; Trubenová, Barbora

    2015-01-01

    The theories of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been obtained independently in both fields, and many others are unique to their respective field. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its component parts in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify the candidates with the most potential for translating results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields. PMID:26215686

  1. Toward a unifying framework for evolutionary processes.

    PubMed

    Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew M; Trubenová, Barbora

    2015-10-21

    The theories of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been obtained independently in both fields, and many others are unique to their respective field. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its component parts in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify the candidates with the most potential for translating results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
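    One way to picture the decomposition the paper proposes is as a pipeline of interchangeable operators, so that a genetic algorithm and a population-genetics model differ only in which components are plugged in. The sketch below is an assumption-laden toy in that spirit, not the paper's formalism.

```python
import random

def evolutionary_process(init, variation, selection, generations):
    """A generic evolutionary process: plug in any variation/selection operators."""
    pop = init()
    for _ in range(generations):
        pop = selection(variation(pop))
    return pop

# Components for a simple bit-string GA (swap these to model other regimes).
init = lambda: [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]

def variation(pop):
    # Per-bit mutation with probability 0.02.
    return [[b ^ (random.random() < 0.02) for b in ind] for ind in pop]

def selection(pop):
    # Fitness-proportional selection on the number of 1-bits.
    weights = [1 + sum(ind) for ind in pop]
    return random.choices(pop, weights=weights, k=len(pop))

final = evolutionary_process(init, variation, selection, generations=50)
```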

  2. An inquiry into evolutionary inquiry

    NASA Astrophysics Data System (ADS)

    Donovan, Samuel S.

    2005-11-01

    While evolution education has received a great deal of attention within the science education research community, it still poses difficult teaching and learning challenges. Understanding evolutionary biology has been given high priority in national science education policy because of its role in coordinating our understanding of the life sciences, its importance in our intellectual history, its role in shaping the perception of humans' position in nature, and its impact on our current medical, agricultural, and conservation practices. The rhetoric used in evolution education policy statements emphasizes familiarity with the nature of scientific inquiry as an important learning outcome associated with understanding evolution, but provides little guidance on how one might achieve this goal. This dissertation project explores the nature of evolutionary inquiry and how understanding the details of disciplinary reasoning can inform evolution education. The first analysis recasts the existing evolution education research literature to assess educational outcomes related to students' ability to reason about data using the methods and models of evolutionary biology. This is followed in the next chapter by a detailed historical and philosophical characterization of evolutionary biology, with the goal of providing a richer context for considering what exactly we want students to know about evolution as a discipline. Chapter 4 describes the development and implementation of a high school evolution curriculum that engages students with many aspects of model-based reasoning. The final component of this reframing of evolution education is an empirical study characterizing students' understanding of evolutionary biology as a modeling enterprise. Each chapter addresses a different aspect of evolution education and explores the implications of foregrounding disciplinary reasoning as an educational outcome. The analyses are coordinated with one another in the sense

  3. Evolutionary dynamics of diploid populations

    NASA Astrophysics Data System (ADS)

    Desimone, Ralph; Newman, Timothy

    2003-10-01

    There has been much recent interest in constructing computer models of evolutionary dynamics. Typically these models focus on asexual population dynamics, which are appropriate for haploid organisms such as bacteria. Using a recently developed "genome template" model, we extend the algorithm to a sexual population of diploid organisms. We will present some early results showing the temporal evolution of mean fitness and genetic variation, and compare this to typical results from haploid populations.
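    A minimal diploid sexual-population model in the spirit described above might look like the following: each individual carries two chromosomes, gametes are formed by free recombination, and parents are chosen in proportion to an assumed additive fitness. This is a stand-in sketch, not the genome-template model itself.

```python
import random

LOCI, POP = 32, 100

def gamete(ind):
    """Free recombination: pick one allele per locus from the two chromosomes."""
    return [random.choice(pair) for pair in zip(*ind)]

def fitness(ind):
    return sum(ind[0]) + sum(ind[1])   # assumed additive fitness over alleles

# Each individual is a pair of bit-string chromosomes.
pop = [([random.randint(0, 1) for _ in range(LOCI)],
        [random.randint(0, 1) for _ in range(LOCI)]) for _ in range(POP)]

for generation in range(100):
    weights = [fitness(ind) + 1 for ind in pop]
    parents = random.choices(pop, weights=weights, k=2 * POP)
    pop = [(gamete(parents[2 * i]), gamete(parents[2 * i + 1]))
           for i in range(POP)]

mean_fitness = sum(fitness(ind) for ind in pop) / POP
```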

  4. Model-based reconfiguration: Diagnosis and recovery

    NASA Technical Reports Server (NTRS)

    Crow, Judy; Rushby, John

    1994-01-01

    We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration, and we gain a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits, we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
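    Reiter-style diagnosis computes minimal hitting sets over conflict sets, and the analogy in this record suggests the same machinery can enumerate reconfiguration candidates over 'rcfg'-conflicts. The brute-force toy below, with invented component names, illustrates only the hitting-set step.

```python
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    """Brute-force minimal hitting sets: every conflict must be intersected."""
    universe = sorted(set(chain.from_iterable(conflicts)))
    hits = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            if all(set(cand) & c for c in conflicts):
                # Keep only candidates with no smaller hitting set inside them.
                if not any(set(h) <= set(cand) for h in hits):
                    hits.append(cand)
    return hits

# Conflict sets from consistency checking: each must contain a faulty part.
conflicts = [{"sensor1", "valve2"}, {"valve2", "pump"}, {"sensor1", "pump"}]
print(minimal_hitting_sets(conflicts))   # candidate diagnoses
# The same call over rcfg-conflicts would enumerate candidate reconfigurations.
```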

  5. Model-based target and background characterization

    NASA Astrophysics Data System (ADS)

    Mueller, Markus; Krueger, Wolfgang; Heinze, Norbert

    2000-07-01

    Up to now, most approaches to target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms, the main goal is the optimization of certain performance parameters. These parameters are measured during test runs in which one algorithm with one parameter set is applied to images consisting of image domains with very different characteristics (targets and various types of background clutter). Model-based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROEs (Regions of Expectation) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models: they utilize gradient-direction models that have to be matched against transformed image-domain data, and in most cases simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS database. Apart from geocoding, the algorithms can also be used for image-to-image registration (multi-sensor and data fusion) and for the creation and validation of geographical maps.

  6. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System, the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) system. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states, used prior to formal release, that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments, and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.

  7. Hierarchical model-based interferometric synthetic aperture radar image registration

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2014-01-01

    With the rapid development of spaceborne interferometric synthetic aperture radar technology, classical image registration methods cannot deliver the efficiency and accuracy required for processing massive volumes of real data. Based on this fact, we propose a new method. This method consists of two steps: coarse registration, realized by a cross-correlation algorithm, and fine registration, realized by a hierarchical model-based algorith